Dataset schema (column: dtype, min/max):

hexsha: stringlengths (40, 40)
size: int64 (6, 14.9M)
ext: stringclasses (1 value)
lang: stringclasses (1 value)
max_stars_repo_path: stringlengths (6, 260)
max_stars_repo_name: stringlengths (6, 119)
max_stars_repo_head_hexsha: stringlengths (40, 41)
max_stars_repo_licenses: list
max_stars_count: int64 (1, 191k)
max_stars_repo_stars_event_min_datetime: stringlengths (24, 24)
max_stars_repo_stars_event_max_datetime: stringlengths (24, 24)
max_issues_repo_path: stringlengths (6, 260)
max_issues_repo_name: stringlengths (6, 119)
max_issues_repo_head_hexsha: stringlengths (40, 41)
max_issues_repo_licenses: list
max_issues_count: int64 (1, 67k)
max_issues_repo_issues_event_min_datetime: stringlengths (24, 24)
max_issues_repo_issues_event_max_datetime: stringlengths (24, 24)
max_forks_repo_path: stringlengths (6, 260)
max_forks_repo_name: stringlengths (6, 119)
max_forks_repo_head_hexsha: stringlengths (40, 41)
max_forks_repo_licenses: list
max_forks_count: int64 (1, 105k)
max_forks_repo_forks_event_min_datetime: stringlengths (24, 24)
max_forks_repo_forks_event_max_datetime: stringlengths (24, 24)
avg_line_length: float64 (2, 1.04M)
max_line_length: int64 (2, 11.2M)
alphanum_fraction: float64 (0, 1)
cells: list
cell_types: list
cell_type_groups: list
cbb4a2f222c67527e63384dc3dfafe465817206e
10,202
ipynb
Jupyter Notebook
TJ-03 Progressive Elaboration of Tasks in TaskJuggler.ipynb
karlbenedict/workshops-pm
57928d11e93eeb89eda55287ee6b2ed0defaca97
[ "Apache-2.0" ]
1
2019-08-10T09:47:31.000Z
2019-08-10T09:47:31.000Z
TJ-03 Progressive Elaboration of Tasks in TaskJuggler.ipynb
karlbenedict/workshops-pm
57928d11e93eeb89eda55287ee6b2ed0defaca97
[ "Apache-2.0" ]
null
null
null
TJ-03 Progressive Elaboration of Tasks in TaskJuggler.ipynb
karlbenedict/workshops-pm
57928d11e93eeb89eda55287ee6b2ed0defaca97
[ "Apache-2.0" ]
1
2021-12-23T15:50:51.000Z
2021-12-23T15:50:51.000Z
43.974138
309
0.482847
[ [ [ "## Progressive Elaboration of Tasks\n\n[Progressive elaboration](https://project-management-knowledge.com/definitions/p/progressive-elaboration/)\nis the process of adding additional detail and fidelity to the project plan\nas additional or more complete information becomes available. The process of progressive elaboration allows the project team to begin with a sketch of their\nproject plan that becomes more detailed as needed and as development priorities\nemerge from the emergent detailed picture. \n\nIn this demonstration and exercise we will focus on the creation of subtasks, and identifying dependencies between tasks that impact the sequence of work. Our next demo and exercise will focus on assigning tasks to resources and developing an effort instead of duration-focused view of our project. \n\n[foundation-03.tjp file](Sample-Files/foundation-03.tjp)\n\n project foundation \"Foundation Project\" 2018-07-01 - 2019-06-30 {\n currency \"USD\"\n timeformat \"%Y-%m-%d\"\n numberformat \"-\" \"\" \",\" \".\" 1\n currencyformat \"(\" \")\" \",\" \".\" 0\n now 2018-07-01-00:00\n weekstartsmonday\n workinghours mon - fri 9:00 - 12:00, 13:00 - 18:00\n workinghours sat, sun off\n }\n \n ############## accounts ################\n \n ############## resources ###############\n \n ############## tasks ###################\n task projectstart \"Project Start\" {\n start ${projectstart}\n }\n \n \n task doing \"Making the Goods\" {\n start ${projectstart}\n task buy_materials \"Buy the materials\" {\n duration 1m\n }\n task glue_together \"Glue everything together\" {\n depends !buy_materials # relative reference to task\n duration 8w\n }\n task cleanup \"Clean up the mess\" {\n depends doing.glue_together # absolute reference to task\n duration 30d\n }\n }\n \n task refining \"Refining the Goods\" {\n depends doing\n task paint \"Paint the items\" {\n duration 3w\n }\n task attach_bells \"Screw bells onto items\" {\n depends !paint\n task buy_bells \"Buy 
bells\"{duration 1w}\n task use_screwdriver \"Use screwdriver\" {\n depends !buy_bells\n duration 1m\n }\n }\n task repaint \"Repaint the items\" {\n depends !attach_bells\n duration 2m\n }\n task explain \"Explain to Boss what you spent the last 3 months doing\" {\n depends !repaint\n duration 2h\n }\n }\n \n task selling \"Selling the Goods\" {\n depends refining.explain\n duration 4m\n }\n \n ############## reports #################\n \n taskreport \"reports/foundation-03_taskreport\" {\n formats html, csv\n headline \"Project Breakdown\"\n }\n", "_____no_output_____" ] ], [ [ "%%bash\ncd Sample-Files\ntj3 foundation-03.tjp", "TaskJuggler v3.6.0 - A Project Management Software\n\nCopyright (c) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016\n by Chris Schlaeger <[email protected]>\n\nThis program is free software; you can redistribute it and/or modify it under\nthe terms of version 2 of the GNU General Public License as published by the\nFree Software Foundation.\n\nReading file foundation-03.tjp ...\rReading file foundation-03.tjp ...\rReading file foundation-03.tjp [ Done ]\nPreparing scenario Plan Scenario ...\rPreparing scenario Plan Scenario [=======81%== ]\rPreparing scenario Plan Scenario [=======82%=== ]\rPreparing scenario Plan Scenario [=======83%=== ]\rPreparing scenario Plan Scenario [=======84%=== ]\rPreparing scenario Plan Scenario [=======85%=== ]\rPreparing scenario Plan Scenario [=======87%=== ]\rPreparing scenario Plan Scenario [=======88%==== ]\rPreparing scenario Plan Scenario [=======90%==== ]\rPreparing scenario Plan Scenario [=======95%===== ]\rPreparing scenario Plan Scenario [=======99%===== ]\rPreparing scenario Plan Scenario [======100%======]\rPreparing scenario Plan Scenario [ Done ]\nScheduling scenario Plan Scenario ...\rScheduling scenario Plan Scenario [ Done ]\nChecking scenario Plan Scenario ...\rChecking scenario Plan Scenario [= 7% ]\rChecking scenario Plan Scenario [== 15% ]\rChecking scenario Plan Scenario 
[=== 23% ]\rChecking scenario Plan Scenario [==== 30% ]\rChecking scenario Plan Scenario [====== 38% ]\rChecking scenario Plan Scenario [=======46% ]\rChecking scenario Plan Scenario [=======53% ]\rChecking scenario Plan Scenario [=======61% ]\rChecking scenario Plan Scenario [=======69%= ]\rChecking scenario Plan Scenario [=======76%== ]\rChecking scenario Plan Scenario [=======84%=== ]\rChecking scenario Plan Scenario [=======92%==== ]\rChecking scenario Plan Scenario [======100%======]\rChecking scenario Plan Scenario [ Done ]\nReport reports/foundation-03_taskreport ...\rReport reports/foundation-03_taskreport [|]\rReport reports/foundation-03_taskreport [/]\rReport reports/foundation-03_taskreport [ Done ]\n" ] ], [ [ "producing this HTML report:\n\n[Sample-Files/reports/foundation-03_taskreport.html](Sample-Files/reports/foundation-03_taskreport.html)\n\nand the following CSV file:\n\n[Sample-Files/reports/foundation-03_taskreport.csv](Sample-Files/reports/foundation-03_taskreport.csv)", "_____no_output_____" ], [ "## Practice ...\n\nBased on the fleshed out skeleton (collection of high-level tasks) you previously developed, you can now\nadd some subtasks, and dependencies to those tasks to develop a more detailed plan. When done you can generate \nnew HTML and CSV reports that illustrate/contain the results of the project scheduling process\nin TaskJuggler. \n\n### Activity:\n\n1. Modify your previously created fleshed out `.tjp` file or create a new file based on the content of your "skeleton" file and add some sub-tasks and dependencies to your plan.\n\n2. Run the TaskJuggler scheduler to test your skeleton to make sure that it does not generate any errors. If it does, see if you can fix them and re-run the scheduler. 
\n\n<video controls src=\"images/Timer10Minutestory.mov\" />", "_____no_output_____" ], [ "-------------------\n[(0)](TJ-00%20What%20is%20TaskJuggler.ipynb) -- \n[(1)](TJ-01%20Project%20Skeleton.ipynb) -- \n[(2)](TJ-02%20A%20Fleshed%20Out%20TaskJuggler%20Outline.ipynb) -- \n(3) -- \n[(4)](TJ-04%20Assigning%20Resources%20%26%20Cost%20Estimation%20in%20TaskJuggler.ipynb) --\n[(5)](TJ-05%20Project%20Tracking%20in%20TaskJuggler.ipynb) -- \n[(6)](TJ-06%20Visualization%20%26%20Reporting%20in%20TaskJuggler.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
cbb4c352d5186ed13357409e63f33df1e2d19437
10,201
ipynb
Jupyter Notebook
TensorFlow/Burgers Equation/Solution evolution/.ipynb_checkpoints/solution_evolution_SVCCA-checkpoint.ipynb
bhanudaysharma/Physics-Informed-Neural-Networks
cee02f1a240c5b289863a841289c17ebc173ea04
[ "MIT" ]
112
2021-02-02T12:12:43.000Z
2022-03-31T11:45:07.000Z
TensorFlow/Burgers Equation/Solution evolution/.ipynb_checkpoints/solution_evolution_SVCCA-checkpoint.ipynb
Zzeyuan/Physics-Informed-Neural-Networks
7d26e55cc665b79b652b226860177fa4defe9190
[ "MIT" ]
2
2021-05-22T21:24:54.000Z
2021-11-11T00:20:23.000Z
TensorFlow/Burgers Equation/Solution evolution/.ipynb_checkpoints/solution_evolution_SVCCA-checkpoint.ipynb
Zzeyuan/Physics-Informed-Neural-Networks
7d26e55cc665b79b652b226860177fa4defe9190
[ "MIT" ]
33
2021-04-01T08:56:40.000Z
2022-03-30T10:46:27.000Z
41.133065
1,481
0.522792
[ [ [ "# *Import Libraries*", "_____no_output_____" ] ], [ [ "import scipy.io\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport sys \nsys.path.append('/home/bhustali/.conda/envs/tf2/svcca-master')\nimport cca_core", "_____no_output_____" ] ], [ [ "# Simple Example", "_____no_output_____" ] ], [ [ "# # assume A_fake has 20 neurons and we have their activations on 2000 datapoints\n# A_fake = np.random.randn(20, 2000)\n\n# # B_fake has 50 neurons with activations on the same 1000 datapoints\n# # Note A and B do *not* have to have the same number of neurons\n# B_fake = np.random.randn(50, 1000)\n\n# # computing CCA simliarty between A_fake, B_fake\n# # We expect similarity should be very low, because the random activations are not correlated\n# results = cca_core.get_cca_similarity(A_fake, B_fake,compute_dirns=True, verbose=True)\n\n# print(\"Returned Information: \\n\")\n# print(results.keys())\n# print(\"Single number for summarizing similarity\")\n# print('{:.4f}'.format(np.mean(results[\"cca_coef1\"])))", "adding eps to diagonal and taking inverse\ntaking square root\ndot products...\ntrying to take final svd\ncomputed everything!\nReturned Information: \n\ndict_keys(['coef_x', 'invsqrt_xx', 'full_coef_x', 'full_invsqrt_xx', 'coef_y', 'invsqrt_yy', 'full_coef_y', 'full_invsqrt_yy', 'neuron_means1', 'neuron_means2', 'cca_coef1', 'cca_coef2', 'x_idxs', 'y_idxs', 'mean', 'sum', 'cca_dirns1', 'cca_dirns2'])\nSingle number for summarizing similarity\n0.0841\n" ] ], [ [ "# SVCCA Layers", "_____no_output_____" ] ], [ [ "for k in range(1,587):\n \n data_A = scipy.io.loadmat('/home/bhustali/data/mat/neuron_output (586).mat')\n data_B = scipy.io.loadmat('/home/bhustali/data/mat/neuron_output ' + '(' + str(k)+ ')' + '.mat')\n \n S = np.zeros((8,8))\n \n for i in range (1,9):\n for j in range (1,9):\n \n #extract layer outputs l_1 and l_2\n A_layer = data_A['layer_' + str(i)].transpose()\n B_layer = data_B['layer_' + str(j) ].transpose()\n \n # Mean subtract 
activations\n '''\n a = np.array([[1, 2], [3, 4]])\n np.mean(a, axis=1) = array([1.5, 3.5]) \n '''\n c_A_layer = A_layer - np.mean(A_layer, axis=1, keepdims=True)\n c_B_layer = B_layer - np.mean(B_layer, axis=1, keepdims=True)\n \n #Singular Value Decomposition(SVD)\n #obtain l_1' and l_2'\n '''\n U, S, Vh = np.linalg.svd(A, full_matrices=False)\n\n U = [mxs] = Left-singular vector of A\n S = [sxs] = Singular values of A\n Vh = [sxp] = Right-singular vectors of A\n '''\n u_A, s_A, vh_A = np.linalg.svd(c_A_layer, full_matrices=False)\n u_B, s_B, vh_B = np.linalg.svd(c_B_layer, full_matrices=False)\n \n sv = 10 #select an appropriate value\n\n# print(\"Percentage of variance explained by 'sv' singular vectors\", 100*np.sum(s_A[:sv])/np.sum(s_A))\n\n '''singular vectors = S * Vh \n Equivalent to Uh * A = Uh* (U*S*Vh) = S*Vh\n '''\n #We compute the subspace with 'sv' largest singular values\n sv_A_layer = np.matmul(s_A[:sv]*np.eye(sv), vh_A[:sv])\n sv_B_layer = np.matmul(s_B[:sv]*np.eye(sv), vh_B[:sv])\n \n svcca_results = cca_core.get_cca_similarity(sv_A_layer, sv_B_layer, epsilon=0, threshold=1,compute_dirns=True, verbose=False)\n \n S[i-1,j-1] = np.mean(svcca_results[\"cca_coef1\"])\n \n fig, ax = plt.subplots()\n ax.matshow(S, cmap=plt.cm.Blues)\n ax.set_xticklabels([0,'B1','B2','B3','B4','B5','B6','B7','B8'])\n ax.set_yticklabels([0,'A1','A2','A3','A4','A5','A6','A7','A8'])\n\n for l in range(8):\n for m in range(8):\n c = S[m,l]\n ax.text(l, m, f\"{c:.2f}\", va='center', ha='center')\n\n plt.savefig('/home/bhustali/data/movie/' + str(k) + '.png',dpi = 100)\n \n plt.close('all')", "_____no_output_____" ] ], [ [ "# How do layer outputs change during optimization?\n\nHere, we measure the correlation of the output after each iteration (data_B) with the outputs of the layers before training (data_A)", "_____no_output_____" ] ], [ [ "# s = np.zeros((587,8))\n\n# #outputs of the layers before training\n# data_A = scipy.io.loadmat('data/mat/neuron_output (0).mat')\n\n# 
for i in range (1,9):\n\n# for k in range(0,587):\n \n# #output after each iteration\n# data_B = scipy.io.loadmat('data/mat/neuron_output ' + '(' + str(k)+ ')' + '.mat')\n\n# j = i\n\n# A_layer = data_A['layer_' + str(i)].transpose()\n# B_layer = data_B['layer_' + str(j) ].transpose()\n\n# #shift mean to 0 \n# c_A_layer = A_layer - np.mean(A_layer, axis=1, keepdims=True)\n# c_B_layer = B_layer - np.mean(B_layer, axis=1, keepdims=True)\n \n# # Singular Value Decomposition(SVD)\n# u_A, s_A, vh_A = np.linalg.svd(c_A_layer, full_matrices=False)\n# u_B, s_B, vh_B = np.linalg.svd(c_B_layer, full_matrices=False)\n\n# sv = 10 #select an appropriate value\n\n# sv_A_layer = np.matmul(s_A[:sv]*np.eye(sv), vh_A[:sv])\n# sv_B_layer = np.matmul(s_B[:sv]*np.eye(sv), vh_B[:sv])\n\n# #compute similarity\n# svcca_results = cca_core.get_cca_similarity(sv_A_layer, sv_B_layer, epsilon=0, threshold=1,\n# compute_dirns=True, verbose=False)\n\n# s[k,i-1] = np.mean(svcca_results[\"cca_coef1\"])\n\n# #Plotting \n# fig, ax = plt.subplots()\n# for i in range (0,8):\n# ax.plot(s[:,i], label = str(i+1))\n# ax.set(xlabel='iterations', ylabel='rho',\n# title='Convergence')\n# ax.grid()\n# plt.legend()\n\n# # fig.savefig('divergence.png', dpi = 500) ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb4c7ee044442890bf079a43506001b17011f82
304,911
ipynb
Jupyter Notebook
_doc/notebooks/challenges/city_tour/city_tour_data_preparation.ipynb
sdpython/ensae_projects
9647751da053c09fa35402527b294e02a4e6e2ad
[ "MIT" ]
1
2020-11-22T10:24:54.000Z
2020-11-22T10:24:54.000Z
_doc/notebooks/challenges/city_tour/city_tour_data_preparation.ipynb
sdpython/ensae_projects
9647751da053c09fa35402527b294e02a4e6e2ad
[ "MIT" ]
13
2017-11-20T00:20:45.000Z
2021-01-05T14:13:51.000Z
_doc/notebooks/challenges/city_tour/city_tour_data_preparation.ipynb
sdpython/ensae_projects
9647751da053c09fa35402527b294e02a4e6e2ad
[ "MIT" ]
null
null
null
284.431903
96,118
0.912496
[ [ [ "# Walk through all streets in a city\n\nPreparation of the examples for the challenge: find the shortest path through a set of streets.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "_____no_output_____" ] ], [ [ "## Problem description", "_____no_output_____" ], [ "Find the shortest way going through all streets from a set of streets? This problem is known as the *Route inspection problem*.", "_____no_output_____" ], [ "## Data\n\n[Seattle streets](https://data.seattle.gov/dataset/Street-Network-Database/afip-2mzr/data) from [data.seattle.gov](https://data.seattle.gov/)", "_____no_output_____" ], [ "### Read the data", "_____no_output_____" ] ], [ [ "import shapefile, os\nif os.path.exists(\"Street_Network_Database/WGS84/Street_Network_Database.shp\"):\n rshp = shapefile.Reader(\"Street_Network_Database/WGS84/Street_Network_Database.shp\")\n shapes = rshp.shapes()\n records = rshp.records()\nelse:\n from pyensae.datasource import download_data\n files = download_data(\"WGS84_seattle_street.zip\")\n rshp = shapefile.Reader(\"Street_Network_Database.shp\")\n shapes = rshp.shapes()\n records = rshp.records() ", "_____no_output_____" ], [ "shapes[0].__dict__", "_____no_output_____" ], [ "{k[0]:v for k,v in zip(rshp.fields[1:], records[0])}", "_____no_output_____" ], [ "from ensae_projects.datainc.data_geo_streets import get_fields_description\nget_fields_description()", "_____no_output_____" ] ], [ [ "### Display the streets", "_____no_output_____" ] ], [ [ "streets5 = list(zip(records[:5], shapes[:5]))\nstreets5[2][1].points", "_____no_output_____" ], [ "import folium\nfrom random import randint\nfrom pyensae.notebookhelper import folium_html_map\n\nc = streets5[0][1]\nmap_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=9)\nfor rec, shape in streets5:\n d = {k[0]: v for k,v in zip(rshp.fields[1:], rec)}\n 
map_osm.add_child(folium.Marker([shape.points[0][1], shape.points[0][0]], popup=str(d[\"ORD_STNAME\"])))\n map_osm.add_child(folium.PolyLine(locations=[[_[1], _[0]] for _ in shape.points], weight=10))\nfolium_html_map(map_osm, width=\"60%\")", "_____no_output_____" ] ], [ [ "## Find connected streets", "_____no_output_____" ] ], [ [ "street0 = streets5[0][1].points\nstreet0", "_____no_output_____" ], [ "def connect_streets(st1, st2):\n a1, b1 = st1[0], st1[-1]\n a2, b2 = st2[0], st2[-1]\n connect = []\n if a1 == a2:\n connect.append((0, 0))\n if a1 == b2:\n connect.append((0, 1))\n if b1 == a2:\n connect.append((1, 0))\n if b1 == b2:\n connect.append((1, 1))\n return tuple(connect) if connect else None\n\nneighbours = []\nfor i, street in enumerate(shapes):\n points = street.points\n con = connect_streets(street0, points)\n if con:\n neighbours.append(i)\n \nneighbours ", "_____no_output_____" ], [ "import folium\nfrom pyensae.notebookhelper import folium_html_map\nc = shapes[neighbours[0]]\nmap_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=15)\npoints = set()\nfor index in neighbours:\n rec, shape = records[index], shapes[index]\n corners = [(_[1], _[0]) for _ in shape.points]\n map_osm.add_child(folium.PolyLine(locations=corners, weight=10))\n points |= set([corners[0], corners[-1]])\nfor x, y in points:\n map_osm.add_child(folium.Marker((x, y), popup=str(index)))\nfolium_html_map(map_osm, width=\"50%\")", "_____no_output_____" ], [ "c = shapes[neighbours[0]]\nmap_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=15)\npoints = set()\nfor index in neighbours:\n rec, shape = records[index], shapes[index]\n corners = [(_[1], _[0]) for _ in shape.points]\n map_osm.add_child(folium.PolyLine(locations=corners, weight=10))\n points |= set([corners[0], corners[-1]])\nfor x, y in points:\n map_osm.add_child(folium.CircleMarker((x, y), popup=str(index), radius=8, fill_color=\"yellow\"))\nfolium_html_map(map_osm, width=\"50%\")", 
"_____no_output_____" ] ], [ [ "## Extraction of all streets in a short perimeter", "_____no_output_____" ] ], [ [ "from shapely.geometry import Point, LineString\n\ndef enumerate_close(x, y, shapes, th=None):\n p = Point(x,y)\n for i, shape in enumerate(shapes):\n obj = LineString(shape.points)\n d = p.distance(obj)\n if th is None or d <= th:\n yield d, i\n\nx, y = shapes[0].points[0]\ncloses = list(enumerate_close(x, y, shapes))\ncloses.sort()\ncloses[:10]", "_____no_output_____" ], [ "import folium\nfrom ensae_projects.datainc.data_geo_streets import folium_html_street_map\nfolium_html_street_map([_[1] for _ in closes[:20]], shapes, html_width=\"50%\", zoom_start=15)", "_____no_output_____" ], [ "def complete_subset_streets(subset, shapes):\n extension = []\n for i, shape in enumerate(shapes):\n add = []\n for s in subset:\n to = shapes[s]\n if s != i:\n con = connect_streets(shapes[s].points, shapes[i].points)\n if con is not None:\n add.extend([_[1] for _ in con])\n if len(set(add)) == 2:\n extension.append(i)\n return extension\n\nsubset = [index for dist, index in closes[:20]]\nnewset = set(subset + complete_subset_streets(subset, shapes))\n\nprint(list(sorted(newset)))\nfolium_html_street_map(newset, shapes, html_width=\"50%\", zoom_start=15)", "[0, 107, 1003, 1101, 1670, 2418, 2803, 2808, 3353, 4553, 4994, 6265, 6488, 6712, 8378, 9118, 9989, 11274, 11394, 12783, 15023, 17680, 29114, 30370]\n" ], [ "from ensae_projects.datainc.data_geo_streets import build_streets_vertices\nvertices, edges = build_streets_vertices(newset, shapes)\nvertices[:3], edges[:3]", "_____no_output_____" ], [ "from ensae_projects.datainc.data_geo_streets import plot_streets_network\nplot_streets_network(newset, edges, vertices, shapes, figsize=(10,10));", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cbb4d7eccb75ca8e578d5e80a0d71fd21151a98b
16,563
ipynb
Jupyter Notebook
classes/visualization_part2.ipynb
betteridiot/b575f19
bdc63a0be51f5e4b8177da966a6378c97b5129be
[ "BSD-3-Clause" ]
7
2019-09-03T15:04:19.000Z
2019-10-24T01:57:21.000Z
classes/visualization_part2.ipynb
mitreac/b575f19
bdc63a0be51f5e4b8177da966a6378c97b5129be
[ "BSD-3-Clause" ]
26
2019-09-20T19:10:34.000Z
2019-12-10T16:20:40.000Z
classes/visualization_part2.ipynb
betteridiot/b575f19
bdc63a0be51f5e4b8177da966a6378c97b5129be
[ "BSD-3-Clause" ]
7
2019-09-04T14:07:34.000Z
2019-12-03T18:34:35.000Z
25.019637
212
0.550202
[ [ [ "### Data Visualization", "_____no_output_____" ], [ "#### `matplotlib` - from the documentation:\nhttps://matplotlib.org/3.1.1/tutorials/introductory/pyplot.html\n\n`matplotlib.pyplot` is a collection of command style functions <br>\nEach pyplot function makes some change to a figure <br>\n`matplotlib.pyplot` preserves states across function calls\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "Call signatures::\n```\n    plot([x], y, [fmt], data=None, **kwargs)\n    plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)\n```", "_____no_output_____" ], [ "Quick plot", "_____no_output_____" ], [ "The main usage of `plt` is the `plot()` and `show()` functions", "_____no_output_____" ], [ "https://matplotlib.org/3.1.1/api/pyplot_summary.html <br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.plot.html <br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.show.html <br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html<br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.figure.html<br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.subplot.html<br>\nhttps://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.annotate.html<br>\n", "_____no_output_____" ] ], [ [ "df_iris = pd.read_csv('https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv')\ndf_iris.head()", "_____no_output_____" ], [ "colors = {'setosa':'red', 'versicolor':'orange', 'virginica':'blue'}\ndef get_col(spec):\n return colors[spec]\ncolors_col = df_iris.species.apply(get_col)\nplt.scatter(\"petal_length\",\"petal_width\", data=df_iris, c = colors_col, s = 7, marker = \"o\")\n\nlegend_elements = [plt.Line2D([0], [0], marker='o', linestyle=\"\", color=colors[\"setosa\"], label=\"setosa\"),\n plt.Line2D([0], [0], marker='o', linestyle=\"\", color=colors[\"versicolor\"], label=\"versicolor\"),\n plt.Line2D([0], [0], marker='o', linestyle=\"\", 
color=colors[\"virginica\"], label=\"virginica\")]\n\nplt.legend(handles=legend_elements,loc=\"upper left\", title=\"Species\")\nplt.show()", "_____no_output_____" ] ], [ [ "https://python-graph-gallery.com/matplotlib/", "_____no_output_____" ], [ "#### Using pandas `.plot()`", "_____no_output_____" ] ], [ [ "df_iris.groupby(\"species\").mean().plot(kind='bar')\nplt.show()", "_____no_output_____" ], [ "df_iris.plot(x= \"petal_length\", y = \"petal_width\" ,kind = \"scatter\", color = colors_col)\nplt.savefig('output1.png')", "_____no_output_____" ] ], [ [ "https://github.com/pandas-dev/pandas/blob/v0.25.0/pandas/plotting/_core.py#L504-L1533", "_____no_output_____" ], [ "https://python-graph-gallery.com/wp-content/uploads/Matplotlib_cheatsheet_datacamp.png", "_____no_output_____" ], [ "<img src = \"https://python-graph-gallery.com/wp-content/uploads/Matplotlib_cheatsheet_datacamp.png\" width = \"1000\"/>", "_____no_output_____" ], [ "### `seaborn` - dataset-oriented plotting", "_____no_output_____" ], [ "Seaborn is a library that specializes in making *prettier* `matplotlib` plots of statistical data. 
<br>\nIt is built on top of matplotlib and closely integrated with pandas data structures.\n\nhttps://seaborn.pydata.org/introduction.html<br>\nhttps://python-graph-gallery.com/seaborn/", "_____no_output_____" ] ], [ [ "import seaborn as sns", "_____no_output_____" ] ], [ [ "`seaborn` lets users *style* their plotting environment.<br>\nThere are 5 preset themes: darkgrid (default), whitegrid, dark, white, and ticks.<br>\nhttps://seaborn.pydata.org/tutorial/aesthetics.html\n\nHowever, you can always use `matplotlib`'s `plt.style`\nhttps://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html", "_____no_output_____" ] ], [ [ "sns.set(style='whitegrid')", "_____no_output_____" ], [ "#dir(sns)", "_____no_output_____" ], [ "sns.scatterplot(x='petal_length',y='petal_width',data=df_iris)\nplt.show()", "_____no_output_____" ], [ "with plt.style.context(('ggplot')):\n sns.scatterplot(x='petal_length',y='petal_width',data=df_iris)\nplt.show()", "_____no_output_____" ], [ "sns.scatterplot(x='petal_length',y='petal_width', hue = \"species\",data=df_iris)\nplt.show()", "_____no_output_____" ] ], [ [ "#### Violin plot", "_____no_output_____" ], [ "Fancier box plot that gets rid of the need for 'jitter' to show the inherent distribution of the data points", "_____no_output_____" ] ], [ [ "sns.set(style=\"dark\")\nfig, axes = plt.subplots(figsize=(7, 3))\nsns.violinplot(data=df_iris, ax=axes)\naxes.set_ylabel('value')\naxes.set_xlabel('feature')\nplt.show()", "_____no_output_____" ] ], [ [ "#### Distplot", "_____no_output_____" ] ], [ [ "sns.set(style='dark', palette='muted')\n\n# 1 column, 4 rows\nf, axes = plt.subplots(4,1, figsize=(10,10), sharex=True)\n\n# Regular displot\nsns.distplot(df_iris.petal_length, ax=axes[0])\n\n# Change the color\nsns.distplot(df_iris.petal_width, kde=False, ax=axes[1], color='orange')\n\n# Show the Kernel density estimate\nsns.distplot(df_iris.sepal_width, hist=False, kde_kws={'shade':True}, ax=axes[2], color='purple')\n\n# Show the 
rug\nsns.distplot(df_iris.sepal_length, hist=False, rug=True, ax=axes[3], color='green')\nplt.show()", "_____no_output_____" ] ], [ [ "#### FacetGrid", "_____no_output_____" ] ], [ [ "sns.set()\ncolumns = ['species', 'petal_length', 'petal_width']\nfacet_column = 'species'\ng = sns.FacetGrid(df_iris.loc[:,columns], col=facet_column, hue=facet_column)\ng.map(plt.scatter, 'petal_length', 'petal_width')", "_____no_output_____" ], [ "sns.relplot(x=\"petal_length\", y=\"petal_width\", col=\"species\",\n hue=\"species\", style=\"species\", size=\"sepal_width\",\n data=df_iris)\nplt.show()", "_____no_output_____" ] ], [ [ "https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html", "_____no_output_____" ] ], [ [ "sns.catplot(x=\"species\", y=\"petal_length\", data=df_iris)\nplt.show()", "_____no_output_____" ], [ "sns.catplot(kind=\"box\", data=df_iris)\nplt.show()", "_____no_output_____" ], [ "# https://seaborn.pydata.org/tutorial/categorical.html\ntips = sns.load_dataset(\"tips\")\nprint(tips.head())\nsns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", kind=\"box\", data=tips)\nplt.show()", "_____no_output_____" ] ], [ [ "Plot the tips by day with two side by side box plots for males and females and different subplots for the time of the meal (lunch/dinner). \n\n\n", "_____no_output_____" ] ], [ [ "# help(sns.catplot)", "_____no_output_____" ], [ "sns.pairplot(df_iris, hue='species', height=2.5);\n", "_____no_output_____" ] ], [ [ "https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Seaborn_Cheat_Sheet.pdf", "_____no_output_____" ], [ "<img src = \"https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Seaborn_Cheat_Sheet.pdf\" width = \"1000\"/>", "_____no_output_____" ], [ "### `plotnine` - R ggplot2 in python", "_____no_output_____" ], [ "plotnine is an implementation of a grammar of graphics in Python, it is based on ggplot2. 
The grammar allows users to compose plots by explicitly mapping data to the visual objects that make up the plot.\n\nPlotting with a grammar is powerful: it makes custom (and otherwise complex) plots easy to think about and then create, while the simple plots remain simple.\n\n", "_____no_output_____" ] ], [ [ "#!pip install plotnine", "_____no_output_____" ] ], [ [ "https://plotnine.readthedocs.io/en/stable/", "_____no_output_____" ] ], [ [ "from plotnine import *", "_____no_output_____" ] ], [ [ "https://plotnine.readthedocs.io/en/stable/api.html", "_____no_output_____" ] ], [ [ "p = ggplot(data=df_iris) + aes(x=\"petal_length\", y = \"petal_width\") + geom_point()\n", "_____no_output_____" ], [ "# add transparency - to address overlapping points\nggplot(data=df_iris) + aes(x=\"petal_length\", y = \"petal_width\") + geom_point(size = 5, alpha=0.5)", "_____no_output_____" ], [ "# change point size \nggplot(data=df_iris) + aes(x=\"petal_length\", y = \"petal_width\") + geom_point(size = 0.7, alpha=0.7)", "_____no_output_____" ], [ "# more parameters \nggplot(data=df_iris) + aes(x=\"petal_length\", y = \"petal_width\") + geom_point() + scale_x_log10() + xlab(\"Petal Length\")", "_____no_output_____" ], [ "n = \"3\"\nfeatures = \"length and width\"\ntitle = f'species : {n}; petal : {features}'\n#title = 'species : {}; petal : {}'.format(n,features)\n\n\nggplot(data=df_iris) +aes(x='petal_length',y='petal_width',color=\"species\") \\\n + geom_point(size=0.7) + facet_wrap('~species',nrow=3) \\\n + theme(figure_size=(7,9)) + ggtitle(title)\n", "_____no_output_____" ], [ "p = ggplot(data=df_iris) + aes(x='petal_length') \\\n + geom_histogram(binwidth=1,color='black',fill='grey')\np", "_____no_output_____" ], [ "ggsave(plot=p, filename='hist_plot_with_plotnine.png')", "_____no_output_____" ], [ "tips = sns.load_dataset(\"tips\")\nprint(tips.head())\n\nggplot(aes(x=\"day\", y=\"tip\",\\\n color=\"smoker\"), data=tips) \\\n + geom_boxplot()\\\n + geom_jitter(width=0.05, 
alpha=0.4) \\\n + facet_grid(\".~smoker\")\n\n\n\n", "_____no_output_____" ] ], [ [ "http://cmdlinetips.com/2018/05/plotnine-a-python-library-to-use-ggplot2-in-python/ <br>\nhttps://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf", "_____no_output_____" ], [ "<img src = \"https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf\" width = \"1000\"/>", "_____no_output_____" ], [ "Use ggplot to plot the sepal_length in boxplots separated by species, add new axes labels and make the y axis values log10.\n\n* Write a function that takes as a parameter a line of the dataframe and if the species is \n** setosa it returns the petal_length\n** versicolor it returns the petal_width\n** virginica it returns the sepal_length\n\nApply this function to every line in the dataset. <br>\nUse ggplot to make a histogram of the resulted values.", "_____no_output_____" ] ], [ [ "#dir()", "_____no_output_____" ] ], [ [ "https://plotnine.readthedocs.io/en/stable/api.html\n\nLook for scale functions.", "_____no_output_____" ], [ "More resources: \n\nhttps://github.com/swyder/plotnine_tutorial/blob/master/plotnine_demo_sw.ipynb <br>\nhttps://datacarpentry.org/python-ecology-lesson/07-visualization-ggplot-python/", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbb4e24b5a341ea3d65e9177307bc65679c74436
610,112
ipynb
Jupyter Notebook
notebooks/PhiK Correlation/PhiK.ipynb
aryasoni98/Statistics-and-Econometrics-for-Data-Science
6e1e67b49289a74522d65ea52b843801756c6c5f
[ "MIT" ]
41
2020-12-05T20:15:47.000Z
2022-02-07T06:00:01.000Z
notebooks/PhiK Correlation/PhiK.ipynb
Econometrics/Statistics-and-Econometrics-for-Data-Science
c7bf6ac053f0f7791b1fdbd5ed0e0d06387544a1
[ "MIT" ]
173
2020-11-29T18:37:22.000Z
2022-03-06T04:03:08.000Z
notebooks/PhiK Correlation/PhiK.ipynb
Econometrics/Statistics-and-Econometrics-for-Data-Science
c7bf6ac053f0f7791b1fdbd5ed0e0d06387544a1
[ "MIT" ]
112
2020-12-04T12:40:23.000Z
2021-12-16T17:29:15.000Z
1,718.625352
479,616
0.956146
[ [ [ "<h1>Phi K Correlation</h1>", "_____no_output_____" ], [ "Phi K correlation is a newly emerging correlation cofficient with following advantages:\n\n- it can work consistently between categorical, ordinal and interval variables\n- it can capture non-linear dependency\n- it reverts to the Pearson correlation coefficient in case of a bi-variate normal input distribution", "_____no_output_____" ] ], [ [ "import phik\n\nfrom phik import resources\nfrom phik.binning import bin_data\nfrom phik.decorators import *\nfrom phik.report import plot_correlation_matrix\n\n\nimport pandas as pd \nimport numpy as np\nfrom sklearn.preprocessing import OneHotEncoder, LabelEncoder, StandardScaler, MinMaxScaler\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport networkx as nx", "_____no_output_____" ], [ "#loading the SalePrice dataset\ndf=pd.read_csv('dataset.csv')\ndf.drop(['Id'], axis=1,inplace=True)", "_____no_output_____" ] ], [ [ "**Preprocessing**", "_____no_output_____" ] ], [ [ "#Preprocessing the data\nclass PreProcessor:\n def __init__(self):\n #treating certain categorical columns as ordinal\n self.encoder={}\n self.encoder['LotShape']={'Reg':0,'IR1':1,'IR2':2,'IR3':3}\n self.encoder['LandSlope']={'Gtl':1, 'Mod':2, 'Sev':3}\n self.encoder['GarageFinish']={'Fin':3, 'RFn':2, 'Unf':1, 'VNA':0}\n self.encoder['BsmtExposure']={'Gd':4,'Av':3,'Mn':2,'No':1,'VNA':0}\n self.encoder['Functional']={'Typ':0,'Min1':1,'Min2':2,'Mod':3,'Maj1':4,'Maj2':5,'Sev':6,'Sal':7}\n self.encoder['PavedDrive']={'Y':2,'P':1,'N':0}\n #columns with values as Ex,Gd,TA,Fa,Po,VNA can be treated as ordinal\n ratings={'Ex':5,'Gd':4,'TA':3,'Fa':2,'Po':1,'VNA':0}\n rated_cols=['ExterQual', 'ExterCond','BsmtQual','BsmtCond','KitchenQual','FireplaceQu','GarageQual', 'GarageCond']\n for col in rated_cols:\n self.encoder[col]=ratings\n self.categorical_encoded=self.encoder.keys()\n \n \n \n def preprocessing1(self,df):\n #drop columns with mostly one value or mostly missing values\n 
dropped_cols=['Street', 'Alley', 'Utilities', 'Condition2', 'RoofMatl', 'Heating','LowQualFinSF', '3SsnPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature', 'MiscVal']\n df.drop(dropped_cols, axis=1, inplace=True)\n \n #treating missing values\n #Filling missing values with median\n col1=['LotFrontage','MasVnrArea']\n for col in col1:\n df[col].fillna(df[col].median(), inplace=True)\n #Fill missing values with new category \"VNA\"\n col2=['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','GarageType','GarageFinish','GarageQual','GarageCond','FireplaceQu','MasVnrType', 'Electrical']\n for col in col2:\n df[col].fillna('VNA', inplace=True)\n \n #Replacing Na values in GarageYrBlt with corresponding values in YearBuilt\n df.loc[(pd.isnull(df.GarageYrBlt)), 'GarageYrBlt'] = df.YearBuilt\n \n #encoding categorical columns to ordinal \n for col in self.categorical_encoded:\n df[col]=df[col].apply(lambda val: self.encoder[col][val])\n \n #apply lable encoder\n for col in df.select_dtypes(include=['object']).columns:\n df[col] = LabelEncoder().fit_transform(df[col])\n \n return df\n \n def preprocessing2(self,df):\n df=self.preprocessing1(df)\n \n #filtered columns\n numerical_filtered=['YearBuilt','TotRmsAbvGrd','GrLivArea','1stFlrSF','GarageYrBlt','YearRemodAdd','GarageArea','SalePrice']\n ordinal_filtered=['GarageCars','OverallQual','Fireplaces','GarageFinish','BsmtFullBath','KitchenQual','FullBath','FireplaceQu','BsmtQual','TotalBsmtSF']\n categorical_filtered=['MSZoning', 'Neighborhood', 'Foundation', 'BsmtFinType1', 'HeatingQC', 'CentralAir', 'GarageType', 'SaleCondition', 'MSSubClass', 'MasVnrType']\n \n return df[numerical_filtered+ordinal_filtered+categorical_filtered], numerical_filtered\n \n ", "_____no_output_____" ], [ "#create pre processor object\npre_processor=PreProcessor()\n\n#preprocess the data and get interval column\npreprocessed_df, interval_cols=pre_processor.preprocessing2(df)", "_____no_output_____" ] ], [ [ "**PhiK 
correlation**", "_____no_output_____" ] ], [ [ "# get the phi_k correlation matrix between all variables\ncoerr_mat=preprocessed_df.phik_matrix(interval_cols=interval_cols)\n\n#colour map\ncmap = sns.diverging_palette(220, 10, as_cmap=True)\n\n#plotting phik correlation\nplot_correlation_matrix(coerr_mat.values, x_labels=coerr_mat.columns, y_labels=coerr_mat.index, \n vmin=0, vmax=1, color_map=cmap, title=r'correlation $\\phi_K$', fontsize_factor=1,\n figsize=(7*3,5.5*3))\nplt.tight_layout()\nplt.show", "_____no_output_____" ] ], [ [ "**Finding highly correlated features based on above heat map and vizualizing it as a graph**", "_____no_output_____" ] ], [ [ "class GraphVisualization:\n \n def __init__(self):\n \n # visual is a list which stores all \n # the set of edges that constitutes a\n # graph\n self.visual = []\n \n # addEdge function inputs the vertices of an\n # edge and appends it to the visual list\n def addEdge(self, a, b):\n temp = [a, b]\n self.visual.append(temp)\n \n # In visualize function G is an object of\n # class Graph given by networkx G.add_edges_from(visual)\n # creates a graph with a given list\n # nx.draw_networkx(G) - plots the graph\n # plt.show() - displays the graph\n def visualize(self):\n G = nx.Graph()\n G.add_edges_from(self.visual)\n nx.draw_shell(G, alpha = 0.7, with_labels = True, edge_color ='.4', cmap = cmap, font_size=12 )\n plt.title(\"correlation vizualization as graph\")\n plt.style.use('ggplot')\n plt.figure(figsize=(8,5))\n plt.show()", "_____no_output_____" ], [ "G = GraphVisualization()\nfor col1 in preprocessed_df.columns:\n for col2 in preprocessed_df.columns:\n if col1!=col2:\n #if the correlation is greater than 0.9, add an edge to the graph\n if coerr_mat[col1][col2]>0.9:\n G.addEdge(col1,col2) \n\n\n\nG.visualize()", "_____no_output_____" ] ], [ [ "Based on graph plot using PhiK correlation ,following features are highly correlated:\n- GarageArea and GarageCars\n- GarageTrBlt and YearBuilt\n- Neighborhood and 
MSZoning\n- TotalBsmtSF is highly correlated with 1stFlrSF, SalePrice, BsmtQual, GrLivArea and Neighborhood\n", "_____no_output_____" ], [ "**Global PhiK Correlations** \nThis metric signifies how much a column is correlated with all other columns in the dataset ", "_____no_output_____" ] ], [ [ "# get global correlations based on phi_k correlation matrix\nglobal_coerr=preprocessed_df.global_phik(interval_cols=interval_cols)\n\n\n#plotting global phik correlation\nplot_correlation_matrix(global_coerr[0], x_labels=[\"correlation\"], y_labels=global_coerr[1],vmin=0, vmax=1, color_map=cmap, title=r'global correlation $\\phi_K$', fontsize_factor=1,figsize=(7*3,5.5*3))", "C:\\Users\\Aparna Sakshi\\AppData\\Roaming\\Python\\Python37\\site-packages\\phik\\phik.py:250: RuntimeWarning: invalid value encountered in sqrt\n global_correlations = np.array([[np.sqrt(1 - 1/(V[i][i] * Vinv[i][i]))] for i in range(V.shape[0])])\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cbb4f1bc51f3f9d7842e14fb1e45ee076679226c
2,150
ipynb
Jupyter Notebook
data_structures/linked_list/python/Linked List Traversal.ipynb
jeremie-gauthier/al-go-rithms
ca7410555704f949fd65bb4d99457546f3fe6c62
[ "CC0-1.0" ]
1,253
2017-06-06T07:19:25.000Z
2022-03-30T17:07:58.000Z
data_structures/linked_list/python/Linked List Traversal.ipynb
jeremie-gauthier/al-go-rithms
ca7410555704f949fd65bb4d99457546f3fe6c62
[ "CC0-1.0" ]
554
2017-09-29T18:56:01.000Z
2022-02-21T15:48:13.000Z
data_structures/linked_list/python/Linked List Traversal.ipynb
jeremie-gauthier/al-go-rithms
ca7410555704f949fd65bb4d99457546f3fe6c62
[ "CC0-1.0" ]
2,226
2017-09-29T19:59:59.000Z
2022-03-25T08:59:55.000Z
23.369565
73
0.473023
[ [ [ "# A simple Python program for traversal of a linked list \n \n# Node class \nclass Node: \n \n # Function to initialise the node object \n def __init__(self, data): \n self.data = data # Assign data \n self.next = None # Initialize next as null \n \n \n# Linked List class contains a Node object \nclass LinkedList: \n \n # Function to initialize head \n def __init__(self): \n self.head = None\n \n # This function prints contents of linked list \n # starting from head \n def printList(self): \n temp = self.head \n while (temp): \n print(temp.data) \n temp = temp.next\n \n \n# Code execution starts here \nif __name__=='__main__': \n \n # Start with the empty list \n llist = LinkedList() \n \n llist.head = Node(1) \n second = Node(2) \n third = Node(3) \n \n llist.head.next = second; # Link first node with second \n second.next = third; # Link second node with the third node \n \n llist.printList() ", "1\n2\n3\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
cbb4fb17d65f88e9d5fa70bec90ecf9aedd7232f
1,119
ipynb
Jupyter Notebook
PreFall2018/02-Python-and-Data/Play With Notebooks.ipynb
ecl95/LectureNotes
45e01baa3ca2adee7188d0dc666a7b76429924e7
[ "BSD-2-Clause" ]
103
2016-01-07T05:27:16.000Z
2022-02-18T03:56:41.000Z
PreFall2018/02-Python-and-Data/Play With Notebooks.ipynb
ecl95/LectureNotes
45e01baa3ca2adee7188d0dc666a7b76429924e7
[ "BSD-2-Clause" ]
4
2016-01-07T19:45:08.000Z
2020-05-05T21:46:51.000Z
PreFall2018/02-Python-and-Data/Play With Notebooks.ipynb
ecl95/LectureNotes
45e01baa3ca2adee7188d0dc666a7b76429924e7
[ "BSD-2-Clause" ]
103
2016-01-07T14:40:11.000Z
2020-09-09T06:05:30.000Z
15.760563
48
0.473637
[ [ [ "a = 3+9\n", "_____no_output_____" ], [ "a = 5", "_____no_output_____" ], [ "b = a + 4\nb", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbb5014f2db5d581a3ad2ec784f4a74d85b06f01
3,827
ipynb
Jupyter Notebook
Day_23_Bitwise_AND_on_Number_Range.ipynb
bundickm/LeetCode-30-Day-Challenge
fc51489818dad31d314faaa4152e2bf8b77d854c
[ "MIT" ]
null
null
null
Day_23_Bitwise_AND_on_Number_Range.ipynb
bundickm/LeetCode-30-Day-Challenge
fc51489818dad31d314faaa4152e2bf8b77d854c
[ "MIT" ]
null
null
null
Day_23_Bitwise_AND_on_Number_Range.ipynb
bundickm/LeetCode-30-Day-Challenge
fc51489818dad31d314faaa4152e2bf8b77d854c
[ "MIT" ]
null
null
null
24.690323
267
0.389078
[ [ [ "<a href=\"https://colab.research.google.com/github/bundickm/LeetCode-30-Day-Challenge/blob/master/Day_23_Bitwise_AND_on_Number_Range.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Problem\n\nGiven a range [m, n] where 0 <= m <= n <= 2147483647, return the bitwise AND of all numbers in this range, inclusive.\n\nExample 1:\n```\nInput: [5,7]\nOutput: 4\n```\nExample 2:\n```\nInput: [0,1]\nOutput: 0\n```", "_____no_output_____" ], [ "# Solution", "_____no_output_____" ] ], [ [ "def range_bitwise_and(m: int, n: int) -> int:\n if m == n:\n return m\n\n while(m < n):\n n -= (n & -n)\n return n", "_____no_output_____" ], [ "def range_bitwise_and(m: int, n: int) -> int:\n print('m start:', m)\n print('n start:', n)\n print('n binary:', bin(n))\n print()\n if m == n:\n return m\n\n while(m < n):\n print('n:', n)\n print('n binary:', bin(n))\n print('binary n & -n:', bin(n & -n))\n n -= (n & -n)\n print()\n return n", "_____no_output_____" ], [ "range_bitwise_and(5, 10)", "m start: 5\nn start: 10\nn binary: 0b1010\n\nn: 10\nn binary: 0b1010\nbinary n & -n: 0b10\n\nn: 8\nn binary: 0b1000\nbinary n & -n: 0b1000\n\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ] ]
cbb50dc4f9c21e9675346387442e97202de7a9e4
162,169
ipynb
Jupyter Notebook
matrix2_day2-visualisation.ipynb
izabeera/dw_matrix_car
f9cd443131017cff8df30367976f89e863734b76
[ "MIT" ]
null
null
null
matrix2_day2-visualisation.ipynb
izabeera/dw_matrix_car
f9cd443131017cff8df30367976f89e863734b76
[ "MIT" ]
null
null
null
matrix2_day2-visualisation.ipynb
izabeera/dw_matrix_car
f9cd443131017cff8df30367976f89e863734b76
[ "MIT" ]
null
null
null
162,169
162,169
0.927539
[ [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "cd '/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car/'", "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car\n" ], [ "df = pd.read_hdf('data/car.h5')\n", "_____no_output_____" ], [ "pip install --upgrade tables", "Collecting tables\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ed/c3/8fd9e3bb21872f9d69eb93b3014c86479864cca94e625fd03713ccacec80/tables-3.6.1-cp36-cp36m-manylinux1_x86_64.whl (4.3MB)\n\u001b[K |████████████████████████████████| 4.3MB 4.5MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: numexpr>=2.6.2 in /usr/local/lib/python3.6/dist-packages (from tables) (2.7.1)\nRequirement already satisfied, skipping upgrade: numpy>=1.9.3 in /usr/local/lib/python3.6/dist-packages (from tables) (1.17.5)\nInstalling collected packages: tables\n Found existing installation: tables 3.4.4\n Uninstalling tables-3.4.4:\n Successfully uninstalled tables-3.4.4\nSuccessfully installed tables-3.6.1\n" ], [ "df.columns.values", "_____no_output_____" ], [ "df['price_value'].hist(bins=100)", "_____no_output_____" ], [ "df['price_value'].describe()", "_____no_output_____" ], [ "df['param_marka-pojazdu'].unique()", "_____no_output_____" ], [ "df.groupby('param_marka-pojazdu')['price_value'].agg(np.mean)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "(\n df\n .groupby('param_marka-pojazdu')['price_value']\n .agg(np.mean)\n .sort_values(ascending=False)\n .head(50)\n .plot(kind='bar', figsize=(15,5))\n)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "def group_and_draw(feat_groupby, feat_agg='price_value', agg_func=[np.mean, np.median, np.size], feat_sort='mean', top=50, subplots=True):\n return ( df\n .groupby(feat_groupby)[feat_agg]\n .agg(agg_func)\n .sort_values(by=feat_sort, ascending=False)\n .head(top)\n 
.plot(kind='bar', figsize=(15,5), subplots=subplots)\n )\n\n", "_____no_output_____" ], [ "group_and_draw('param_marka-pojazdu');\n", "_____no_output_____" ], [ "group_and_draw('param_kraj-pochodzenia', feat_sort='size');\n", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "group_and_draw('param_kolor', feat_sort='mean');\n", "_____no_output_____" ], [ "ls\n", "\u001b[0m\u001b[01;34mdata\u001b[0m/ LICENSE README.md\n" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb5164b6db38a0651b73201332b155f6a13118e
33,743
ipynb
Jupyter Notebook
_posts/ipython-notebooks/ukelectionbbg.ipynb
artoonie/documentation
47fc868078bea66b2e44b6251aac418a913e3a58
[ "CC-BY-3.0" ]
1
2020-01-15T17:15:26.000Z
2020-01-15T17:15:26.000Z
_posts/ipython-notebooks/ukelectionbbg.ipynb
jamwine/documentation
b491dccd307db60c7e0ac66214c8d55c28310a60
[ "CC-BY-3.0" ]
15
2020-06-30T21:21:30.000Z
2021-08-02T21:16:33.000Z
_posts/ipython-notebooks/ukelectionbbg.ipynb
jamwine/documentation
b491dccd307db60c7e0ac66214c8d55c28310a60
[ "CC-BY-3.0" ]
1
2019-11-10T04:01:48.000Z
2019-11-10T04:01:48.000Z
40.752415
727
0.577216
[ [ [ "Author: Saeed Amen (@thalesians) - Managing Director & Co-founder of [the Thalesians](http://www.thalesians.com)", "_____no_output_____" ], [ "## Introduction", "_____no_output_____" ], [ "With the UK general election in early May 2015, we thought it would be a fun exercise to demonstrate how you can investigate market price action over historial elections. We shall be using Python, together with Plotly for plotting. Plotly is a free web-based platform for making graphs. You can keep graphs private, make them public, and run Plotly on your [Chart Studio Enterprise on your own servers](https://plot.ly/product/enterprise/). You can find more details [here](https://plot.ly/python/getting-started/).", "_____no_output_____" ], [ "## Getting market data with Bloomberg", "_____no_output_____" ], [ "To get market data, we shall be using Bloomberg. As a starting point, we have used bbg_py from [Brian Smith's TIA project](https://github.com/bpsmith/tia/tree/master/tia/bbg), which allows you to access Bloomberg via COM (older method), modifying it to make it compatible for Python 3.4. Whilst, we shall note use it to access historical daily data, there are functions which enable us to download intraday data. This method is only compatible with 32 bit versions of Python and assumes you are running the code on a Bloomberg terminal (it won't work without a valid Bloomberg licence).\n\nIn my opinion a better way to access Bloomberg via Python, is via the official Bloomberg open source Python Open Source Graphing Library, however, at time of writing the official version is not yet compatible with Python 3.4. Fil Mackay has created a Python 3.4 compatible version of this [here](https://github.com/filmackay/blpapi-py), which I have used successfully. 
Whilst it takes slightly more time to configure (and compile using Windows SDK 7.1), it has the benefit of being compatible with 64 bit Python, which I have found invaluable in my analysis (have a read of [this](http://ta.speot.is/2012/04/09/visual-studio-2010-sp1-windows-sdk-7-1-install-order/) in case of failed installations of Windows SDK 7.1).\n\nQuandl can be used as an alternative data source, if you don't have access to a Bloomberg terminal, which I have also included in the code.", "_____no_output_____" ], [ "## Breaking down the steps in Python", "_____no_output_____" ], [ "Our project will consist of several parts:\n- bbg_com - low level interaction with BBG COM object (adapted for Python 3.4) (which we are simply calling)\n- datadownloader - wrapper for BBG COM, Quandl and CSV access to data\n- eventplot - reusable functions for interacting with Plotly and creating event studies\n- ukelection - kicks off the whole script process", "_____no_output_____" ], [ "### Downloading the market data", "_____no_output_____" ], [ "As with any sort of financial market analysis, the first step is obtaining market data. We create the DataDownloader class, which acts as a wrapper for Bloomberg, Quandl and CSV market data. We write a single function \"download_time_series\" for this. We could of course extend this for other data sources such as Yahoo Finance. Our output will be Pandas based dataframes. 
We want to make this code generic, so the tickers are not hard coded.", "_____no_output_____" ] ], [ [ "# for time series manipulation\nimport pandas\n\nclass DataDownloader:\n    def download_time_series(self, vendor_ticker, pretty_ticker, start_date, source, csv_file = None):\n\n        if source == 'Quandl':\n            import Quandl\n            # Quandl requires API key for large number of daily downloads\n            # https://www.quandl.com/help/api\n            spot = Quandl.get(vendor_ticker) # Bank of England's database on Quandl\n            spot = pandas.DataFrame(data=spot['Value'], index=spot.index)\n            spot.columns = [pretty_ticker]\n\n        elif source == 'Bloomberg':\n            from bbg_com import HistoricalDataRequest\n            req = HistoricalDataRequest([vendor_ticker], ['PX_LAST'], start = start_date)\n            req.execute()\n\n            spot = req.response_as_single()\n            spot.columns = [pretty_ticker]\n        elif source == 'CSV':\n            dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')\n\n            # in case you want to use a source other than Bloomberg/Quandl\n            spot = pandas.read_csv(csv_file, index_col=0, parse_dates=0, date_parser=dateparse)\n\n        return spot", "_____no_output_____" ] ], [ [ "### Generic functions for event study and Plotly plotting", "_____no_output_____" ], [ "We now focus our efforts on the EventPlot class. Here we shall do our basic analysis. We shall also create functions for creating plotly traces and layouts that we shall reuse a number of times. The analysis we shall conduct is fairly simple. Given a time series of spot, and a number of dates, we shall create an event study around these times for that asset. 
We also include the \"Mean\" move over all the various dates.", "_____no_output_____" ] ], [ [ "# for dates\nimport datetime\n\n# time series manipulation\nimport pandas\n\n# for plotting data\nimport plotly\nfrom plotly.graph_objs import *\n\nclass EventPlot: \n def event_study(self, spot, dates, pre, post, mean_label = 'Mean'):\n # event_study - calculates the asset price moves over windows around event days\n #\n # spot = price of asset to study\n # dates = event days to anchor our event study\n # pre = days before the event day to start our study\n # post = days after the event day to start our study\n #\n\n data_frame = pandas.DataFrame()\n\n # for each date grab spot data the days before and after\n for i in range(0, len(dates)):\n mid_index = spot.index.searchsorted(dates[i])\n start_index = mid_index + pre\n finish_index = mid_index + post + 1\n\n x = (spot.ix[start_index:finish_index])[spot.columns.values[0]]\n\n data_frame[dates[i]] = x.values\n\n data_frame.index = range(pre, post + 1)\n\n data_frame = data_frame / data_frame.shift(1) - 1 # returns\n\n # add the mean on to the end\n data_frame[mean_label] = data_frame.mean(axis=1)\n\n data_frame = 100.0 * (1.0 + data_frame).cumprod() # index\n data_frame.ix[pre,:] = 100\n\n return data_frame", "_____no_output_____" ] ], [ [ "We write a function to convert dates represented in a string format to Python format.", "_____no_output_____" ] ], [ [ " def parse_dates(self, str_dates):\n # parse_dates - parses string dates into Python format\n #\n # str_dates = dates to be parsed in the format of day/month/year\n #\n\n dates = []\n\n for d in str_dates:\n dates.append(datetime.datetime.strptime(d, '%d/%m/%Y'))\n\n return dates\n \n EventPlot.parse_dates = parse_dates", "_____no_output_____" ] ], [ [ "Our next focus is on the Plotly functions which create a layout. This enables us to specify axes labels, the width and height of the final plot and so on. 
We could of course add further properties into it.", "_____no_output_____" ] ], [ [ " def create_layout(self, title, xaxis, yaxis, width = -1, height = -1):\n # create_layout - populates a layout object\n # title = title of the plot\n # xaxis = xaxis label\n # yaxis = yaxis label\n # width (optional) = width of plot\n # height (optional) = height of plot\n #\n\n layout = Layout(\n title = title,\n xaxis = plotly.graph_objs.XAxis(\n title = xaxis,\n showgrid = False\n ),\n yaxis = plotly.graph_objs.YAxis(\n title= yaxis,\n showline = False\n )\n )\n\n if width > 0 and height > 0:\n layout['width'] = width\n layout['height'] = height\n\n return layout\n \n EventPlot.create_layout = create_layout", "_____no_output_____" ] ], [ [ "Earlier, in the DataDownloader class, our output was Pandas based dataframes. Our convert_df_plotly function will convert these each series from Pandas dataframe into plotly traces. Along the way, we shall add various properties such as markers with varying levels of opacity, graduated coloring of lines (which uses colorlover) and so on.", "_____no_output_____" ] ], [ [ " def convert_df_plotly(self, dataframe, axis_no = 1, color_def = ['default'],\n special_line = 'Mean', showlegend = True, addmarker = False, gradcolor = None):\n # convert_df_plotly - converts a Pandas data frame to Plotly format for line plots\n # dataframe = data frame due to be converted\n # axis_no = axis for plot to be drawn (default = 1)\n # special_line = make lines named this extra thick\n # color_def = color scheme to be used (default = ['default']), colour will alternate in the list\n # showlegend = True or False to show legend of this line on plot\n # addmarker = True or False to add markers\n # gradcolor = Create a graduated color scheme for the lines\n #\n # Also see http://nbviewer.ipython.org/gist/nipunreddevil/7734529 for converting dataframe to traces\n # Also see http://moderndata.plot.ly/color-scales-in-ipython-notebook/\n\n x = dataframe.index.values\n\n 
traces = []\n\n # will be used for market opacity for the markers\n increments = 0.95 / float(len(dataframe.columns))\n\n if gradcolor is not None:\n try:\n import colorlover as cl\n color_def = cl.scales[str(len(dataframe.columns))]['seq'][gradcolor]\n except:\n print('Check colorlover installation...')\n\n i = 0\n\n for key in dataframe:\n scatter = plotly.graph_objs.Scatter(\n x = x,\n y = dataframe[key].values,\n name = key,\n xaxis = 'x' + str(axis_no),\n yaxis = 'y' + str(axis_no),\n showlegend = showlegend)\n\n # only apply color/marker properties if not \"default\"\n if color_def[i % len(color_def)] != \"default\":\n if special_line in str(key):\n # special case for lines labelled \"mean\"\n # make line thicker\n scatter['mode'] = 'lines'\n scatter['line'] = plotly.graph_objs.Line(\n color = color_def[i % len(color_def)],\n width = 2\n )\n else:\n line_width = 1\n\n # set properties for the markers which change opacity\n # for markers make lines thinner\n if addmarker:\n opacity = 0.05 + (increments * i)\n scatter['mode'] = 'markers+lines'\n scatter['marker'] = plotly.graph_objs.Marker(\n color=color_def[i % len(color_def)], # marker color\n opacity = opacity,\n size = 5)\n line_width = 0.2\n\n else:\n scatter['mode'] = 'lines'\n\n scatter['line'] = plotly.graph_objs.Line(\n color = color_def[i % len(color_def)],\n width = line_width)\n \n i = i + 1\n\n traces.append(scatter)\n\n return traces\n \n EventPlot.convert_df_plotly = convert_df_plotly", "_____no_output_____" ] ], [ [ "### UK election analysis", "_____no_output_____" ], [ "We've now created several generic functions for downloading data, doing an event study and also for helping us out with plotting via Plotly. We now start work on the ukelection.py script, for pulling it all together. 
As a very first step we need to provide credentials for Plotly (you can get your own Plotly key and username [here](https://plot.ly/python/getting-started/)).", "_____no_output_____" ] ], [ [ "# for time series/maths\nimport pandas\n\n# for plotting data\nimport plotly\nimport plotly.plotly as py\nfrom plotly.graph_objs import *\n\ndef ukelection(): \n # Learn about API authentication here: https://plot.ly/python/getting-started\n # Find your api_key here: https://plot.ly/settings/api\n plotly_username = \"thalesians\"\n plotly_api_key = \"XXXXXXXXX\"\n\n plotly.tools.set_credentials_file(username=plotly_username, api_key=plotly_api_key)", "_____no_output_____" ] ], [ [ "Let's download our market data that we need (GBP/USD spot data) using the DataDownloader class. As a default, I've opted to use Bloomberg data. You can try other currency pairs or markets (for example FTSE), to compare results for the event study. Note that obviously each data vendor will have a different ticker in their system for what could well be the same asset. With FX, care must be taken to know which close the vendor is snapping. 
As a default we have opted for BGN, which for GBP/USD is the NY close value.", "_____no_output_____" ] ], [ [ "    ticker = 'GBPUSD' # will use in plot titles later (and for creating Plotly URL)\n\n    ##### download market GBP/USD data from Quandl, Bloomberg or CSV file\n    source = \"Bloomberg\"\n    # source = \"Quandl\"\n    # source = \"CSV\"\n\n    csv_file = None\n\n    event_plot = EventPlot()\n    \n    data_downloader = DataDownloader()\n    start_date = event_plot.parse_dates(['01/01/1975'])\n\n    if source == 'Quandl':\n        vendor_ticker = \"BOE/XUDLUSS\"\n    elif source == 'Bloomberg':\n        vendor_ticker = 'GBPUSD BGN Curncy'\n    elif source == 'CSV':\n        vendor_ticker = 'GBPUSD'\n        csv_file = 'D:/GBPUSD.csv'\n\n    spot = data_downloader.download_time_series(vendor_ticker, ticker, start_date[0], source, csv_file = csv_file)", "_____no_output_____" ] ], [ [ "The most important part of the study is getting the historical UK election dates! We can obtain these from Wikipedia. We then convert these into Python format. We need to make sure we filter the UK election dates for those where we have spot data available.", "_____no_output_____" ] ], [ [ "    labour_wins = ['28/02/1974', '10/10/1974', '01/05/1997', '07/06/2001', '05/05/2005']\n    conservative_wins = ['03/05/1979', '09/06/1983', '11/06/1987', '09/04/1992', '06/05/2010']\n\n    # convert to more easily readable format\n    labour_wins_d = event_plot.parse_dates(labour_wins)\n    conservative_wins_d = event_plot.parse_dates(conservative_wins)\n\n    # only takes those elections where we have data\n    labour_wins_d = [d for d in labour_wins_d if d > spot.index[0].to_pydatetime()]\n    conservative_wins_d = [d for d in conservative_wins_d if d > spot.index[0].to_pydatetime()]\n\n    spot.index.name = 'Date'", "_____no_output_____" ] ], [ [ "We then call our event study function in EventPlot on our spot data, which comprises the 20 days before up to the 20 days after the UK general election. 
We shall plot these lines later.", "_____no_output_____" ] ], [ [ " # number of days before and after for our event study\n pre = -20\n post = 20\n\n # calculate spot path during Labour wins\n labour_wins_spot = event_plot.event_study(spot, labour_wins_d, pre, post, mean_label = 'Labour Mean')\n\n # calculate spot path during Conservative wins\n conservative_wins_spot = event_plot.event_study(spot, conservative_wins_d, pre, post, mean_label = 'Conservative Mean')", "_____no_output_____" ] ], [ [ "Define our xaxis and yaxis labels, as well as our source, which we shall later include in the title.", "_____no_output_____" ] ], [ [ " ##### Create separate plots of price action during Labour and Conservative wins\n xaxis = 'Days'\n yaxis = 'Index'\n source_label = \"Source: @thalesians/BBG/Wikipedia\"", "_____no_output_____" ] ], [ [ "We're finally ready for our first plot! We shall plot GBP/USD moves over Labour election wins, using the default palette and then we shall embed it into the sheet, using the URL given to us from the Plotly website.", "_____no_output_____" ] ], [ [ " ###### Plot market reaction during Labour UK election wins\n ###### Using default color scheme\n\n title = ticker + ' during UK gen elect - Lab wins' + '<BR>' + source_label\n\n fig = Figure(data=event_plot.convert_df_plotly(labour_wins_spot),\n layout=event_plot.create_layout(title, xaxis, yaxis)\n )\n\n py.iplot(fig, filename='labour-wins-' + ticker)", "_____no_output_____" ] ], [ [ "The \"iplot\" function will send it to Plotly's server (provided we have all the dependencies installed).", "_____no_output_____" ], [ "Alternatively, we could embed the HTML as an image, which we have taken from the Plotly website. Note this approach will yield a static image which is fetched from Plotly's servers. It also possible to write the image to disk. 
Later we shall show the embed function.", "_____no_output_____" ], [ "<div>\n    <a href=\"https://plot.ly/~thalesians/244/\" target=\"_blank\" title=\"GBPUSD during UK gen elect - Lab wins&lt;br&gt;Source: @thalesians/BBG/Wikipedia\" style=\"display: block; text-align: center;\"><img src=\"https://plot.ly/~thalesians/244.png\" alt=\"GBPUSD during UK gen elect - Lab wins&lt;br&gt;Source: @thalesians/BBG/Wikipedia\" style=\"max-width: 100%;\" onerror=\"this.onerror=null;this.src='https://plot.ly/404.png';\" /></a>\n    <script data-plotly=\"thalesians:244\" src=\"https://plot.ly/embed.js\" async></script>\n</div>\n", "_____no_output_____" ], [ "We next plot GBP/USD over Conservative wins. In this instance, however, we have a graduated 'Blues' color scheme, given obviously that blue is the color of the Conservative party in the UK!", "_____no_output_____" ] ], [ [ "    ###### Plot market reaction during Conservative UK election wins\n    ###### Using varying shades of blue for each line (helped by colorlover library)\n\n    title = ticker + ' during UK gen elect - Con wins ' + '<BR>' + source_label\n\n    # also apply graduated color scheme of blues (from light to dark)\n    # see http://moderndata.plot.ly/color-scales-in-ipython-notebook/ for details on colorlover package\n    # which allows you to set scales\n    fig = Figure(data=event_plot.convert_df_plotly(conservative_wins_spot, gradcolor='Blues', addmarker=False),\n                 layout=event_plot.create_layout(title, xaxis, yaxis),\n                 )\n\n    plot_url = py.iplot(fig, filename='conservative-wins-' + ticker)", "_____no_output_____" ] ], [ [ "Embed the chart into the document using \"embed\". This essentially embeds the JavaScript code necessary to make it interactive.", "_____no_output_____" ] ], [ [ "import plotly.tools as tls\n\ntls.embed(\"https://plot.ly/~thalesians/245\")", "_____no_output_____" ] ], [ [ "Our final plot will consist of three subplots: Labour wins, Conservative wins, and average moves for both. 
We also add a grid and a grey background for each plot.", "_____no_output_____" ] ], [ [ "    ##### Plot market reaction during Conservative UK election wins\n    ##### create a plot consisting of 3 subplots (from left to right)\n    ##### 1. Labour wins, 2. Conservative wins, 3. Conservative/Labour mean move\n\n    # create a dataframe which grabs the mean from the respective Lab & Con election wins\n    mean_wins_spot = pandas.DataFrame()\n    mean_wins_spot['Labour Mean'] = labour_wins_spot['Labour Mean']\n    mean_wins_spot['Conservative Mean'] = conservative_wins_spot['Conservative Mean']\n\n    fig = plotly.tools.make_subplots(rows=1, cols=3)\n\n    # apply different color scheme (red = Lab, blue = Con)\n    # also add markers, which will have varying levels of opacity\n    fig['data'] += Data(\n        event_plot.convert_df_plotly(conservative_wins_spot, axis_no=1, \n            color_def=['blue'], addmarker=True) +\n        event_plot.convert_df_plotly(labour_wins_spot, axis_no=2, \n            color_def=['red'], addmarker=True) +\n        event_plot.convert_df_plotly(mean_wins_spot, axis_no=3, \n            color_def=['red', 'blue'], addmarker=True, showlegend = False)\n        )\n    \n    fig['layout'].update(title=ticker + ' during UK gen elects by winning party ' + '<BR>' + source_label)\n\n    # use the scheme from https://plot.ly/python/bubble-charts-tutorial/\n    # can use dict approach, rather than specifying each separately\n    axis_style = dict(\n        gridcolor='#FFFFFF',  # white grid lines\n        ticks='outside',      # draw ticks outside axes\n        ticklen=8,            # tick length\n        tickwidth=1.5         # and width\n        )\n\n    # create the various axes for the three separate charts\n    fig['layout'].update(xaxis1=plotly.graph_objs.XAxis(axis_style, title=xaxis))\n    fig['layout'].update(yaxis1=plotly.graph_objs.YAxis(axis_style, title=yaxis))\n\n    fig['layout'].update(xaxis2=plotly.graph_objs.XAxis(axis_style, title=xaxis))\n    fig['layout'].update(yaxis2=plotly.graph_objs.YAxis(axis_style))\n\n    fig['layout'].update(xaxis3=plotly.graph_objs.XAxis(axis_style, title=xaxis))\n    
fig['layout'].update(yaxis3=plotly.graph_objs.YAxis(axis_style))\n\n    fig['layout'].update(plot_bgcolor='#EFECEA')  # set plot background to grey\n\n    plot_url = py.iplot(fig, filename='labour-conservative-wins-'+ ticker + '-subplot')", "This is the format of your plot grid:\n[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]  [ (1,3) x3,y3 ]\n\n" ] ], [ [ "This time we use \"embed\", which grabs the plot from Plotly's server, as we did earlier (given we have already uploaded it).", "_____no_output_____" ] ], [ [ "import plotly.tools as tls\n\ntls.embed(\"https://plot.ly/~thalesians/246\")", "_____no_output_____" ] ], [ [ "<B>That's about it!</B> I hope the code I've written proves fruitful for creating some very cool Plotly plots and also for doing some very timely analysis ahead of the UK general election! Hoping this will be the first of many blogs on using Plotly data.", "_____no_output_____" ], [ "The analysis in this blog is based on a report I wrote for Thalesians, a quant finance thinktank. If you are interested in getting access to the full copy of the report (Thalesians: My kingdom for a vote - The definitive quant guide to UK general elections), feel free to e-mail me at <b>[email protected]</b> or tweet me <b>@thalesians</b>", "_____no_output_____" ], [ "## Want to hear more about global macro and UK election developments?", "_____no_output_____" ], [ "If you're interested in FX and the UK general election, come to our Thalesians panel in London on April 29th 2015 at 7.30pm in Canary Wharf, which will feature Eric Burroughs (Reuters - FX Buzz Editor), Mark Cudmore (Bloomberg - First Word EM Strategist), Jordan Rochester (Nomura - FX strategist), Jeremy Wilkinson-Smith (Independent FX trader) and myself as the moderator. Tickets are available [here](http://www.meetup.com/thalesians/events/221147156/)", "_____no_output_____" ], [ "## Biography", "_____no_output_____" ], [ "<b>Saeed Amen</b> is the managing director and co-founder of the Thalesians. 
He has a decade of experience creating and successfully running systematic trading models at Lehman Brothers, Nomura and now at the Thalesians. Independently, he runs a systematic trading model with proprietary capital. He is the author of Trading Thalesians – What the ancient world can teach us about trading today (Palgrave Macmillan). He graduated with a first class honours master’s degree from Imperial College in Mathematics & Computer Science. He is also a fan of Python and has written an extensive library for financial market backtesting called PyThalesians.\n<BR>\n\nFollow the Thalesians on Twitter @thalesians and get my book on Amazon [here](http://www.amazon.co.uk/Trading-Thalesians-Saeed-Amen/dp/113739952X)", "_____no_output_____" ], [ "All the code here is available to download from the [Thalesians GitHub page](https://github.com/thalesians/pythalesians)", "_____no_output_____" ] ], [ [ "from IPython.display import display, HTML\n\ndisplay(HTML('<link href=\"//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700\" rel=\"stylesheet\" type=\"text/css\" />'))\ndisplay(HTML('<link rel=\"stylesheet\" type=\"text/css\" href=\"http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css\">'))\n\n! pip install publisher --upgrade\nimport publisher\npublisher.publish(\n 'ukelectionbbg.ipynb', 'ipython-notebooks/ukelectionbbg/', 'Plotting GBP/USD price action around UK general elections', \n 'Create interactive graphs with market data, IPython Notebook and Plotly', name='Plot MP Action in GBP/USD around UK General Elections')", "Requirement already up-to-date: publisher in /Users/chriddyp/Repos/venvpy27/lib/python2.7/site-packages/publisher-0.4-py2.7.egg\r\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
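The `event_study` call in the notebook above comes from the author's own EventPlot/PyThalesians tooling, whose implementation isn't shown here. A minimal sketch of the underlying idea — take a window of spot data around each event date, rebase each path to 100 on the event day, and average across events — might look like this (the function body, rebasing convention and column layout are assumptions for illustration, not the author's exact code):

```python
import numpy as np
import pandas as pd

def event_study(spot, event_dates, pre, post, mean_label="Mean"):
    """Align windows of `spot` around each event date, rebase each window
    to 100 on the event day, and append a cross-event mean column."""
    paths = {}
    for d in event_dates:
        # integer position of the first trading day on/after the event
        idx = spot.index.searchsorted(d)
        window = spot.iloc[idx + pre: idx + post + 1].to_numpy(dtype=float)
        paths[d] = 100.0 * window / window[-pre]  # event day rebased to 100
    out = pd.DataFrame(paths, index=range(pre, post + 1))
    out[mean_label] = out.mean(axis=1)  # average path across all events
    return out
```

With real GBP/USD data and the election dates above, each column would be one election's rebased path and the mean column the average move that the charts plot.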
cbb5194cfe1df82a799a22781bfa9e21cecc3039
60,365
ipynb
Jupyter Notebook
HURDAT_JRDISCHG- Michael 2018.ipynb
williampc8985/VT-JamesRiver
6bacd10f4fd6158db74973ddc1abd89b650efc9f
[ "MIT" ]
null
null
null
HURDAT_JRDISCHG- Michael 2018.ipynb
williampc8985/VT-JamesRiver
6bacd10f4fd6158db74973ddc1abd89b650efc9f
[ "MIT" ]
null
null
null
HURDAT_JRDISCHG- Michael 2018.ipynb
williampc8985/VT-JamesRiver
6bacd10f4fd6158db74973ddc1abd89b650efc9f
[ "MIT" ]
null
null
null
154.782051
50,124
0.889787
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "#This is the Richmond USGS Data gage\nriver_richmnd = pd.read_csv('JR_Richmond02037500.csv')", "/Users/williampc/opt/anaconda3/envs/geop/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3165: DtypeWarning: Columns (7) have mixed types.Specify dtype option on import or set low_memory=False.\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n" ], [ "river_richmnd.dropna();", "_____no_output_____" ], [ "#Hurricane data for the basin - Names of Relevant Storms - This will be used for getting the storms from the larger set\nJR_stormnames = pd.read_csv('gis_match.csv')\n", "_____no_output_____" ], [ "# Bring in the Big HURDAT data, from 1950 forward (satellites and data quality, etc.)\nHURDAT = pd.read_csv('hurdatcleanva_1950_present.csv')\n", "_____no_output_____" ], [ "VA_JR_stormmatch = JR_stormnames.merge(HURDAT)\n", "_____no_output_____" ], [ "# Now the common storms for the James Basin have been created. We now have time and storms together for the basin\n#checking some things about the data", "_____no_output_____" ], [ "# How many unique storms within the basin since 1950? 62 here and 53 in the Data on the Coast.NOAA.gov's website. 
\n#I think we are close enough here, digging may show some other storms, but I think we have at least captured the ones \n#from NOAA\nlen(VA_JR_stormmatch['Storm Number'].unique());", "_____no_output_____" ], [ "#double check the lat and long parameters\nprint(VA_JR_stormmatch['Lat'].min(),\nVA_JR_stormmatch['Lon'].min(),\nVA_JR_stormmatch['Lat'].max(),\nVA_JR_stormmatch['Lon'].max())", "36.1 -83.7 39.9 -75.1\n" ], [ "#Make a csv of this data\nVA_JR_stormmatch.to_csv('storms_in_basin.csv', sep=',',encoding = 'utf-8')", "_____no_output_____" ], [ "#names of storms \nlen(VA_JR_stormmatch['Storm Number'].unique())\nVA_JR_stormmatch['Storm Number'].unique()\nnumbers = VA_JR_stormmatch['Storm Number']", "_____no_output_____" ], [ "#grab a storm from this list and look at the times\n#Bill = pd.DataFrame(VA_JR_stormmatch['Storm Number'=='AL032003'])\n\nstorm = VA_JR_stormmatch[(VA_JR_stormmatch[\"Storm Number\"] == 'AL142018')]\nstorm\n#so this is the data for a storm named Bill that had a path through the basin * BILL WAS A BACKDOOR Storm\n\n", "_____no_output_____" ], [ "# plotting for the USGS river Gage data \nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom climata.usgs import DailyValueIO\nfrom datetime import datetime\nfrom pandas.plotting import register_matplotlib_converters\nimport numpy as np\n\nregister_matplotlib_converters()\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (20.0, 10.0)\n# set parameters\nnyears = 1\nndays = 365 * nyears\nstation_id = \"02037500\"\nparam_id = \"00060\"\n\ndatelist = pd.date_range(end=datetime.today(), periods=ndays).tolist()\n#take an annual average for the river\nannual_data = DailyValueIO(\n    start_date=\"2018-01-01\",\n    end_date=\"2019-01-01\",\n    station=station_id,\n    parameter=param_id,)\nfor series in annual_data:\n    flow = [r[1] for r in series.data]\n    si_flow_annual = np.asarray(flow) * 0.0283168\n    flow_mean = np.mean(si_flow_annual)\n\n#now for the storm \ndischg = DailyValueIO(\n    
start_date=\"2018-10-09\",\n end_date=\"2018-10-25\",\n station=station_id,\n parameter=param_id,)\n#create lists of date-flow values\nfor series in dischg:\n flow = [r[1] for r in series.data]\n si_flow = np.asarray(flow) * 0.0283168\n dates = [r[0] for r in series.data]\nplt.plot(dates, si_flow)\nplt.axhline(y=flow_mean, color='r', linestyle='-')\nplt.xlabel('Date')\nplt.ylabel('Discharge (m^3/s)')\nplt.title(\"EX Michael - 2018 (Gulf)\")\nplt.xticks(rotation='vertical')\nplt.show()", "_____no_output_____" ], [ "max(si_flow)", "_____no_output_____" ], [ "percent_incr= (abs(max(si_flow)-flow_mean)/abs(flow_mean))*100\npercent_incr", "_____no_output_____" ], [ "#take an annual average for the river\nannual_data = DailyValueIO(\n start_date=\"2018-03-01\",\n end_date=\"2018-10-01\",\n station=station_id,\n parameter=param_id,)\nfor series in annual_data:\n flow = [r[1] for r in series.data]\n si_flow_annual = np.asarray(flow) * 0.0283168\n flow_mean_season = np.mean(si_flow_annual)\nprint(abs(flow_mean-flow_mean_season))", "49.14453870001279\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
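The discharge analysis in the notebook above converts USGS flows from cubic feet per second to cubic metres per second with the factor 0.0283168 and then computes the storm peak's percent increase over the annual mean, as in `percent_incr = (abs(max(si_flow)-flow_mean)/abs(flow_mean))*100`. That arithmetic can be isolated into a small self-contained helper (synthetic flow values are used here, since the climata download requires network access; the function name is illustrative):

```python
import numpy as np

CFS_TO_CMS = 0.0283168  # cubic feet per second -> cubic metres per second

def peak_increase_pct(storm_flow_cfs, annual_flow_cfs):
    """Percent increase of the storm-period peak discharge over the
    annual mean discharge, with both series converted to SI units first."""
    storm_si = np.asarray(storm_flow_cfs, dtype=float) * CFS_TO_CMS
    annual_si = np.asarray(annual_flow_cfs, dtype=float) * CFS_TO_CMS
    annual_mean = annual_si.mean()
    return abs(storm_si.max() - annual_mean) / abs(annual_mean) * 100.0
```

Because the unit conversion is a constant multiplier, it cancels out of the ratio — the percent increase is the same whether computed in ft³/s or m³/s.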
cbb51b543d0cbb7d1af85b4b6056ae29adae66ec
1,338
ipynb
Jupyter Notebook
fun_coding/codility/4_PermCheck(Counting Elements).ipynb
kangyeolyoun2/python-coding-practice-folder
f7808b47d0ead3e273008bd5641b5b90fc4173fb
[ "MIT" ]
null
null
null
fun_coding/codility/4_PermCheck(Counting Elements).ipynb
kangyeolyoun2/python-coding-practice-folder
f7808b47d0ead3e273008bd5641b5b90fc4173fb
[ "MIT" ]
null
null
null
fun_coding/codility/4_PermCheck(Counting Elements).ipynb
kangyeolyoun2/python-coding-practice-folder
f7808b47d0ead3e273008bd5641b5b90fc4173fb
[ "MIT" ]
null
null
null
17.605263
56
0.457399
[ [ [ "<img src=\"./imgs/4_PermCheck.png\" width=\"80%\">", "_____no_output_____" ] ], [ [ "#O(N) or O(N * log(N)) 1st try\ndef solution(A):\n A.sort()\n B = [i+1 for i in range(len(A))]\n if A == B:\n return 1\n else:\n return 0", "_____no_output_____" ], [ "solution([4,1,3,2])", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ] ]
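The PermCheck solution in the notebook above sorts the array, which is O(N log N). The same check can be done in a single O(N) pass with an occurrence table — a sketch of that alternative:

```python
def solution(A):
    """Return 1 if A is a permutation of 1..len(A), else 0 — one O(N) pass."""
    n = len(A)
    seen = [False] * (n + 1)
    for x in A:
        if x < 1 or x > n or seen[x]:
            return 0  # out of range or duplicate: cannot be a permutation
        seen[x] = True
    return 1
```

This trades O(N) extra memory for the sort, and also exits early on the first out-of-range value or duplicate.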
cbb525791829fccae6d25e110d641a7e986f0d31
19,017
ipynb
Jupyter Notebook
sagemaker-debugger/tensorflow_keras_custom_rule/tf-keras-custom-rule.ipynb
hephaex/amazon-sagemaker-examples
8b148dfdeb169ef3240eee6058afb5a231520c26
[ "Apache-2.0" ]
2
2019-12-23T04:09:33.000Z
2021-02-16T21:20:56.000Z
sagemaker-debugger/tensorflow_keras_custom_rule/tf-keras-custom-rule.ipynb
jackie930/amazon-sagemaker-examples
dcaf38059d4d30b32f44a8daf4a037476a21d25f
[ "Apache-2.0" ]
3
2020-09-26T01:22:59.000Z
2022-02-10T02:17:11.000Z
sagemaker-debugger/tensorflow_keras_custom_rule/tf-keras-custom-rule.ipynb
jackie930/amazon-sagemaker-examples
dcaf38059d4d30b32f44a8daf4a037476a21d25f
[ "Apache-2.0" ]
1
2019-12-17T10:39:13.000Z
2019-12-17T10:39:13.000Z
60.951923
1,037
0.695588
[ [ [ "# Amazon SageMaker - Debugging with custom rules\n[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a managed platform to build, train and host machine learning models. Amazon SageMaker Debugger is a new feature which offers the capability to debug machine learning models during training by identifying and detecting problems with the models in near real-time. \n\nIn this notebook, we'll show you how to use a custom rule to monitor your training job, all through a tf.keras ResNet example.\n\n## How does Amazon SageMaker Debugger work?\n\nAmazon SageMaker Debugger lets you go beyond just looking at scalars like losses and accuracies during training and gives you full visibility into all tensors 'flowing through the graph' during training. Furthermore, it helps you monitor your training in near real-time using rules and provides you with alerts once it has detected an inconsistency in the training flow.\n\n### Concepts\n* **Tensors**: These represent the state of the training network at intermediate points during its execution\n* **Debug Hook**: Hook is the construct with which Amazon SageMaker Debugger looks into the training process and captures the tensors requested at the desired step intervals\n* **Rule**: A logical construct, implemented as Python code, which helps analyze the tensors captured by the hook and report anomalies, if any\n\nWith these concepts in mind, let's understand the overall flow of things that Amazon SageMaker Debugger uses to orchestrate debugging.\n\n### Saving tensors during training\n\nThe tensors captured by the debug hook are stored in the S3 location specified by you. 
There are two ways you can configure Amazon SageMaker Debugger to save tensors:\n\n#### With no changes to your training script\nIf you use one of the Amazon SageMaker provided [Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) for 1.15, then you don't need to make any changes to your training script for the tensors to be stored. Amazon SageMaker Debugger will use the configuration you provide through the Amazon SageMaker SDK's TensorFlow `Estimator` when creating your job to save the tensors in the fashion you specify. You can review the script we are going to use at [src/tf_keras_resnet_zerocodechange.py](src/tf_keras_resnet_zerocodechange.py). You will note that this is an untouched TensorFlow Keras script which uses the `tf.keras` interface. Please note that Amazon SageMaker Debugger only supports `tf.keras`, `tf.estimator` and `tf.MonitoredSession` interfaces for the zero script change experience. Full description of support is available at [Amazon SageMaker Debugger with TensorFlow](https://github.com/awslabs/sagemaker-debugger/tree/master/docs/tensorflow.md).\n\n#### Orchestrating your script to store tensors\nFor other containers, you need to make a couple of lines of changes to your training script. Amazon SageMaker Debugger exposes a library `smdebug` which allows you to capture these tensors and save them for analysis. It's highly customizable and allows you to save the specific tensors you want at different frequencies and possibly with other configurations. Refer to the [DeveloperGuide](https://github.com/awslabs/sagemaker-debugger/tree/master/docs) for details on how to use the Amazon SageMaker Debugger library with your choice of framework in your training script. Here we have an example script orchestrated at [src/tf_keras_resnet_byoc.py](src/tf_keras_resnet_byoc.py). 
In addition to this, you will need to ensure that your container has the `smdebug` library installed in this case, and specify your container image URI when creating the SageMaker Estimator below. Please refer to the [SageMaker Documentation](https://sagemaker.readthedocs.io/en/stable/sagemaker.tensorflow.html) on how to do that.\n\n### Analysis of tensors\n\nAmazon SageMaker Debugger can be configured to run debugging ***Rules*** on the tensors saved from the training job. At a very broad level, a rule is Python code used to detect certain conditions during training. Some of the conditions that a data scientist training an algorithm may care about are monitoring for gradients getting too large or too small, detecting overfitting, and so on. Amazon SageMaker Debugger comes pre-packaged with certain built-in rules. Users can write their own rules using the APIs provided by Amazon SageMaker Debugger through the `smdebug` library. You can also analyze raw tensor data outside of the Rules construct in, say, a SageMaker notebook, using these APIs. Please refer to the [Analysis Developer Guide](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md) for more on these APIs.\n", "_____no_output_____" ], [ "## Training TensorFlow Keras models with Amazon SageMaker Debugger\n\n### Amazon SageMaker TensorFlow as a framework\n\nTrain a TensorFlow Keras model in this notebook with Amazon SageMaker Debugger enabled and monitor the training jobs with rules. 
This is done using Amazon SageMaker [TensorFlow 1.15.0](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) Container as a framework.", "_____no_output_____" ] ], [ [ "import boto3\nimport os\nimport sagemaker\nfrom sagemaker.tensorflow import TensorFlow", "_____no_output_____" ] ], [ [ "Import the libraries needed for the demo of Amazon SageMaker Debugger.", "_____no_output_____" ] ], [ [ "from sagemaker.debugger import Rule, DebuggerHookConfig, TensorBoardOutputConfig, CollectionConfig\nimport smdebug_rulesconfig as rule_configs", "_____no_output_____" ] ], [ [ "Now define the entry point for the training script.", "_____no_output_____" ] ], [ [ "# define the entrypoint script\nentrypoint_script='src/tf_keras_resnet_zerocodechange.py'", "_____no_output_____" ] ], [ [ "### Setting up the Estimator\n\nNow it's time to set up our SageMaker TensorFlow Estimator. There are new parameters with the estimator to enable your training job for debugging through Amazon SageMaker Debugger. These new parameters are explained below.\n\n* **debugger_hook_config**: This new parameter accepts a local path where you wish your tensors to be written to and also accepts the S3 URI where you wish your tensors to be uploaded to. It also accepts CollectionConfigurations which specify which tensors will be saved from the training job.\n* **rules**: This new parameter will accept a list of rules you wish to evaluate against the tensors output by this training job.\n\nAmazon SageMaker Debugger supports two types of rules:\n* **Amazon SageMaker Rules**: These are rules curated by the Amazon SageMaker team and you can choose to evaluate them against your training job.\n* **Custom Rules**: You can optionally choose to write your own rule as a Python source file and have it evaluated against your training job. 
To have SageMaker Debugger evaluate this rule, you have to provide the S3 location of the rule source and the evaluator image.\n\n#### Creating your own custom rule\n\nLet us look at how you can create your custom rule briefly before proceeding to use it with your training job. Please see the [documentation](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md) to learn more about structuring your rules and other related concepts.\n\n##### **Summary of what the custom rule evaluates**\nFor demonstration purposes, below is a rule that tries to track whether gradients are too large. The custom rule looks at the tensors in the collection \"gradients\" saved during training and attempts to get the absolute value of the gradients in each step of the training. If the mean of the absolute values of gradients in any step is greater than a specified threshold, the rule is marked as 'triggering'. Let us look at how to structure the rule source.\n\nAny custom rule logic you want to be evaluated should extend the `Rule` interface provided by Amazon SageMaker Debugger:\n\n```python\nfrom smdebug.rules.rule import Rule\n\nclass CustomGradientRule(Rule):\n```\n\nNow implement the class methods for the rule. Doing this allows Amazon SageMaker to understand the intent of the rule and evaluate it against your training tensors.\n\n##### Rule class constructor\n\nIn order for Amazon SageMaker to instantiate your rule, your rule class constructor must conform to the following signature.\n```python\n    def __init__(self, base_trial, other_trials, <other parameters>)\n```\n###### Arguments\n- `base_trial (Trial)`: This defines the primary [Trial](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md#trial) that your rule is anchored to. This is an object of class type `Trial`.\n\n- `other_trials (list[Trial])`: *(Optional)* This defines a list of 'other' trials you want your rule to look at. 
This is useful in scenarios where you're comparing tensors from the base_trial to tensors from some other trials. \n\n- `<other parameters>`: This is similar to `**kwargs` where you can pass in however many string parameters in your constructor signature. Note that SageMaker would only be able to support supplying string types for these values at runtime (see how, later).\n\n##### Defining the rule logic to be invoked at each step:\n\nThis defines the logic to be invoked for each step. Essentially, this is where you decide whether the rule should trigger or not. In this case, you're concerned about the gradients getting too large. So, get the [tensor reduction](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md#reduction_value) \"mean\" for each step and see if its value is larger than a threshold.\n\n```python\n    def invoke_at_step(self, step):\n        for tname in self.base_trial.tensor_names(collection=\"gradients\"):\n            t = self.base_trial.tensor(tname)\n            abs_mean = t.reduction_value(step, \"mean\", abs=True)\n            if abs_mean > self.threshold:\n                return True\n        return False\n```\n\n#### Using your custom rule with SageMaker Estimator\n\nBelow we create the rule configuration using the `Rule.custom` method, and then pass it to the SageMaker TensorFlow estimator to kick off the job. Note that you need to pass the rule evaluator container image for custom rules. Please refer to the AWS SageMaker documentation to find the image URI for your region. We will soon have this be automatically taken care of by the SageMaker SDK. 
You can also provide your own image; please refer to [this repository](https://github.com/awslabs/sagemaker-debugger-rules-container) for instructions on how to build such a container.", "_____no_output_____" ] ], [ [ "custom_rule = Rule.custom(\n    name='MyCustomRule', # used to identify the rule\n    # rule evaluator container image\n    image_uri='759209512951.dkr.ecr.us-west-2.amazonaws.com/sagemaker-debugger-rule-evaluator:latest', \n    instance_type='ml.t3.medium', # instance type to run the rule evaluation on\n    source='rules/my_custom_rule.py', # path to the rule source file\n    rule_to_invoke='CustomGradientRule', # name of the class to invoke in the rule source file\n    volume_size_in_gb=30, # EBS volume size required to be attached to the rule evaluation instance\n    collections_to_save=[CollectionConfig(\"gradients\")], \n    # collections to be analyzed by the rule. since this is a first party collection we fetch it as above\n    rule_parameters={\n        \"threshold\": \"20.0\" # this will be used to initialize 'threshold' param in your constructor\n    }\n)\n", "_____no_output_____" ] ], [ [ "Before we proceed and create our training job, let us take a closer look at the parameters used to create the Rule configuration above:\n\n* `name`: This is used to identify this particular rule among the suite of rules you specified to be evaluated.\n* `image_uri`: This is the image of the container that has the logic of understanding your custom rule sources and evaluating them against the collections you save in the training job. You can get the list of open sourced SageMaker rule evaluator images [here]()\n* `instance_type`: The type of the instance you want to run the rule evaluation on\n* `source`: This is the local path or the Amazon S3 URI of your rule source file.\n* `rule_to_invoke`: This specifies the particular Rule class implementation in your source file which you want to be evaluated. SageMaker supports only 1 rule to be evaluated at a time in a rule job. 
Your source file can have multiple Rule class implementations, though.\n* `collections_to_save`: This specifies which collections are necessary to be saved for this rule to run. Note that providing this collection does not necessarily mean the rule will actually use these collections. You might want to take such parameters for the rule through the next argument `rule_parameters`.\n* `rule_parameters`: This provides the runtime values of the parameter in your constructor. You can still choose to pass in other values which may be necessary for your rule to be evaluated. Any value in this map is available as an environment variable and can be accessed by your rule script using `$<rule_parameter_key>`\n\nYou can read more about custom rule evaluation in Amazon SageMaker in this [documentation](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md)\n\nLet us now create the estimator and call `fit()` on our estimator to start the training job and rule evaluation job in parallel.", "_____no_output_____" ] ], [ [ "estimator = TensorFlow(\n role=sagemaker.get_execution_role(),\n base_job_name='smdebug-custom-rule-demo-tf-keras',\n train_instance_count=1,\n train_instance_type='ml.p2.xlarge',\n entry_point=entrypoint_script,\n framework_version='1.15',\n py_version='py3',\n train_max_run=3600,\n script_mode=True,\n ## New parameter\n rules = [custom_rule]\n)\n\n# After calling fit, Amazon SageMaker starts one training job and one rule job for you.\n# The rule evaluation status is visible in the training logs\n# at regular intervals\n\nestimator.fit(wait=False)", "_____no_output_____" ] ], [ [ "## Result \n\nAs a result of calling the `fit(wait=False)`, two jobs were kicked off in the background. Amazon SageMaker Debugger kicked off a rule evaluation job for our custom gradient logic in parallel with the training job. 
You can review the status of the above rule job as follows.", "_____no_output_____" ] ], [ [ "import time\nstatus = estimator.latest_training_job.rule_job_summary()\nwhile status[0]['RuleEvaluationStatus'] == 'InProgress':\n    status = estimator.latest_training_job.rule_job_summary()\n    print(status)\n    time.sleep(10)\n    ", "_____no_output_____" ] ], [ [ "Once the rule job starts and you see the RuleEvaluationJobArn above, we can see the logs for the rule job in CloudWatch. To do that, we'll use this utility function to get a link to the rule job logs.", "_____no_output_____" ] ], [ [ "def _get_rule_job_name(training_job_name, rule_configuration_name, rule_job_arn):\n    \"\"\"Helper function to get the rule job name with correct casing\"\"\"\n    return \"{}-{}-{}\".format(\n        training_job_name[:26], rule_configuration_name[:26], rule_job_arn[-8:]\n    )\n    \ndef _get_cw_url_for_rule_job(rule_job_name, region):\n    return \"https://{}.console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix\".format(region, region, rule_job_name)\n\n\ndef get_rule_jobs_cw_urls(estimator):\n    training_job = estimator.latest_training_job\n    training_job_name = training_job.describe()[\"TrainingJobName\"]\n    rule_eval_statuses = training_job.describe()[\"DebugRuleEvaluationStatuses\"]\n    \n    result={}\n    for status in rule_eval_statuses:\n        if status.get(\"RuleEvaluationJobArn\", None) is not None:\n            rule_job_name = _get_rule_job_name(training_job_name, status[\"RuleConfigurationName\"], status[\"RuleEvaluationJobArn\"])\n            result[status[\"RuleConfigurationName\"]] = _get_cw_url_for_rule_job(rule_job_name, boto3.Session().region_name)\n    return result\n\nget_rule_jobs_cw_urls(estimator)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
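The `CustomGradientRule` sketched in the notebook above depends on smdebug's `Rule` and `Trial` classes, which are only available inside a rule evaluation job. The core logic — trigger when any gradient tensor's mean absolute value at a step exceeds a threshold that arrives as a string via `rule_parameters` — can be sketched framework-free (the `get_tensors` callback below is a stand-in for smdebug's trial/tensor API, not part of the real interface):

```python
class GradientThresholdCheck:
    """Framework-free sketch of the custom rule's check: trigger when any
    gradient tensor's mean absolute value at a step exceeds the threshold.

    `get_tensors(step)` is a hypothetical callback returning an iterable of
    per-tensor value lists for that step."""

    def __init__(self, get_tensors, threshold="20.0"):
        self.get_tensors = get_tensors
        self.threshold = float(threshold)  # SageMaker passes rule_parameters as strings

    def invoke_at_step(self, step):
        for values in self.get_tensors(step):
            abs_mean = sum(abs(v) for v in values) / len(values)
            if abs_mean > self.threshold:
                return True  # rule 'triggers' at this step
        return False
```

Note the `float(threshold)` conversion: since rule parameter values reach the rule as strings, any numeric parameter has to be parsed in the constructor, just as the real rule would parse its `"threshold": "20.0"` entry.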
cbb52e0b19ea620135fb2f287ba9e0e99313b893
632,630
ipynb
Jupyter Notebook
notebooks/source/bayesian_hierarchical_linear_regression.ipynb
amalvaidya/numpyro
eb3907adb47d3df85da15a58bf30393e48922ee3
[ "Apache-2.0" ]
1,394
2019-03-19T16:28:45.000Z
2022-03-31T18:03:26.000Z
notebooks/source/bayesian_hierarchical_linear_regression.ipynb
amalvaidya/numpyro
eb3907adb47d3df85da15a58bf30393e48922ee3
[ "Apache-2.0" ]
964
2019-03-21T05:02:01.000Z
2022-03-31T18:27:31.000Z
notebooks/source/bayesian_hierarchical_linear_regression.ipynb
amalvaidya/numpyro
eb3907adb47d3df85da15a58bf30393e48922ee3
[ "Apache-2.0" ]
163
2019-03-20T17:23:15.000Z
2022-03-31T13:39:29.000Z
920.858806
506,372
0.950937
[ [ [ "# Bayesian Hierarchical Linear Regression\nAuthor: [Carlos Souza](mailto:[email protected])\n\nProbabilistic Machine Learning models can not only make predictions about future data, but also **model uncertainty**. In areas such as **personalized medicine**, there might be a large amount of data, but there is still a relatively **small amount of data for each patient**. To customize predictions for each person it becomes necessary to **build a model for each person** — with its inherent **uncertainties** — and to couple these models together in a **hierarchy** so that information can be borrowed from other **similar people** [1].\n\nThe purpose of this tutorial is to demonstrate how to **implement a Bayesian Hierarchical Linear Regression model using NumPyro**. To motivate the tutorial, I will use [OSIC Pulmonary Fibrosis Progression](https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression) competition, hosted at Kaggle.\n\n## 1. Understanding the task\nPulmonary fibrosis is a disorder with no known cause and no known cure, created by scarring of the lungs. In this competition, we were asked to predict a patient’s severity of decline in lung function. Lung function is assessed based on output from a spirometer, which measures the forced vital capacity (FVC), i.e. the volume of air exhaled.\n\nIn medical applications, it is useful to **evaluate a model's confidence in its decisions**. Accordingly, the metric used to rank the teams was designed to reflect **both the accuracy and certainty of each prediction**. 
It's a modified version of the Laplace Log Likelihood (more details on that later).\n\nLet's explore the data and see what's that all about:", "_____no_output_____" ] ], [ [ "!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro arviz", "_____no_output_____" ], [ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "train = pd.read_csv(\n \"https://gist.githubusercontent.com/ucals/\"\n \"2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/\"\n \"43034c39052dcf97d4b894d2ec1bc3f90f3623d9/\"\n \"osic_pulmonary_fibrosis.csv\"\n)\ntrain.head()", "_____no_output_____" ] ], [ [ "In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models especially when gauging uncertainty is an important part of the problem.\n\nSince this is real medical data, the relative timing of FVC measurements varies widely, as shown in the 3 sample patients below:", "_____no_output_____" ] ], [ [ "def chart(patient_id, ax):\n data = train[train[\"Patient\"] == patient_id]\n x = data[\"Weeks\"]\n y = data[\"FVC\"]\n ax.set_title(patient_id)\n ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={\"color\": \"red\"})\n\n\nf, axes = plt.subplots(1, 3, figsize=(15, 5))\nchart(\"ID00007637202177411956430\", axes[0])\nchart(\"ID00009637202177434476278\", axes[1])\nchart(\"ID00010637202177584971671\", axes[2])", "_____no_output_____" ] ], [ [ "On average, each of the 176 provided patients made 9 visits, when FVC was measured. 
The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, that they are very different from patient to patient.\n\nWe were asked to predict every patient's FVC measurement for every possible week in the [-12, 133] interval, together with the confidence for each prediction. In other words: we were asked to fill a matrix like the one below, and provide a confidence score for each prediction:\n\n<img src=\"https://i.ibb.co/0Z9kW8H/matrix-completion.jpg\" alt=\"drawing\" width=\"600\"/>\n\nThe task was a perfect fit for Bayesian inference. However, the vast majority of solutions shared by the Kaggle community used discriminative machine learning models, disregarding the fact that most discriminative methods are very poor at providing realistic uncertainty estimates. Because they are typically trained in a manner that optimizes the parameters to minimize some loss criterion (e.g. the predictive error), they do not, in general, encode any uncertainty in either their parameters or the subsequent predictions. Though many methods can produce uncertainty estimates either as a by-product or from a post-processing step, these are typically heuristic-based, rather than stemming naturally from a statistically principled estimate of the target uncertainty distribution [2].\n\n## 2. Modelling: Bayesian Hierarchical Linear Regression with Partial Pooling\nThe simplest possible linear regression, not hierarchical, would assume all FVC decline curves have the same $\\alpha$ and $\\beta$. That's the **pooled model**. At the other extreme, we could assume a model where each patient has a personalized FVC decline curve, and **these curves are completely unrelated**. That's the **unpooled model**, where each patient has a completely separate regression.\n\nHere, I'll use the middle ground: **Partial pooling**. 
Specifically, I'll assume that while $\\alpha$'s and $\\beta$'s are different for each patient as in the unpooled case, **the coefficients all share similarity**. We can model this by assuming that each individual coefficient comes from a common group distribution. The image below represents this model graphically:\n\n<img src=\"https://i.ibb.co/H7NgBfR/Artboard-2-2x-100.jpg\" alt=\"drawing\" width=\"600\"/>\n\nMathematically, the model is described by the following equations:\n\n\\begin{align}\n\\mu_{\\alpha} &\\sim \\mathcal{N}(0, 100) \\\\\n\\sigma_{\\alpha} &\\sim |\\mathcal{N}(0, 100)| \\\\\n\\mu_{\\beta} &\\sim \\mathcal{N}(0, 100) \\\\\n\\sigma_{\\beta} &\\sim |\\mathcal{N}(0, 100)| \\\\\n\\alpha_i &\\sim \\mathcal{N}(\\mu_{\\alpha}, \\sigma_{\\alpha}) \\\\\n\\beta_i &\\sim \\mathcal{N}(\\mu_{\\beta}, \\sigma_{\\beta}) \\\\\n\\sigma &\\sim |\\mathcal{N}(0, 100)| \\\\\nFVC_{ij} &\\sim \\mathcal{N}(\\alpha_i + t \\beta_i, \\sigma)\n\\end{align}\n\nwhere *t* is the time in weeks. Those are very uninformative priors, but that's ok: our model will converge!\n\nImplementing this model in NumPyro is pretty straightforward:", "_____no_output_____" ] ], [ [ "import numpyro\nfrom numpyro.infer import MCMC, NUTS, Predictive\nimport numpyro.distributions as dist\nfrom jax import random\n\nassert numpyro.__version__.startswith(\"0.8.0\")", "_____no_output_____" ], [ "def model(PatientID, Weeks, FVC_obs=None):\n    μ_α = numpyro.sample(\"μ_α\", dist.Normal(0.0, 100.0))\n    σ_α = numpyro.sample(\"σ_α\", dist.HalfNormal(100.0))\n    μ_β = numpyro.sample(\"μ_β\", dist.Normal(0.0, 100.0))\n    σ_β = numpyro.sample(\"σ_β\", dist.HalfNormal(100.0))\n\n    unique_patient_IDs = np.unique(PatientID)\n    n_patients = len(unique_patient_IDs)\n\n    with numpyro.plate(\"plate_i\", n_patients):\n        α = numpyro.sample(\"α\", dist.Normal(μ_α, σ_α))\n        β = numpyro.sample(\"β\", dist.Normal(μ_β, σ_β))\n\n    σ = numpyro.sample(\"σ\", dist.HalfNormal(100.0))\n    FVC_est = α[PatientID] + β[PatientID] * Weeks\n\n    with 
numpyro.plate(\"data\", len(PatientID)):\n numpyro.sample(\"obs\", dist.Normal(FVC_est, σ), obs=FVC_obs)", "_____no_output_____" ] ], [ [ "That's all for modelling!\n\n## 3. Fitting the model\nA great achievement of Probabilistic Programming Languages such as NumPyro is to decouple model specification and inference. After specifying my generative model, with priors, condition statements and data likelihood, I can leave the hard work to NumPyro's inference engine. \n\nCalling it requires just a few lines. Before we do it, let's add a numerical Patient ID for each patient code. That can be easily done with scikit-learn's LabelEncoder:", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelEncoder\n\nle = LabelEncoder()\ntrain[\"PatientID\"] = le.fit_transform(train[\"Patient\"].values)\n\nFVC_obs = train[\"FVC\"].values\nWeeks = train[\"Weeks\"].values\nPatientID = train[\"PatientID\"].values", "_____no_output_____" ] ], [ [ "Now, calling NumPyro's inference engine:", "_____no_output_____" ] ], [ [ "nuts_kernel = NUTS(model)\n\nmcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)\nrng_key = random.PRNGKey(0)\nmcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)\n\nposterior_samples = mcmc.get_samples()", "sample: 100%|██████████| 4000/4000 [00:20<00:00, 195.69it/s, 63 steps of size 1.06e-01. acc. prob=0.89] \n" ] ], [ [ "## 4. Checking the model\n### 4.1. Inspecting the learned parameters\nFirst, let's inspect the parameters learned. To do that, I will use [ArviZ](https://arviz-devs.github.io/arviz/), which perfectly integrates with NumPyro:", "_____no_output_____" ] ], [ [ "import arviz as az\n\ndata = az.from_numpyro(mcmc)\naz.plot_trace(data, compact=True);", "_____no_output_____" ] ], [ [ "Looks like our model learned personalized alphas and betas for each patient!\n\n### 4.2. Visualizing FVC decline curves for some patients\nNow, let's visually inspect FVC decline curves predicted by our model. 
We will completely fill in the FVC table, predicting all missing values. The first step is to create a table to fill:", "_____no_output_____" ] ], [ [ "pred_template = []\nfor i in range(train[\"Patient\"].nunique()):\n df = pd.DataFrame(columns=[\"PatientID\", \"Weeks\"])\n df[\"Weeks\"] = np.arange(-12, 134)\n df[\"PatientID\"] = i\n pred_template.append(df)\npred_template = pd.concat(pred_template, ignore_index=True)", "_____no_output_____" ] ], [ [ "Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy:", "_____no_output_____" ] ], [ [ "PatientID = pred_template[\"PatientID\"].values\nWeeks = pred_template[\"Weeks\"].values\npredictive = Predictive(model, posterior_samples, return_sites=[\"σ\", \"obs\"])\nsamples_predictive = predictive(random.PRNGKey(0), PatientID, Weeks, None)", "_____no_output_____" ] ], [ [ "Let's now put the predictions together with the true values, to visualize them:", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(columns=[\"Patient\", \"Weeks\", \"FVC_pred\", \"sigma\"])\ndf[\"Patient\"] = le.inverse_transform(pred_template[\"PatientID\"])\ndf[\"Weeks\"] = pred_template[\"Weeks\"]\ndf[\"FVC_pred\"] = samples_predictive[\"obs\"].T.mean(axis=1)\ndf[\"sigma\"] = samples_predictive[\"obs\"].T.std(axis=1)\ndf[\"FVC_inf\"] = df[\"FVC_pred\"] - df[\"sigma\"]\ndf[\"FVC_sup\"] = df[\"FVC_pred\"] + df[\"sigma\"]\ndf = pd.merge(\n df, train[[\"Patient\", \"Weeks\", \"FVC\"]], how=\"left\", on=[\"Patient\", \"Weeks\"]\n)\ndf = df.rename(columns={\"FVC\": \"FVC_true\"})\ndf.head()", "_____no_output_____" ] ], [ [ "Finally, let's see our predictions for 3 patients:", "_____no_output_____" ] ], [ [ "def chart(patient_id, ax):\n data = df[df[\"Patient\"] == patient_id]\n x = data[\"Weeks\"]\n ax.set_title(patient_id)\n ax.plot(x, data[\"FVC_true\"], \"o\")\n ax.plot(x, data[\"FVC_pred\"])\n ax = sns.regplot(x, data[\"FVC_true\"], ax=ax, ci=None, line_kws={\"color\": \"red\"})\n ax.fill_between(x, 
data[\"FVC_inf\"], data[\"FVC_sup\"], alpha=0.5, color=\"#ffcd3c\")\n    ax.set_ylabel(\"FVC\")\n\n\nf, axes = plt.subplots(1, 3, figsize=(15, 5))\nchart(\"ID00007637202177411956430\", axes[0])\nchart(\"ID00009637202177434476278\", axes[1])\nchart(\"ID00011637202177653955184\", axes[2])", "_____no_output_____" ] ], [ [ "The results are exactly what we expected to see! Key observations:\n\n- The model adequately learned Bayesian Linear Regressions! The orange line (learned predicted FVC mean) is very much in line with the red line (deterministic linear regression). But most importantly: it learned to predict uncertainty, shown in the light orange region (one sigma above and below the mean FVC line)\n- The model predicts a higher uncertainty where the data points are more dispersed (1st and 3rd patients). Conversely, where the points are closely grouped together (2nd patient), the model predicts a higher confidence (narrower light orange region)\n- Finally, in all patients, we can see that the uncertainty grows as we look further into the future: the light orange region widens as the # of weeks grows!\n\n### 4.3. Computing the modified Laplace Log Likelihood and RMSE\n\nAs mentioned earlier, the competition was evaluated on a modified version of the Laplace Log Likelihood. In medical applications, it is useful to evaluate a model's confidence in its decisions. Accordingly, the metric is designed to reflect both the accuracy and certainty of each prediction.\n\nFor each true FVC measurement, we predicted both an FVC and a confidence measure (standard deviation $\\sigma$). 
The metric was computed as:\n\n\\begin{align}\n\\sigma_{clipped} &= \\max(\\sigma, 70) \\\\\n\\delta &= \\min(|FVC_{true} - FVC_{pred}|, 1000) \\\\\nmetric &= -\\dfrac{\\sqrt{2}\\delta}{\\sigma_{clipped}} - \\ln(\\sqrt{2} \\sigma_{clipped})\n\\end{align}\n\nThe error was thresholded at 1000 ml to avoid large errors adversely penalizing results, while the confidence values were clipped at 70 ml to reflect the approximate measurement uncertainty in FVC. The final score was calculated by averaging the metric across all (Patient, Week) pairs. Note that metric values will be negative and higher is better.\n\nNext, we calculate the metric and RMSE:", "_____no_output_____" ] ], [ [ "y = df.dropna()\nrmse = ((y[\"FVC_pred\"] - y[\"FVC_true\"]) ** 2).mean() ** (1 / 2)\nprint(f\"RMSE: {rmse:.1f} ml\")\n\nsigma_c = y[\"sigma\"].values\nsigma_c[sigma_c < 70] = 70\ndelta = (y[\"FVC_pred\"] - y[\"FVC_true\"]).abs()\ndelta[delta > 1000] = 1000\nlll = -np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)\nprint(f\"Laplace Log Likelihood: {lll.mean():.4f}\")", "RMSE: 122.1 ml\nLaplace Log Likelihood: -6.1376\n" ] ], [ [ "What do these numbers mean? They mean that if you adopted this approach, you would **outperform most of the public solutions** in the competition. Curiously, the vast majority of public solutions adopt a standard deterministic Neural Network, modelling uncertainty through a quantile loss. **Most people still adopt a frequentist approach**.\n\n**Uncertainty** for single predictions becomes more and more important in machine learning and is often a requirement. **Especially when the consequences of a wrong prediction are high**, we need to know what the probability distribution of an individual prediction is. For perspective, Kaggle just launched a new competition sponsored by Lyft, to build motion prediction models for self-driving vehicles. 
\"We ask that you predict a few trajectories for every agent **and provide a confidence score for each of them**.\"\n\nFinally, I hope the great work done by the Pyro/NumPyro developers helps democratize Bayesian methods, empowering an ever-growing community of researchers and practitioners to create models that can not only generate predictions, but also assess uncertainty in their predictions.", "_____no_output_____" ], [ "## References\n\n1. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015). https://doi.org/10.1038/nature14541\n\n2. Rainforth, Thomas William Gamlen. Automating Inference, Learning, and Design Using Probabilistic Programming. University of Oxford, 2017.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbb53292387e28c24695cd09a0323a202da80d9e
476,625
ipynb
Jupyter Notebook
models/LSTM_OHCL-5min-candle-bitquery.ipynb
SohaMohajeri/datascience
19394c6778e82e8378e32b99b3f3bf3bd45c1f43
[ "Apache-2.0" ]
4
2021-07-06T21:26:00.000Z
2022-03-17T08:37:41.000Z
models/LSTM_OHCL-5min-candle-bitquery.ipynb
SohaMohajeri/datascience
19394c6778e82e8378e32b99b3f3bf3bd45c1f43
[ "Apache-2.0" ]
null
null
null
models/LSTM_OHCL-5min-candle-bitquery.ipynb
SohaMohajeri/datascience
19394c6778e82e8378e32b99b3f3bf3bd45c1f43
[ "Apache-2.0" ]
1
2021-05-20T20:36:18.000Z
2021-05-20T20:36:18.000Z
221.892458
78,872
0.895572
[ [ [ "# 1- Importing libraries", "_____no_output_____" ] ], [ [ "import ast\nimport json\nimport requests\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom matplotlib.ticker import StrMethodFormatter\nfrom matplotlib.dates import DateFormatter\nfrom sklearn.preprocessing import MinMaxScaler\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Activation, Dense, Dropout, LSTM\nfrom sklearn import metrics ", "_____no_output_____" ] ], [ [ "# 2- Getting real-time crptocurrency data", "_____no_output_____" ] ], [ [ "#!/usr/bin/python\n# -*- coding: utf-8 -*-\nimport requests\n\n\ndef run_query(query): # A simple function to use requests.post to make the API call.\n headers = {'X-API-KEY': 'BQYjLXSsm32NnV6FM4eudu9xYt2L3AsW'}\n request = requests.post('https://graphql.bitquery.io/',\n json={'query': query}, headers=headers)\n if request.status_code == 200:\n return request.json()\n else:\n raise Exception('Query failed and return code is {}. 
{}'.format(request.status_code,\n query))\n\n\n# The GraphQL query\n\nquery = \"\"\"\nquery\n{\n ethereum(network: ethereum) {\n dexTrades(\n options: {limit: 100000, asc: \"timeInterval.minute\"}\n date: {since: \"2021-04-21\"}\n exchangeName: {is: \"Uniswap\"}\n baseCurrency: {is: \"0xdac17f958d2ee523a2206206994597c13d831ec7\"}\n quoteCurrency: {is: \"0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2\"}\n ) {\n timeInterval {\n minute(count: 5)\n }\n baseCurrency {\n symbol\n address\n }\n baseAmount\n quoteCurrency {\n symbol\n address\n }\n quoteAmount\n trades: count\n quotePrice\n maximum_price: quotePrice(calculate: maximum)\n minimum_price: quotePrice(calculate: minimum)\n open_price: minimum(of: block, get: quote_price)\n close_price: maximum(of: block, get: quote_price)\n }\n }\n}\n\n\n\"\"\"\nresult = run_query(query) # Execute the query\n", "_____no_output_____" ], [ "data=pd.DataFrame(result['data']['ethereum']['dexTrades'])\ndata.tail(2)", "_____no_output_____" ] ], [ [ "# 3- Data cleaning", "_____no_output_____" ] ], [ [ "data.isnull().sum()", "_____no_output_____" ], [ "time=[]\nfor x in range(0, data.shape[0]):\n time.append(data['timeInterval'].iloc[x]['minute'])\n \ndata['timeInterval']= time\ndata.head(2)", "_____no_output_____" ], [ "type(data['close_price'].iloc[0])", "_____no_output_____" ], [ "data['close_price']= data['close_price'].apply(lambda x: float(x))", "_____no_output_____" ], [ "type(data['close_price'].iloc[0])", "_____no_output_____" ] ], [ [ "# 4- Setting time as index", "_____no_output_____" ] ], [ [ "data=data.set_index('timeInterval')\ndata.head(2)", "_____no_output_____" ] ], [ [ "# 5- Converting time to timestamp", "_____no_output_____" ] ], [ [ "type(data.index[0])", "_____no_output_____" ], [ "data.index=pd.to_datetime(data.index)", "_____no_output_____" ], [ "type(data.index[0])", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ] ], [ [ "# 6- Splitting train & test sets", "_____no_output_____" ] ], [ [ "def 
train_test_split(df, test_size):\n split = df.shape[0] - int(test_size * df.shape[0])\n train_set = df.iloc[:split]\n test_set = df.iloc[split:]\n return train_set, test_set\n\ntrain_set, test_set =train_test_split(data, 0.3) #checked test size 0.2 but the result for 0.3 is better\nprint('train_set.shape: ', train_set.shape)\nprint('test_set.shape: ', test_set.shape)", "train_set.shape: (5384, 10)\ntest_set.shape: (2307, 10)\n" ] ], [ [ "# 7- Plotting train & test sets", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(13,7))\ntrain_set['close_price'].plot(color='b')\ntest_set['close_price'].plot(color='r')\nplt.xlabel('Time', fontsize=14)\nplt.ylabel('Close Price ', fontsize=14)\nplt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.6f}')) \nplt.legend(['Train', 'Test'], loc='best',fontsize=14 )\nplt.show()", "_____no_output_____" ] ], [ [ "# 9- Normalizing data- zero scaling", "_____no_output_____" ] ], [ [ "def zero_scaling(df):\n \n return df / df.iloc[0] - 1\n\n\ndef sliding_window(df, len_window, zero):\n \n window = []\n for a in range(df.shape[0] - len_window):\n sub = df[a: (a + len_window)].copy()\n if zero:\n sub = zero_scaling(sub)\n window.append(sub.values)\n return np.array(window)\n\n\ndef prepare_data(df, column, len_window, zero):\n \n train_data = train_set[[column]]\n test_data = test_set[[column]]\n \n X_train = sliding_window(train_data, len_window, zero) \n X_test = sliding_window(test_data, len_window, zero) \n\n y_train = train_data[column][len_window:].values\n y_test = test_data[column][len_window:].values\n\n if zero:\n y_train = y_train / train_data[column][:-len_window].values - 1\n y_test = y_test / test_data[column][:-len_window].values - 1\n\n return train_data, test_data, X_train, X_test, y_train, y_test", "_____no_output_____" ], [ "train_data, test_data, X_train, X_test, y_train, y_test = prepare_data(data, 'close_price', len_window=5, zero=True)", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ] ], [ 
[ "# 10- Building LSTM model- 2 layers", "_____no_output_____" ] ], [ [ "model_1 = Sequential()\n    \n#use input_shape (tuple of integers) when using this layer as the first layer in a model\n\nmodel_1.add(LSTM(units=100, input_shape=(X_train.shape[1], X_train.shape[2])) ) \nmodel_1.add(Dropout(0.2))\n\n\nmodel_1.add(Dense(units=1 )) # number of neurons\nmodel_1.add(Activation('linear')) #adding the activation as a separate layer gives a better result\n\nmodel_1.compile(loss='mse', optimizer='adam')", "_____no_output_____" ], [ "# Fitting to the training set\nmodel_1.fit(X_train,y_train,epochs=30,batch_size=32) ", "Epoch 1/30\n169/169 [==============================] - 2s 4ms/step - loss: 5.4025e-05\nEpoch 2/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.6913e-05\nEpoch 3/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.7016e-05\nEpoch 4/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.6887e-05\nEpoch 5/30\n169/169 [==============================] - 1s 3ms/step - loss: 2.3906e-05\nEpoch 6/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3446e-05\nEpoch 7/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4683e-05\nEpoch 8/30\n169/169 [==============================] - 1s 5ms/step - loss: 2.3066e-05\nEpoch 9/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4230e-05A: 0s - los\nEpoch 10/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3756e-05\nEpoch 11/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3560e-05\nEpoch 12/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4857e-05\nEpoch 13/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4921e-05\nEpoch 14/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3676e-05\nEpoch 15/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3513e-05\nEpoch 16/30\n169/169 [==============================] - 1s 
4ms/step - loss: 2.3800e-05\nEpoch 17/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3950e-05\nEpoch 18/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3751e-05A: 0s - loss\nEpoch 19/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3905e-05\nEpoch 20/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.5175e-05\nEpoch 21/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4283e-05\nEpoch 22/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4009e-05\nEpoch 23/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4323e-05\nEpoch 24/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3002e-05\nEpoch 25/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.4403e-05\nEpoch 26/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3227e-05\nEpoch 27/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3992e-05\nEpoch 28/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3417e-05\nEpoch 29/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.2648e-05\nEpoch 30/30\n169/169 [==============================] - 1s 4ms/step - loss: 2.3814e-05\n" ], [ "pd.DataFrame(model_1.history.history).plot(figsize=(8,6))\nplt.xlabel('Epoch', fontsize=12)\nplt.ylabel('Loss', fontsize=12)\nplt.title('Training Loss Per Epoch', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "prediction_1=model_1.predict(X_test).squeeze() # use squeeze to convert to 1d array ", "_____no_output_____" ], [ "assert (len(prediction_1)==len(y_test))", "_____no_output_____" ], [ "plt.figure(figsize=(8,5))\nplt.plot(y_test, y_test, color='b')\nplt.scatter(y_test, prediction_1, color='r')\nplt.xlabel('y_test', fontsize=12)\nplt.ylabel('Prediction', fontsize=12)\nplt.title('Close Price Prediction, 2-Layer Model, zero scaling', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "print('Mean 
Absolute Error: ', metrics.mean_absolute_error(y_test, prediction_1))", "Mean Absolute Error: 0.0046765956125725625\n" ], [ "predicted_close_price_1= pd.DataFrame(data=(prediction_1 + 1) * (test_data['close_price'][:-5].values) , index=test_data[5:].index ,columns=['predicted_close_price'] )\npredicted_close_price_1;\n\nmerged_1=pd.merge(test_data, predicted_close_price_1, on='timeInterval', how='left')\nmerged_1[5:]", "_____no_output_____" ], [ "merged_1.isnull().sum()", "_____no_output_____" ], [ "plt.figure(figsize=(13,7))\nmerged_1['close_price'].plot(color='r')\nmerged_1['predicted_close_price'].plot(color='g')\nplt.title('Close Price Prediction, 2-Layer Model, Zero Scaling',fontsize=16)\nplt.xlabel('Time', fontsize=13)\nplt.ylabel('Close Price', fontsize=13)\nplt.legend(['Actual Close Price', 'Predicted Close Price'], loc='best',fontsize=13)\nplt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.6f}')) \nplt.show()", "_____no_output_____" ] ], [ [ "# 11- Predicting on brand new data", "_____no_output_____" ] ], [ [ "#size of the data we use to predict should always be at least one unit bigger than window_len\nfrom random import randint\n\ndef rand(len_window, df):\n return randint(len_window + 1 , df.shape[0])", "_____no_output_____" ], [ "random_shape=rand(5, data)\nrandom_shape", "_____no_output_____" ], [ "new=data[['close_price']].iloc[0: random_shape]\nsliding_window(new, 5, True);\nprediction=model_1.predict(sliding_window(new, 5, True)).squeeze()\nassert(len(prediction)==len( new['close_price'][:-5]))\npredicted_close_price= pd.DataFrame(data=(prediction + 1) * (new['close_price'][:-5].values) , index=new[5:].index ,columns=['predicted close'] )\npd.merge(new, predicted_close_price, on='timeInterval', how='left')[5:]", "_____no_output_____" ] ], [ [ "# 12- Backup scenarios", "_____no_output_____" ], [ "### 12-1- LSTM model- 6 layers + zero scaling", "_____no_output_____" ] ], [ [ "# The LSTM architecture\nmodel_2 = Sequential()\n\n# First LSTM 
layer with Dropout regularisation\n#default activation` == `tanh`\n#default recurrent_activation == sigmoid.\n #return_sequences: Boolean. Whether to return the last output.Default: `False`.\n\nmodel_2.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2])))\nmodel_2.add(Dropout(0.2))\n\n# Second LSTM layer\nmodel_2.add(LSTM(units=50, return_sequences=True))\nmodel_2.add(Dropout(0.2))\n\n# Third LSTM layer\nmodel_2.add(LSTM(units=50, return_sequences=True))\nmodel_2.add(Dropout(0.2))\n\n\n# Fourth LSTM layer\nmodel_2.add(LSTM(units=50))\nmodel_2.add(Dropout(0.2))\n\n\n# The output layer\nmodel_2.add(Dense(units=1))\n\n\n# Compiling the RNN\nmodel_2.compile(optimizer='rmsprop',loss='mean_squared_error')", "_____no_output_____" ], [ "# Fitting to the training set\nmodel_2.fit(X_train,y_train,epochs=30,batch_size=32)", "Epoch 1/30\n169/169 [==============================] - 10s 13ms/step - loss: 1.1318e-04\nEpoch 2/30\n169/169 [==============================] - 2s 12ms/step - loss: 8.2664e-05\nEpoch 3/30\n169/169 [==============================] - 2s 12ms/step - loss: 8.0822e-05: 0s - loss: 8.08\nEpoch 4/30\n169/169 [==============================] - 2s 11ms/step - loss: 7.5970e-05\nEpoch 5/30\n169/169 [==============================] - 2s 12ms/step - loss: 4.7306e-05\nEpoch 6/30\n169/169 [==============================] - 2s 11ms/step - loss: 3.9225e-05\nEpoch 7/30\n169/169 [==============================] - 2s 11ms/step - loss: 3.3716e-05\nEpoch 8/30\n169/169 [==============================] - 2s 11ms/step - loss: 3.0943e-05\nEpoch 9/30\n169/169 [==============================] - 2s 11ms/step - loss: 3.1280e-05\nEpoch 10/30\n169/169 [==============================] - 2s 11ms/step - loss: 3.0644e-05\nEpoch 11/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.9484e-05\nEpoch 12/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.8003e-05\nEpoch 13/30\n169/169 [==============================] - 2s 
11ms/step - loss: 2.8841e-05\nEpoch 14/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.8725e-05\nEpoch 15/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.7812e-05\nEpoch 16/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.6737e-05\nEpoch 17/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.8053e-05\nEpoch 18/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.6022e-05\nEpoch 19/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.8892e-05\nEpoch 20/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.6859e-05: 0s - loss: 2.6\nEpoch 21/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.6125e-05\nEpoch 22/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.7446e-05\nEpoch 23/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.6076e-05\nEpoch 24/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.6323e-05\nEpoch 25/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.6974e-05\nEpoch 26/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.7076e-05\nEpoch 27/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.5389e-05\nEpoch 28/30\n169/169 [==============================] - 2s 13ms/step - loss: 2.6588e-05\nEpoch 29/30\n169/169 [==============================] - 2s 11ms/step - loss: 2.5246e-05\nEpoch 30/30\n169/169 [==============================] - 2s 12ms/step - loss: 2.6163e-05\n" ], [ "pd.DataFrame(model_2.history.history).plot(figsize=(8,6))\nplt.xlabel('Epoch', fontsize=12)\nplt.ylabel('Loss', fontsize=12)\nplt.title('Training Loss Per Epoch', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "prediction_2=model_2.predict(X_test).squeeze() # use squeeze to convert to 1d array \n\nassert (len(prediction_2)==len(y_test))", "_____no_output_____" ], [ "plt.figure(figsize=(8,5))\nplt.plot(y_test, y_test, 
color='b')\nplt.scatter(y_test, prediction_2, color='r')\nplt.xlabel('y_test', fontsize=12)\nplt.ylabel('Prediction', fontsize=12)\nplt.title('Close Price Prediction, 6-Layer Model, zero scaling', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "print('Mean Absolute Error: ', metrics.mean_absolute_error(y_test, prediction_2))", "Mean Absolute Error: 0.00454281633726031\n" ], [ "predicted_close_price_2= pd.DataFrame(data=(prediction_2 + 1) * (test_data['close_price'][:-5].values) , index=test_data[5:].index ,columns=['predicted_close_price'] )\npredicted_close_price_2;\n\nmerged_2=pd.merge(test_data, predicted_close_price_2, on='timeInterval', how='left')\nmerged_2;", "_____no_output_____" ], [ "plt.figure(figsize=(13,7))\nmerged_2['close_price'].plot(color='r')\nmerged_2['predicted_close_price'].plot(color='g')\nplt.title('Close Price Prediction, 6-Layer Model, Zero Scaling',fontsize=16)\nplt.xlabel('Time', fontsize=13)\nplt.ylabel('Close Price', fontsize=13)\nplt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.6f}')) \nplt.legend(['Actual Close Price', 'Predicted Close Price'], loc='best',fontsize=13)\nplt.show()", "_____no_output_____" ] ], [ [ "### 12-2- LSTM model- 2 layers + MinMaxScaler", "_____no_output_____" ] ], [ [ "train_data = train_set[['close_price']]\ntest_data = test_set[['close_price']]\n\ntrain_data_values=train_data.values\ntest_data_values=test_data.values", "_____no_output_____" ], [ "#Scaling/Normalizing the whole Training set\nsc = MinMaxScaler(feature_range=(0,1))\ntrain_data_values_scaled = sc.fit_transform(train_data_values)", "_____no_output_____" ], [ "# Since LSTMs store long term memory state, we create a data structure with 5 timesteps and 1 output\n# So for each element of training set, we have 5 previous training set elements \n\nX_train = []\ny_train = []\nfor i in range(5,train_data_values.shape[0]):\n X_train.append(train_data_values_scaled[i-5:i,0]) #window up to\n y_train.append(train_data_values_scaled[i,0]) 
#one value after the window\nX_train, y_train = np.array(X_train), np.array(y_train)", "_____no_output_____" ], [ "print(X_train.shape)\nprint(y_train.shape)", "(5379, 5)\n(5379,)\n" ], [ "# Reshaping X_train for efficient modelling\nX_train=X_train.reshape(X_train.shape[0],X_train.shape[1],1)\nX_train.shape", "_____no_output_____" ], [ "model_3 = Sequential()\n \n#use input_shape (tuple of integers) when using this layer as the first layer in a model\n\nmodel_3.add(LSTM(units=100, input_shape=(X_train.shape[1], X_train.shape[2])) ) \nmodel_3.add(Dropout(0.2))\n\nmodel_3.add(Dense(units=1 )) \nmodel_3.add(Activation('linear')) \n\nmodel_3.compile(loss='mse', optimizer='adam')", "_____no_output_____" ], [ "# Fitting to the training set\nmodel_3.fit(X_train,y_train,epochs=30,batch_size=32)", "Epoch 1/30\n169/169 [==============================] - 3s 4ms/step - loss: 0.0670\nEpoch 2/30\n169/169 [==============================] - 1s 4ms/step - loss: 0.0015\nEpoch 3/30\n169/169 [==============================] - 1s 5ms/step - loss: 0.0012\nEpoch 4/30\n169/169 [==============================] - 1s 4ms/step - loss: 0.0011\nEpoch 5/30\n169/169 [==============================] - 1s 4ms/step - loss: 9.6664e-04\nEpoch 6/30\n169/169 [==============================] - 1s 4ms/step - loss: 9.5423e-04\nEpoch 7/30\n169/169 [==============================] - 1s 4ms/step - loss: 8.9271e-04\nEpoch 8/30\n169/169 [==============================] - 1s 4ms/step - loss: 8.5956e-04\nEpoch 9/30\n169/169 [==============================] - 1s 4ms/step - loss: 8.2921e-04\nEpoch 10/30\n169/169 [==============================] - 1s 5ms/step - loss: 8.9536e-04\nEpoch 11/30\n169/169 [==============================] - 1s 4ms/step - loss: 8.2570e-04\nEpoch 12/30\n169/169 [==============================] - 1s 4ms/step - loss: 7.9549e-04\nEpoch 13/30\n169/169 [==============================] - 1s 4ms/step - loss: 7.8733e-04\nEpoch 14/30\n169/169 [==============================] - 1s 4ms/step - loss: 
7.4231e-04\nEpoch 15/30\n169/169 [==============================] - 1s 4ms/step - loss: 8.0120e-04\nEpoch 16/30\n169/169 [==============================] - 1s 4ms/step - loss: 7.7239e-04\nEpoch 17/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.7461e-04\nEpoch 18/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.7400e-04\nEpoch 19/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.4077e-04\nEpoch 20/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.5944e-04\nEpoch 21/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.2907e-04\nEpoch 22/30\n169/169 [==============================] - 1s 4ms/step - loss: 6.2594e-04\nEpoch 23/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.5164e-04\nEpoch 24/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.5605e-04\nEpoch 25/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.9959e-04\nEpoch 26/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.0601e-04\nEpoch 27/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.4164e-04\nEpoch 28/30\n169/169 [==============================] - 1s 4ms/step - loss: 5.0797e-04\nEpoch 29/30\n169/169 [==============================] - 1s 4ms/step - loss: 4.7525e-04\nEpoch 30/30\n169/169 [==============================] - 1s 4ms/step - loss: 4.9279e-04\n" ], [ "pd.DataFrame(model_3.history.history).plot(figsize=(8,6))\nplt.xlabel('Epoch', fontsize=12)\nplt.ylabel('Loss', fontsize=12)\nplt.title('Training Loss Per Epoch', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "test_data_values_scaled = sc.transform(test_data_values) # we only transform the test set, not fit; the scaler was already fit on the training set", "_____no_output_____" ], [ "X_test = []\ny_test = []\nfor i in range(5,test_set.shape[0]):\n    X_test.append(test_data_values_scaled[i-5:i,0]) # window of the 5 previous values\n    y_test.append(test_data_values_scaled[i,0]) # the single value right after the window
\nX_test, y_test = np.array(X_test), np.array(y_test)\n\nX_test.shape", "_____no_output_____" ], [ "X_test=X_test.reshape(X_test.shape[0],X_test.shape[1],1)\n\nX_test.shape", "_____no_output_____" ], [ "prediction_3=model_3.predict(X_test) #do not use squeeze, otherwise will get error in inverse scaler\nassert (len(prediction_3)==len(y_test))", "_____no_output_____" ], [ "plt.figure(figsize=(8,5))\nplt.plot(y_test, y_test, color='b')\nplt.scatter(y_test, prediction_3, color='r')\nplt.xlabel('y_test', fontsize=12)\nplt.ylabel('Prediction', fontsize=12)\nplt.title('Close Price Prediction, 2-Layer Model, MinMax scaling', fontsize=14)\nplt.show()", "_____no_output_____" ], [ "print('Mean Absolute Error: ', metrics.mean_absolute_error(y_test, prediction_3))", "Mean Absolute Error:  0.01792423254519052\n" ], [ "predicted_close_price_3 = sc.inverse_transform(prediction_3)\n\npredicted_close_price_3= pd.DataFrame(data= predicted_close_price_3, index=test_set[5:].index ,columns=['predicted_close_price'] )\npredicted_close_price_3;", "_____no_output_____" ], [ "merged_3=pd.merge(test_data, predicted_close_price_3, on='timeInterval', how='left')\nmerged_3", "_____no_output_____" ], [ "merged_3.isnull().sum()", "_____no_output_____" ], [ "plt.figure(figsize=(13,7))\nmerged_3['close_price'].plot(color='r')\nmerged_3['predicted_close_price'].plot(color='g')\nplt.title('Close Price Prediction, 2-Layer Model, MinMax Scaling',fontsize=16)\nplt.xlabel('Time', fontsize=13)\nplt.ylabel('Close Price', fontsize=13)\nplt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.6f}'))  \nplt.legend(['Actual Close Price', 'Predicted Close Price'], loc='best',fontsize=13)\nplt.show()", "_____no_output_____" ] ], [ [ "# 13- Conclusion", "_____no_output_____" ], [ "Based on the Mean Absolute Error values and the Close Price Prediction plots, the 2-layer predictive model trained on the data normalized by the zero_scaling function has the best performance.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
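The sliding-window construction the notebook repeats for each model (5 past timesteps as input, the next value as target, then a reshape to `(samples, timesteps, 1)` for the LSTM) can be factored into one helper. This is a minimal, self-contained sketch; the `make_windows` name and the toy integer series are made up for illustration:

```python
import numpy as np

def make_windows(series, window=5):
    """Build (samples, window, 1) LSTM inputs and next-step targets,
    mirroring the 5-timestep loops used in the notebook."""
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i])   # the `window` previous values
        y.append(series[i])              # the value right after the window
    X = np.array(X).reshape(-1, window, 1)
    return X, np.array(y)

prices = np.arange(20, dtype=float)      # toy "close price" series
X, y = make_windows(prices, window=5)
print(X.shape, y.shape)                  # (15, 5, 1) (15,)
```

Keeping the window logic in one place makes it harder for the train and test loops to drift apart, which is easy to do when the same loop is copy-pasted per model.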
cbb533563f8fd57878558c630b2e4c998f906bb7
2,551
ipynb
Jupyter Notebook
web_visualizations/.ipynb_checkpoints/csv_to_html-checkpoint.ipynb
marc-haddad/web-design-challenge
6a88a5123b0e9b858e0c6e6ce2aabc4d48335d56
[ "ADSL" ]
null
null
null
web_visualizations/.ipynb_checkpoints/csv_to_html-checkpoint.ipynb
marc-haddad/web-design-challenge
6a88a5123b0e9b858e0c6e6ce2aabc4d48335d56
[ "ADSL" ]
null
null
null
web_visualizations/.ipynb_checkpoints/csv_to_html-checkpoint.ipynb
marc-haddad/web-design-challenge
6a88a5123b0e9b858e0c6e6ce2aabc4d48335d56
[ "ADSL" ]
null
null
null
35.929577
969
0.579773
[ [ [ "import pandas as pd\n\npath = \"Resources/cities.csv\"\n\ndata = pd.read_csv(path)\n\ndata = data.drop(columns=[\"City_ID\"])\n\n# Render the DataFrame as an HTML table\nhtml = data.to_html()\n\noutfile = \"cities_data.html\"\nwith open(outfile, 'w') as cities:\n    # Write out the HTML\n    cities.write(html)\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
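`DataFrame.to_html` also accepts options that are handy when the table is destined for a styled web page. A minimal sketch, with an inline frame standing in for the `Resources/cities.csv` file; the `table table-striped` class names are just an example CSS hook, not something the original notebook uses:

```python
import pandas as pd

# Inline stand-in for the CSV the notebook reads
df = pd.DataFrame({"City": ["Hilo", "Kapaa"], "Max Temp": [75.2, 78.8]})

# index=False drops the row-number column;
# classes adds CSS classes to the <table> tag for styling
html = df.to_html(index=False, classes="table table-striped")

with open("cities_data.html", "w") as f:
    f.write(html)

print(html[:60])
```

The resulting `<table>` tag carries the given classes, so a stylesheet can target the table without editing the generated markup by hand.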
cbb534768fe8c2477a10307a8d53fc3f7bec84e8
44,942
ipynb
Jupyter Notebook
Neural_Style_Transfer_with_Eager_Execution_question.ipynb
st24hour/tutorial
00f31b4c9cb96b5cd0a3308756c527b0dee2fe8f
[ "Apache-2.0" ]
3
2019-09-01T04:58:06.000Z
2020-05-18T05:28:19.000Z
Neural_Style_Transfer_with_Eager_Execution_question.ipynb
st24hour/tutorial
00f31b4c9cb96b5cd0a3308756c527b0dee2fe8f
[ "Apache-2.0" ]
null
null
null
Neural_Style_Transfer_with_Eager_Execution_question.ipynb
st24hour/tutorial
00f31b4c9cb96b5cd0a3308756c527b0dee2fe8f
[ "Apache-2.0" ]
4
2019-07-02T06:04:35.000Z
2019-07-15T16:26:23.000Z
35.896166
504
0.523363
[ [ [ "<a href=\"https://colab.research.google.com/github/st24hour/tutorial/blob/master/Neural_Style_Transfer_with_Eager_Execution_question.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Neural Style Transfer with tf.keras\n", "_____no_output_____" ], [ "## Overview\n\n이 튜토리얼에서 우리는 딥러닝을 사용하여 다른 이미지의 스타일로 이미지를 구성하는 방법을 배우게됩니다 (피카소나 반 고흐처럼 그릴 수 있기를 바란 적 있습니까?). 이것은 **neural style transfer**라고 알려져 있습니다. 이것은 Leon A. Gatys의 논문 인 [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576)에 설명되어 있으며 반드시 읽어 봐야합니다.\n\n그런데, neural style transfer가 무엇일까요?\n\n Neural style transfer는 3 가지 이미지, **콘텐츠** 이미지, **스타일 참조** 이미지 (유명한 화가의 작품 등) 및 원하는 **입력 이미지** 를 사용하는 최적화 기술입니다: 입력 이미지가 콘텐츠 이미지처럼 보이도록 변형되지만 스타일 이미지의 스타일처럼 \"색칠\"되도록 서로 섞습니다.\n\n예를 들어, 이 거북이와 Katsushika Hokusai의 이미지 *The Great Wave off Kanagawa*를 봅시다.\n\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg?raw=1\" alt=\"Drawing\" style=\"width: 200px;\"/>\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg?raw=1\" alt=\"Drawing\" style=\"width: 200px;\"/>\n\n[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)\n-By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Common\n\n\nHokusai가 거북이의 그림을 자신의 스타일로 그리기로 결정했다면 어떻게 될까요? 이와 같을까요?\n\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/wave_turtle.png?raw=1\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\nNeural style transfer는 신경 네트워크의 기능과 내부 표현을 보여주는 재미있고 흥미로운 기술입니다.\n \n\nNeural style transfer의 원리는 두 이미지의 내용이 얼마나 다른지, $L_{content}$, 두 이미지의 스타일이 얼마나 다른지, $L_{ style}$를 표현하는 두 거리 함수를 정의하는 것입니다. 
그 다음, 3 개의 이미지, 원하는 스타일 이미지, 원하는 컨텐츠 이미지 및 입력 이미지 (컨텐츠 이미지로 초기화 됨)가 주어지면 입력 이미지를 콘텐츠 이미지와 콘텐츠 거리가 최소화 되도록, 스타일 이미지와 스타일 거리가 최소화 되도록 변환합니다. 요약하면 기본 입력 이미지, 일치시킬 콘텐츠 이미지 및 일치시키고자하는 스타일 이미지를 사용합니다. 우리는 backpropagation으로 컨텐츠 및 스타일 거리 (losses)를 최소화하고 컨텐츠 이미지의 컨텐츠 및 스타일 이미지의 스타일과 일치하는 이미지를 만듭니다.\n\n### 다루게 될 개념들:\n이 튜토리얼에서 우리는 실제 경험을 쌓고 다음 개념을 중심으로 실습할 것입니다.\n\n* **Eager Execution** - Operation을 즉각적으로 평가하는 TensorFlow의 imperative programming 환경 사용 \n * [Learn more about eager execution](https://www.tensorflow.org/programmers_guide/eager)\n * [See it in action](https://www.tensorflow.org/get_started/eager)\n* **모델 정의를 위해 [Functional API](https://keras.io/getting-started/functional-api-guide/) 사용** - Functional API를 사용하여 필요한 중간 activations에 대한 액세스를 제공 할 모델을 만들 것입니다.\n* **Pretrained model의 feature를 활용** - Pretrained된 모델과 그 feature map을 사용하는 방법을 배웁니다.\n* **Custom training loops 구현** - 입력 parameter와 관련된 주어진 손실을 최소화하기 위해 optimizer를 설정하는 방법을 살펴 보겠습니다.\n\n### Style transfer의 일반적인 단계들 :\n\n1. Visualize data\n2. Basic Preprocessing/preparing our data\n3. Set up loss functions \n4. Create model\n5. 
Optimize for loss function", "_____no_output_____" ], [ "## Setup\n\n### Download Images", "_____no_output_____" ] ], [ [ "import os\nimg_dir = '/tmp/nst'\nif not os.path.exists(img_dir):\n os.makedirs(img_dir)\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg", "_____no_output_____" ] ], [ [ "### Import and configure modules", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (10,10)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nfrom PIL import Image\nimport time\nimport functools", "_____no_output_____" ], [ "import tensorflow as tf\nimport tensorflow.contrib.eager as tfe\n\nfrom tensorflow.python.keras.preprocessing import image as kp_image\nfrom tensorflow.python.keras import models \nfrom tensorflow.python.keras import losses\nfrom tensorflow.python.keras import layers\nfrom tensorflow.python.keras import backend as K", "_____no_output_____" ] ], [ [ "우리는 [eager execution](https://www.tensorflow.org/guide/eager)을 가능하게 하는 것으로 시작할 것입니다. 
Eager execution은 우리가 가장 명확하고 가장 판독 가능한 방식으로 작업할 수 있게 해줍니다.", "_____no_output_____" ] ], [ [ "\"\"\"\nStart eager execution\n\"\"\" \n\nprint(\"Eager execution: {}\".format(tf.executing_eagerly()))", "_____no_output_____" ], [ "# Set up some global values here\ncontent_path = '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg'\nstyle_path = '/tmp/nst/The_Great_Wave_off_Kanagawa.jpg'", "_____no_output_____" ] ], [ [ "## Visualize the input", "_____no_output_____" ] ], [ [ "def load_img(path_to_img):\n max_dim = 512\n img = Image.open(path_to_img)\n long = max(img.size)\n scale = max_dim/long\n img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS)\n \n img = kp_image.img_to_array(img)\n \n # We need to broadcast the image array such that it has a batch dimension \n img = np.expand_dims(img, axis=0)\n return img", "_____no_output_____" ], [ "def imshow(img, title=None):\n # Remove the batch dimension\n out = np.squeeze(img, axis=0)\n # Normalize for display \n out = out.astype('uint8')\n plt.imshow(out)\n if title is not None:\n plt.title(title)\n plt.imshow(out)", "_____no_output_____" ] ], [ [ "이들은 콘텐츠 및 스타일 입력 이미지입니다. 우리는 콘텐츠 이미지의 콘텐츠로 이미지를 \"생성\"하지만 스타일 이미지의 스타일을 사용하기를 바랍니다.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,10))\n\ncontent = load_img(content_path).astype('uint8')\nstyle = load_img(style_path).astype('uint8')\n\nplt.subplot(1, 2, 1)\nimshow(content, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(style, 'Style Image')\nplt.show()", "_____no_output_____" ] ], [ [ "## Prepare the data\n이미지를 쉽게 로드하고 사전 처리 할 수있는 메소드를 만들어 봅시다. 우리는 VGG 학습 과정과 동일한 전처리 과정을 수행합니다. VGG 네트워크는 각 채널이 `mean = [103.939, 116.779, 123.68]` 및 채널 BGR로 normalize 된 이미지로 학습됩니다.", "_____no_output_____" ] ], [ [ "def load_and_process_img(path_to_img):\n img = load_img(path_to_img)\n img = tf.keras.applications.vgg19.preprocess_input(img)\n return img", "_____no_output_____" ] ], [ [ "최적화의 결과를 보기 위해서 우리는 역 사전 처리 단계를 수행해야합니다. 
또한 최적화 된 이미지는 $-\infty$에서 $\infty$ 사이의 값을 가질 수 있으므로 0-255 범위에서 값을 유지하려면 clip해야합니다.", "_____no_output_____" ] ], [ [ "def deprocess_img(processed_img):\n  x = processed_img.copy()\n  if len(x.shape) == 4:\n    x = np.squeeze(x, 0)\n  assert len(x.shape) == 3, (\"Input to deprocess image must be an image of \"\n                             \"dimension [1, height, width, channel] or [height, width, channel]\")\n  if len(x.shape) != 3:\n    raise ValueError(\"Invalid input to deprocessing image\")\n  \n  # perform the inverse of the preprocessing step\n  x[:, :, 0] += 103.939\n  x[:, :, 1] += 116.779\n  x[:, :, 2] += 123.68\n  x = x[:, :, ::-1]\n\n  x = np.clip(x, 0, 255).astype('uint8')\n  return x", "_____no_output_____" ] ], [ [ "### Define content and style representations\n이미지의 콘텐츠 표현과 스타일 표현을 모두 얻으려면 우리 모델에서 중간 layer를 살펴볼 것입니다. 이러한 중간 layer는 고차원 feature를 나타냅니다. 우리는 이미지 분류에서 pretrained된 VGG19 네트워크를 사용할 것입니다. 이러한 중간 layer는 이미지의 콘텐츠 및 스타일 표현을 정의하는 데 필요합니다. 입력 이미지를 이러한 중간 layer에서 해당 스타일 및 내용 타겟 표현과 일치 시키도록 할 것입니다.\n\n\n#### Why intermediate layers?\n\n미리 학습된 이미지 분류 네트워크에서 이러한 중간 결과물로 스타일과 컨텐츠 표현을 정의 할 수 있는 이유가 궁금 할 수 있습니다. High level에서 이 현상은 네트워크가 이미지 분류를 수행하기 위해 (네트워크가 수행하도록 훈련 된) 이미지를 이해해야한다는 사실로 설명 할 수 있습니다. 여기에는 raw 이미지를 입력으로 사용하여 raw 이미지 내에있는 복잡한 피쳐를 이해하는 것으로 바뀌는 변환을 통해 내부 표현을 만드는 작업이 포함됩니다. 이것은 컨벌루션 뉴럴 네트워크가 잘 일반화 될 수있는 이유 중 일부입니다. 배경 노이즈 및 기타 배경과 같은 부분에 상관없이 클래스 (예 : 고양이 vs. 개) 내에서 불변하는 특징을 정의 할 수 있습니다. 따라서 원본 이미지가 입력되고 분류 레이블이 출력되는 곳 사이의 어떤 곳에서 모델은 복잡한 피쳐 추출기로 사용됩니다. 
따라서 중간 레이어에 액세스하여 입력 이미지의 내용과 스타일을 설명 할 수 있습니다.\n\n특히 네트워크에서 다음과 같은 중간 계층을 가져옵니다.\n", "_____no_output_____" ], [ "참고: VGG19 architecture\n\n<img src=\"https://www.researchgate.net/profile/Clifford_Yang/publication/325137356/figure/fig2/AS:670371271413777@1536840374533/llustration-of-the-network-architecture-of-VGG-19-model-conv-means-convolution-FC-means.jpg\" alt=\"Drawing\" style=\"width: 200px;\"/>", "_____no_output_____" ] ], [ [ "# Content layer where will pull our feature maps\ncontent_layers = ['block5_conv2'] \n\n# Style layer we are interested in\nstyle_layers = ['block1_conv1',\n 'block2_conv1',\n 'block3_conv1', \n 'block4_conv1', \n 'block5_conv1'\n ]\n\nnum_content_layers = len(content_layers)\nnum_style_layers = len(style_layers)", "_____no_output_____" ] ], [ [ "## Build the Model \n우리는 [VGG19](https://keras.io/applications/#vgg19)를 불러오고 우리의 입력 tensor를 모델에 입력으로 줄 것입니다. 이것은 콘텐츠, 스타일, 그리고 생성하는 이미지의 feature map (내용 및 스타일 표현)을 추출할 수 있도록 해줍니다.\n\n원래의 논문에서와 같이 VGG19 네트워크를 이용할 것입니다. 또한 VGG19는 ResNet, Inception 등과 비교하면 비교적 단순한 모델이므로 feature map이 실제로 스타일 이전에 더 효과적입니다.", "_____no_output_____" ], [ "우리의 스타일 및 콘텐츠 feature map에 해당하는 중간 layer의 출력을 얻을 것이며, Keras [**Functional API**](https://keras.io/getting-started/functional-api-guide/)를 사용하여 모델이 원하는 출력을 하도록 정의합니다.\n\n모델을 정의하는 Functional API를 사용하면 단순히 입력 및 출력을 정의 할 수 있습니다.\n\n`model = Model(inputs, outputs)`\n\n참고: [tf.keras.applications.vgg19.VGG19()](https://keras.io/applications/#vgg19)\n", "_____no_output_____" ] ], [ [ "def get_model():\n \"\"\" Creates our model with access to intermediate layers. \n \n This function will load the VGG19 model and access the intermediate layers. \n These layers will then be used to create a new model that will take input image\n and return the outputs from these intermediate layers from the VGG model. \n \n Returns:\n returns a keras model that takes image inputs and outputs the style and \n content intermediate layers. \n \"\"\"\n # Load our model. 
We load pretrained VGG, trained on imagenet data\n \n \"\"\"\n Load Imagenet pretrained VGG19 network. You don't need to load FC layers\n vgg = \n \"\"\"\n vgg.trainable = False\n # Get output layers corresponding to style and content layers \n style_outputs = [vgg.get_layer(name).output for name in style_layers]\n content_outputs = [vgg.get_layer(name).output for name in content_layers]\n model_outputs = style_outputs + content_outputs\n # Build model \n return models.Model(vgg.input, model_outputs)", "_____no_output_____" ] ], [ [ "위의 코드에서 pretrained된 이미지 분류 네트워크를 로드합니다. 그런 다음 이전에 정의한 관심있는 layer를 불러옵니다. 그런 다음 모델의 입력을 이미지로 설정하고 출력을 스타일 및 콘텐츠 레이어의 출력으로 설정하여 모델을 정의합니다. 즉, 입력 이미지를 가져와 콘텐츠 및 스타일 중간 레이어를 출력하는 모델을 만들었습니다.\n", "_____no_output_____" ], [ "## Define and create our loss functions (content and style distances)", "_____no_output_____" ], [ "### Content Loss", "_____no_output_____" ], [ "우리의 콘텐츠 loss 정의는 실제로 매우 간단합니다. 원하는 콘텐츠 이미지와 기본 입력 이미지를 네트워크에 전달합니다. 이렇게하면 모델에서 출력되는 중간 레이어 출력 (위에 정의 된 레이어에서)이 반환됩니다. 그런 다음 우리는 단순히 그 이미지들의 두 중간 representation 사이의 유클리드 거리를 취합니다.\n\n\n보다 공식적으로, 콘텐츠 손실은 출력 이미지 $x$와 콘텐츠 이미지 $p$에서 콘텐츠까지의 거리를 설명하는 함수입니다. $ C_{nn} $은 미리 훈련 된 deep convolutional neural network라고 합시다. 우리는 [VGG19](https://keras.io/applications/#vgg19)를 사용할 것입니다. $X$를 임의의 이미지라고 하면 $C_{nn}(X)$ 는 네트워크에 X를 넣은 것입니다. $F^l_{ij}(x) \\in C_{nn}(x)$ 와 $P^l_{ij}(p) \\in C_{nn}(p)$ 를 각각 입력으로 $x$ 와 $p$ 를 넣었을때 layer $l$ 에서의 중간 feature representation이라고 합시다. 그리면 우리는 콘텐츠 거리(loss)를 수식적으로 다음과 같이 정의 할 수 있습니다: $$L^l_{content}(p, x) = \\sum_{i, j} (F^l_{ij}(x) - P^l_{ij}(p))^2$$\n\n우리는 일반적인 방식으로 backpropagation을 수행하여 이러한 콘텐츠 loss를 최소화합니다. 따라서 특정 레이어 (content_layer에 정의 됨)에서 원본 콘텐츠 이미지와 같은 응답을 생성 할 때까지 초기 이미지를 변경합니다.\n\n이것은 매우 간단하게 구현 될 수 있습니다. 입력 이미지 $x$, 그리고 우리의 콘텐트 이미지 $p$를 입력으로 받은 네트워크의 레이어 $l$에서 feature map을 입력으로 받아서 컨텐츠 거리를 반환합니다.\n", "_____no_output_____" ], [ "### Computing content loss\n실제로 원하는 각 레이어에서 콘텐츠 loss를 추가 할 것입니다. 
이 방법은 우리가 모델을 통해 입력 이미지를 공급할 때마다 (eager에서는 단순하게 `model(input_image)`입니다!) 모델을 통한 모든 컨텐츠 손실이 적절하게 계산 될 것이고 eager로 실행하기 때문에 모든 gradients가 계산됩니다 .", "_____no_output_____" ] ], [ [ "def get_content_loss(base_content, target):\n return tf.reduce_mean(tf.square(base_content - target))", "_____no_output_____" ] ], [ [ "### Style Loss", "_____no_output_____" ], [ "스타일 loss 계산은 좀 더 복잡하지만 동일한 원칙을 따르며, 이번에는 네트워크에 기본 입력 이미지와 스타일 이미지를 입력으로 줍니다. 그러나 기본 입력 이미지와 스타일 이미지의 중간 출력을 그대로 비교하는 대신 두 출력의 Gram matrix를 비교합니다.\n\n수학적으로, 우리는 기본 입력 이미지 $x$와 스타일 이미지 $a$의 style loss를 두 이미지의 스타일 표현(gram matrix)의 거리로 정의합니다. 우리는 이미지의 스타일 표현을 gram matrix $G^l$로 주어지는 서로 다른 필터 응답의 correlation으로 설명합니다. 여기서 $G^l_{ij}$는 벡터화 된 feature map $i$와 $j$의 내적 (inner product) 입니다. 우리는 특정 이미지의 feature map에서 생성된 $G^l_{ij}$가 feature map $i$와 $j$ 사이의 correlation을 나타낸다는 것을 알 수 있습니다.\n\n기본 입력 이미지의 스타일을 생성하기 위해 콘텐츠 이미지에서 gradient descent를 수행하여 스타일 이미지의 스타일 표현과 일치하는 이미지로 변환합니다.이를 위해 스타일 이미지와 입력 이미지 사이의 mean square 거리를 최소화하도록 만듭니다. 총 스타일 손실에 대한 각 layer의 contribution은 다음과 같습니다: \n\n$$E_l = \\frac{1}{4N_l^2M_l^2} \\sum_{i,j}(G^l_{ij} - A^l_{ij})^2$$\n\n$G^l_{ij}$ 와 $A^l_{ij}$는 각각 layer $l$에서 \n$x$ 와 $a$의 스타일 표현입니다. $N_l$는 각 사이즈가 $M_l = height * width$인 feature map 수를 나타냅니다. 따라서 전체 스타일 loss는\n\n$$L_{style}(a, x) = \\sum_{l \\in L} w_l E_l$$\n\n입니다. 여기서 우리는 각 layer의 loss contribution을 $w_l$로 가중치 주었습니다. 우리의 경우에 각 layer를 동일하게 가중치 주었습니다($w_l =\\frac{1}{|L|}$).", "_____no_output_____" ], [ "### Total loss\n만들고자하는 이미지는 콘텐츠 이미지와 $L_{content}$가 작고 스타일 이미지와 $L_{style}$이 작아지도록 하는 이미지입니다. 따라서 전체 목적 함수(loss)는 다음과 같습니다:\n\n$$L_{total}(p, a, x) = \\alpha L_{content}(p, x)+\\beta L_{style}(a, x)$$\n\n$\\alpha$와 $\\beta$는 각각 콘텐트와 스타일 loss에 곱해지는 weight 값 입니다.", "_____no_output_____" ], [ "### Computing style loss\n이번에도 style loss를 거리 metric으로 구현합니다. 
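위에서 정의한 content, style, total loss는 작은 feature map으로 수치적으로 확인해 볼 수 있습니다. 아래는 NumPy로 작성한 최소한의 스케치입니다 (2×2×3 모양과 난수 값은 설명용으로 임의로 정한 것이며, `gram`은 본문의 $G^l_{ij}$ 정의처럼 reshape 후 내적을 취합니다; $\alpha=10^3$, $\beta=10^{-2}$는 뒤에 나오는 `run_style_transfer`의 기본 weight와 같습니다):

```python
import numpy as np

def gram(feat):
    """Gram matrix of an (h, w, c) feature map: channel correlations."""
    a = feat.reshape(-1, feat.shape[-1])   # (h*w, c)
    return a.T @ a / a.shape[0]            # (c, c), normalized by h*w

F = np.random.rand(2, 2, 3)   # features of the generated image at layer l
P = np.random.rand(2, 2, 3)   # content-image features at the same layer
A = np.random.rand(2, 2, 3)   # style-image features at the same layer

content_loss = np.mean((F - P) ** 2)              # L_content
style_loss   = np.mean((gram(F) - gram(A)) ** 2)  # E_l for one layer
alpha, beta  = 1e3, 1e-2
total = alpha * content_loss + beta * style_loss  # L_total
print(total >= 0)  # True
```

Gram matrix는 항상 대칭이고 두 loss 모두 제곱 거리이므로 total loss는 0 이상입니다.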
\n\nget_style_loss는 $E_l$을 구하는 함수입니다.", "_____no_output_____" ] ], [ [ "def gram_matrix(input_tensor):\n # We make the image channels first \n channels = int(input_tensor.shape[-1])\n a = tf.reshape(input_tensor, [-1, channels])\n n = tf.shape(a)[0]\n gram = tf.matmul(a, a, transpose_a=True)\n return gram / tf.cast(n, tf.float32)\n\ndef get_style_loss(base_style, gram_target):\n \"\"\"Expects two images of dimension h, w, c\"\"\"\n # height, width, num filters of each layer\n # We scale the loss at a given layer by the size of the feature map and the number of filters\n height, width, channels = base_style.get_shape().as_list()\n gram_style = gram_matrix(base_style)\n \n return tf.reduce_mean(tf.square(gram_style - gram_target))# / (4. * (channels ** 2) * (width * height) ** 2)", "_____no_output_____" ] ], [ [ "## Apply style transfer to our images\n", "_____no_output_____" ], [ "### Run Gradient Descent \n우리는 loss를 최소화하기 위해 [Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)* optimizer를 사용합니다. 반복적으로 출력 이미지를 업데이트하여 loss를 최소화합니다. 네트워크와 관련된 weight를 업데이트하지 않고 대신 입력 이미지를 조정하여 loss를 최소화합니다. 이를 위해서는 loss와 gradients를 계산하는 방법을 알아야합니다.", "_____no_output_____" ], [ "우리는 콘텐츠와 스타일 이미지를 불러오고, 네트워크를 통해 feed forward하며, 모델에서 콘텐츠 및 스타일 feature representation을 출력하는 작은 도우미 함수를 정의 할 것입니다.", "_____no_output_____" ] ], [ [ "def get_feature_representations(model, content_path, style_path):\n \"\"\"Helper function to compute our content and style feature representations.\n\n This function will simply load and preprocess both the content and style \n images from their path. Then it will feed them through the network to obtain\n the outputs of the intermediate layers. \n \n Arguments:\n model: The model that we are using.\n content_path: The path to the content image.\n style_path: The path to the style image\n \n Returns:\n returns the style features and the content features. 
\n \"\"\"\n # Load our images in \n content_image = load_and_process_img(content_path)\n style_image = load_and_process_img(style_path)\n \n # batch compute content and style features\n style_outputs = model(style_image)\n content_outputs = model(content_image)\n \n \n # Get the style and content feature representations from our model \n style_features = [style_layer[0] for style_layer in style_outputs[:num_style_layers]]\n content_features = [content_layer[0] for content_layer in content_outputs[num_style_layers:]]\n return style_features, content_features", "_____no_output_____" ] ], [ [ "### Computing the loss and gradients\n여기서는 [**tf.GradientTape**](https://www.tensorflow.org/programmers_guide/eager#computing_gradients)을 사용하여 gradient를 계산합니다. 나중에 gradient를 계산하기위한 operation을 추적하여 자동 미분화를 가능하게 합니다. Forward pass중에 작업을 기록한 다음 backward pass시에 입력 이미지에 대하여 loss 함수의 gradient를 계산할 수 있습니다.\n", "_____no_output_____" ] ], [ [ "def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):\n \"\"\"This function will compute the loss total loss.\n \n Arguments:\n model: The model that will give us access to the intermediate layers\n loss_weights: The weights of each contribution of each loss function. \n (style weight, content weight, and total variation weight)\n init_image: Our initial base image. This image is what we are updating with \n our optimization process. We apply the gradients wrt the loss we are \n calculating to this image.\n gram_style_features: Precomputed gram matrices corresponding to the \n defined style layers of interest.\n content_features: Precomputed outputs from defined content layers of \n interest.\n \n Returns:\n returns the total loss, style loss, content loss, and total variational loss\n \"\"\"\n style_weight, content_weight = loss_weights\n \n # Feed our init image through our model. This will give us the content and \n # style representations at our desired layers. 
Since we're using eager\n # our model is callable just like any other function!\n model_outputs = model(init_image)\n \n style_output_features = model_outputs[:num_style_layers]\n content_output_features = model_outputs[num_style_layers:]\n \n style_score = 0\n content_score = 0\n\n # Accumulate style losses from all layers\n # Here, we equally weight each contribution of each loss layer\n weight_per_style_layer = 1.0 / float(num_style_layers)\n for target_style, comb_style in zip(gram_style_features, style_output_features):\n style_score += weight_per_style_layer * get_style_loss(comb_style[0], target_style)\n \n # Accumulate content losses from all layers \n weight_per_content_layer = 1.0 / float(num_content_layers)\n for target_content, comb_content in zip(content_features, content_output_features):\n content_score += weight_per_content_layer* get_content_loss(comb_content[0], target_content)\n \n style_score *= style_weight\n content_score *= content_weight\n\n # Get total loss\n loss = style_score + content_score \n return loss, style_score, content_score", "_____no_output_____" ] ], [ [ "Gradients를 구하는 것은 쉽습니다:", "_____no_output_____" ] ], [ [ "def compute_grads(cfg):\n with tf.GradientTape() as tape: \n all_loss = compute_loss(**cfg)\n # Compute gradients wrt input image\n total_loss = all_loss[0]\n return tape.gradient(total_loss, cfg['init_image']), all_loss", "_____no_output_____" ] ], [ [ "### Optimization loop\n[Adam optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)", "_____no_output_____" ] ], [ [ "import IPython.display\n\ndef run_style_transfer(content_path, \n style_path,\n num_iterations=1000,\n content_weight=1e3, \n style_weight=1e-2): \n # We don't need to (or want to) train any layers of our model, so we set their\n # trainable to false. 
\n model = get_model() \n for layer in model.layers:\n layer.trainable = False\n \n # Get the style and content feature representations (from our specified intermediate layers) \n style_features, content_features = get_feature_representations(model, content_path, style_path)\n gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]\n \n # Set initial image\n init_image = load_and_process_img(content_path)\n init_image = tfe.Variable(init_image, dtype=tf.float32)\n # Create our optimizer\n opt = tf.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)\n\n # For displaying intermediate images \n iter_count = 1\n \n # Store our best result\n best_loss, best_img = float('inf'), None\n \n # Create a nice config \n loss_weights = (style_weight, content_weight)\n cfg = {\n 'model': model,\n 'loss_weights': loss_weights,\n 'init_image': init_image,\n 'gram_style_features': gram_style_features,\n 'content_features': content_features\n }\n \n # For displaying\n num_rows = 2\n num_cols = 5\n display_interval = num_iterations/(num_rows*num_cols)\n start_time = time.time()\n global_start = time.time()\n \n norm_means = np.array([103.939, 116.779, 123.68])\n min_vals = -norm_means\n max_vals = 255 - norm_means \n \n imgs = []\n for i in range(num_iterations):\n grads, all_loss = compute_grads(cfg)\n loss, style_score, content_score = all_loss\n \"\"\"\n Apply_gradients\n \"\"\"\n clipped = tf.clip_by_value(init_image, min_vals, max_vals)\n init_image.assign(clipped)\n end_time = time.time() \n \n if loss < best_loss:\n # Update best loss and best image from total loss. 
\n best_loss = loss\n best_img = deprocess_img(init_image.numpy())\n\n if i % display_interval== 0:\n start_time = time.time()\n \n # Use the .numpy() method to get the concrete numpy array\n plot_img = init_image.numpy()\n plot_img = deprocess_img(plot_img)\n imgs.append(plot_img)\n IPython.display.clear_output(wait=True)\n IPython.display.display_png(Image.fromarray(plot_img)) # NumPy 배열을 Image 객체로 변환\n print('Iteration: {}'.format(i)) \n print('Total loss: {:.4e}, ' \n 'style loss: {:.4e}, '\n 'content loss: {:.4e}, '\n 'time: {:.4f}s'.format(loss, style_score, content_score, time.time() - start_time))\n print('Total time: {:.4f}s'.format(time.time() - global_start))\n IPython.display.clear_output(wait=True)\n plt.figure(figsize=(14,4))\n for i,img in enumerate(imgs):\n plt.subplot(num_rows,num_cols,i+1)\n plt.imshow(img)\n plt.xticks([])\n plt.yticks([])\n \n return best_img, best_loss ", "_____no_output_____" ], [ "best, best_loss = run_style_transfer(content_path, \n style_path, num_iterations=1000)", "_____no_output_____" ], [ "Image.fromarray(best)", "_____no_output_____" ] ], [ [ "## Visualize outputs\n우리는 출력 이미지에 적용된 processing을 제거하기 위해 출력 이미지를 \"deprocess\"합니다.", "_____no_output_____" ] ], [ [ "def show_results(best_img, content_path, style_path, show_large_final=True):\n plt.figure(figsize=(10, 5))\n content = load_img(content_path) \n style = load_img(style_path)\n\n plt.subplot(1, 2, 1)\n imshow(content, 'Content Image')\n\n plt.subplot(1, 2, 2)\n imshow(style, 'Style Image')\n\n if show_large_final: \n plt.figure(figsize=(10, 10))\n\n plt.imshow(best_img)\n plt.title('Output Image')\n plt.show()", "_____no_output_____" ], [ "show_results(best, content_path, style_path)", "_____no_output_____" ] ], [ [ "## Try it on other images\nImage of Tuebingen \n\nPhoto By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons", "_____no_output_____" ], [ "### Starry night 
+ Tuebingen", "_____no_output_____" ] ], [ [ "best_starry_night, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')", "_____no_output_____" ], [ "show_results(best_starry_night, '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')", "_____no_output_____" ] ], [ [ "### Pillars of Creation + Tuebingen", "_____no_output_____" ] ], [ [ "best_poc_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')", "_____no_output_____" ], [ "show_results(best_poc_tubingen, \n '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')", "_____no_output_____" ] ], [ [ "### Kandinsky Composition 7 + Tuebingen", "_____no_output_____" ] ], [ [ "best_kandinsky_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')", "_____no_output_____" ], [ "show_results(best_kandinsky_tubingen, \n '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')", "_____no_output_____" ] ], [ [ "### Pillars of Creation + Sea Turtle", "_____no_output_____" ] ], [ [ "best_poc_turtle, best_loss = run_style_transfer('/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg', \n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')", "_____no_output_____" ], [ "show_results(best_poc_turtle, \n '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',\n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')", "_____no_output_____" ] ], [ [ "## Key takeaways\n\n### What we covered:\n\n* We built several different loss functions and used backpropagation to transform the input image so that it minimizes these losses\n * To do this we had to load a **pretrained model** and use its learned feature maps to describe the content and style representations of the images.\n * Our main loss function essentially computed distances in terms of these different representations\n* We implemented this with a custom model and **eager execution**.\n* We built our custom model using the Functional API.\n * Eager execution lets us work with tensors dynamically, using natural Python control flow.\n * We manipulated tensors directly, which makes debugging and working with tensors easier.\n* We iteratively updated the image by applying the optimizer update rule, using **tf.gradient**. The optimizer minimized the given loss with respect to the input image.", "_____no_output_____" ], [ "\n**[Image of Tuebingen](https://commons.wikimedia.org/wiki/File:Tuebingen_Neckarfront.jpg)** \nPhoto By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons\n\n**[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)**\nBy P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons\n\n", "_____no_output_____" ], [ "# Report\n\n1. Restyle the Tuebingen photo in the style of van Gogh's Starry Night. content_weight=1e3, style_weight=1e-2\n \n2. Restyle the Tuebingen photo in the style of van Gogh's Starry Night. content_weight=1e3, style_weight=1e-0\n\n3. Restyle the Tuebingen photo in the style of van Gogh's Starry Night. content_weight=1e3, style_weight=1e-4\n\n4. Restyle the Tuebingen photo in the style of van Gogh's Starry Night. content_weight=1e1, style_weight=1e-2\n \n5. Restyle the Tuebingen photo in the style of van Gogh's Starry Night. content_weight=1e5, style_weight=1e-2\n\nQ) What are the roles of $\\alpha$ (content_weight) and $\\beta$ (style_weight)?\n\n\n#### Note) file paths and names\n> Tuebingen: '/tmp/nst/Tuebingen_Neckarfront.jpg'\n\n> starry night: '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg'", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
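The style-transfer notebook above summarizes its loss as a distance between feature-map representations taken from a pretrained network. As an illustrative sketch only (this is not the notebook's own code, and the function names here are hypothetical), a Gram-matrix style loss can be written in plain NumPy:

```python
import numpy as np

def gram_matrix(feature_map):
    # feature_map: (H, W, C) activations from one layer of a pretrained CNN.
    # Flatten the spatial dimensions, then take channel-by-channel inner products.
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(generated_map, style_map):
    # Mean squared distance between the two Gram matrices.
    diff = gram_matrix(generated_map) - gram_matrix(style_map)
    return float(np.mean(diff ** 2))
```

A perfect style match gives a loss of zero; in the notebook this term is weighted by `style_weight` against the content term before backpropagation.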
cbb534a53c0948067c473b85ae5a0389efc82722
7,041
ipynb
Jupyter Notebook
Python_API/VacationPy/__pycache__/Vacation.ipynb
studam/Python_APIs
fed477563c0db28d871a8da8ebc3c416a855a9b2
[ "ADSL" ]
null
null
null
Python_API/VacationPy/__pycache__/Vacation.ipynb
studam/Python_APIs
fed477563c0db28d871a8da8ebc3c416a855a9b2
[ "ADSL" ]
null
null
null
Python_API/VacationPy/__pycache__/Vacation.ipynb
studam/Python_APIs
fed477563c0db28d871a8da8ebc3c416a855a9b2
[ "ADSL" ]
null
null
null
29.3375
506
0.556739
[ [ [ "# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport gmaps\nimport os\nimport json\n\n# Import API key\nfrom api_keys import g_key", "_____no_output_____" ], [ "# Loan CSV file generated from WeatherPy Folder\nweather_data_to_load = \"../WeatherPy/weather_df.csv\"\nweather_data = pd.read_csv(weather_data_to_load)\ndropna_weather_data = weather_data.dropna()\ndel dropna_weather_data[\"Unnamed: 0\"]\ndropna_weather_data.head(20)", "_____no_output_____" ], [ "# Configure gmaps\ngmaps.configure(api_key=g_key)\n\n# Locations\nlocations = dropna_weather_data[[\"Latitude\", \"Longitude\"]]\n\nhumidity = dropna_weather_data[\"Humidity (%)\"].astype(float)", "_____no_output_____" ], [ "# Plot Heatmap\nfig = gmaps.figure()\n\n# Create heat layer\nheat_layer = gmaps.heatmap_layer(locations, weights=humidity, \n dissipating=False, max_intensity=100,\n point_radius=2)\n\n# Add layer\nfig.add_layer(heat_layer)\n\n# Display figure\nfig", "_____no_output_____" ], [ "# Filter vacation with zero cloudiness\nvacation_no_cloud = dropna_weather_data[dropna_weather_data[\"Cloudiness (%)\"] == 0]\n# Filter vacation with max temp above 70 degrees F\nvacation_above_70_degrees = vacation_no_cloud[vacation_no_cloud[\"Max Temp (F)\"] > 70]\n# Filter vacation with max temp below 80 degrees F\nvacation_below_80_degrees = vacation_above_70_degrees[vacation_above_70_degrees[\"Max Temp (F)\"] < 80]\n# Filter vacation with wind speed below 10 mph\nvacation_slow_wind = vacation_below_80_degrees[vacation_below_80_degrees[\"Wind Speed (mph)\"] < 10]\n# Filter vacation with humidity below 60 %\nperfect_vacation = vacation_slow_wind[vacation_slow_wind[\"Humidity (%)\"] < 60]\n# Set Index\nindexed_perfect_vacation = perfect_vacation.reset_index()\ndel indexed_perfect_vacation[\"index\"]\nindexed_perfect_vacation", "_____no_output_____" ], [ "vaca_locations = indexed_perfect_vacation[[\"Latitude\", 
\"Longitude\"]]\n\nvaca_humidity = indexed_perfect_vacation[\"Humidity (%)\"].astype(float)\n\n# Plot Heatmap\nvaca_fig = gmaps.figure()\n\n# Create heat layer\nvaca_heat_layer = gmaps.heatmap_layer(vaca_locations, weights=vaca_humidity, \n dissipating=False, max_intensity=50,\n point_radius=2.5)\n\n# Add layer\nvaca_fig.add_layer(vaca_heat_layer)\n\n# Display figure\nvaca_fig", "_____no_output_____" ], [ "# Hotel variable\nhotels = []\n\n# Loop through narrowed down dataframe to get nearest hotel\nfor city in range(len(indexed_perfect_vacation[\"City\"])):\n\n lat = indexed_perfect_vacation.loc[city][\"Latitude\"]\n lng = indexed_perfect_vacation.loc[city][\"Longitude\"]\n\n city_coords = f\"{lat},{lng}\"\n\n params = {\n \"location\": city_coords, \n \"types\": \"lodging\",\n \"radius\": 5000,\n \"key\": g_key\n }\n\n base_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json\" \n\n hotel_request = requests.get(base_url, params=params)\n hotel_response = hotel_request.json()\n\n try:\n hotels.append(hotel_response[\"results\"][0][\"name\"])\n except:\n hotels.append(\"Nearest hotel not found\")\n\n# Dataframe with nearest hotel\nindexed_perfect_vacation[\"Nearest Hotel\"] = hotels\nindexed_perfect_vacation", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
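The VacationPy cells above narrow the weather DataFrame through five intermediate copies, one per criterion. An equivalent single boolean mask (a sketch over made-up sample rows; only the column names are taken from the notebook) would be:

```python
import pandas as pd

# Hypothetical sample rows; the column names mirror the notebook's weather DataFrame.
weather = pd.DataFrame({
    "City": ["Arica", "Bilma", "Cairo"],
    "Cloudiness (%)": [0, 0, 20],
    "Max Temp (F)": [75.0, 85.0, 72.0],
    "Wind Speed (mph)": [5.0, 3.0, 2.0],
    "Humidity (%)": [40, 30, 50],
})

# Combine all of the vacation criteria into one mask instead of chained copies.
mask = (
    (weather["Cloudiness (%)"] == 0)
    & (weather["Max Temp (F)"] > 70)
    & (weather["Max Temp (F)"] < 80)
    & (weather["Wind Speed (mph)"] < 10)
    & (weather["Humidity (%)"] < 60)
)
perfect_vacation = weather.loc[mask].reset_index(drop=True)
```

This produces the same result as the chained filters while scanning the DataFrame once per condition and allocating only one filtered copy.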
cbb53e5663670903e774c9bcbaa91bb791a0848a
772,384
ipynb
Jupyter Notebook
TP 1/TP 1 - Pierre OREISTEIN.ipynb
PierreOreistein/MVA-RandomMatrix
40f090e757438b7c6108660fa6d77956b93db0e3
[ "MIT" ]
null
null
null
TP 1/TP 1 - Pierre OREISTEIN.ipynb
PierreOreistein/MVA-RandomMatrix
40f090e757438b7c6108660fa6d77956b93db0e3
[ "MIT" ]
null
null
null
TP 1/TP 1 - Pierre OREISTEIN.ipynb
PierreOreistein/MVA-RandomMatrix
40f090e757438b7c6108660fa6d77956b93db0e3
[ "MIT" ]
1
2022-01-23T23:52:30.000Z
2022-01-23T23:52:30.000Z
336.404181
163,472
0.922738
[ [ [ "# 0 - Information", "_____no_output_____" ], [ "# 1 - Packages", "_____no_output_____" ] ], [ [ "# Math packages\nimport numpy as np\nfrom scipy import optimize\nfrom scipy.stats import norm\n\n# Graphix packages\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n\n# Progress bar\nfrom tqdm import tqdm", "_____no_output_____" ] ], [ [ "# 2 - Séparation exacte du spectre", "_____no_output_____" ], [ "## Question 2.1", "_____no_output_____" ] ], [ [ "def generateCovarianceMatrices(lambdas_l, n=1000, nb=250, law=\"normal\"):\n \"\"\"Generate random covariances matrices for the given list of lambdas.\"\"\"\n \n # Extract N\n N = len(lambdas_l)\n\n # Initialisation of the seed\n np.random.seed(42)\n \n # Generate R_N\n R_N_1_2 = np.diag(np.sqrt(lambdas_l))\n \n # Saving array of all the generated covariances matrices\n result = []\n \n # Loop for generating all the covariances matrices\n for i in tqdm(range(nb)):\n \n if law == \"normal\":\n # Samples normale distributions\n U = np.random.normal(0, 1, size=(N, n))\n V = np.random.normal(0, 1, size=(N, n))\n \n # Compute X with a normal distribution\n X_N = (U + 1j * V) / np.sqrt(2)\n \n if law == \"student\":\n U = np.random.standard_t(3, size=(N, n))\n V = np.random.standard_t(3, size=(N, n))\n \n # Compute X with a normal distribution\n X_N = (U + 1j * V) / np.sqrt(6)\n\n # Covariance matrix\n # cov = 1 / N * np.dot(X_N.H, np.dot(R_N, X_N))\n cov = 1 / n * R_N_1_2 @ X_N @ np.conj(X_N).T @ R_N_1_2\n \n # Update result\n result.append(cov)\n \n return result", "_____no_output_____" ], [ "def samplingSpectrum(lambdas_l, n=1000, nb=200, law=\"normal\"):\n \"\"\"Generate covariances matrices for the given lambdas and compute the distribution\n of their spectrum.\"\"\"\n \n # Generate the covariances matrices\n cov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=law)\n \n # Saving array\n eigvals_l = []\n \n # Loop over each covariances matrices for computing their spectrum.\n for cov in 
tqdm(cov_l):\n \n # Compute the spectrum\n eigvals, vectors = np.linalg.eigh(cov)\n \n # Update eigvals_l\n eigvals_l.extend(eigvals)\n \n return eigvals_l", "_____no_output_____" ], [ "def displayDistribution(lambdas_l, eigvals_l, N, n, nb, bins=50, c_i=True,\n name=\"Distribution_Spectrum\"):\n \"\"\"Display the distribution of eigvals given as argument.\"\"\"\n \n # Display the ratio between N and n\n print(\"c: {}\".format(N/n))\n \n # Display each c_i\n if c_i:\n lambdas, counts = np.unique(lambdas_l, return_counts=True)\n\n # Count proportion for each true eigenvalue\n nb_lambdas = len(lambdas)\n diff = np.abs(np.tile(np.array(eigvals_l).reshape((-1, 1)), (1, nb_lambdas)) - lambdas)\n lambdas_associated = lambdas[list(np.argmax(-diff, axis=1).reshape(-1))]\n _, counts_observed = np.unique(lambdas_associated, return_counts=True)\n\n for i, count_i in enumerate(counts):\n\n # Count the number of time lamda is present in lambdas_l\n N_i = count_i / N\n N_i_observed = counts_observed[i] / (N * nb)\n\n # Display c_i\n print(\"c_{} (lambda={}): {}\".format(i, lambdas[i], N_i))\n print(\"Proportion observed: {}\".format(N_i_observed))\n \n # Display max of eigenvalues\n print(\"\\nMax of eigen values: {}\".format(np.max(eigvals_l)))\n \n # Display the distribution\n plt.figure(figsize=(15, 8))\n plt.hist(eigvals_l, bins=bins)\n \n # Save figure\n plt.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "def generateLambdas(N=20, lambdas=[1, 3, 7], p=[0.2, 0.3, 0.5]):\n \"\"\"Generate random lambdas_l.\"\"\"\n \n # Generate the lambdas\n lambdas_l = np.sort(np.random.choice(lambdas, p=p, size=N))\n \n return lambdas_l", "_____no_output_____" ], [ "# Choice of N and n and nb\nN = 20 # Number of samples per covariances matrix\nn = 1000 # Number of features\nnb = 500 # Number of covariance matrices to generate\n\n# List of possible lambdas and their probabilities\nlambdas = [1, 3, 7]\np = np.array([0.2, 0.3, 
0.5])\np /= np.sum(p)\n\n# Generate the lambdas\nlambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n\n# Generate the covariances and compute their spectrum\neigvals_l = samplingSpectrum(lambdas_l, n=n, nb=nb)\n\n# Display the distribution\ndisplayDistribution(lambdas_l, eigvals_l, N, n, nb)", "100%|██████████| 500/500 [00:02<00:00, 190.46it/s]\n100%|██████████| 500/500 [00:00<00:00, 3064.16it/s]\n" ] ], [ [ "Commentaires:\n-------------\nOn choisit la loi normale comme loi ayant un moment d'ordre 4 fini.\nOn observe alors les trois propriétés attendues:\n* Pas de valeur propre qui s'éloigne aux infinis\n* Séparation exacte du support de chaque valeur propre\n* Le nombre de valeurs propres dans chaque composante connexe est proportionnel à $N_i$", "_____no_output_____" ], [ "## Question 2.2", "_____no_output_____" ], [ "Dans cette question, on choisit une loi de moment d'ordre 4 infini (pour les figures ci-dessous, on utilise la loi de Student).", "_____no_output_____" ] ], [ [ "# Choice of N and n\nN = 20\nn = 1000\nnb = 500\n\n# List of possible lambdas and their probabilities\nlambdas = [1, 3, 7]\np = np.array([0.2, 0.3, 0.5])\np /= np.sum(p)\n\n# Generate the lambdas\nlambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n\n# Generate the covariances and compute their spectrum\neigvals_l = samplingSpectrum(lambdas_l, n=n, nb=nb, law=\"student\")\n\n# Display the distribution\ndisplayDistribution(lambdas_l, eigvals_l, N, n, nb, c_i=False,\n name=\"Distribution_Spectrum_Not_separated\")", "100%|██████████| 500/500 [00:05<00:00, 97.83it/s] \n100%|██████████| 500/500 [00:00<00:00, 2848.17it/s]\n" ] ], [ [ "Commentaires:\n------------------\nCette fois, on observe:\n* Certaines valeurs propres tendent vers l'infini, ou du moins sont beaucoup plus grande qu'attendu\n* Il n'y a plus de séparation exacte du spectre", "_____no_output_____" ], [ "# 3 - Graphe de x(t)", "_____no_output_____" ], [ "## Question 3.1", "_____no_output_____" ] ], [ [ "def F(t, x, y, 
c=0.01, lambdas=lambdas, p=p):\n \"\"\"Compute the value of the function of fixed point t(z).\"\"\"\n \n # Initialise result\n result = -(x + 1j * y)\n \n # Loop over all the lambdas\n for k, l in enumerate(lambdas):\n \n result += c * p[k] * (l / (1 + l * t))\n \n return 1 / result", "_____no_output_____" ], [ "def fixedPoint(x, y=10e-5, c=0.01, init=0, lambdas=lambdas, p=p):\n \"\"\"Find the fixed point of F for the given x and y.\"\"\"\n \n # Find the fixed point of F\n func = lambda t : F(t, x, y, c, lambdas=lambdas, p=p)\n t = optimize.fixed_point(func, [init])\n \n return t", "_____no_output_____" ], [ "def graphf(ax, c=0.01, y=10e-5, nb=500, lambdas=lambdas, p=p, name=\"Graph_f\"):\n \"\"\"Display the graph of f.\"\"\"\n \n # Array of real\n x_l = np.linspace(0.5, max(lambdas) + 3, nb)\n \n # Initialisation of graph\n graph = []\n f_x = 0\n \n # Loop over all x\n for x in tqdm(x_l):\n \n f_x = 1 / (np.pi) * np.imag(fixedPoint(x, y=y, c=c, init=f_x, lambdas=lambdas, p=p))\n graph.append(f_x)\n \n # Convert graph as an array\n graph = np.array(graph).reshape(-1)\n\n # Display the graph of f\n ax.plot(x_l, graph)\n ax.set_xlabel(\"x\", fontsize=22)\n ax.set_ylabel(\"f(x)\", fontsize=22)\n ax.set_title(\"f(x) for c={}\".format(c), fontsize=22)\n \ndef graphX(ax, c=0.01, nb=1000, lambdas=lambdas, p=p, name=\"Graph_X\"):\n \"\"\"Display the graph of x.\"\"\"\n \n # Array of t\n t_l = np.linspace(-1.5, 1.5, nb)\n \n # Filtering of t_l such as -1/t is not in lambdas\n t_l = [t for t in t_l if not((-1 / t) in lambdas)]\n \n # Initialisation of graph\n graph = []\n f_x = 0\n \n # Loop over all t\n for t in tqdm(t_l):\n \n # Initialisation of x\n x = -1 / t\n \n for k, l in enumerate(lambdas):\n \n # Update x\n x += c * p[k] * (l / (1 + l * t))\n \n graph.append(x)\n \n # Convert graph as an array\n graph = np.array(graph).reshape(-1)\n\n # Display the graph of f\n ax.plot(t_l, graph)\n ax.set_ylim(-5, 15)\n ax.grid(True)\n ax.axhline(y=0, color='k')\n ax.axvline(x=0, 
color='k')\n ax.grid(True)\n ax.set_xlabel(\"t\", fontsize=22)\n ax.set_ylabel(\"x(t)\", fontsize=22)\n ax.set_title(\"x(t) for c={}\".format(c), fontsize=22)\n \n # Save figure\n if not(name is None):\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "# Display the graph of f\nfig, axs = plt.subplots(1, 1, figsize=(10, 8))\ngraphf(axs)", "100%|██████████| 500/500 [00:00<00:00, 558.38it/s]\n" ] ], [ [ "Commentaires:\n------------------\n\nOn retrouve la loi du spectre observées précédement via la méthode des histogrammes.", "_____no_output_____" ], [ "## Question 3.2", "_____no_output_____" ] ], [ [ "def graphX(ax, c=0.01, nb=1000, lambdas=lambdas, p=p, name=\"Graph_X\"):\n \"\"\"Display the graph of x.\"\"\"\n \n # Array of t\n t_l = np.linspace(-1.5, 1.5, nb)\n \n # Filtering of t_l such as -1/t is not in lambdas\n t_l = [t for t in t_l if not((-1 / t) in lambdas)]\n \n # Initialisation of graph\n graph = []\n f_x = 0\n \n # Loop over all t\n for t in tqdm(t_l):\n \n # Initialisation of x\n x = -1 / t\n \n for k, l in enumerate(lambdas):\n \n # Update x\n x += c * p[k] * (l / (1 + l * t))\n \n graph.append(x)\n \n # Convert graph as an array\n graph = np.array(graph).reshape(-1)\n\n # Display the graph of f\n ax.plot(t_l, graph)\n ax.set_ylim(-5, 15)\n ax.grid(True)\n ax.axhline(y=0, color='k')\n ax.axvline(x=0, color='k')\n ax.grid(True)\n ax.set_xlabel(\"t\", fontsize=22)\n ax.set_ylabel(\"x(t)\", fontsize=22)\n ax.set_title(\"x(t) for c={}\".format(c), fontsize=22)\n \n # Save figure\n if not(name is None):\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "# Display the graph of x\nfig, axs = plt.subplots(1, 1, figsize=(10, 8))\ngraphX(axs)", "100%|██████████| 1000/1000 [00:00<00:00, 230468.93it/s]\n" ] ], [ [ "Commentaires:\n------------------\nOn retrouve bien les asymptotes verticales aux voisinnages des $-1 / 
\\lambda^R_k$", "_____no_output_____" ], [ "## Question 3.3", "_____no_output_____" ], [ "### Study of the influence of c", "_____no_output_____" ] ], [ [ "def studyC(name=\"Study_C\"):\n \"\"\"Study the influence of the parameters.\"\"\"\n \n # Array of c to test\n c_l = [0.01, 0.2, 2]\n \n # Display the graph of x\n fig, axs = plt.subplots(2, 3, figsize=(25, 16))\n \n # Compute the graph of f for every c\n for k, c_k in enumerate(c_l):\n \n # Add graph of f for c[i]\n graphf(axs[0, k], c=c_k, name=None)\n \n # Add graph of f for c[i]\n graphX(axs[1, k], c=c_k, name=None)\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "studyC()", "100%|██████████| 500/500 [00:00<00:00, 530.04it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 231819.16it/s]\n100%|██████████| 500/500 [00:01<00:00, 268.06it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 263577.20it/s]\n100%|██████████| 500/500 [00:01<00:00, 251.98it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 175501.23it/s]\n" ] ], [ [ "Commentaires:\n-------------\n\nCi-dessus sont représentés sur la première ligne le graphe de f et sur la second ligne le graphe de x pour des valeurs de c égales à 0.01, 0.05 et 2.\n\n* Comme on peut l'observer, lorsque que c est faible (cas de c=0.01), les graphes de f et de x correspondent à ce qui étaient attendus.\n\n* Lorsque c augmente (cas de c=0.2), on constate que du \"bruit\" aparaît par rapport aux graphes que l'on pourrait attendre de f et de x et que la distinction entre les clusters est moins nette. De même les parties croissantes du graphe de x sont plus étroite.\n\n* Finalement, quand c est grand (c=2), on constate que le graphe de f ne correspond plus du tout à ce que l'on peut espérer. 
De plus le graphe de x admet toujours les mêmes asymptotes mais la croissance de cette fonction n'est plus cohérent avec ce qui est attendu.", "_____no_output_____" ] ], [ [ "def studyCi(lambdas=lambdas, name=\"Study_C_i\"):\n \"\"\"Study the influence of the parameters.\"\"\"\n \n # Array of c_i to test\n ci_l = [[3, 7, 10],\n [14, 3, 2],\n [1, 10, 9]]\n \n # Display the graph of x\n fig, axs = plt.subplots(2, 3, figsize=(25, 16))\n \n # Compute the graph of f for every c\n for k, ci_k in enumerate(ci_l):\n \n # Computation of p\n p = ci_k / np.sum(ci_k)\n print(p)\n \n # Add graph of f for c[i]\n graphf(axs[0, k], lambdas=lambdas, p=p, name=None)\n \n # Add graph of f for c[i]\n graphX(axs[1, k], lambdas=lambdas, p=p, name=None)\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "studyCi()", "\r 0%| | 0/500 [00:00<?, ?it/s]" ] ], [ [ "Commentaires:\n------------------\n\nCi-dessus sont représentés les graphes de f (première ligne) et de x (deuxième ligne). On étudie ici l'influence du choix des c_i. 
On a pris les mêmes lambdas que précédement (1, 3 et 7) mais on a représenté les graphes de f et x pour différents choix de c_i: ([0.15, 0.35, 0.5], [0.73, 0.16, 0.11] et [0.05, 0.5, 0.45] respectivement)\n\n\\vspace{2em}\n\nInfluence sur le graphe de f:\n* Comme vu précédement, on retrouve bien le fait que les parties connexes liés à une valeur propre sont proportionnel à c_i.\n\n* On constate aussi que lorsque qu'un $\\lambda_i$ est associé à un c_i faible, alors l'intervalle de sa partie connexe associé est peu étendu.\n\n\\vspace{2em}\n\nInfluence sur le graphe de x:\n* Lorsqu'on a un $c_i$ élevé, on constate une meilleur approximation au niveau de l'asymptote associée à $\\lambda_i$.", "_____no_output_____" ] ], [ [ "def studyLambdas(name=\"Study_Lambdas\"):\n \"\"\"Study the influence of the parameters.\"\"\"\n \n # Array of c_i to test\n lambdas_l = [[1, 3, 7],\n [1, 2, 3],\n [1, 10, 20]]\n \n # Display the graph of x\n fig, axs = plt.subplots(2, 3, figsize=(25, 16))\n \n # Compute the graph of f for every c\n for k, lambdas_k in enumerate(lambdas_l):\n \n # Add graph of f for c[i]\n graphf(axs[0, k], lambdas=lambdas_k, name=None)\n \n # Add graph of f for c[i]\n graphX(axs[1, k], lambdas=lambdas_k, name=None)\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "studyLambdas()", "100%|██████████| 500/500 [00:00<00:00, 551.33it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 256485.29it/s]\n100%|██████████| 500/500 [00:00<00:00, 617.47it/s]\n100%|██████████| 999/999 [00:00<00:00, 256228.81it/s]\n100%|██████████| 500/500 [00:01<00:00, 476.79it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 228647.19it/s]\n" ] ], [ [ "Commentaires:\n-----------------\n\nCi-dessus sont représentés les graphes de f (première ligne) et de x (deuxième ligne). On étudie ici l'influence du choix des $\\lambda_i$. 
On a pris les mêmes $c_i$ que précédement ([0.2, 0.3, 05]) mais on a représenté les graphes de f et x pour différents choix de $\\lambda_i$: ([1, 3, 7], [1, 2, 3] et [1, 10, 20] respectivement).\n\n\\vspace{2em}\n\nInfluence sur le graphe de f:\n* On constate que plus un $\\lambda_i$ est grand, plus sa partie connexe associé est étendue.\n\n\\vspace{2em}\n\nInfluence sur le graphe de x:\n* On constate que plus les $\\lambda_i$ sont grands, plus leur asymptote associé ont tendance à se chevaucher vu qu'elles tendent toutes vers l'asymptote $x=0$.\n\n", "_____no_output_____" ], [ "# 4 - Estimation des $\\lambda_i^R$", "_____no_output_____" ], [ "## Question 4.1", "_____no_output_____" ], [ "Grâce aux simulations réalisées pour la partie séparation exacte de spectre, on sait que l'ensemble de paramètres (N=20, n=1000, nb=500, law=\"normal\", lambdas=[1, 3, 7]) est un ensemble qui satisfait les conditions énoncées.", "_____no_output_____" ], [ "## Question 4.2.(a)", "_____no_output_____" ], [ "En utilisant la fonction d'approximation $f(x) = x$ et en utilisant les résultats vu en cours ainsi que les résultats des exercices 3 et 4, on obtient les résultats ci-dessous:\n\\begin{equation*}\n \\lambda_k(R_N) \\approx \\frac{n}{N_k} \\sum_{i \\in \\mathcal{C}_k} (\\lambda_i - \\eta_i)\n\\end{equation*}\n\nAvec:\n\\begin{itemize}\n \\item $\\mathcal{C}_k$ un contour entourant $\\lambda_k$.\n \\item $\\eta_i$ sont les valeurs propres de $\\Lambda - aa^*$ où $\\Lambda = diag(\\lambda_1, \\cdots, \\lambda_n)$ et $a = \\frac{1}{n}(\\sqrt{\\lambda_1}, \\cdots, \\sqrt{\\lambda_n})^T$.\n\\end{itemize}", "_____no_output_____" ], [ "Grâce aux résultats de l'exercice 3 de la feuille d'exercice \"Inférence\", on peut estimer facilement les eta_i.", "_____no_output_____" ] ], [ [ "def computeEta(cov, n=1000):\n \"\"\"Compute the eta associated to cov.\"\"\"\n \n # Compute the eigen values of cov.\n eigvals = np.real(np.linalg.eigvals(cov))\n\n # Build \\Lambda\n L = np.diag(eigvals)\n\n 
# Build a\n a = 1 / np.sqrt(n) * np.sqrt(eigvals).reshape((-1, 1))\n\n # Build \\Lambda - aa*\n R = L - a @ a.T\n\n # Compute the eta\n etas = np.linalg.eigvals(R)\n \n return etas", "_____no_output_____" ], [ "def estimationLambda(cov_l, lambdas_l, C=[[0.1, 1.9], [2, 4], [5, 9]], n=1000,\n lambdas_init=[1, 3, 7]):\n \"\"\"Estimate \\Lambda_k. Take for argument a contour of Lambda_k and samples of cov.\"\"\"\n \n # Number of samples\n nb = len(cov_l)\n \n # Shape of cov\n N, _ = np.shape(cov_l[0])\n \n # Initialisation of the estimation of lambda_k\n estimators_l = [[] for i in lambdas_init]\n \n # Loop over all the matrices inside cov_l\n for cov in tqdm(cov_l):\n \n # Compute the eta associated\n etas = computeEta(cov, n=n)\n \n # Compute the lambdas associated\n lambdas = np.real(np.linalg.eigvals(cov))\n \n # Loop for each lambdas to approximate\n for k, lambda_k_true in enumerate(lambdas_init):\n \n # Count N_k\n N_k = list(lambdas_l).count(lambda_k_true)\n \n # Filter all the value inside etas and lambdas that are in C\n etas_C = list(filter(lambda x: (C[k][0] <= x) & (x <= C[k][1]), etas))\n lambdas_C = list(filter(lambda x: (C[k][0] <= x) & (x <= C[k][1]), lambdas))\n\n # Update the estimation of lambda_k\n estimators_l[k].append(n / N_k * (np.sum(lambdas_C) - np.sum(etas_C)))\n \n return np.array(estimators_l)", "_____no_output_____" ], [ "# Choice of N and n\nN = 20\nn = 1000\nnb = 100\n\n# List of possible lambdas and their probabilities\nlambdas = [1, 3, 7]\np = np.array([0.2, 0.3, 0.5])\np /= np.sum(p)\n\n# Generate the lambdas\nlambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n\n# Generate the covariances matrices\ncov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n\n# Estimation of \\lambda_k\nC = [[0.1, 1.9], [2, 4], [5, 9]]\nestimators_l = estimationLambda(cov_l, lambdas_l, lambdas_init=lambdas, C=C, n=n)\n\n# Display the estimation\nprint(estimators_l.mean(axis=1))", "100%|██████████| 100/100 [00:00<00:00, 
180.94it/s]\n100%|██████████| 100/100 [00:00<00:00, 713.77it/s]" ] ], [ [ "Commentaires:\n------------------\n\nPour chacun des lambdas réel, on trouve une bonne estimation avec une erreur inférieur à 1%", "_____no_output_____" ], [ "## Question 4.2.(b)", "_____no_output_____" ] ], [ [ "def displayDistributionLambdasK(N=60, n=3000, nb=5000, p=[0.2, 0.3, 0.5], lambdas=[1, 3, 7],\n C=[[0.1, 1.9], [2, 4], [5, 9]], bins=25,\n name=\"Distrib_Lambda_K\"):\n \"\"\"Display the distribution of the estimator of the lambdas given as argument.\"\"\"\n \n # Generate the lambdas\n lambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n print(np.unique(lambdas_l, return_counts=True))\n # Generate the covariances matrices\n cov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n \n # Estimation of \\lambda_k\n estimators_l = estimationLambda(cov_l, lambdas_l, lambdas_init=lambdas, C=C, n=n)\n \n # Initialisation of the figure\n fig, axs = plt.subplots(1, 3, figsize=(21, 8))\n \n # Computation of the MSE for each lambda_k\n for k, lambda_k in enumerate(lambdas):\n\n # Display the mean and the std of the estimators of lambdas_k\n mean = np.mean(estimators_l[k])\n std = np.std(estimators_l[k])\n print(\"Estimators of \\lambda_k, Mean: {}, Std: {}\".format(mean, std))\n \n # Compute the histogram of estimator_l[k]\n values_true, edges_true = np.histogram(estimators_l[k], bins=bins)\n values_true = values_true / np.sum(values_true)\n edges_true = (edges_true[1:] + edges_true[:-1]) / 2\n width_true = 0.8 * (edges_true[1]- edges_true[0])\n \n # Values for the gaussian\n samples = np.random.normal(mean, std, 100000)\n values_g, edges_g = np.histogram(samples, bins=bins)\n values_g = values_g / np.sum(values_g)\n edges_g = (edges_g[1:] + edges_g[:-1]) / 2\n width_g = 0.8 * (edges_g[1]- edges_g[0])\n \n # Display the distribution\n axs[k].bar(edges_true, values_true, width=width_true,\n label=\"Histogram of \\hat\\lambda_k\", color=\"b\")\n axs[k].plot(edges_g, 
values_g, label=\"Approximated Gaussian\", color=\"r\", alpha=0.5,\n linewidth=5)\n axs[k].legend()\n axs[k].grid(True)\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "# Display distribution of the lambdas_k\ndisplayDistributionLambdasK()", " 0%| | 3/5000 [00:00<02:59, 27.80it/s]" ] ], [ [ "Commentaires:\n------------------\n\nD'après les histogrammes précédents, il semblerait que des estimateurs suivant des lois gaussiennes de moyennes $\\lambda_k$ et de variances dépendant des $\\lambda_k$ soient de bons estimateurs. Ici, on a pris pour variance, la variance empirique des $\\lambda_k$.", "_____no_output_____" ], [ "## Question 4.2.(c)", "_____no_output_____" ] ], [ [ "def displayMSEN(nb_N=10, nb=100, p=[0.2, 0.3, 0.5], lambdas=[1, 3, 7],\n C=[[0.1, 1.9], [2, 4], [5, 9]], name=\"MSE_N\"):\n \"\"\"Display the MSE.\"\"\"\n \n # Saving array of mse errors\n mse_l = [[] for i in range(len(lambdas))]\n \n # Sequences of N\n N_l = np.linspace(5, 80, nb_N, dtype=np.int)\n n_l = np.array([N * 50 for N in N_l])\n \n # Loop over each N in N_L\n for i, N in enumerate(N_l):\n \n # Extract n\n n = n_l[i]\n\n # Generate the lambdas\n lambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n \n # Generate the covariances matrices\n cov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n\n # Estimation of \\lambda_k\n estimators_l = estimationLambda(cov_l, lambdas_l, lambdas_init=lambdas, C=C, n=n)\n\n # Loop over each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # Compute MSE\n mse = np.mean((estimators_l[k] - lambda_k) ** 2)\n mse_l[k].append(mse)\n \n # Initialisation of the figure\n fig, axs = plt.subplots(1, 1, figsize=(10, 8))\n axs.grid(True)\n axs.set_ylabel(\"MSE\", fontsize=22)\n axs.set_xlabel(\"N\", fontsize=22)\n axs.set_title(\"MSE over N\", fontsize=22)\n \n # Display the mse for each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # 
Display the mse over N for the current lambda_k\n axs.plot(N_l, mse_l[k], label=\"\\lambda_k = {}\".format(lambda_k),\n marker=\"x\", linestyle=\"--\")\n \n # Activation of the legend\n axs.legend()\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "# Display MSE over N\ndisplayMSEN()", "100%|██████████| 100/100 [00:00<00:00, 4605.58it/s]\n100%|██████████| 100/100 [00:00<00:00, 4206.88it/s]\n100%|██████████| 100/100 [00:00<00:00, 704.25it/s]\n100%|██████████| 100/100 [00:00<00:00, 1275.92it/s]\n100%|██████████| 100/100 [00:00<00:00, 228.85it/s]\n100%|██████████| 100/100 [00:00<00:00, 705.29it/s]\n100%|██████████| 100/100 [00:00<00:00, 110.09it/s]\n100%|██████████| 100/100 [00:00<00:00, 477.10it/s]\n100%|██████████| 100/100 [00:01<00:00, 66.82it/s]\n100%|██████████| 100/100 [00:00<00:00, 358.73it/s]\n100%|██████████| 100/100 [00:02<00:00, 47.22it/s]\n100%|██████████| 100/100 [00:00<00:00, 265.91it/s]\n100%|██████████| 100/100 [00:03<00:00, 32.15it/s]\n100%|██████████| 100/100 [00:00<00:00, 183.51it/s]\n100%|██████████| 100/100 [00:04<00:00, 24.82it/s]\n100%|██████████| 100/100 [00:00<00:00, 142.56it/s]\n100%|██████████| 100/100 [00:05<00:00, 19.19it/s]\n100%|██████████| 100/100 [00:01<00:00, 55.82it/s]\n100%|██████████| 100/100 [00:06<00:00, 13.98it/s]\n100%|██████████| 100/100 [00:02<00:00, 34.93it/s]\n" ] ], [ [ "Commentaires:\n------------------\n\nGrâce à ce graphique, on en déduit que les estimateurs $\\hat\\lambda_k^N$ ont une variance qui décroît avec N et qui dépend aussi de $\\lambda_k$ de manière croissante.", "_____no_output_____" ], [ "## Question 4.2.(d)", "_____no_output_____" ], [ "Pour notre approximation naive, on va simplement prendre la moyenne des lambdas empiriques dans chacune des parties connexes.", "_____no_output_____" ] ], [ [ "def naiveEstimation(N, n, cov_l, lambdas=[1, 3, 7], C=[[0.1, 1.9], [2, 4], [5, 9]]):\n \"\"\"Naive estimation of the 
lambdas.\"\"\"\n \n # Number of samples of cov\n nb = len(cov_l)\n \n # Resulting array of estimators\n estimators_l = [0 for i in range(len(lambdas))]\n \n # Loop over the cov\n for cov in cov_l:\n \n # Compute the eigen values of cov\n eigvals = np.real(np.linalg.eigvals(cov))\n \n # Loop over each cluster for updating the estimation of lambda_k\n for k in range(len(lambdas)):\n \n # Filter all the value inside etas and lambdas that are in C\n lambdas_C = list(filter(lambda x: (C[k][0] <= x) & (x <= C[k][1]), eigvals))\n \n # Update estimators\n estimators_l[k] += np.mean(lambdas_C) / nb\n \n return estimators_l", "_____no_output_____" ], [ "def displayMSENaive(nb_N=10, nb=20, p=[0.2, 0.3, 0.5], lambdas=[1, 3, 7],\n C=[[0.1, 1.9], [2, 4], [5, 9]], name=\"naive_MSE\"):\n \"\"\"Display the MSE.\"\"\"\n \n # Saving array of mse errors\n mse_l = [[] for i in range(len(lambdas))]\n \n # Sequences of N\n N_l = np.linspace(5, 1000, nb_N, dtype=np.int)\n n_l = np.array([N * 50 for N in N_l])\n \n # Loop over each N in N_L\n for i, N in enumerate(N_l):\n \n # Extract n\n n = n_l[i]\n\n # Generate the lambdas\n lambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n \n # Generate the covariances matrices\n cov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n\n # Estimation of \\lambda_k\n estimators_l = naiveEstimation(N, n, cov_l, lambdas=lambdas, C=C)\n\n # Loop over each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # Compute MSE\n mse = np.mean((estimators_l[k] - lambda_k) ** 2)\n mse_l[k].append(mse)\n \n # Initialisation of the figure\n fig, axs = plt.subplots(1, 1, figsize=(10, 8))\n axs.grid(True)\n axs.set_ylabel(\"MSE\", fontsize=22)\n axs.set_xlabel(\"N\", fontsize=22)\n axs.set_title(\"MSE over N\", fontsize=22)\n \n # Display the mse for each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # Display the mse over N for the current lambda_k\n axs.plot(N_l, mse_l[k], label=\"\\lambda_k = {}\".format(lambda_k),\n marker=\"x\", 
linestyle=\"--\")\n \n # Activate the legend\n axs.legend()\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "displayMSENaive()", "100%|██████████| 20/20 [00:00<00:00, 3125.06it/s]\n/home/pierre/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py:3118: RuntimeWarning: Mean of empty slice.\n out=out, **kwargs)\n/home/pierre/anaconda3/lib/python3.6/site-packages/numpy/core/_methods.py:85: RuntimeWarning: invalid value encountered in double_scalars\n ret = ret.dtype.type(ret / rcount)\n100%|██████████| 20/20 [00:02<00:00, 6.89it/s]\n100%|██████████| 20/20 [00:09<00:00, 2.09it/s]\n100%|██████████| 20/20 [00:22<00:00, 1.17s/it]\n100%|██████████| 20/20 [00:45<00:00, 2.31s/it]\n100%|██████████| 20/20 [01:48<00:00, 6.70s/it]\n100%|██████████| 20/20 [02:09<00:00, 6.10s/it]\n100%|██████████| 20/20 [03:02<00:00, 9.39s/it]\n100%|██████████| 20/20 [04:27<00:00, 12.24s/it]\n100%|██████████| 20/20 [05:52<00:00, 17.94s/it]\n" ] ], [ [ "Comments:\n------------------\n\nWe observe that the error no longer decreases with N for our naive estimator, so it is biased and not consistent. Likewise, this error grows with the value of $\\lambda_k$. Nevertheless, the error remains very small ($10^{-29}$), so the naive estimator can still be useful, the consistent estimator being complicated to compute. 
", "_____no_output_____" ], [ "## Question 4.3", "_____no_output_____" ], [ "### Critical parametrization", "_____no_output_____" ] ], [ [ "# Choice of N and n and nb\nN = 200 # Number of samples per covariance matrix\nn = 1000 # Number of features\nnb = 500 # Number of covariance matrices to generate\n\n# List of possible lambdas and their probabilities\nlambdas = [1, 3]\np = np.array([0.4, 0.6])\np /= np.sum(p)\n\n# Generate the lambdas\nlambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n\n# Generate the covariances and compute their spectrum\neigvals_l = samplingSpectrum(lambdas_l, n=n, nb=nb)\n\n# Display the distribution\ndisplayDistribution(lambdas_l, eigvals_l, N, n, nb, name=\"Distrib_Spectrum_Critic\")", "100%|██████████| 500/500 [00:26<00:00, 18.53it/s]\n100%|██████████| 500/500 [00:15<00:00, 41.47it/s]\n" ] ], [ [ "Comments:\n------------------\n\nAfter several experiments, we find that $c_0 = 0.2$ is a critical parametrization.", "_____no_output_____" ], [ "## Question 4.4.(a)", "_____no_output_____" ] ], [ [ "def displayMSEn(N=20, nb_n=15, nb=20, p=[0.4, 0.6], lambdas=[1, 3], C=[[0.1, 1.9], [2, 4]],\n name=\"MSE_n\", estimator_func=estimationLambda):\n \"\"\"Display the MSE over n.\"\"\"\n \n # Saving array of mse errors\n mse_l = [[] for i in range(len(lambdas))]\n \n # Sequences of N\n n_l = np.linspace(10, 1000, nb_n, dtype=int)\n \n # Loop over each N in N_L\n for i, N in enumerate(n_l):\n \n # Extract n\n n = n_l[i]\n\n # Generate the lambdas\n lambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n \n # Generate the covariance matrices\n cov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n\n # Estimation of \\lambda_k\n estimators_l = estimator_func(cov_l, lambdas_l, lambdas_init=lambdas, C=C, n=n)\n\n # Loop over each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # Compute MSE\n mse = np.mean((estimators_l[k] - lambda_k) ** 2)\n mse_l[k].append(mse)\n \n # Initialisation of the 
figure\n fig, axs = plt.subplots(1, 1, figsize=(10, 8))\n axs.grid(True)\n axs.set_ylabel(\"MSE\", fontsize=22)\n axs.set_xlabel(\"n\", fontsize=22)\n axs.set_title(\"MSE over n\", fontsize=22)\n \n # Display the mse for each lambdas\n for k, lambda_k in enumerate(lambdas):\n \n # Display the mse over N for the current lambda_k\n axs.plot(n_l, mse_l[k], label=\"\\lambda_k = {}\".format(lambda_k),\n marker=\"x\", linestyle=\"--\")\n \n # Activation of the legend\n axs.legend()\n \n # Save figure\n fig.savefig(\"./Results/\" + name + \".png\", dpi=150, bbox_inches='tight', pad_inches=0)", "_____no_output_____" ], [ "displayMSEn()", "100%|██████████| 20/20 [00:00<00:00, 9463.68it/s]\n100%|██████████| 20/20 [00:00<00:00, 2348.83it/s]\n100%|██████████| 20/20 [00:00<00:00, 707.14it/s]\n100%|██████████| 20/20 [00:00<00:00, 34.85it/s]\n100%|██████████| 20/20 [00:00<00:00, 124.57it/s]\n100%|██████████| 20/20 [00:02<00:00, 7.54it/s]\n100%|██████████| 20/20 [00:00<00:00, 61.02it/s]\n100%|██████████| 20/20 [00:05<00:00, 3.76it/s]\n100%|██████████| 20/20 [00:00<00:00, 33.50it/s]\n100%|██████████| 20/20 [00:11<00:00, 1.76it/s]\n100%|██████████| 20/20 [00:01<00:00, 17.59it/s]\n100%|██████████| 20/20 [00:19<00:00, 1.13it/s]\n100%|██████████| 20/20 [00:01<00:00, 13.14it/s]\n100%|██████████| 20/20 [00:27<00:00, 1.38s/it]\n100%|██████████| 20/20 [00:02<00:00, 9.64it/s]\n100%|██████████| 20/20 [00:41<00:00, 1.98s/it]\n100%|██████████| 20/20 [00:02<00:00, 7.68it/s]\n100%|██████████| 20/20 [01:01<00:00, 3.15s/it]\n100%|██████████| 20/20 [00:03<00:00, 5.08it/s]\n100%|██████████| 20/20 [01:20<00:00, 4.21s/it]\n100%|██████████| 20/20 [00:05<00:00, 3.78it/s]\n100%|██████████| 20/20 [01:40<00:00, 5.24s/it]\n100%|██████████| 20/20 [00:07<00:00, 3.04it/s]\n100%|██████████| 20/20 [01:49<00:00, 5.59s/it]\n100%|██████████| 20/20 [00:07<00:00, 2.56it/s]\n100%|██████████| 20/20 [02:08<00:00, 6.29s/it]\n100%|██████████| 20/20 [00:08<00:00, 2.33it/s]\n100%|██████████| 20/20 [02:28<00:00, 
8.06s/it]\n100%|██████████| 20/20 [00:10<00:00, 2.02it/s]\n100%|██████████| 20/20 [02:40<00:00, 8.00s/it]\n" ] ], [ [ "Comments:\n------------------\n\n", "_____no_output_____" ], [ "## Question 4.4.(b)", "_____no_output_____" ] ], [ [ "def computeGn(x, lambdas_l, n):\n \"\"\"Compute the value of g_n at x.\"\"\"\n \n # Initialisation of g_n\n g_n = 0\n \n # N\n N = len(lambdas_l)\n \n # Loop over the lambda_i\n for lambda_i in lambdas_l:\n \n # Update g_n\n g_n += 1 / n * (1 / (lambda_i - x) ** 2)\n \n # Add zero lambdas\n for i in range(N - n):\n \n # Update g_n\n g_n += 1 / n * (1 / x ** 2)\n \n return g_n", "_____no_output_____" ], [ "np.sqrt(-1)", "/home/pierre/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in sqrt\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "def resolveSystem(e_1, e_2, lambdas_l, lambdas_init, plus=True):\n \"\"\"Resolution of the system for estimating \\lambda_1 and \\lambda_2.\"\"\"\n \n # Estimations of N_1 and N_2\n N_1 = list(lambdas_l).count(lambdas_init[0])\n N_2 = list(lambdas_l).count(lambdas_init[1])\n \n # Compute the sqrt\n sub_sqrt = max(0, 1 / (N_1 * (N_1 + N_2) ** 2) * ((-N_2) * e_1 ** 2 +\n N_2 * (N_1 + N_2) * e_2))\n sqrt = np.sqrt(sub_sqrt)\n \n # Computation of \\lambda_1\n if plus:\n lambda_1 = e_1 / (N_1 + N_2) + sqrt\n else:\n lambda_1 = e_1 / (N_1 + N_2) - sqrt\n \n # Computation of \\lambda_2\n lambda_2 = e_1 / N_2 - N_1 / N_2 * lambda_1\n \n return lambda_1, lambda_2", "_____no_output_____" ], [ "def estimationLambdaNotSeparated(cov_l, lambdas_l, C=[0.1, 10], n=1000,\n lambdas_init=[1, 3], plus=False):\n \"\"\"Estimate \\Lambda_k in the case where the two lambdas are not separated.\"\"\"\n \n # Number of samples\n nb = len(cov_l)\n \n # Shape of cov\n N, _ = np.shape(cov_l[0])\n \n # Initialisation of lambdas\n estimators_l = [[] for i in lambdas_init]\n \n # Loop over all the matrices inside cov_l\n for cov in tqdm(cov_l):\n \n # Compute 
the eta associated\n etas = computeEta(cov, n=n)\n \n # Compute the lambdas associated\n lambdas = np.real(np.linalg.eigvals(cov))\n \n # Filter all the value inside etas and lambdas that are in C\n etas_C = list(filter(lambda x: (C[0] <= x) & (x <= C[1]), etas))\n lambdas_C = list(filter(lambda x: (C[0] <= x) & (x <= C[1]), lambdas))\n\n # g_n(\\eta_c)\n g_n_etas_C = np.array([computeGn(eta_i, lambdas, n) for eta_i in etas_C])\n\n # Update the estimation of lambda_k\n e_1 = n * (np.sum(lambdas_C) - np.sum(etas_C))\n e_2 = n * np.sum( 1 / np.array(g_n_etas_C))\n \n # Resolution of the system\n lambda_1, lambda_2 = resolveSystem(e_1, e_2, lambdas_l, lambdas_init, plus=plus)\n \n # Update estimators\n estimators_l[0].append(lambda_1)\n estimators_l[1].append(lambda_2)\n \n return np.array(estimators_l)", "_____no_output_____" ], [ "# Choice of N and n\nN = 20\nn = 100\nnb = 100\n\n# List of possible lambdas and their probabilities\nlambdas = [1, 3]\np = np.array([0.4, 0.6])\np /= np.sum(p)\n\n# Generate the lambdas\nlambdas_l = generateLambdas(N=N, lambdas=lambdas, p=p)\n\n# Generate the covariances matrices\ncov_l = generateCovarianceMatrices(lambdas_l, n=n, nb=nb, law=\"normal\")\n\n# Estimation of \\lambda_k\nestimators_l = estimationLambdaNotSeparated(cov_l, lambdas_l, lambdas_init=lambdas, n=n,\n plus=False)\n\n# Display the estimation\nprint(estimators_l.mean(axis=1))", "100%|██████████| 100/100 [00:00<00:00, 2793.17it/s]\n100%|██████████| 100/100 [00:00<00:00, 913.66it/s]" ] ], [ [ "## Question 4.4.(c)", "_____no_output_____" ] ], [ [ "displayMSEn(nb=20, C=[0.1, 10], estimator_func=estimationLambdaNotSeparated,\n name=\"MSE_n_Not_Separated\")", "100%|██████████| 20/20 [00:00<00:00, 8344.38it/s]\n100%|██████████| 20/20 [00:00<00:00, 2357.41it/s]\n100%|██████████| 20/20 [00:00<00:00, 973.10it/s]\n100%|██████████| 20/20 [00:00<00:00, 25.92it/s]\n100%|██████████| 20/20 [00:00<00:00, 98.42it/s] \n100%|██████████| 20/20 [00:04<00:00, 5.14it/s]\n100%|██████████| 
20/20 [00:00<00:00, 62.74it/s]\n100%|██████████| 20/20 [00:07<00:00, 2.53it/s]\n100%|██████████| 20/20 [00:00<00:00, 36.93it/s]\n100%|██████████| 20/20 [00:13<00:00, 1.52it/s]\n100%|██████████| 20/20 [00:00<00:00, 23.52it/s]\n100%|██████████| 20/20 [00:19<00:00, 1.06it/s]\n100%|██████████| 20/20 [00:01<00:00, 14.66it/s]\n100%|██████████| 20/20 [00:29<00:00, 1.47s/it]\n100%|██████████| 20/20 [00:02<00:00, 9.50it/s]\n100%|██████████| 20/20 [00:51<00:00, 2.46s/it]\n100%|██████████| 20/20 [00:02<00:00, 7.07it/s]\n100%|██████████| 20/20 [01:05<00:00, 3.30s/it]\n100%|██████████| 20/20 [00:03<00:00, 5.44it/s]\n100%|██████████| 20/20 [01:18<00:00, 3.92s/it]\n100%|██████████| 20/20 [00:04<00:00, 3.99it/s]\n100%|██████████| 20/20 [01:38<00:00, 5.02s/it]\n100%|██████████| 20/20 [00:05<00:00, 3.25it/s]\n100%|██████████| 20/20 [01:52<00:00, 5.67s/it]\n100%|██████████| 20/20 [00:07<00:00, 2.58it/s]\n100%|██████████| 20/20 [02:27<00:00, 7.53s/it]\n100%|██████████| 20/20 [00:09<00:00, 2.00it/s]\n100%|██████████| 20/20 [02:38<00:00, 7.75s/it]\n100%|██████████| 20/20 [00:11<00:00, 1.77it/s]\n100%|██████████| 20/20 [03:04<00:00, 9.03s/it]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
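The notebook recorded above estimates the population eigenvalues $\lambda_k$ by grouping the eigenvalues of sample covariance matrices into intervals $C_k$. A minimal, self-contained sketch of the naive interval-averaging idea it evaluates (numpy only; the interval bounds, noise scale, and sample counts here are illustrative, not taken from the notebook):

```python
import numpy as np

def naive_lambda_estimates(eigvals, intervals):
    """Average the sample eigenvalues falling inside each interval C_k."""
    return [float(np.mean([x for x in eigvals if lo <= x <= hi]))
            for lo, hi in intervals]

rng = np.random.default_rng(0)
# Sample eigenvalues scattered around the population values 1, 3 and 7.
eigvals = np.concatenate([1 + 0.1 * rng.standard_normal(40),
                          3 + 0.1 * rng.standard_normal(60),
                          7 + 0.1 * rng.standard_normal(100)])
est = naive_lambda_estimates(eigvals, [(0.1, 1.9), (2, 4), (5, 9)])
```

As the notebook's MSE experiments suggest, this kind of estimator is simple but biased in the large-dimensional regime, which is why the notebook contrasts it with the $g_n$-based estimator.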
cbb55fe2ef53e681369fd60f091d6d5b67d6d068
145,467
ipynb
Jupyter Notebook
demos/embeddings/keras-node2vec-embeddings.ipynb
vishalbelsare/stellargraph
3c2c8c18ab4c5c16660f350d8e23d7dc39e738de
[ "Apache-2.0" ]
2,428
2018-09-14T01:47:25.000Z
2022-03-31T06:43:35.000Z
demos/embeddings/keras-node2vec-embeddings.ipynb
vishalbelsare/stellargraph
3c2c8c18ab4c5c16660f350d8e23d7dc39e738de
[ "Apache-2.0" ]
1,749
2018-09-14T01:31:19.000Z
2022-03-17T08:40:16.000Z
demos/embeddings/keras-node2vec-embeddings.ipynb
vishalbelsare/stellargraph
3c2c8c18ab4c5c16660f350d8e23d7dc39e738de
[ "Apache-2.0" ]
390
2018-09-18T07:05:44.000Z
2022-03-24T00:40:23.000Z
235.383495
125,272
0.918119
[ [ [ "# Node2Vec representation learning with Stellargraph components", "_____no_output_____" ], [ "<table><tr><td>Run the latest release of this notebook:</td><td><a href=\"https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/embeddings/keras-node2vec-embeddings.ipynb\" alt=\"Open In Binder\" target=\"_parent\"><img src=\"https://mybinder.org/badge_logo.svg\"/></a></td><td><a href=\"https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/embeddings/keras-node2vec-embeddings.ipynb\" alt=\"Open In Colab\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a></td></tr></table>", "_____no_output_____" ], [ "This example demonstrates how to apply components from the stellargraph library to perform representation learning via Node2Vec. This uses a Keras implementation of Node2Vec available in stellargraph instead of the reference implementation provided by ``gensim``. This implementation provides flexible interfaces to downstream tasks for end-to-end learning.\n\n<a name=\"refs\"></a>\n**References**\n\n[1] Node2Vec: Scalable Feature Learning for Networks. A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016. ([link](https://snap.stanford.edu/node2vec/))\n\n[2] Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))\n\n[3] word2vec Parameter Learning Explained. X. Rong. arXiv preprint arXiv:1411.2738. 2014 Nov 11. 
([link](https://arxiv.org/pdf/1411.2738.pdf))", "_____no_output_____" ], [ "## Introduction\nFollowing word2vec [2,3], for each (``target``,``context``) node pair $(v_i,v_j)$ collected from random walks, we learn the representation for the target node $v_i$ by using it to predict the existence of context node $v_j$, with the following three-layer neural network.", "_____no_output_____" ], [ "![](word2vec-illustration.png)", "_____no_output_____" ], [ "Node $v_i$'s representation in the hidden layer is obtained by multiplying $v_i$'s one-hot representation in the input layer with the input-to-hidden weight matrix $W_{in}$, which is equivalent to look up the $i$th row of input-to-hidden weight matrix $W_{in}$. The existence probability of each node conditioned on node $v_i$ is outputted in the output layer, which is obtained by multiplying $v_i$'s hidden-layer representation with the hidden-to-out weight matrix $W_{out}$ followed by a softmax activation. To capture the ``target-context`` relation between $v_i$ and $v_j$, we need to maximize the probability $\\mathrm{P}(v_j|v_i)$. However, computing $\\mathrm{P}(v_j|v_i)$ is time consuming, which involves the matrix multiplication between $v_i$'s hidden-layer representation and the hidden-to-out weight matrix $W_{out}$.", "_____no_output_____" ], [ "To speed up the computing, we adopt the negative sampling strategy [2,3]. For each (``target``, ``context``) node pair, we sample a negative node $v_k$, which is not $v_i$'s context. To obtain the output, instead of multiplying $v_i$'s hidden-layer representation with the hidden-to-out weight matrix $W_{out}$ followed by a softmax activation, we only calculate the dot product between $v_i$'s hidden-layer representation and the $j$th column as well as the $k$th column of the hidden-to-output weight matrix $W_{out}$ followed by a sigmoid activation respectively. 
According to [3], the original objective to maximize $\\mathrm{P}(v_j|v_i)$ can be approximated by minimizing the cross entropy between $v_j$ and $v_k$'s outputs and their ground-truth labels (1 for $v_j$ and 0 for $v_k$).", "_____no_output_____" ], [ "Following [2,3], we denote the rows of the input-to-hidden weight matrix $W_{in}$ as ``input_embeddings`` and the columns of the hidden-to-out weight matrix $W_{out}$ as ``output_embeddings``. To build the Node2Vec model, we need look up ``input_embeddings`` for target nodes and ``output_embeddings`` for context nodes and calculate their inner product together with a sigmoid activation.", "_____no_output_____" ] ], [ [ "# install StellarGraph if running on Google Colab\nimport sys\nif 'google.colab' in sys.modules:\n %pip install -q stellargraph[demos]==1.3.0b", "_____no_output_____" ], [ "# verify that we're using the correct version of StellarGraph for this notebook\nimport stellargraph as sg\n\ntry:\n sg.utils.validate_notebook_version(\"1.3.0b\")\nexcept AttributeError:\n raise ValueError(\n f\"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. 
Please see <https://github.com/stellargraph/stellargraph/issues/1172>.\"\n ) from None", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nfrom sklearn.manifold import TSNE\n\nimport os\nimport networkx as nx\nimport numpy as np\nimport pandas as pd\nfrom tensorflow import keras\n\nfrom stellargraph import StellarGraph\nfrom stellargraph.data import BiasedRandomWalk\nfrom stellargraph.data import UnsupervisedSampler\nfrom stellargraph.data import BiasedRandomWalk\nfrom stellargraph.mapper import Node2VecLinkGenerator, Node2VecNodeGenerator\nfrom stellargraph.layer import Node2Vec, link_classification\n\nfrom stellargraph import datasets\nfrom IPython.display import display, HTML\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Dataset\n\n\nFor clarity, we use only the largest connected component, ignoring isolated nodes and subgraphs; having these in the data does not prevent the algorithm from running and producing valid results.", "_____no_output_____" ] ], [ [ "dataset = datasets.Cora()\ndisplay(HTML(dataset.description))\nG, subjects = dataset.load(largest_connected_component_only=True)", "_____no_output_____" ], [ "print(G.info())", "StellarGraph: Undirected multigraph\n Nodes: 2485, Edges: 5209\n\n Node types:\n paper: [2485]\n Features: float32 vector, length 1433\n Edge types: paper-cites->paper\n\n Edge types:\n paper-cites->paper: [5209]\n Weights: all 1 (default)\n Features: none\n" ] ], [ [ "### The Node2Vec algorithm\n\nThe Node2Vec algorithm introduced in [[1]](#refs) is a 2-step representation learning algorithm. The two steps are:\n\n1. Use random walks to generate sentences from a graph. A sentence is a list of node ids. The set of all sentences makes a corpus.\n\n2. The corpus is then used to learn an embedding vector for each node in the graph. Each node id is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. 
The Word2Vec algorithm [[2]](#refs) is used for calculating the embedding vectors.\n\nIn this implementation, we train the Node2Vec algorithm in the following two steps:\n\n1. Generate a set of (`target`, `context`) node pairs through starting the biased random walk with a fixed length at per node. The starting nodes are taken as the target nodes and the following nodes in biased random walks are taken as context nodes. For each (`target`, `context`) node pair, we generate 1 negative node pair.\n\n2. Train the Node2Vec algorithm through minimizing cross-entropy loss for `target-context` pair prediction, with the predictive value obtained by performing the dot product of the 'input embedding' of the target node and the 'output embedding' of the context node, followed by a sigmoid activation.", "_____no_output_____" ], [ "Specify the optional parameter values: the number of walks to take per node, the length of each walk. Here, to guarantee the running efficiency, we respectively set `walk_number` and `walk_length` to 100 and 5. 
Larger values can be used to achieve better performance.", "_____no_output_____" ] ], [ [ "walk_number = 100\nwalk_length = 5", "_____no_output_____" ] ], [ [ "Create the biased random walker to perform context node sampling, with the specified parameters.", "_____no_output_____" ] ], [ [ "walker = BiasedRandomWalk(\n    G,\n    n=walk_number,\n    length=walk_length,\n    p=0.5,  # defines probability, 1/p, of returning to source node\n    q=2.0,  # defines probability, 1/q, for moving to a node away from the source node\n)", "_____no_output_____" ] ], [ [ "Create the UnsupervisedSampler instance with the biased random walker.", "_____no_output_____" ] ], [ [ "unsupervised_samples = UnsupervisedSampler(G, nodes=list(G.nodes()), walker=walker)", "_____no_output_____" ] ], [ [ "Set the batch size and the number of epochs.", "_____no_output_____" ] ], [ [ "batch_size = 50\nepochs = 2", "_____no_output_____" ] ], [ [ "Define a Node2Vec training generator, which generates a batch of (index of target node, index of context node, label of node pair) pairs per iteration.", "_____no_output_____" ] ], [ [ "generator = Node2VecLinkGenerator(G, batch_size)", "_____no_output_____" ] ], [ [ "Build the Node2Vec model, with the dimension of learned node representations set to 128.", "_____no_output_____" ] ], [ [ "emb_size = 128\nnode2vec = Node2Vec(emb_size, generator=generator)", "_____no_output_____" ], [ "x_inp, x_out = node2vec.in_out_tensors()", "_____no_output_____" ] ], [ [ "Use the link_classification function to generate the prediction, with the 'dot' edge embedding generation method and the 'sigmoid' activation, which actually performs the dot product of the ``input embedding`` of the target node and the ``output embedding`` of the context node followed by a sigmoid activation. 
", "_____no_output_____" ] ], [ [ "prediction = link_classification(\n output_dim=1, output_act=\"sigmoid\", edge_embedding_method=\"dot\"\n)(x_out)", "link_classification: using 'dot' method to combine node embeddings into edge embeddings\n" ] ], [ [ "Stack the Node2Vec encoder and prediction layer into a Keras model. Our generator will produce batches of positive and negative context pairs as inputs to the model. Minimizing the binary crossentropy between the outputs and the provided ground truth is much like a regular binary classification task.", "_____no_output_____" ] ], [ [ "model = keras.Model(inputs=x_inp, outputs=prediction)\n\nmodel.compile(\n optimizer=keras.optimizers.Adam(lr=1e-3),\n loss=keras.losses.binary_crossentropy,\n metrics=[keras.metrics.binary_accuracy],\n)", "_____no_output_____" ] ], [ [ "Train the model.", "_____no_output_____" ] ], [ [ "history = model.fit(\n generator.flow(unsupervised_samples),\n epochs=epochs,\n verbose=1,\n use_multiprocessing=False,\n workers=4,\n shuffle=True,\n)", "Train for 39760 steps\nEpoch 1/2\n39760/39760 [==============================] - 155s 4ms/step - loss: 0.2924 - binary_accuracy: 0.8557\nEpoch 2/2\n39760/39760 [==============================] - 238s 6ms/step - loss: 0.1096 - binary_accuracy: 0.9641\n" ] ], [ [ "## Visualise Node Embeddings", "_____no_output_____" ], [ "Build the node based model for predicting node representations from node ids and the learned parameters. Below a Keras model is constructed, with `x_inp[0]` as input and `x_out[0]` as output. 
Note that this model's weights are the same as those of the corresponding node encoder in the previously trained node pair classifier.", "_____no_output_____" ] ], [ [ "x_inp_src = x_inp[0]\nx_out_src = x_out[0]\nembedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)", "_____no_output_____" ] ], [ [ "Get the node embeddings from node ids.", "_____no_output_____" ] ], [ [ "node_gen = Node2VecNodeGenerator(G, batch_size).flow(subjects.index)\nnode_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)", "50/50 [==============================] - 0s 1ms/step\n" ] ], [ [ "Transform the embeddings to 2d space for visualisation.", "_____no_output_____" ] ], [ [ "transform = TSNE # PCA\n\ntrans = transform(n_components=2)\nnode_embeddings_2d = trans.fit_transform(node_embeddings)", "_____no_output_____" ], [ "# draw the embedding points, coloring them by the target label (paper subject)\nalpha = 0.7\nlabel_map = {l: i for i, l in enumerate(np.unique(subjects))}\nnode_colours = [label_map[target] for target in subjects]\n\nplt.figure(figsize=(7, 7))\nplt.axes().set(aspect=\"equal\")\nplt.scatter(\n node_embeddings_2d[:, 0],\n node_embeddings_2d[:, 1],\n c=node_colours,\n cmap=\"jet\",\n alpha=alpha,\n)\nplt.title(\"{} visualization of node embeddings\".format(transform.__name__))\nplt.show()", "_____no_output_____" ] ], [ [ "### Downstream task\n\nThe node embeddings calculated using Node2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the subject of a paper in Cora), community detection (clustering of nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of citation links between papers).", "_____no_output_____" ], [ "<table><tr><td>Run the latest release of this notebook:</td><td><a href=\"https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/embeddings/keras-node2vec-embeddings.ipynb\" alt=\"Open In Binder\" 
target=\"_parent\"><img src=\"https://mybinder.org/badge_logo.svg\"/></a></td><td><a href=\"https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/embeddings/keras-node2vec-embeddings.ipynb\" alt=\"Open In Colab\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a></td></tr></table>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
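The node2vec notebook recorded above scores a (``target``, ``context``) pair as the sigmoid of the dot product between the target's input embedding and the context's output embedding, and minimizes binary cross-entropy over positive and sampled negative pairs. A minimal numpy sketch of that scoring and loss (the embedding-table sizes and node indices below are illustrative, not the notebook's):

```python
import numpy as np

def pair_score(w_in, w_out, target, context):
    """Sigmoid of the dot product between the target's input embedding
    (a row of W_in) and the context's output embedding (stored row-wise here)."""
    dot = float(w_in[target] @ w_out[context])
    return 1.0 / (1.0 + np.exp(-dot))

def pair_loss(w_in, w_out, target, context, label):
    """Binary cross-entropy for one (target, context, label) sample."""
    p = pair_score(w_in, w_out, target, context)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

rng = np.random.default_rng(0)
w_in = rng.normal(scale=0.1, size=(10, 8))   # input embeddings
w_out = rng.normal(scale=0.1, size=(10, 8))  # output embeddings
loss_pos = pair_loss(w_in, w_out, 0, 1, 1)   # positive (observed context) pair
loss_neg = pair_loss(w_in, w_out, 0, 2, 0)   # negative (sampled) pair
```

This is the quantity the notebook's `link_classification(..., edge_embedding_method="dot")` layer computes inside the Keras model.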
cbb5701ea2ae5171d838bd5cae41086e09aee4f6
24,440
ipynb
Jupyter Notebook
word2vec-nlp-tutorial/tutorial-part-3-4.ipynb
bradsonn/KaggleStruggle
6b9eb35f2a53dfe191606ffb7bca13b41e9a82d7
[ "MIT" ]
2
2020-06-21T10:15:21.000Z
2020-06-21T10:15:24.000Z
word2vec-nlp-tutorial/tutorial-part-3-4.ipynb
bradsonn/KaggleStruggle
6b9eb35f2a53dfe191606ffb7bca13b41e9a82d7
[ "MIT" ]
null
null
null
word2vec-nlp-tutorial/tutorial-part-3-4.ipynb
bradsonn/KaggleStruggle
6b9eb35f2a53dfe191606ffb7bca13b41e9a82d7
[ "MIT" ]
2
2021-04-03T13:43:18.000Z
2021-08-11T03:44:21.000Z
39.355878
7,588
0.655646
[ [ [ "## [Bag of Words Meets Bags of Popcorn | Kaggle](https://www.kaggle.com/c/word2vec-nlp-tutorial#part-3-more-fun-with-word-vectors)\n\n\n# Tutorial Parts 3 and 4\n\n* [DeepLearningMovies/KaggleWord2VecUtility.py at master · wendykan/DeepLearningMovies](https://github.com/wendykan/DeepLearningMovies/blob/master/KaggleWord2VecUtility.py)\n* Based on the GitHub tutorial linked from Kaggle; the original Python 2 source was partially modified for Python 3.\n\n### First attempt (average feature vectors)\n- Compute the average of the vectors with the code from tutorial 2.\n\n### Second attempt (K-means)\n- Since Word2Vec produces clusters of semantically related words, we can exploit the similarity of words within a cluster.\n- Grouping vectors in this way is called \"vector quantization\".\n- This requires finding the centers of the word clusters with a clustering algorithm such as K-means.\n- We cluster with K-means (unsupervised learning) and then predict whether a review is positive or not with a random forest (supervised learning).", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom gensim.models import Word2Vec\nfrom sklearn.cluster import KMeans\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom bs4 import BeautifulSoup\nimport re\nimport time\n\nfrom nltk.corpus import stopwords\nimport nltk.data\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "model = Word2Vec.load('300features_40minwords_10text')\nmodel", "_____no_output_____" ], [ "# Representing words as numbers\n# The Word2Vec model consists of a feature vector for each word in the vocabulary, \n# stored in a numpy array called 'syn0'.\n# The number of rows of syn0 is the number of words in the model vocabulary\n# and the number of columns is the feature vector size set in part 2\ntype(model.wv.syn0)", "_____no_output_____" ], [ "# The number of rows of syn0 is the number of words in the model vocabulary\n# and the number of columns is the feature vector size set in part 2\nmodel.wv.syn0.shape", "_____no_output_____" ], [ "# Access an individual word vector\nmodel.wv['flower'].shape", "_____no_output_____" ], [ "model.wv['flower'][:10]", "_____no_output_____" ] ], [ [ "## Grouping the data with K-means clustering\n\n* [K-means algorithm - Wikipedia (Korean)](https://ko.wikipedia.org/wiki/K-%ED%8F%89%EA%B7%A0_%EC%95%8C%EA%B3%A0%EB%A6%AC%EC%A6%98)\n\n- Clustering is an unsupervised learning technique\n- It partitions data into a few groups based on notions such as similarity\n- The goal of clustering is to group samples (n-dimensional vectors of real numbers) into several groups that are internally similar but externally share no common denominator\n- If the range of one dimension differs greatly from the other dimensions, the data should be rescaled before clustering.\n\n 1. Randomly pick k vectors as the initial centroids.\n 2. Assign each sample to its nearest centroid.\n 3. Recompute the positions of the centroids.\n 4. Repeat steps 2 and 3 until the centroids no longer move.\n\nReference: [Book] Data Science for Everyone (with Python)", "_____no_output_____" ] ], [ [ "# Run k-means on the word vectors and print some of the clusters.\nstart = time.time() # start time\n\n# Set the cluster size \"k\" to 1/5 of the vocabulary size, i.e. an average of 5 words per cluster.\nword_vectors = model.wv.syn0 # feature vectors of the vocabulary\nnum_clusters = word_vectors.shape[0] / 5\nnum_clusters = int(num_clusters)\n\n# Define and fit the K-means model.\nkmeans_clustering = KMeans( n_clusters = num_clusters )\nidx = kmeans_clustering.fit_predict( word_vectors )\n\n# Subtract the start time from the end time to get the elapsed time.\nend = time.time()\nelapsed = end - start\nprint(\"Time taken for K Means clustering: \", elapsed, \"seconds.\")", "Time taken for K Means clustering: 252.84209394454956 seconds.\n" ], [ "# Build a word/index dictionary mapping each vocabulary word to a cluster number.\nidx = list(idx)\nnames = model.wv.index2word\nword_centroid_map = {names[i]: idx[i] for i in range(len(names))}\n# word_centroid_map = dict(zip( model.wv.index2word, idx ))\n\n# Print the first 10 clusters\nfor cluster in range(0,10):\n # Print the cluster number\n print(\"\\nCluster {}\".format(cluster))\n \n # Print the words belonging to this cluster.\n words = []\n for i in range(0,len(list(word_centroid_map.values()))):\n if( list(word_centroid_map.values())[i] == cluster ):\n words.append(list(word_centroid_map.keys())[i])\n print(words)", "\nCluster 0\n['terri', 'roy', 'noah', 'shawn', 'micheal', 'gilliam', 'mckenzi', 'ncis', 'costa', 'xavier', 'flaherti']\n\nCluster 1\n['indulg', 'conscious', 'absorb', 'loath', 'righteous', 'proclaim', 'esteem', 'wallow', 'reflex', 'referenti']\n\nCluster 2\n['coke', 'dope', 'drank']\n\nCluster 3\n['abu', 'cardin', 'kersey', 'pacifist', 'goliath', 'treason', 'tyrann']\n\nCluster 4\n['background', 'landscap', 'backdrop', 'vista', 'wildlif', 'travelogu']\n\nCluster 5\n['touch', 'poignant', 'profound', 'heartbreak', 'underst', 'uplift', 'heartfelt', 'heartwarm', 
'sear']\n\nCluster 6\n['midnight', 'clock', 'pm', 'marathon']\n\nCluster 7\n['moor', 'patrick', 'lesli', 'barrymor', 'marc', 'lionel', 'carey', 'farrel', 'seymour', 'perkin', 'gale', 'stanton', 'dali', 'elisha', 'lacey', 'tyne']\n\nCluster 8\n['ann', 'mrs', 'juli', 'helen', 'susan', 'carol', 'elizabeth', 'drew', 'turner', 'alic', 'louis', 'kay', 'margaret', 'june', 'colbert', 'shelley', 'martha', 'beaver', 'kathleen', 'katherin', 'veronica', 'hayward', 'evelyn', 'judith', 'topper', 'fletcher', 'wither', 'claudett', 'delilah', 'jayn']\n\nCluster 9\n['data']\n" ], [ "\"\"\"\nRead the data into pandas DataFrames.\nQUOTE_MINIMAL (0), QUOTE_ALL (1),\nQUOTE_NONNUMERIC (2) or QUOTE_NONE (3).\n\nThen, as in the previous tutorials, clean the text into\nclean_train_reviews and clean_test_reviews.\n\"\"\"\ntrain = pd.read_csv('data/labeledTrainData.tsv', \n header=0, delimiter=\"\\t\", quoting=3)\ntest = pd.read_csv('data/testData.tsv', \n header=0, delimiter=\"\\t\", quoting=3)\n# unlabeled_train = pd.read_csv( 'data/unlabeledTrainData.tsv', header=0, delimiter=\"\\t\", quoting=3 )", "_____no_output_____" ], [ "from KaggleWord2VecUtility import KaggleWord2VecUtility\n# Clean the training reviews.\nclean_train_reviews = []\nfor review in train[\"review\"]:\n clean_train_reviews.append(\n KaggleWord2VecUtility.review_to_wordlist( review, \\\n remove_stopwords=True ))", "_____no_output_____" ], [ "# Clean the test reviews.\nclean_test_reviews = []\nfor review in test[\"review\"]:\n clean_test_reviews.append(\n KaggleWord2VecUtility.review_to_wordlist( review, \\\n remove_stopwords=True ))", "_____no_output_____" ], [ "# Create bags of centroids.\n# Pre-allocate the bag-of-centroids array for the training set, for speed.\ntrain_centroids = np.zeros((train[\"review\"].size, num_clusters), \\\n dtype=\"float32\" )\n\ntrain_centroids[:5]", "_____no_output_____" ], [ "# Convert a review into a bag of centroids:\n# a vector counting how many of its words fall into each cluster.\ndef create_bag_of_centroids( wordlist, word_centroid_map ):\n \n # The number of clusters is equal to the highest cluster index\n # in the word/centroid map.\n num_centroids = max( 
word_centroid_map.values() ) + 1\n \n # Pre-allocate the bag of centroids vector, for speed.\n bag_of_centroids = np.zeros( num_centroids, dtype=\"float32\" )\n \n # Loop over the words; if a word is in word_centroid_map,\n # increment the count of its cluster by one.\n for word in wordlist:\n if word in word_centroid_map:\n index = word_centroid_map[word]\n bag_of_centroids[index] += 1\n \n # Return the bag of centroids.\n return bag_of_centroids", "_____no_output_____" ], [ "# Convert the training reviews into bags of centroids.\ncounter = 0\nfor review in clean_train_reviews:\n train_centroids[counter] = create_bag_of_centroids( review, \\\n word_centroid_map )\n counter += 1\n\n# Repeat the same for the test reviews.\ntest_centroids = np.zeros(( test[\"review\"].size, num_clusters), \\\n dtype=\"float32\" )\n\ncounter = 0\nfor review in clean_test_reviews:\n test_centroids[counter] = create_bag_of_centroids( review, \\\n word_centroid_map )\n counter += 1\n\n\n# Fit a random forest and use it to predict.\nforest = RandomForestClassifier(n_estimators = 100)\n\n# Fit the forest on the labeled training data, then predict.\n# This takes a while, so we use %time to report how long it took.\nprint(\"Fitting a random forest to labeled training data...\")\n%time forest = forest.fit(train_centroids, train[\"sentiment\"])", "Fitting a random forest to labeled training data...\nCPU times: user 35.2 s, sys: 450 ms, total: 35.6 s\nWall time: 37.3 s\n" ], [ "from sklearn.model_selection import cross_val_score\n%time score = np.mean(cross_val_score(\\\n forest, train_centroids, train['sentiment'], cv=10,\\\n scoring='roc_auc'))", "CPU times: user 4min 54s, sys: 3.77 s, total: 4min 58s\nWall time: 4min 30s\n" ], [ "%time result = forest.predict(test_centroids)", "CPU times: user 2.21 s, sys: 47.8 ms, total: 2.26 s\nWall time: 2.31 s\n" ], [ "score", "_____no_output_____" ], [ "# Save the results to a csv file.\noutput = pd.DataFrame(data={\"id\":test[\"id\"], \"sentiment\":result})\noutput.to_csv(\"data/submit_BagOfCentroids_{0:.5f}.csv\".format(score), index=False, quoting=3)", "_____no_output_____" ], [ "fig, axes = 
plt.subplots(ncols=2)\nfig.set_size_inches(12,5)\nsns.countplot(train['sentiment'], ax=axes[0])\nsns.countplot(output['sentiment'], ax=axes[1])", "_____no_output_____" ], [ "output_sentiment = output['sentiment'].value_counts()\nprint(output_sentiment[0] - output_sentiment[1])\noutput_sentiment", "402\n" ], [ "# Kaggle score: 0.84908\nprint(330/528)", "0.625\n" ] ], [ [ "### Why does Bag of Words give better results in this tutorial?\n\nAveraging the vectors and using centroids discards word order, which makes the approach very similar in spirit to Bag of Words. Since the performance is similar (within the range of standard error), Tutorials 1, 2, and 3 produce equivalent results.\n\nFirst, training Word2Vec on more text improves performance. Google's results are based on word vectors learned from a corpus of more than a billion words; our labeled and unlabeled training sets together contain only about eighteen million words. Conveniently, Word2Vec can load pretrained models produced by Google's original C tool, so it is also possible to train a model in C and then import it into Python.\n\nSecond, in the published literature, distributed word vector techniques have been shown to outperform Bag of Words models. One paper uses an algorithm called Paragraph Vector on the IMDB dataset to produce some of the state-of-the-art results to date. Paragraph Vectors are partly better than the approach tried here because they preserve word-order information, whereas vector averaging and clustering lose it.\n\n\n* Further study: Stanford NLP lectures: [Lecture 1 | Natural Language Processing with Deep Learning - YouTube](https://www.youtube.com/watch?v=OQQ-W_63UgQ&list=PL3FW7Lu3i5Jsnh1rnUwq_TcylNr7EkRe6)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cbb576d492aa9dceaa6f6c3858aadd159a4985f1
56,963
ipynb
Jupyter Notebook
10_transformers-from-scratch.ipynb
osanseviero/notebooks
c2db380169af910eed0ca5c61b7882f3caa18cf3
[ "Apache-2.0" ]
1,233
2021-12-22T06:57:43.000Z
2022-03-31T13:57:33.000Z
10_transformers-from-scratch.ipynb
osanseviero/notebooks
c2db380169af910eed0ca5c61b7882f3caa18cf3
[ "Apache-2.0" ]
31
2022-01-29T03:31:38.000Z
2022-03-30T23:30:23.000Z
10_transformers-from-scratch.ipynb
osanseviero/notebooks
c2db380169af910eed0ca5c61b7882f3caa18cf3
[ "Apache-2.0" ]
238
2022-01-31T06:21:33.000Z
2022-03-31T07:57:42.000Z
30.219098
383
0.516405
[ [ [ "# Uncomment and run this cell if you're on Colab or Kaggle\n# !git clone https://github.com/nlp-with-transformers/notebooks.git\n# %cd notebooks\n# from install import *\n# install_requirements(is_chapter10=True)", "_____no_output_____" ], [ "# hide\nfrom utils import *\nsetup_chapter()", "_____no_output_____" ] ], [ [ "# Training Transformers from Scratch", "_____no_output_____" ], [ "> **Note:** In this chapter we build a large dataset and a script to train a large language model on distributed infrastructure. As such, not all the steps in this notebook can be executed on platforms such as Colab or Kaggle. Either downscale the steps at critical points or use this notebook as inspiration when building a script for distributed training.", "_____no_output_____" ], [ "## Large Datasets and Where to Find Them", "_____no_output_____" ], [ "### Challenges of Building a Large-Scale Corpus", "_____no_output_____" ] ], [ [ "#hide_output\nfrom transformers import pipeline, set_seed\n\ngeneration_gpt = pipeline(\"text-generation\", model=\"openai-gpt\")\ngeneration_gpt2 = pipeline(\"text-generation\", model=\"gpt2\")", "_____no_output_____" ], [ "def model_size(model):\n return sum(t.numel() for t in model.parameters())\n\nprint(f\"GPT size: {model_size(generation_gpt.model)/1000**2:.1f}M parameters\")\nprint(f\"GPT2 size: {model_size(generation_gpt2.model)/1000**2:.1f}M parameters\")", "GPT size: 116.5M parameters\nGPT2 size: 124.4M parameters\n" ], [ "# hide\nset_seed(1)", "_____no_output_____" ], [ "def enum_pipeline_outputs(pipe, prompt, num_return_sequences):\n out = pipe(prompt, num_return_sequences=num_return_sequences,\n clean_up_tokenization_spaces=True)\n return \"\\n\".join(f\"{i+1}.\" + s[\"generated_text\"] for i, s in enumerate(out))\n\nprompt = \"\\nWhen they came back\"\nprint(\"GPT completions:\\n\" + enum_pipeline_outputs(generation_gpt, prompt, 3))\nprint(\"\")\nprint(\"GPT-2 completions:\\n\" + enum_pipeline_outputs(generation_gpt2, prompt, 
3))", "GPT completions:\n1.\nWhen they came back.\n \" we need all we can get, \" jason said once they had settled into the back of\nthe truck without anyone stopping them. \" after getting out here, it 'll be up\nto us what to find. for now\n2.\nWhen they came back.\n his gaze swept over her body. he 'd dressed her, too, in the borrowed clothes\nthat she 'd worn for the journey.\n \" i thought it would be easier to just leave you there. \" a woman like\n3.\nWhen they came back to the house and she was sitting there with the little boy.\n \" don't be afraid, \" he told her. she nodded slowly, her eyes wide. she was so\nlost in whatever she discovered that tom knew her mistake\n\nGPT-2 completions:\n1.\nWhen they came back we had a big dinner and the other guys went to see what\ntheir opinion was on her. I did an hour and they were happy with it.\n2.\nWhen they came back to this island there had been another massacre, but he could\nnot help but feel pity for the helpless victim who had been left to die, and\nthat they had failed that day. And so was very, very grateful indeed.\n3.\nWhen they came back to our house after the morning, I asked if she was sure. She\nsaid, \"Nope.\" The two kids were gone that morning. I thought they were back to\nbeing a good friend.\n\nWhen Dost\n" ] ], [ [ "### Building a Custom Code Dataset\n", "_____no_output_____" ], [ "#### Creating a dataset with Google BigQuery", "_____no_output_____" ], [ "#sidebar To Filter the Noise or Not?", "_____no_output_____" ], [ "### Working with Large Datasets", "_____no_output_____" ], [ "#### Memory mapping", "_____no_output_____" ], [ "> **Note:** The following code block assumes that you have downloaded the BigQuery dataset to a folder called `codeparrot`. We suggest skipping this step since it will unpack the compressed files and require ~180GB of disk space. 
This code is just for demonstration purposes and you can simply continue below with the streamed dataset, which will not consume that much disk space.", "_____no_output_____" ] ], [ [ "#hide_output\nfrom datasets import load_dataset, DownloadConfig\n\ndownload_config = DownloadConfig(delete_extracted=True)\ndataset = load_dataset(\"./codeparrot\", split=\"train\",\n download_config=download_config)", "_____no_output_____" ], [ "import psutil, os\n\nprint(f\"Number of Python files in dataset : {len(dataset)}\")\nds_size = sum(os.stat(f[\"filename\"]).st_size for f in dataset.cache_files)\n# os.stat.st_size is expressed in bytes, so we convert to GB\nprint(f\"Dataset size (cache file) : {ds_size / 2**30:.2f} GB\")\n# Process.memory_info is expressed in bytes, so we convert to MB\nprint(f\"RAM used: {psutil.Process(os.getpid()).memory_info().rss >> 20} MB\")", "Number of Python files in dataset : 18695559\nDataset size (cache file) : 183.68 GB\nRAM used: 4924 MB\n" ] ], [ [ "#### Streaming", "_____no_output_____" ] ], [ [ "# hide_output\nstreamed_dataset = load_dataset('./codeparrot', split=\"train\", streaming=True)", "Using custom data configuration default-cae7a1d2f0dbde67\n" ], [ "iterator = iter(streamed_dataset)\n\nprint(dataset[0] == next(iterator))\nprint(dataset[1] == next(iterator))", "True\nTrue\n" ], [ "remote_dataset = load_dataset('transformersbook/codeparrot', split=\"train\",\n streaming=True)", "_____no_output_____" ] ], [ [ "### Adding Datasets to the Hugging Face Hub", "_____no_output_____" ], [ "## Building a Tokenizer", "_____no_output_____" ] ], [ [ "# hide_output\nfrom transformers import AutoTokenizer\n\ndef tok_list(tokenizer, string):\n input_ids = tokenizer(string, add_special_tokens=False)[\"input_ids\"]\n return [tokenizer.decode(tok) for tok in input_ids]\n\ntokenizer_T5 = AutoTokenizer.from_pretrained(\"t5-base\")\ntokenizer_camembert = AutoTokenizer.from_pretrained(\"camembert-base\")", "_____no_output_____" ], [ 
"print(f'T5 tokens for \"sex\": {tok_list(tokenizer_T5,\"sex\")}')\nprint(f'CamemBERT tokens for \"being\": {tok_list(tokenizer_camembert,\"being\")}')", "T5 tokens for \"sex\": ['', 's', 'ex']\nCamemBERT tokens for \"being\": ['be', 'ing']\n" ] ], [ [ "### The Tokenizer Model", "_____no_output_____" ], [ "### Measuring Tokenizer Performance", "_____no_output_____" ], [ "### A Tokenizer for Python ", "_____no_output_____" ] ], [ [ "from transformers import AutoTokenizer\n\npython_code = r\"\"\"def say_hello():\n print(\"Hello, World!\")\n# Print it\nsay_hello()\n\"\"\"\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\nprint(tokenizer(python_code).tokens())", "['def', 'Ġsay', '_', 'hello', '():', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġprint', '(\"',\n'Hello', ',', 'ĠWorld', '!\"', ')', 'Ġ#', 'ĠPrint', 'Ġit', 'Ċ', 'Ċ', 'say', '_',\n'hello', '()', 'Ċ']\n" ], [ "print(tokenizer.backend_tokenizer.normalizer)", "None\n" ], [ "print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(python_code))", "[('def', (0, 3)), ('Ġsay', (3, 7)), ('_', (7, 8)), ('hello', (8, 13)), ('():',\n(13, 16)), ('ĊĠĠĠ', (16, 20)), ('Ġprint', (20, 26)), ('(\"', (26, 28)), ('Hello',\n(28, 33)), (',', (33, 34)), ('ĠWorld', (34, 40)), ('!\")', (40, 43)), ('Ġ#', (43,\n45)), ('ĠPrint', (45, 51)), ('Ġit', (51, 54)), ('Ċ', (54, 55)), ('Ċ', (55, 56)),\n('say', (56, 59)), ('_', (59, 60)), ('hello', (60, 65)), ('()', (65, 67)), ('Ċ',\n(67, 68))]\n" ], [ "a, e = u\"a\", u\"€\"\nbyte = ord(a.encode(\"utf-8\"))\nprint(f'`{a}` is encoded as `{a.encode(\"utf-8\")}` with a single byte: {byte}')\nbyte = [ord(chr(i)) for i in e.encode(\"utf-8\")]\nprint(f'`{e}` is encoded as `{e.encode(\"utf-8\")}` with three bytes: {byte}')", "`a` is encoded as `b'a'` with a single byte: 97\n`€` is encoded as `b'\\xe2\\x82\\xac'` with three bytes: [226, 130, 172]\n" ], [ "from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode\n\nbyte_to_unicode_map = bytes_to_unicode()\nunicode_to_byte_map = dict((v, k) for k, v in 
byte_to_unicode_map.items())\nbase_vocab = list(unicode_to_byte_map.keys())\n\nprint(f'Size of our base vocabulary: {len(base_vocab)}')\nprint(f'First element: `{base_vocab[0]}`, last element: `{base_vocab[-1]}`')", "Size of our base vocabulary: 256\nFirst element: `!`, last element: `Ń`\n" ], [ "# hide_input\n#id unicode_mapping\n#caption Examples of character mappings in BPE\n#hide_input\nimport pandas as pd\nfrom transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode\n\nbyte_to_unicode_map = bytes_to_unicode()\nunicode_to_byte_map = dict((v, k) for k, v in byte_to_unicode_map.items())\nbase_vocab = list(unicode_to_byte_map.keys())\n\nexamples = [\n ['Regular characters', '`a` and `?`', f'{ord(\"a\")} and {ord(\"?\")}' , f'`{byte_to_unicode_map[ord(\"a\")]}` and `{byte_to_unicode_map[ord(\"?\")]}`'],\n ['Nonprintable control character (carriage return)', '`U+000D`', f'13', f'`{byte_to_unicode_map[13]}`'],\n ['A space', '` `', f'{ord(\" \")}', f'`{byte_to_unicode_map[ord(\" \")]}`'],\n ['A nonbreakable space', '`\\\\xa0`', '160', f'`{byte_to_unicode_map[ord(chr(160))]}`'],\n ['A newline character', '`\\\\n`', '10', f'`{byte_to_unicode_map[ord(chr(10))]}`'],\n]\n\npd.DataFrame(examples, columns = ['Description', 'Character', 'Bytes', 'Mapped bytes'])", "_____no_output_____" ], [ "print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(python_code))", "[('def', (0, 3)), ('Ġsay', (3, 7)), ('_', (7, 8)), ('hello', (8, 13)), ('():',\n(13, 16)), ('ĊĠĠĠ', (16, 20)), ('Ġprint', (20, 26)), ('(\"', (26, 28)), ('Hello',\n(28, 33)), (',', (33, 34)), ('ĠWorld', (34, 40)), ('!\")', (40, 43)), ('Ġ#', (43,\n45)), ('ĠPrint', (45, 51)), ('Ġit', (51, 54)), ('Ċ', (54, 55)), ('Ċ', (55, 56)),\n('say', (56, 59)), ('_', (59, 60)), ('hello', (60, 65)), ('()', (65, 67)), ('Ċ',\n(67, 68))]\n" ], [ "print(f\"Size of the vocabulary: {len(tokenizer)}\")", "Size of the vocabulary: 50257\n" ], [ "print(tokenizer(python_code).tokens())", "['def', 'Ġsay', '_', 'hello', '():', 
'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġprint', '(\"',\n'Hello', ',', 'ĠWorld', '!\"', ')', 'Ġ#', 'ĠPrint', 'Ġit', 'Ċ', 'Ċ', 'say', '_',\n'hello', '()', 'Ċ']\n" ] ], [ [ "### Training a Tokenizer", "_____no_output_____" ] ], [ [ "tokens = sorted(tokenizer.vocab.items(), key=lambda x: len(x[0]), reverse=True)\nprint([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[:8]]);", "['ÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂ', '\n=================================================================', '\n----------------------------------------------------------------',\n'................................................................',\n'ÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂ',\n'----------------------------------------------------------------',\n'================================================================',\n'________________________________________________________________']\n" ], [ "tokens = sorted(tokenizer.vocab.items(), key=lambda x: x[1], reverse=True)\nprint([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[:12]]);", "['<|endoftext|>', ' gazed', ' informants', ' Collider', ' regress', 'ominated',\n' amplification', 'Compar', '….\"', ' (/', 'Commission', ' Hitman']\n" ], [ "#hide_output\nfrom tqdm.auto import tqdm\n\nlength = 100000\ndataset_name = 'transformersbook/codeparrot-train'\ndataset = load_dataset(dataset_name, split=\"train\", streaming=True)\niter_dataset = iter(dataset)\n\ndef batch_iterator(batch_size=10):\n for _ in tqdm(range(0, length, batch_size)):\n yield [next(iter_dataset)['content'] for _ in range(batch_size)]\n\nnew_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(), \n vocab_size=12500,\n initial_alphabet=base_vocab)", "_____no_output_____" ], [ "tokens = sorted(new_tokenizer.vocab.items(), key=lambda x: x[1], reverse=False)\nprint([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[257:280]]);", "[' ', ' ', ' ', ' ', 'se', 'in', ' ', 're', 'on', 'te', '\\n\n', '\\n ', 'or', 'st', 'de', '\\n ', 
'th', 'le', ' =', 'lf', 'self',\n'me', 'al']\n" ], [ "print([f'{new_tokenizer.convert_tokens_to_string(t)}' for t,_ in tokens[-12:]]);", "[' capt', ' embedded', ' regarding', 'Bundle', '355', ' recv', ' dmp', ' vault',\n' Mongo', ' possibly', 'implementation', 'Matches']\n" ], [ "print(new_tokenizer(python_code).tokens())", "['def', 'Ġs', 'ay', '_', 'hello', '():', 'ĊĠĠĠ', 'Ġprint', '(\"', 'Hello', ',',\n'ĠWor', 'ld', '!\")', 'Ġ#', 'ĠPrint', 'Ġit', 'Ċ', 'Ċ', 's', 'ay', '_', 'hello',\n'()', 'Ċ']\n" ], [ "import keyword\n\nprint(f'There are in total {len(keyword.kwlist)} Python keywords.')\nfor keyw in keyword.kwlist:\n if keyw not in new_tokenizer.vocab:\n print(f'No, keyword `{keyw}` is not in the vocabulary')", "There are in total 35 Python keywords.\nNo, keyword `await` is not in the vocabulary\nNo, keyword `finally` is not in the vocabulary\nNo, keyword `nonlocal` is not in the vocabulary\n" ], [ "# hide_output\nlength = 200000\nnew_tokenizer_larger = tokenizer.train_new_from_iterator(batch_iterator(),\n vocab_size=32768, initial_alphabet=base_vocab)", "100%|██████████| 200/200 [05:08<00:00, 1.54s/it]\n" ], [ "tokens = sorted(new_tokenizer_larger.vocab.items(), key=lambda x: x[1],\n reverse=False)\nprint([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[-12:]]);", "['lineEdit', 'spik', ' BC', 'pective', 'OTA', 'theus', 'FLUSH', ' excutils',\n'00000002', ' DIVISION', 'CursorPosition', ' InfoBar']\n" ], [ "print(new_tokenizer_larger(python_code).tokens())", "['def', 'Ġsay', '_', 'hello', '():', 'ĊĠĠĠ', 'Ġprint', '(\"', 'Hello', ',',\n'ĠWorld', '!\")', 'Ġ#', 'ĠPrint', 'Ġit', 'Ċ', 'Ċ', 'say', '_', 'hello', '()',\n'Ċ']\n" ], [ "for keyw in keyword.kwlist:\n if keyw not in new_tokenizer_larger.vocab:\n print(f'No, keyword `{keyw}` is not in the vocabulary')", "No, keyword `nonlocal` is not in the vocabulary\n" ] ], [ [ "### Saving a Custom Tokenizer on the Hub", "_____no_output_____" ] ], [ [ "#hide_output\nmodel_ckpt = \"codeparrot\"\norg = 
\"transformersbook\"\nnew_tokenizer_larger.push_to_hub(model_ckpt, organization=org)", "Cloning https://huggingface.co/transformersbook/codeparrot into local empty directory.\n" ], [ "reloaded_tokenizer = AutoTokenizer.from_pretrained(org + \"/\" + model_ckpt)\nprint(reloaded_tokenizer(python_code).tokens())", "['def', 'Ġsay', '_', 'hello', '():', 'ĊĠĠĠ', 'Ġprint', '(\"', 'Hello', ',',\n'ĠWorld', '!\")', 'Ġ#', 'ĠPrint', 'Ġit', 'Ċ', 'Ċ', 'say', '_', 'hello', '()',\n'Ċ']\n" ], [ "#hide_output\nnew_tokenizer.push_to_hub(model_ckpt+ \"-small-vocabulary\", organization=org)", "Cloning https://huggingface.co/transformersbook/codeparrot-small-vocabulary into local empty directory.\n" ] ], [ [ "## Training a Model from Scratch", "_____no_output_____" ], [ "### A Tale of Pretraining Objectives", "_____no_output_____" ], [ "<img alt=\"Code snippet\" caption=\"An example of a Python function that could be found in our dataset\" src=\"images/chapter10_code-snippet.png\" id=\"code-snippet\"/>", "_____no_output_____" ], [ "#### Causal language modeling", "_____no_output_____" ], [ "<img alt=\"CLM pretraining\" caption=\"In causal language modeling, the future tokens are masked and the model has to predict them; typically a decoder model such as GPT is used for such a task\" src=\"images/chapter10_pretraining-clm.png\" id=\"pretraining-clm\"/>", "_____no_output_____" ], [ "#### Masked language modeling", "_____no_output_____" ], [ "<img alt=\"MLM pretraining\" caption=\"In masked language modeling some of the input tokens are either masked or replaced, and the model's task is to predict the original tokens; this is the architecture underlying the encoder branch of transformer models\" src=\"images/chapter10_pretraining-mlm.png\" id=\"pretraining-mlm\"/>", "_____no_output_____" ], [ "#### Sequence-to-sequence training", "_____no_output_____" ], [ "<img alt=\"Seq2seq pretraining\" caption=\"Using an encoder-decoder architecture for a sequence-to-sequence task where the inputs are 
split into comment/code pairs using heuristics: the model gets one element as input and needs to generate the other one\" src=\"images/chapter10_pretraining-seq2seq.png\" id=\"pretraining-seq2seq\"/>", "_____no_output_____" ], [ "### Initializing the Model", "_____no_output_____" ], [ "> **NOTE**: In the following code block, a large GPT-2 checkpoint is loaded into memory. On platforms like Colab and Kaggle, this can cause the instance to crash due to insufficient RAM or GPU memory. You can still run the example if you use the small checkpoint by replacing the configuration with `config = AutoConfig.from_pretrained(\"gpt2\", vocab_size=len(tokenizer))`.", "_____no_output_____" ] ], [ [ "#hide_output\nfrom transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(org + \"/\" + model_ckpt)\nconfig = AutoConfig.from_pretrained(\"gpt2-xl\", vocab_size=len(tokenizer))\nmodel = AutoModelForCausalLM.from_config(config)", "_____no_output_____" ], [ "print(f'GPT-2 (xl) size: {model_size(model)/1000**2:.1f}M parameters')", "GPT-2 (xl) size: 1529.6M parameters\n" ], [ "#hide_output\nmodel.save_pretrained(\"models/\" + model_ckpt, push_to_hub=True,\n organization=org)", "_____no_output_____" ], [ "tokenizer = AutoTokenizer.from_pretrained(model_ckpt)\nconfig_small = AutoConfig.from_pretrained(\"gpt2\", vocab_size=len(tokenizer))\nmodel_small = AutoModelForCausalLM.from_config(config_small)", "_____no_output_____" ], [ "print(f'GPT-2 size: {model_size(model_small)/1000**2:.1f}M parameters')", "GPT-2 size: 111.0M parameters\n" ], [ "#hide_output\nmodel_small.save_pretrained(\"models/\" + model_ckpt + \"-small\", push_to_hub=True,\n organization=org)", "Cloning https://huggingface.co/transformersbook/codeparrot-small into local empty directory.\n" ] ], [ [ "### Implementing the Dataloader", "_____no_output_____" ], [ "<img alt=\"Preprocessing for CLM\" caption=\"Preparing sequences of varying length for causal language 
modeling by concatenating several tokenized examples with an EOS token before chunking them\" src=\"images/chapter10_preprocessing-clm.png\" id=\"preprocessing-clm\"/>", "_____no_output_____" ] ], [ [ "#hide_output\nexamples, total_characters, total_tokens = 500, 0, 0\ndataset = load_dataset('transformersbook/codeparrot-train', split='train',\n streaming=True)\n\nfor _, example in tqdm(zip(range(examples), iter(dataset)), total=examples):\n total_characters += len(example['content'])\n total_tokens += len(tokenizer(example['content']).tokens())\n\ncharacters_per_token = total_characters / total_tokens", " 0%| | 1/500 [00:00<01:16, 6.54it/s]Token indices sequence length is longer than the specified maximum sequence length for this model (2605 > 1024). Running this sequence through the model will result in indexing errors\n100%|██████████| 500/500 [00:04<00:00, 122.59it/s]\n" ], [ "print(characters_per_token)", "3.6233025034779565\n" ], [ "import torch\nfrom torch.utils.data import IterableDataset\n\nclass ConstantLengthDataset(IterableDataset):\n \n def __init__(self, tokenizer, dataset, seq_length=1024,\n num_of_sequences=1024, chars_per_token=3.6):\n self.tokenizer = tokenizer\n self.concat_token_id = tokenizer.eos_token_id\n self.dataset = dataset\n self.seq_length = seq_length\n self.input_characters = seq_length * chars_per_token * num_of_sequences\n \n def __iter__(self):\n iterator = iter(self.dataset)\n more_examples = True\n while more_examples:\n buffer, buffer_len = [], 0\n while True:\n if buffer_len >= self.input_characters:\n m=f\"Buffer full: {buffer_len}>={self.input_characters:.0f}\"\n print(m)\n break\n try:\n m=f\"Fill buffer: {buffer_len}<{self.input_characters:.0f}\"\n print(m)\n buffer.append(next(iterator)[\"content\"])\n buffer_len += len(buffer[-1])\n except StopIteration:\n iterator = iter(self.dataset)\n\n all_token_ids = []\n tokenized_inputs = self.tokenizer(buffer, truncation=False)\n for tokenized_input in 
tokenized_inputs['input_ids']:\n all_token_ids.extend(tokenized_input + [self.concat_token_id])\n \n for i in range(0, len(all_token_ids), self.seq_length):\n input_ids = all_token_ids[i : i + self.seq_length]\n if len(input_ids) == self.seq_length:\n yield torch.tensor(input_ids)", "_____no_output_____" ], [ "shuffled_dataset = dataset.shuffle(buffer_size=100)\nconstant_length_dataset = ConstantLengthDataset(tokenizer, shuffled_dataset,\n num_of_sequences=10)\ndataset_iterator = iter(constant_length_dataset)\n\nlengths = [len(b) for _, b in zip(range(5), dataset_iterator)]\nprint(f\"Lengths of the sequences: {lengths}\")", "Fill buffer: 0<36864\nFill buffer: 3311<36864\nFill buffer: 9590<36864\nFill buffer: 22177<36864\nFill buffer: 25530<36864\nFill buffer: 31098<36864\nFill buffer: 32232<36864\nFill buffer: 33867<36864\nBuffer full: 41172>=36864\nLengths of the sequences: [1024, 1024, 1024, 1024, 1024]\n" ] ], [ [ "### Defining the Training Loop", "_____no_output_____" ] ], [ [ "from argparse import Namespace\n\n# Commented parameters correspond to the small model\nconfig = {\"train_batch_size\": 2, # 12\n \"valid_batch_size\": 2, # 12\n \"weight_decay\": 0.1,\n \"shuffle_buffer\": 1000,\n \"learning_rate\": 2e-4, # 5e-4\n \"lr_scheduler_type\": \"cosine\",\n \"num_warmup_steps\": 750, # 2000\n \"gradient_accumulation_steps\": 16, # 1\n \"max_train_steps\": 50000, # 150000\n \"max_eval_steps\": -1,\n \"seq_length\": 1024,\n \"seed\": 1,\n \"save_checkpoint_steps\": 50000} # 15000\n\nargs = Namespace(**config)", "_____no_output_____" ], [ "from torch.utils.tensorboard import SummaryWriter\nimport logging\nimport wandb\n\ndef setup_logging(project_name):\n logger = logging.getLogger(__name__)\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\", level=logging.INFO, handlers=[\n logging.FileHandler(f\"log/debug_{accelerator.process_index}.log\"),\n logging.StreamHandler()])\n if 
accelerator.is_main_process: # We only want to set up logging once\n wandb.init(project=project_name, config=args)\n run_name = wandb.run.name\n tb_writer = SummaryWriter()\n tb_writer.add_hparams(vars(args), {'0': 0})\n logger.setLevel(logging.INFO)\n datasets.utils.logging.set_verbosity_debug()\n transformers.utils.logging.set_verbosity_info()\n else:\n tb_writer = None\n run_name = ''\n logger.setLevel(logging.ERROR)\n datasets.utils.logging.set_verbosity_error()\n transformers.utils.logging.set_verbosity_error()\n return logger, tb_writer, run_name", "_____no_output_____" ], [ "def log_metrics(step, metrics):\n logger.info(f\"Step {step}: {metrics}\")\n if accelerator.is_main_process:\n wandb.log(metrics)\n [tb_writer.add_scalar(k, v, step) for k, v in metrics.items()]", "_____no_output_____" ], [ "#hide_output\nfrom torch.utils.data.dataloader import DataLoader\n\ndef create_dataloaders(dataset_name):\n train_data = load_dataset(dataset_name+'-train', split=\"train\",\n streaming=True)\n train_data = train_data.shuffle(buffer_size=args.shuffle_buffer,\n seed=args.seed)\n valid_data = load_dataset(dataset_name+'-valid', split=\"validation\",\n streaming=True)\n \n train_dataset = ConstantLengthDataset(tokenizer, train_data,\n seq_length=args.seq_length)\n valid_dataset = ConstantLengthDataset(tokenizer, valid_data,\n seq_length=args.seq_length)\n \n train_dataloader=DataLoader(train_dataset, batch_size=args.train_batch_size)\n eval_dataloader=DataLoader(valid_dataset, batch_size=args.valid_batch_size)\n return train_dataloader, eval_dataloader", "_____no_output_____" ], [ "def get_grouped_params(model, no_decay=[\"bias\", \"LayerNorm.weight\"]):\n params_with_wd, params_without_wd = [], []\n for n, p in model.named_parameters():\n if any(nd in n for nd in no_decay):\n params_without_wd.append(p)\n else:\n params_with_wd.append(p)\n return [{'params': params_with_wd, 'weight_decay': args.weight_decay},\n {'params': params_without_wd, 'weight_decay': 0.0}]", 
"_____no_output_____" ], [ "def evaluate():\n model.eval()\n losses = []\n for step, batch in enumerate(eval_dataloader):\n with torch.no_grad():\n outputs = model(batch, labels=batch)\n loss = outputs.loss.repeat(args.valid_batch_size)\n losses.append(accelerator.gather(loss))\n if args.max_eval_steps > 0 and step >= args.max_eval_steps: break\n loss = torch.mean(torch.cat(losses))\n try:\n perplexity = torch.exp(loss)\n except OverflowError:\n perplexity = torch.tensor(float(\"inf\"))\n return loss.item(), perplexity.item()", "_____no_output_____" ], [ "from accelerate import Accelerator\nfrom huggingface_hub import Repository\nfrom torch.optim import AdamW\nfrom transformers import get_scheduler\nimport datasets, transformers\n\n# Hub repo used for checkpoints and base name of the dataset on the Hub\nproject_name = 'transformersbook/codeparrot'\ndataset_name = 'transformersbook/codeparrot'\n\nset_seed(args.seed)\n\n# Accelerator\naccelerator = Accelerator()\nsamples_per_step = accelerator.state.num_processes * args.train_batch_size\n\n# Logging\nlogger, tb_writer, run_name = setup_logging(project_name.split(\"/\")[1])\nlogger.info(accelerator.state)\n\n# Load model and tokenizer\nif accelerator.is_main_process:\n hf_repo = Repository(\"./\", clone_from=project_name, revision=run_name)\nmodel = AutoModelForCausalLM.from_pretrained(\"./\", gradient_checkpointing=True)\ntokenizer = AutoTokenizer.from_pretrained(\"./\")\n\n# Load dataset and dataloader\ntrain_dataloader, eval_dataloader = create_dataloaders(dataset_name)\n\n# Prepare the optimizer and learning rate scheduler\noptimizer = AdamW(get_grouped_params(model), lr=args.learning_rate)\nlr_scheduler = get_scheduler(name=args.lr_scheduler_type, optimizer=optimizer,\n num_warmup_steps=args.num_warmup_steps,\n num_training_steps=args.max_train_steps,)\ndef get_lr():\n return optimizer.param_groups[0]['lr']\n\n# Prepare everything with our `accelerator` (order of args is not important)\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n model, optimizer, train_dataloader, eval_dataloader)\n\n# Train model\nmodel.train()\ncompleted_steps = 0\nfor step, batch in enumerate(train_dataloader, start=1):\n loss = model(batch, labels=batch).loss\n log_metrics(step, {'lr': get_lr(), 'samples': step*samples_per_step,\n 'steps': 
completed_steps, 'loss/train': loss.item()})\n loss = loss / args.gradient_accumulation_steps\n accelerator.backward(loss)\n if step % args.gradient_accumulation_steps == 0:\n optimizer.step()\n lr_scheduler.step()\n optimizer.zero_grad()\n completed_steps += 1\n if step % args.save_checkpoint_steps == 0:\n logger.info('Evaluating and saving model checkpoint')\n eval_loss, perplexity = evaluate()\n log_metrics(step, {'loss/eval': eval_loss, 'perplexity': perplexity})\n accelerator.wait_for_everyone()\n unwrapped_model = accelerator.unwrap_model(model)\n if accelerator.is_main_process:\n unwrapped_model.save_pretrained(\"./\")\n hf_repo.push_to_hub(commit_message=f'step {step}')\n model.train()\n if completed_steps >= args.max_train_steps:\n break\n\n# Evaluate and save the last checkpoint\nlogger.info('Evaluating and saving model after training')\neval_loss, perplexity = evaluate()\nlog_metrics(step, {'loss/eval': eval_loss, 'perplexity': perplexity})\naccelerator.wait_for_everyone()\nunwrapped_model = accelerator.unwrap_model(model)\nif accelerator.is_main_process:\n unwrapped_model.save_pretrained(\"./\")\n hf_repo.push_to_hub(commit_message=f'final model')", "_____no_output_____" ] ], [ [ "<img alt=\"DDP\" caption=\"Illustration of the processing steps in DDP with four GPUs\" src=\"images/chapter10_ddp.png\" id=\"ddp\"/>", "_____no_output_____" ], [ "### The Training Run", "_____no_output_____" ], [ "## Results and Analysis", "_____no_output_____" ] ], [ [ "#hide_output\nfrom transformers import pipeline, set_seed\n\nmodel_ckpt = 'transformersbook/codeparrot-small'\ngeneration = pipeline('text-generation', model=model_ckpt, device=0)", "2021-10-20 18:29:01.107727: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n2021-10-20 18:29:01.107759: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart 
dlerror if you do not have a GPU set up on your machine.\n" ], [ "import re\nfrom transformers import set_seed \n\ndef first_block(string):\n    return re.split('\\nclass|\\ndef|\\n#|\\n@|\\nprint|\\nif', string)[0].rstrip()\n\ndef complete_code(pipe, prompt, max_length=64, num_completions=4, seed=1):\n    set_seed(seed)\n    gen_kwargs = {\"temperature\":0.4, \"top_p\":0.95, \"top_k\":0, \"num_beams\":1,\n                  \"do_sample\":True,}\n    code_gens = pipe(prompt, num_return_sequences=num_completions, \n                     max_length=max_length, **gen_kwargs)\n    code_strings = []\n    for code_gen in code_gens:\n        generated_code = first_block(code_gen['generated_text'][len(prompt):])\n        code_strings.append(generated_code)\n    print(('\\n'+'='*80 + '\\n').join(code_strings))", "_____no_output_____" ], [ "prompt = '''def area_of_rectangle(a: float, b: float):\n    \"\"\"Return the area of the rectangle.\"\"\"'''\ncomplete_code(generation, prompt)", "\n    return math.sqrt(a * b)\n================================================================================\n\n    return a * b / 2.0\n================================================================================\n\n    return a * b\n================================================================================\n\n    return a * b / a\n" ], [ "prompt = '''def get_urls_from_html(html):\n    \"\"\"Get all embedded URLs in a HTML string.\"\"\"'''\ncomplete_code(generation, prompt)", "\n    if not html:\n        return []\n    return [url for url in re.findall(r'<a href=\"(/[^/]+/[^\"]+?)\">', html)]\n================================================================================\n\n    return [url for url in re.findall(r'<a href=\"(.*?)\"', html)\n            if url]\n================================================================================\n\n    return [url for url in re.findall(r'<a href=\"(/.*)\",', html)]\n================================================================================\n\n    return re.findall(r'<a href=\"(.*?)\" class=\"url\"[^>]*>', html)\n" ], [ "import requests\n\ndef 
get_urls_from_html(html):\n return [url for url in re.findall(r'<a href=\"(.*?)\"', html) if url]\n\nprint(\" | \".join(get_urls_from_html(requests.get('https://hf.co/').text)))", "https://github.com/huggingface/transformers | /allenai | /facebook |\n/asteroid-team | /google | /amazon | /speechbrain | /microsoft | /grammarly |\n/models | /inference-api | /distilbert-base-uncased |\n/dbmdz/bert-large-cased-finetuned-conll03-english |\nhttps://huggingface.co/transformers | https://arxiv.org/abs/1811.06031 |\nhttps://arxiv.org/abs/1803.10631 | https://transformer.huggingface.co/ | /coref\n| https://medium.com/huggingface/distilbert-8cf3380435b5\n" ] ], [ [ "> **NOTE**: In the following code block, a large GPT-2 checkpoint is loaded into memory. On platforms like Colab and Kaggle, this can cause the instance to crash due to insufficient RAM or GPU memory. You can still run the example if you replace the large model with the small one by using `model_ckpt = \"transformersbook/codeparrot-small\"`.\n ", "_____no_output_____" ] ], [ [ "model_ckpt = 'transformersbook/codeparrot'\ngeneration = pipeline('text-generation', model=model_ckpt, device=0)\n\nprompt = '''# a function in native python:\ndef mean(a):\n return sum(a)/len(a)\n\n# the same function using numpy:\nimport numpy as np\ndef mean(a):'''\ncomplete_code(generation, prompt, max_length=64)", "Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.\n" ], [ "prompt = '''X = np.random.randn(100, 100)\ny = np.random.randint(0, 1, 100)\n\n# fit random forest classifier with 20 estimators'''\ncomplete_code(generation, prompt, max_length=96)", "Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.\n" ] ], [ [ "## Conclusion", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cbb57ecd5b9466464d1ce252d512b5a11b99349c
22,236
ipynb
Jupyter Notebook
_data/ipython_notebooks/.ipynb_checkpoints/Compile_Master_Data-checkpoint.ipynb
annaswigart/local-food-access
abf836e894f063bdc6d13f08434c38e7bedeccbf
[ "BSD-3-Clause" ]
1
2015-01-22T02:11:20.000Z
2015-01-22T02:11:20.000Z
_data/ipython_notebooks/Compile_Master_Data.ipynb
annaswigart/local-food-access
abf836e894f063bdc6d13f08434c38e7bedeccbf
[ "BSD-3-Clause" ]
null
null
null
_data/ipython_notebooks/Compile_Master_Data.ipynb
annaswigart/local-food-access
abf836e894f063bdc6d13f08434c38e7bedeccbf
[ "BSD-3-Clause" ]
null
null
null
31.811159
92
0.313006
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbb5ae9e39dd31f407a432b7400251905bc958dc
15,583
ipynb
Jupyter Notebook
notebooks/chapter21/Passive Reinforcement Learning.ipynb
mbalduccini/aima-python
6624d12467867a84795838abccad579b1da57469
[ "MIT" ]
6,946
2016-02-27T19:28:07.000Z
2022-03-31T21:21:35.000Z
notebooks/chapter21/Passive Reinforcement Learning.ipynb
mbalduccini/aima-python
6624d12467867a84795838abccad579b1da57469
[ "MIT" ]
733
2016-02-29T20:12:12.000Z
2022-02-19T11:56:13.000Z
notebooks/chapter21/Passive Reinforcement Learning.ipynb
mbalduccini/aima-python
6624d12467867a84795838abccad579b1da57469
[ "MIT" ]
3,880
2016-02-24T21:13:35.000Z
2022-03-31T17:09:57.000Z
36.665882
578
0.615543
[ [ [ "# Introduction to Reinforcement Learning\n\nThis Jupyter notebook and the others in the same folder act as supporting materials for **Chapter 21 Reinforcement Learning** of the book *Artificial Intelligence: A Modern Approach*. The notebooks make use of the implementations in the `rl.py` module. We also make use of the implementation of MDPs in the `mdp.py` module to test our agents. It might be helpful if you have already gone through the Jupyter notebook dealing with the Markov decision process. Let us import everything from the `rl` module. It might be helpful to view the source of some of our implementations.", "_____no_output_____" ] ], [ [ "import os, sys\nsys.path = [os.path.abspath(\"../../\")] + sys.path\nfrom rl4e import *", "_____no_output_____" ] ], [ [ "Before we start playing with the actual implementations, let us review a couple of things about RL.\n\n1. Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. \n\n2. Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).\n\n-- Source: [Wikipedia](https://en.wikipedia.org/wiki/Reinforcement_learning)\n\nIn summary, we have a sequence of state-action transitions with rewards associated with some states. Our goal is to find the optimal policy $\\pi$ which tells us what action to take in each state.", "_____no_output_____" ], [ "# Passive Reinforcement Learning\n\nIn passive Reinforcement Learning the agent follows a fixed policy $\\pi$. 
Passive learning attempts to evaluate the given policy $\\pi$ - without any knowledge of the Reward function $R(s)$ and the Transition model $P(s'\\ |\\ s, a)$.\n\nThis is usually done by some method of **utility estimation**. The agent attempts to directly learn the utility of each state that would result from following the policy. Note that at each step, it has to *perceive* the reward and the state - it has no global knowledge of these. Thus, if the entire set of actions offers a very low probability of attaining some state $s_+$ - the agent may never perceive the reward $R(s_+)$.\n\nConsider a situation where an agent is given the policy to follow. Thus, at any point, it knows only its current state and current reward, and the action it must take next. This action may lead it to more than one state, with different probabilities.\n\nFor a series of actions given by $\\pi$, the estimated utility $U$ is:\n$$U^{\\pi}(s) = E\\left(\\sum_{t=0}^{\\infty} \\gamma^t R(s_t)\\right)$$\ni.e., the expected value of the summed discounted rewards until termination.\n\nBased on this concept, we discuss three methods of estimating utility: direct utility estimation, adaptive dynamic programming, and temporal-difference learning.\n\n### Implementation\n\nPassive agents are implemented in `rl4e.py` as various `Agent-Class`es.\n\nTo demonstrate these agents, we make use of the `GridMDP` object from the `MDP` module. `sequential_decision_environment` is similar to that used for the `MDP` notebook but has discounting with $\\gamma = 0.9$.\n\nThe `Agent-Program` can be obtained by creating an instance of the relevant `Agent-Class`. The `__call__` method allows the `Agent-Class` to be called as a function. 
The class needs to be instantiated with a policy ($\\pi$) and an `MDP` whose utility of states will be estimated.\n", "_____no_output_____" ] ], [ [ "from mdp import sequential_decision_environment", "_____no_output_____" ] ], [ [ "The `sequential_decision_environment` is a GridMDP object as shown below. The rewards are **+1** and **-1** in the terminal states, and **-0.04** in the rest. <img src=\"images/mdp.png\"> Now we define actions and a policy similar to **Fig 21.1** in the book.", "_____no_output_____" ] ], [ [ "# Action Directions\nnorth = (0, 1)\nsouth = (0,-1)\nwest = (-1, 0)\neast = (1, 0)\n\npolicy = {\n    (0, 2): east,  (1, 2): east,  (2, 2): east,   (3, 2): None,\n    (0, 1): north,                (2, 1): north,  (3, 1): None,\n    (0, 0): north, (1, 0): west,  (2, 0): west,   (3, 0): west, \n}", "_____no_output_____" ] ], [ [ "This environment will be extensively used in the following demonstrations.", "_____no_output_____" ], [ "## Direct Utility Estimation (DUE)\n \n The first, most naive method of estimating utility comes from the simplest interpretation of the above definition. We construct an agent that follows the policy until it reaches a terminal state. At each step, it logs its current state and reward. Once it reaches a terminal state, it can estimate the utility for each state for *that* iteration, by simply summing the discounted rewards from that state to the terminal one.\n\n It can now run this 'simulation' $n$ times and calculate the average utility of each state. If a state occurs more than once in a simulation, all of its utility values are counted separately.\n \n Note that this method may be prohibitively slow for very large state-spaces. Besides, **it pays no attention to the transition probability $P(s'\\ |\\ s, a)$.** It misses out on information that it is capable of collecting (say, by recording the number of times an action from one state led to another state). 
The next method addresses this issue.\n \n### Examples\n\nThe `PassiveDUEAgent` class in the `rl` module implements the Agent Program described in **Fig 21.2** of the AIMA Book. `PassiveDUEAgent` sums over rewards to find the estimated utility for each state. It thus requires the running of several iterations.", "_____no_output_____" ] ], [ [ "%psource PassiveDUEAgent", "_____no_output_____" ] ], [ [ "Now let's try the `PassiveDUEAgent` on the newly defined `sequential_decision_environment`:", "_____no_output_____" ] ], [ [ "DUEagent = PassiveDUEAgent(policy, sequential_decision_environment)", "_____no_output_____" ] ], [ [ "We run 200 trials through the Markov model in order to obtain converged utility values:", "_____no_output_____" ] ], [ [ "for i in range(200):\n    run_single_trial(DUEagent, sequential_decision_environment)\n    DUEagent.estimate_U()", "_____no_output_____" ] ], [ [ "Now let's print our estimated utility for each position:", "_____no_output_____" ] ], [ [ "print('\\n'.join([str(k)+':'+str(v) for k, v in DUEagent.U.items()]))", "(0, 1):0.7956939931414414\n(1, 2):0.9162054322837863\n(3, 2):1.0\n(0, 0):0.734717308253083\n(2, 2):0.9595117143816332\n(0, 2):0.8481387156375687\n(1, 0):0.4355860415209706\n(2, 1):-0.550079982553143\n(3, 1):-1.0\n" ] ], [ [ "## Adaptive Dynamic Programming (ADP)\n \n This method makes use of knowledge of the past state $s$, the action $a$, and the new perceived state $s'$ to estimate the transition probability $P(s'\\ |\\ s,a)$. 
It does this by the simple counting of new states resulting from previous states and actions.<br> \n The program runs through the policy a number of times, keeping track of:\n - each occurrence of state $s$ and the policy-recommended action $a$ in $N_{sa}$\n - each occurrence of $s'$ resulting from $a$ on $s$ in $N_{s'|sa}$.\n \n It can thus estimate $P(s'\\ |\\ s,a)$ as $N_{s'|sa}/N_{sa}$, which, in the limit of infinite trials, will converge to the true value.<br>\n Using the transition probabilities thus estimated, it can apply `POLICY-EVALUATION` to estimate the utilities $U(s)$ using the convergence properties of the Bellman equations.\n \n### Examples\n\nThe `PassiveADPAgent` class in the `rl` module implements the Agent Program described in **Fig 21.2** of the AIMA Book. `PassiveADPAgent` uses state transition and occurrence counts to estimate $P$, and then $U$. Go through the source below to understand the agent.", "_____no_output_____" ] ], [ [ "%psource PassiveADPAgent", "_____no_output_____" ] ], [ [ "We instantiate a `PassiveADPAgent` below with the `GridMDP` shown and train it for 200 steps. The `rl` module has a simple implementation to simulate a single step of the iteration. The function is called `run_single_trial`.", "_____no_output_____" ] ], [ [ "ADPagent = PassiveADPAgent(policy, sequential_decision_environment)\nfor i in range(200):\n    run_single_trial(ADPagent, sequential_decision_environment)", "Warning: Transition table is empty.\n" ] ], [ [ "The utilities are calculated as:", "_____no_output_____" ] ], [ [ "print('\\n'.join([str(k)+':'+str(v) for k, v in ADPagent.U.items()]))", "(0, 0):0.3014408531958584\n(0, 1):0.40583863351329275\n(1, 2):0.6581480346627065\n(3, 2):1.0\n(3, 0):0.0\n(3, 1):-1.0\n(2, 1):0.5341859348580892\n(2, 0):0.0\n(2, 2):0.810403779650285\n(1, 0):0.23129676787627254\n(0, 2):0.5214746706094832\n" ] ], [ [ "When comparing to the result of `PassiveDUEAgent`, they both have -1.0 for utility at (3,1) and 1.0 at (3,2). 
Another point to notice is that the spot with the highest utility for both agents is (2,2), aside from the terminal states, which is easy to understand when referring to the map.", "_____no_output_____" ], [ "## Temporal-difference learning (TD)\n \n Instead of explicitly building the transition model $P$, the temporal-difference model makes use of the expected closeness between the utilities of two consecutive states $s$ and $s'$.\n For the transition $s$ to $s'$, the update is written as:\n$$U^{\\pi}(s) \\leftarrow U^{\\pi}(s) + \\alpha \\left( R(s) + \\gamma U^{\\pi}(s') - U^{\\pi}(s) \\right)$$\n This model implicitly incorporates the transition probabilities by being weighted for each state by the number of times it is achieved from the current state. Thus, over a number of iterations, it converges similarly to the Bellman equations.\n The advantage of the TD learning model is its relatively simple computation at each step, rather than having to keep track of various counts.\n For $n_s$ states and $n_a$ actions, the ADP model would have $n_s \\times n_a$ numbers $N_{sa}$ and $n_s^2 \\times n_a$ numbers $N_{s'|sa}$ to keep track of. The TD model need only keep track of a utility $U(s)$ for each state.\n \n### Examples\n\n`PassiveTDAgent` uses temporal differences to learn utility estimates. We learn the difference between the states and back up the values to previous states. 
Let us look into the source before we see some usage examples.", "_____no_output_____" ] ], [ [ "%psource PassiveTDAgent", "_____no_output_____" ] ], [ [ "In creating the `TDAgent`, we use the **same learning rate** $\\alpha$ as given in the footnote of the book: $\\alpha(n)=60/(59+n)$", "_____no_output_____" ] ], [ [ "TDagent = PassiveTDAgent(policy, sequential_decision_environment, alpha = lambda n: 60./(59+n))", "_____no_output_____" ] ], [ [ "Now we run **200 trials** for the agent to estimate Utilities.", "_____no_output_____" ] ], [ [ "for i in range(200):\n run_single_trial(TDagent,sequential_decision_environment)", "_____no_output_____" ] ], [ [ "The calculated utilities are:", "_____no_output_____" ] ], [ [ "print('\\n'.join([str(k)+':'+str(v) for k, v in TDagent.U.items()]))", "(0, 1):0.36652562797696076\n(1, 2):0.6584162739552614\n(3, 2):1\n(0, 0):0.27775491505339645\n(3, 0):0.0\n(3, 1):-1\n(2, 1):0.6097040420148784\n(2, 0):0.0\n(2, 2):0.7936759402770092\n(1, 0):0.19085842384266813\n(0, 2):0.5258782999305713\n" ] ], [ [ "When comparing to previous agents, the result of `PassiveTDAgent` is closer to `PassiveADPAgent`.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbb5b609b9ebe0075ad425dcc49d1b722601e758
504,750
ipynb
Jupyter Notebook
.ipynb_checkpoints/New_One-checkpoint.ipynb
iammosespaulr/TDA-tutorial
657873e95a6f7024406c2c1f2ce58718220dcd66
[ "MIT" ]
1
2020-06-21T15:22:23.000Z
2020-06-21T15:22:23.000Z
New_One.ipynb
iammosespaulr/TDA-tutorial
657873e95a6f7024406c2c1f2ce58718220dcd66
[ "MIT" ]
null
null
null
New_One.ipynb
iammosespaulr/TDA-tutorial
657873e95a6f7024406c2c1f2ce58718220dcd66
[ "MIT" ]
null
null
null
85.521857
80,164
0.705191
[ [ [ "### duffing oscillator\n\nimport matplotlib\nimport numpy as np\nfrom numpy import zeros, linspace, pi, cos, array\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Polygon\nfrom matplotlib.patches import Circle\nfrom matplotlib.collections import PatchCollection\nfrom matplotlib.path import Path\nfrom matplotlib.patches import PathPatch\n\nt0=0\ntf=30*pi\nomega=1.2 \nbeta=1 \ndelta=0.3 \ngamma=0.35 \nalpha=1 \nn=10000 #iteration\n\nsampsize=100 #SampleSize\nsampstart=5000 #SampleStart\nsampend=n #SampleEnd\n\nh=(tf-t0)/(n-1) #stepsize\nprint('the value of h is',h)\nu0=0 #initial displacement\n\n\nt=linspace(t0,tf,n)\nv=zeros([n])\nu=zeros([n])\nu[0]=u0\nv[0]=0 #initial velocity\n\n##### DEFINING FUNCTIONS\n\ndef dudt(t,u,v): #### u' = v \n return(v)\n\ndef funt(t,u,v): #### v' = -delta*v+alpha*u-beta*u**3+gamma*cos(omega*t) \n return (-delta*v+alpha*u-beta*u**3+gamma*cos(omega*t))\n\n###### RK4 ALGORITHM USING FOR LOOP\n \nfor i in range(1,n):\n k1=h*dudt(t[i-1],u[i-1],v[i-1])\n l1=h*funt(t[i-1],u[i-1],v[i-1])\n \n k2=h*dudt(t[i-1]+(0.5*h),u[i-1]+(k1*0.5),v[i-1]+(l1*0.5))\n l2=h*funt(t[i-1]+(0.5*h),u[i-1]+(k1*0.5),v[i-1]+(l1*0.5))\n \n k3=h*dudt(t[i-1]+(0.5*h),u[i-1]+(k2*0.5),v[i-1]+(l2*0.5))\n l3=h*funt(t[i-1]+(0.5*h),u[i-1]+(k2*0.5),v[i-1]+(l2*0.5))\n \n k4=h*dudt(t[i-1]+h,u[i-1]+(k3),v[i-1]+(l3))\n l4=h*funt(t[i-1]+h,u[i-1]+(k3),v[i-1]+(l3))\n \n u[i]=u[i-1]+(1/6)*(k1+(2*k2)+(2*k3)+k4)\n v[i]=v[i-1]+(1/6)*(l1+(2*l2)+(2*l3)+l4)\n\n### PLOT\n\nplt.plot(t,u,'-r')\nplt.xlabel('time(t)')\nplt.ylabel('displacement(u)')\nplt.show()\nprint('The value of GAMMA =',gamma)\nfig = plt.figure()\nplt.plot(u[sampstart:sampend],v[sampstart:sampend],'-g')\nplt.xlabel('displacement(u)')\nplt.ylabel('velocity(v)')\nplt.show()\n\n#### InterPlay\n\nimport pandas as pd \nxx = lambda a: np.interp(a, (a.min(), a.max()), (0, +1))\nuu = xx(u[sampstart:sampend:int(sampsize/2)])\nvv = xx(v[sampstart:sampend:int(sampsize/2)])\nhuh = 
np.array(list(zip(uu,vv)))\n\nhuh = huh[np.random.choice(huh.shape[0], sampsize, replace=False), :]\nu1,v1 = zip(*huh)\n\n#print(huh)\npd.DataFrame(huh).to_csv(\"data/seed1_data.csv\", header=['X_value','Y_value'], index=True, index_label='point_id')\n\n#### SAMPLING\n\nprint(\"SAMPLING\")\n\n\nfig = plt.figure()\nplt.plot(u1,v1,'.g')\nplt.xlabel('displacement(u)')\nplt.ylabel('velocity(v)')\nplt.show()", "the value of h is 0.009425720532822661\n" ], [ "my_data = huh\nimport gudhi\nfrom gudhi import representations\n\nrips_complex = gudhi.RipsComplex(points=my_data)\n\nsimplex_tree = rips_complex.create_simplex_tree(max_dimension=2)\nresult_str = 'Rips complex is of dimension ' + repr(simplex_tree.dimension()) + ' - ' + \\\n   repr(simplex_tree.num_simplices()) + ' simplices - ' + \\\n   repr(simplex_tree.num_vertices()) + ' vertices'\nprint(result_str)\n\nBarCodes_RipsAll = simplex_tree.persistence()\nBarCodes_Rips1 = list(filter(lambda BettiNum: BettiNum[0] == 1, BarCodes_RipsAll))\n\ngudhi.plot_persistence_barcode(BarCodes_Rips1)\ngudhi.plot_persistence_diagram(BarCodes_Rips1)\n\nentropy = representations.Entropy(normalized=True)\nprint(\"Entropy for Dim 1 is {}\".format(entropy(np.array([j for i, j in BarCodes_Rips1]))))\n\nmax_filtration_value = np.array(list(simplex_tree.get_filtration()))[-1, 1]", "Rips complex is of dimension 2 - 166750 simplices - 100 vertices\nEntropy for Dim 1 is [1.58632277]\n" ], [ "pointss = [[1, 1], [7, 0], [4, 6], [9, 6], [0, 14], [2, 19], [9, 17]]\n#for i in (gudhi.RipsComplex(points=pointss).create_simplex_tree().get_skeleton(2)):\n#    print(i[0])\nsimplex_tree = gudhi.RipsComplex(points=pointss, max_edge_length=6.0).create_simplex_tree()\n#print(simplex_tree.persistence())\n#list(simplex_tree.get_simplices())", "_____no_output_____" ], [ "def genDiagWithFilt(points, length):\n    rc = gudhi.RipsComplex(points=points, max_edge_length=length)\n    st = rc.create_simplex_tree(max_dimension=2)\n\n    BarCodes_RipsAll = st.persistence()\n    BarCodes_Rips1 = list(\n        
filter(lambda BettiNum: BettiNum[0] == 1, BarCodes_RipsAll))\n max_filtration_value = np.array(list(st.get_filtration()))[-1, 1]\n\n # We are only going to plot the triangles\n triangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0]) == 3])\n return max_filtration_value, triangles, BarCodes_Rips1\n\nfrom IPython.display import display\nimport ipywidgets as widgets\nfrom ipywidgets import interact, interact_manual\n\n@interact\ndef blah(length=(max_filtration_value/10,max_filtration_value,max_filtration_value/10)):\n max_filtration_value, triangles, BarCodes_Rips1 = genDiagWithFilt(huh, length=length)\n fig2, ax2 = plt.subplots()\n ax2.set_aspect('equal')\n ax2.triplot(u1, v1, triangles, 'go-', lw=1.0,\n alpha=0.5)\n ax2.set_title('triplot of user-specified triangulation, filtration: {}'.format(max_filtration_value))\n ax2.set_xlabel('Longitude (degrees)')\n ax2.set_ylabel('Latitude (degrees)')\n\n plt.show()\n gudhi.plot_persistence_barcode(BarCodes_Rips1)", "_____no_output_____" ], [ "fig2, ax2 = plt.subplots()\npatches = []\n\nhey = [i[0] for i in st.get_skeleton(2)]\nkalel = [huh[j] for j in hey]\n\nfor x1, y1 in huh:\n circle = Circle((x1, y1), max_filtration_value, alpha=0.1)\n patches.append(circle)\n\nfor kkk in kalel:\n if len(kkk) == 2:\n path_data = [(Path.MOVETO, kkk[0]), (Path.LINETO, kkk[1]),]\n codes, verts = zip(*path_data)\n path = Path(verts, codes)\n patch = PathPatch(path, edgecolor='black', alpha=0.7)\n ax2.add_patch(patch)\n if len(kkk) > 2:\n polygon = Polygon(kkk, edgecolor='black', alpha=0.7)\n patches.append(polygon)\n\np = PatchCollection(patches, cmap=matplotlib.cm.jet, alpha=0.1)\n\ncolors = 100*np.random.rand(len(patches))\np.set_array(np.array(colors))\nplt.ylim((-0.1, 1.1))\nplt.xlim((-0.1, 1.1))\nax2.add_collection(p)\n#plt.plot(u1,v1,'.g')\n\nplt.show()", "_____no_output_____" ], [ "import numpy as np\nimport gudhi\nimport matplotlib.pyplot as plt\nimport matplotlib.tri as tri\nimport numpy as np\n\npoints = huh\nrc = 
gudhi.RipsComplex(points=points, max_edge_length=0.1)\nst = rc.create_simplex_tree(max_dimension=2)\n\nBarCodes_RipsAll = st.persistence()\nBarCodes_Rips1 = list(filter(lambda BettiNum: BettiNum[0] == 1, BarCodes_RipsAll))\n\nmax_filtration_value = np.array(list(st.get_filtration()))[-1,1]\n\n# We are only going to plot the triangles\ntriangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0])==3])\n\nfig21, ax21 = plt.subplots()\nax21.set_aspect('equal')\nax21.triplot(u1, v1, triangles, 'go-', lw=1.0, alpha=0.5, ms=max_filtration_value*100)\nax21.set_title('triplot of user-specified triangulation')\nax21.set_xlabel('Longitude (degrees)')\nax21.set_ylabel('Latitude (degrees)')\n\nplt.show()\n\nprint(\"Max Filtration is {}\".format(max_filtration_value))\ngudhi.plot_persistence_barcode(BarCodes_Rips1)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cbb5d3ea00bcdc1e16a8ac2c0c1ce7acad73cd0a
10,704
ipynb
Jupyter Notebook
figure_1/1.sample_comparable_authors.ipynb
Michael98Liu/fair-and-inclusive-scientific-publishing
c47db39ad031a1174d20d2e747c053328a54cab1
[ "Apache-2.0" ]
1
2022-02-18T17:09:50.000Z
2022-02-18T17:09:50.000Z
figure_1/1.sample_comparable_authors.ipynb
Michael98Liu/fair-and-inclusive-scientific-publishing
c47db39ad031a1174d20d2e747c053328a54cab1
[ "Apache-2.0" ]
null
null
null
figure_1/1.sample_comparable_authors.ipynb
Michael98Liu/fair-and-inclusive-scientific-publishing
c47db39ad031a1174d20d2e747c053328a54cab1
[ "Apache-2.0" ]
null
null
null
27.658915
261
0.440116
[ [ [ "# Sample authors while controlling for year-of-first-publication\n\nFor each editor, this notebook samples a set of authors whose year-of-first-publication matches that of the editor. For the sake of demonstration, we picked a subset of authors to match against so that the code could finish in a reasonable amount of time.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom tqdm.notebook import tqdm", "_____no_output_____" ], [ "editors = pd.read_csv(\"../data/SampleEditors.csv\", sep='\\t',\n dtype={'issn':str,'NewAuthorId':int,'start_year':int,'end_year':int})\neditors.shape", "_____no_output_____" ], [ "editor_career = pd.read_csv('../data/EditorCareerDiscipline.csv',sep='\\t',\n dtype={'NewAuthorId':int,'Yfp':int,'Ylp':int,'Parent':int})\neditor_career.shape", "_____no_output_____" ], [ "%%time\n# the first year that an author has a known affiliation\nfirst_year = pd.read_csv('../data/figure_1/FirstYearWithKnownAff.csv',sep='\\t',\n dtype={'NewAuthorId':int,'Year':int})\nfirst_year = first_year.rename(columns={'Year':'FirstYear'})\nprint(first_year.shape)", "(4097, 2)\nCPU times: user 1.77 ms, sys: 661 µs, total: 2.43 ms\nWall time: 2.32 ms\n" ], [ "%%time\nauthor_career = pd.read_csv('../data/figure_1/AuthorEraDisp.csv',\n sep='\\t', memory_map=True,\n usecols=['NewAuthorId', 'Parent', 'Yfp', 'Ylp'], # \n dtype={'NewAuthorId':int, 'Yfp':int, 'Ylp':int, 'Parent':int})\nprint(author_career.shape)", "(4097, 4)\nCPU times: user 3.63 ms, sys: 2 µs, total: 3.63 ms\nWall time: 3.47 ms\n" ], [ "editors = editors.merge(editor_career, on='NewAuthorId')\nprint(editors.shape)", "(10, 7)\n" ], [ "def sample(df, year):\n dfs = []\n \n for seed in range(50):\n np.random.seed(seed)\n\n sampled = df.groupby(['EditorsNewId','issn']).apply(\n lambda x: x.filter([np.random.choice(x.index)], axis=0)).reset_index(drop=True)\n \n dfs.append(sampled)\n \n return pd.concat(dfs, ignore_index=True, sort=False)", "_____no_output_____" ], [ "def 
match(editors, author_career):\n dfs = []\n\n for year in tqdm(range(editors.Yfp.max(), editors.Yfp.min()-1, -1)):\n\n edi = editors[editors.Yfp == year]\n aut = author_career[author_career.Yfp == year]\n\n if edi.shape[0] == 0 or aut.shape[0] == 0: continue\n\n matched = edi.rename(columns={'NewAuthorId':'EditorsNewId'}).merge(aut, on='Yfp')\n matched = matched[~matched.NewAuthorId.isin(editors.NewAuthorId)]\n\n # make sure that at least one aff was known before\n matched = matched.merge(first_year, on='NewAuthorId')\n matched = matched[matched.start_year >= matched.FirstYear] \n\n sampled = sample(matched, year)\n \n dfs.append(sampled)\n \n return pd.concat(dfs, ignore_index=True, sort=False)", "_____no_output_____" ], [ "%%time\nmatched = match(editors, author_career)\nprint(matched.shape)", "_____no_output_____" ], [ "matched.head()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb5e3b8187e5e31194b03790f423b8ab091a75d
138,306
ipynb
Jupyter Notebook
robyn_workspace/Modeling Attempt Notebooks/Feature Engineering.ipynb
robynmundle/predicting_flight_delays
86813a90de41cc17694848095fe6ea00fe1510b0
[ "MIT" ]
null
null
null
robyn_workspace/Modeling Attempt Notebooks/Feature Engineering.ipynb
robynmundle/predicting_flight_delays
86813a90de41cc17694848095fe6ea00fe1510b0
[ "MIT" ]
null
null
null
robyn_workspace/Modeling Attempt Notebooks/Feature Engineering.ipynb
robynmundle/predicting_flight_delays
86813a90de41cc17694848095fe6ea00fe1510b0
[ "MIT" ]
null
null
null
82.867585
39,880
0.714452
[ [ [ "Import Packages", "_____no_output_____" ] ], [ [ "import pandas as pd\npd.set_option('display.max_columns', None)\nimport numpy as np\nfrom sklearn import preprocessing\nimport time\nfrom datetime import datetime, date, time\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\nimport warnings\nwarnings.filterwarnings('ignore')\nimport copy", "_____no_output_____" ] ], [ [ "Completed Functions", "_____no_output_____" ] ], [ [ "# CRS_ELAPSED_TIME --> HAUL_LENGTH\ndef haul(df, col):\n '''Determine if flight length is SHORT, MEDIUM or LONG based on expected elapsed flight time. \n Input: \n (0) df containing flight information, \n (1) column containing the elapsed flight time in minutes \n Output: 'haul_length' column determining haul length category per row in df'''\n length=[]\n for i in df[col]:\n if i < (3*60): # up to 3 hours\n length.append(0) # 0 = SHORT HAUL\n elif (i >= (3*60)) and (i < (6*60)): # 3-6 hours\n length.append(1) # 1 = MEDIUM HAUL\n elif i >= (6*60):# 6+ hours\n length.append(2) # 2 = LONG HAUL\n df['haul_length'] = length\n# example of implementation: haul(flight10k, 'crs_elapsed_time')\n\n# CRS_DEP_TIME (hhmm) --> CRS_DEP_TIME (hh) -- to be used within time_day function\ndef gethour(df,col):\n '''Convert hhmm to hh (24-hr) hour-only output\n Input: \n (0) df containing flight information, \n (1) column containing the hhmm time \n Output: rewrite on input column in rounded hh format'''\n values = []\n for i in df[col]:\n mins = (i % 100) / 60 \n hour = i // 100\n hh = round(hour+mins)\n values.append(hh)\n df[col] = values\n# example of implementation: gethour(flight10k, 'crs_dep_time')\n\n# CRS_DEP/ARR_TIME (hhmm) --> hot encoded categorical time of day 'morning, aft...' \ndef time_day(df, col):\n ''' Input:\n (0) df containing flight information\n (1) corresponding column of time of flight (i.e. 
departure or arrival) (format hhmm)\n Output: rewrite of time column into categorical MORNING, AFTERNOON, EVENING, or OVERNIGHT'''\n gethour(df, col)\n timeday = []\n for i in df[col]:\n if (i>=23) or (i<5):\n timeday.append(0) # 0 = OVERNIGHT\n elif (i>=5) and (i<12):\n timeday.append(1) # 1 = MORNING\n elif (i>=12) and (i<18):\n timeday.append(2) # 2 = AFTERNOON\n elif (i>=18) and (i<23):\n timeday.append(3) # 3 = EVENING\n return timeday\n# example of implementation: time_day(flight10k, 'crs_dep_time')", "_____no_output_____" ] ], [ [ "Open CSVs of Pre-Evaluated Features", "_____no_output_____" ] ], [ [ "airline_rating = pd.read_csv('data/airline_delay_rating.csv', index_col=0)\norigin_traffic = pd.read_csv('data/origin_traffic_rating.csv', index_col=0)\norigin_delay = pd.read_csv('data/origin_delay_rating.csv', index_col=0)\ndest_traffic = pd.read_csv('data/dest_traffic_rating.csv', index_col=0)\ndelay_dep_h = pd.read_csv('data/crs_dep_time_delay_rating.csv', index_col=0)\ndelay_arr_h = pd.read_csv('data/crs_arr_time_delay_rating.csv', index_col=0)\nweather_df = pd.read_csv('data/weather_df_monthlymean_bins.csv', index_col=0)", "_____no_output_____" ] ], [ [ "Open CSV of Flight Information to Model", "_____no_output_____" ] ], [ [ "# This is for the dataset you want to investigate\nflights = pd.read_csv('data/flights250K.csv', index_col=0)\nflights.head(1)\nflights.shape", "_____no_output_____" ] ], [ [ "Build df based on columns we will use in transformation - Data Cleaning and Feature Implementation\n\n**See option A or B in first rows to build df based on training or test dataset** (for copy pasta later)", "_____no_output_____" ] ], [ [ "# A - if this is a training dataset, we need arr_delay as our target variable so use this first block of code\nmodel_df = flights[flights['cancelled'] == 0][['arr_delay','fl_date','op_unique_carrier','origin','dest','crs_dep_time','crs_arr_time','crs_elapsed_time','distance']]\n# B - if this is a testing dataset, we will not 
have arr_delay and cannot include it\n#model_df = flights[['tail_num','op_carrier_fl_num','fl_date','op_unique_carrier','origin','dest','crs_dep_time','crs_arr_time','crs_elapsed_time','distance']]\n\n# first regression will be simple-- is the flight going to be delayed or not?\nif 'arr_delay' in model_df:\n model_df['delay_flag'] = model_df['arr_delay'].map(lambda x: 0 if x <= 0 else 1) # new target 0 or 1\n arr_delay = model_df['arr_delay'].values # to add back in later, let's store these values for us\n model_df.drop(columns='arr_delay', inplace=True) # not our current target variable anymore\n\n# convert date to datetime in order to grab the month\nmodel_df['fl_date'] = pd.to_datetime(model_df['fl_date'])\n#model_df['year'] = model_df['fl_date'].dt.year # decided I do not want year\nmodel_df['month'] = model_df['fl_date'].dt.month\nmodel_df['day'] = model_df['fl_date'].dt.day\nmodel_df['weekday'] = model_df['fl_date'].dt.dayofweek\nmodel_df.drop(columns='fl_date', inplace=True) # this won't be needed after we got month\n\n# join weather columns by origin and destination per each monthly average\nmodel_df = model_df.merge(weather_df, left_on=['month','origin'], right_on=['month','airport'], how='left')\nmodel_df.rename(columns={'mean_precip_monthly':'origin_precip_monthly','mean_snow_monthly':'origin_snow_monthly','mean_wind_monthly':'origin_wind_monthly','mean_cloud_monthly':'origin_cloud_monthly'}, inplace=True)\nmodel_df.drop(columns='airport', inplace=True)\nmodel_df = model_df.merge(weather_df, left_on=['month','dest'], right_on=['month','airport'], how='left')\nmodel_df.rename(columns={'mean_precip_monthly':'dest_precip_monthly','mean_snow_monthly':'dest_snow_monthly','mean_wind_monthly':'dest_wind_monthly','mean_cloud_monthly':'dest_cloud_monthly'}, inplace=True)\nmodel_df.drop(columns='airport', inplace=True)\nmodel_df = model_df.fillna(0)\n\n# set delay rating based on expected performance of the airline\nmodel_df = model_df.merge(airline_rating, 
left_on='op_unique_carrier', right_on='airline', how='left')\nmodel_df.drop(columns=['airline'],inplace=True) \n\n# obtain haul length of the flight using haul function defined above\nhaul(model_df, 'crs_elapsed_time')\n# model_df.drop(columns=['crs_elapsed_time'],inplace=True)\n\n# new column of categorical time of day information using time_day function defined above as well as expected delays relating to the time of day departure\nmodel_df['dep_timeday'] = time_day(model_df, 'crs_dep_time')\nmodel_df['arr_timeday'] = time_day(model_df, 'crs_arr_time')\nmodel_df = model_df.merge(delay_dep_h, left_on='crs_dep_time', right_on='crs_dep_time', how='left')\nmodel_df = model_df.merge(delay_arr_h, left_on='crs_arr_time', right_on='crs_arr_time', how='left')\n#model_df.drop(columns=['crs_dep_time','crs_arr_time'],inplace=True)\n\n# classify the expected traffic of the origin and departure airports\nmodel_df = model_df.merge(origin_traffic, left_on='origin', right_on='origin', how='left')\nmodel_df = model_df.merge(dest_traffic, left_on='dest', right_on='dest', how='left')\nmodel_df['busy_origin'].fillna(value=model_df['busy_origin'].mean(), inplace=True)\nmodel_df['busy_dest'].fillna(value=model_df['busy_dest'].mean(), inplace=True)\nmodel_df = model_df.merge(origin_delay, left_on='origin', right_on='origin', how='left')\n#model_df.drop(columns=['origin','dest'],inplace=True)\n\n# currently hashed out the dropping of the raw features to test out improved correlations - to keep cat feats we need to encode\n# label encode values for identification of the flight later\nle = preprocessing.LabelEncoder()\nmodel_df['op_unique_carrier'] = le.fit_transform(model_df['op_unique_carrier'].values)\nmodel_df['origin'] = le.fit_transform(model_df['origin'].values)\nmodel_df['dest'] = le.fit_transform(model_df['dest'].values)\n\n# have a look at the dataset\nmodel_df.head(10)\nmodel_df.shape", "_____no_output_____" ] ], [ [ "Import More Packages", "_____no_output_____" ] ], [ [ "from 
sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn.metrics import r2_score\nfrom sklearn import metrics\n\nimport seaborn as sns; sns.set(style='darkgrid', context='talk')\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "Data Scaling", "_____no_output_____" ] ], [ [ "if 'delay_flag' in model_df: # training dataset\n X = model_df.drop(columns=['delay_flag'])\nelse: # testset\n X = model_df\ny = model_df['delay_flag']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\nscaler = RobustScaler()\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\nX_test = scaler.transform(X_test)", "_____no_output_____" ] ], [ [ "Random Forest Classifier for is it a delay or not? 
-- run in colab, took 2.25 hours for no further improvement to log reg", "_____no_output_____" ] ], [ [ "#%%time\n#tree_params = {\"n_estimators\": [100, 250, 500, 750, 1000],\n# 'max_depth': [int(x) for x in np.linspace(1, 32, num = 5)]}\n#grid_tree = GridSearchCV(RandomForestClassifier(), tree_params, cv=3, verbose=1, n_jobs=-1)\n#grid_tree.fit(X_train, y_train)\n#forest = grid_tree.best_estimator_\n#forest.best_params_\n#forest_score = cross_val_score(forest, X_train, y_train, cv=3)\n#print('Random Forest Classifier Cross Validation Score: ', round(forest_score.mean() * 100, 2).astype(str) + '%')\n#print(\"Training Score: \", round(grid_tree.best_score_,2))\n#print(f\"Residual Sum of Squares (RSS): {round(np.mean((grid_tree.predict(X_test) - y_test) ** 2),2)}\")\n\n#y_forest = forest.predict(X_test)\n#y_forest_proba = forest.predict_proba(X_test)\n#print('\\nRandom Forest Classifier - y_test')\n#metrics.confusion_matrix(y_test, y_forest)\n#print('AUC Score \\t{:.2f}\\n'.format(metrics.roc_auc_score(y_test, y_forest_proba, multi_class='ovr', average=\"weighted\")))", "_____no_output_____" ] ], [ [ "Logistic Regression -- is it a delay or not?", "_____no_output_____" ] ], [ [ "import pickle", "_____no_output_____" ], [ "%%time\n# Logistic Regression \nlog_params = {'penalty': ['l1', 'l2'], 'C': np.logspace(-4, 4, 20)}\ngrid_log = GridSearchCV(LogisticRegression(solver='liblinear'), log_params, cv=5, verbose=True, n_jobs=-1)\ngrid_log.fit(X_train, y_train)\nlogreg = grid_log.best_estimator_\nprint(grid_log.best_params_,'\\n')\n\nprint(\"Training R2 / Variance: \", round(grid_log.best_score_,2))\nprint(f\"Residual Sum of Squares (RSS): {round(np.mean((grid_log.predict(X_test) - y_test) ** 2),2)}\")\n\ny_logreg = logreg.predict(X_test)\nprint('\\nLogistic Regression - y_test')\n#print('Test R2 Score \\t{:.2f}'.format(metrics.r2_score(y_test, y_logreg)))\nprint('Recall \\t\\t{:.2f}'.format(metrics.recall_score(y_test, y_logreg)))\nprint('Precision 
\\t{:.2f}'.format(metrics.precision_score(y_test, y_logreg)))\nprint('F1 Score \\t{:.2f}'.format(metrics.f1_score(y_test, y_logreg)))\nprint('Accuracy \\t{:.2f} <--'.format(metrics.accuracy_score(y_test, y_logreg)))\nprint('AUC Score \\t{:.2f}\\n'.format(metrics.roc_auc_score(y_test, y_logreg)))\n\nfilename = 'model1_logreg_delayornot.sav'\npickle.dump(logreg, open(filename, 'wb'))", "Fitting 5 folds for each of 40 candidates, totalling 200 fits\n{'C': 0.08858667904100823, 'penalty': 'l2'} \n\nTraining R2 / Variance: 0.65\nResidual Sum of Squares (RSS): 0.35\n\nLogistic Regression - y_test\nRecall \t\t0.01\nPrecision \t0.53\nF1 Score \t0.02\nAccuracy \t0.65 <--\nAUC Score \t0.50\n\nCPU times: user 4.55 s, sys: 3.61 s, total: 8.16 s\nWall time: 7min 15s\n" ], [ "# load from pickle\nre_logreg = pickle.load(open(filename, 'rb'))\nresult = re_logreg.score(X_test, y_test)\nprint('Re-Loaded from Pickle: ', result)", "Re-Loaded from Pickle: 0.6513643855686466\n" ] ], [ [ "Wow Pickle, you're a nice feature. Note: LogReg takes 7 minutes for 200 fits. If reloading from pickle, reminder to update logreg to re_logreg in the following cells. Re-ran this in COLAB -- also 7 minutes", "_____no_output_____" ] ], [ [ "y_score = logreg.decision_function(X_test)\nfrom sklearn.metrics import average_precision_score\naverage_precision = average_precision_score(y_test, y_score, average='micro')\nfrom sklearn.metrics import plot_precision_recall_curve\ndisp = plot_precision_recall_curve(logreg, X_test, y_test);\ndisp.ax_.set_title('2-class Precision-Recall curve: '\n 'AP={0:0.2f}'.format(average_precision));", "_____no_output_____" ] ], [ [ "That's unfortunate... bad recall = lacking sensitivity. 
Too high of false negatives -- overpredicting NON-DELAY", "_____no_output_____" ] ], [ [ "y_pred_proba = logreg.predict_proba(X_test)[::,1]\nfpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)\nauc = metrics.roc_auc_score(y_test, y_pred_proba)\nplt.plot(fpr,tpr,label=\"data 1, auc=\"+str(auc))\nplt.legend(loc=4)\nplt.show()", "_____no_output_____" ] ], [ [ "Getting sad now that I can't even tell if it's a delay or not simply. For reference, by chance = auc of 0.5\n\nLet's predict the entire X now so we can implement this result as a column in the model_df to progress into another model that defines 'how much' the delay will be if we were lucky enough to classify it as a delay in the first model.", "_____no_output_____" ] ], [ [ "y_logregX = logreg.predict(X)\nif 'delay_pred' not in model_df: \n model_df['delay_pred'] = y_logregX", "_____no_output_____" ], [ "model_df['arr_delay'] = arr_delay\nmodel_df.dropna(inplace=True)\n\nmodel_df.drop(columns='delay_flag', inplace=True)\ndelayed = model_df['delay_pred'] == 1\nmodel_df2 = model_df[delayed]\nmodel_df2.head(1)\nmodel_df2.shape", "_____no_output_____" ], [ "delay_bin = []\nfor i in model_df2['arr_delay']:\n if i <= 5:\n delay_bin.append(0) # no delay (within 5 minutes)\n elif (i > 5) and (i <= 10):\n delay_bin.append(1) # expect a 5 to 10 minute delay\n elif (i > 10) and (i <= 20):\n delay_bin.append(2) # expect a 10 to 20 minute delay\n elif (i >= 20) and (i <= 45):\n delay_bin.append(3) # expect a 20 to 45 minute delay\n elif (i > 45):\n delay_bin.append(4) # expect a 45+ minute delay\n \nmodel_df2['delay_range'] = delay_bin\nif 'arr_delay' in model_df2:\n model_df2.drop(columns='arr_delay', inplace=True)\nmodel_df2.head(1)\nmodel_df2.shape\n#model_df2['delay_range'].value_counts()", "_____no_output_____" ], [ "# we filtered for all delay_pred==1 -- so we might as well drop this column\nmodel_df2.drop(columns='delay_pred', inplace=True)", "_____no_output_____" ] ], [ [ "Second Model: Arrival Delay Range 
Prediction \n\nData Scaling", "_____no_output_____" ] ], [ [ "if 'delay_range' in model_df2: # training dataset\n X = model_df2.drop(columns=['delay_range'])\nelse: # testset\n X = model_df2\ny = model_df2['delay_range']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\nscaler = RobustScaler()\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\nX_test = scaler.transform(X_test)", "_____no_output_____" ] ], [ [ "Random Forest Classifier -- can we predict the predicted delay's time allotment?", "_____no_output_____" ] ], [ [ "%%time\ntree_params = {\"n_estimators\": [100, 250, 500, 750, 1000],\n 'max_depth': [int(x) for x in np.linspace(1, 32, num = 5)]}\ngrid_tree = GridSearchCV(RandomForestClassifier(), tree_params, cv=3, verbose=1, n_jobs=-1)\ngrid_tree.fit(X_train, y_train)\nforest = grid_tree.best_estimator_\nforest.best_params_\nforest_score = cross_val_score(forest, X_train, y_train, cv=3)\nprint('Random Forest Classifier Cross Validation Score: ', round(forest_score.mean() * 100, 2).astype(str) + '%')\nprint(\"Training Score: \", round(grid_tree.best_score_,2))\n\nfilename = 'model2_randforest_delayrange.sav'\npickle.dump(logreg, open(filename, 'wb'))", "Fitting 3 folds for each of 25 candidates, totalling 75 fits\n" ] ], [ [ "Note for RandomForestClassifier: 75 fits ran for ... minutes -- it was cpu terminated on my computer many times. 
:( \n\nI've run it in colab now which will be saved to git under the same name + COLAB FINAL", "_____no_output_____" ] ], [ [ "# load from pickle\nre_forest = pickle.load(open(filename, 'rb'))\nresult = re_forest.score(X_test, y_test)\nprint('Re-Loaded from Pickle: ', result)", "_____no_output_____" ], [ "y_forest = forest.predict(X_test)\ny_forest_proba = forest.predict_proba(X_test)\nprint('\\nRandom Forest Classifier - y_test')\nmetrics.confusion_matrix(y_test, y_forest)\nprint('AUC Score \\t{:.2f}\\n'.format(metrics.roc_auc_score(y_test, y_forest_proba, multi_class='ovr', average=\"weighted\")))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb5f257fca8085b23c716d664aa969c4bc5180c
35,031
ipynb
Jupyter Notebook
docs/tutorial/04-pipeline.ipynb
jasonnIguazio/ghpages-mlrun
b3d719a6afa41a50401dc2f8f90390204278a6c7
[ "Apache-2.0" ]
1
2021-02-17T08:12:33.000Z
2021-02-17T08:12:33.000Z
docs/tutorial/04-pipeline.ipynb
jasonnIguazio/ghpages-mlrun
b3d719a6afa41a50401dc2f8f90390204278a6c7
[ "Apache-2.0" ]
null
null
null
docs/tutorial/04-pipeline.ipynb
jasonnIguazio/ghpages-mlrun
b3d719a6afa41a50401dc2f8f90390204278a6c7
[ "Apache-2.0" ]
null
null
null
46.398675
1,257
0.613714
[ [ [ "# Part 4: Projects and Automated ML Pipeline\n\nThis part of the MLRun getting-started tutorial walks you through the steps for working with projects, source control (git), and automating the ML pipeline.\n\nMLRun Project is a container for all your work on a particular activity: all the associated code, functions, \njobs/workflows and artifacts. Projects can be mapped to `git` repositories to enable versioning, collaboration, and CI/CD.\n\nYou can create project definitions using the SDK or a yaml file and store those in MLRun DB, file, or archive.\nOnce the project is loaded you can run jobs/workflows which refer to any project element by name, allowing separation between configuration and code. See the [Projects, Automation & CI/CD](../projects/overview.md) section for details.\n\nProjects contain `workflows` that execute the registered functions in a sequence/graph (DAG), and which can reference project parameters, secrets and artifacts by name. MLRun currently supports two workflow engines, `local` (for simple tasks) and [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/pipelines-quickstart/) (for more complex/advanced tasks). MLRun also supports a real-time workflow engine (see [MLRun serving graphs](../serving/serving-graph.md)). \n\n> **Note**: The Iguazio Data Science Platform has a default (pre-deployed) shared Kubeflow Pipelines service (`pipelines`).\n\nAn ML Engineer can gather the different functions created by the Data Engineer and Data Scientist and create this automated pipeline.\n\nThe tutorial consists of the following steps:\n\n1. [Setting up Your Project](#gs-tutorial-4-step-setting-up-project)\n2. [Updating Project and Function Definitions](#gs-tutorial-4-step-import-functions)\n3. [Defining and Saving a Pipeline Workflow](#gs-tutorial-4-step-pipeline-workflow-define-n-save)\n4. [Registering the Workflow](#gs-tutorial-4-step-register-workflow)\n5. [Running A Pipeline](#gs-tutorial-4-step-run-pipeline)\n6. 
[Viewing the Pipeline on the Dashboard (UI)](#gs-tutorial-4-step-ui-pipeline-view)\n7. [Invoking the Model](#gs-tutorial-4-step-invoke-model)\n\nBy the end of this tutorial you'll learn how to:\n\n- Create an operational pipeline using previously defined functions.\n- Run the pipeline and track the pipeline results.", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-prerequisites\"></a>", "_____no_output_____" ], [ "## Prerequisites", "_____no_output_____" ], [ "The following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.\nTherefore, make sure to first run parts [1](01-mlrun-basics.ipynb)&mdash;[3](03-model-serving.ipynb) of the tutorial.", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-step-setting-up-project\"></a>", "_____no_output_____" ], [ "## Step 1: Setting Up Your Project\n\nTo run a pipeline, you first need to create a Python project object and import the required functions for its execution.\n\nCreate a project by using one of:\n\n- the `new_project` MLRun method\n- the `get_or_create_project`method: loads a project from the MLRun DB or the archive/context if it exists, or creates a new project if it doesn't exist.\n\nBoth methods have the following parameters:\n\n- **`name`** (required) &mdash; the project name.\n- **`context`** &mdash; the path to a local project directory (the project's context directory).\n The project directory contains a project-configuration file (default: **project.yaml**) that defines the project, and additional generated Python code.\n The project file is created when you save your project (using the `save` MLRun project method or when saving your first function within the project).\n- **`init_git`** &mdash; set to `True` to perform Git initialization of the project directory (`context`) in case its not initialized.\n > **Note:** It's customary to store project code and definitions in a Git repository.\n\nThe following code gets or creates a user project 
named \"getting-started-&lt;username&gt;\".\n\n> **Note:** Platform projects are currently shared among all users of the parent tenant, to facilitate collaboration. Therefore:\n>\n> - Set `user_project` to `True` if you want to create a project unique to your user.\n> You can easily change the default project name for this tutorial by changing the definition of the `project_name_base` variable in the following code.\n> - Don't include in your project proprietary information that you don't want to expose to other users.\n> Note that while projects are a useful tool, you can easily develop and run code in the platform without using projects.", "_____no_output_____" ] ], [ [ "import mlrun\n\n# Set the base project name\nproject_name_base = 'getting-started'\n\n# Initialize the MLRun project object\nproject = mlrun.get_or_create_project(project_name_base, context=\"./\", user_project=True, init_git=True)\n\nprint(f'Project name: {project.metadata.name}')", "> 2021-11-15 15:07:16,695 [info] loaded project getting-started from MLRun DB\nProject name: getting-started-admin\n" ] ], [ [ "<a id=\"gs-tutorial-4-step-import-functions\"></a>", "_____no_output_____" ], [ "## Step 2: Updating Project and Function Definitions\n\nYou must save the definitions for the functions used in the project so that you can automatically convert code to functions, import external functions when you load new versions of MLRun code, or run automated CI/CD workflows. In addition, you might want to set other project attributes such as global parameters, secrets, and data.\n\nThe code can be stored in Python files, notebooks, external repositories, packaged containers, etc. Use the `project.set_function()` method to register the code in the project. 
The definitions are saved to the project object as well as in a YAML file in the root of the project.\nFunctions can also be imported from MLRun marketplace (using the `hub://` schema).\n\nThis tutorial uses the functions:\n- `prep-data` &mdash; the first function, which ingests the Iris data set (in Notebook 01)\n- `describe` &mdash; generates statistics on the data set (from the marketplace)\n- `train-iris` &mdash; the model-training function (in Notebook 02)\n- `test-classifier` &mdash; the model-testing function (from the marketplace)\n- `mlrun-model` &mdash; the model-serving function (in Notebook 03)\n\n> Note: `set_function` uses the `code_to_function` and `import_function` methods under the hood (used in the previous notebooks), but in addition it saves the function configurations in the project spec for use in automated workflows and CI/CD. \n\nAdd the function definitions to the project along with parameters and data artifacts, and save the project.", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-view-project-functions\"></a>", "_____no_output_____" ] ], [ [ "project.set_function('01-mlrun-basics.ipynb', 'prep-data', kind='job', image='mlrun/mlrun')\nproject.set_function('02-model-training.ipynb', 'train', kind='job', image='mlrun/mlrun', handler='train_iris')\nproject.set_function('hub://describe', 'describe')\nproject.set_function('hub://test_classifier', 'test')\nproject.set_function('hub://v2_model_server', 'serving')\n\n# set project level parameters and save\nproject.spec.params = {'label_column': 'label'}\nproject.save()", "_____no_output_____" ] ], [ [ "<br>When you save the project it stores the project definitions in the `project.yaml`. 
This means that you can load the project from the source control (GIT) and run it with a single command or API call.\n\nThe project YAML for this project can be printed using:", "_____no_output_____" ] ], [ [ "print(project.to_yaml())", "kind: project\nmetadata:\n name: getting-started-admin\n created: '2021-09-23T10:43:14.830481'\nspec:\n params:\n label_column: label\n functions:\n - url: 01-mlrun-basics.ipynb\n name: prep-data\n kind: job\n image: mlrun/mlrun\n - url: 02-model-training.ipynb\n name: train\n kind: job\n image: mlrun/mlrun\n handler: train_iris\n - url: hub://describe\n name: describe\n - url: hub://test_classifier\n name: test\n - url: hub://v2_model_server\n name: serving\n workflows:\n - name: main\n path: workflow.py\n engine: null\n artifacts: []\n source: ''\n subpath: ''\n origin_url: ''\n desired_state: online\n disable_auto_mount: false\nstatus:\n state: online\n\n" ] ], [ [ "### Saving and Loading Projects from GIT\n\nAfter you save the project and its elements (functions, workflows, artifacts, etc.) you can commit all the changes to a GIT repository. 
Use the standard GIT tools or use the MLRun `project` methods such as `pull`, `push`, `remote`, which call the Git API for you.\n\nProjects can then be loaded from Git using the MLRun `load_project` method, for example: \n\n project = mlrun.load_project(\"./myproj\", \"git://github.com/mlrun/project-demo.git\", name=project_name)\n \nor using MLRun CLI:\n\n mlrun project -n myproj -u \"git://github.com/mlrun/project-demo.git\" ./myproj\n \nRead the [Projects, Automation & CI/CD](../projects/overview.md) section for more details", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-kubeflow-pipelines\"></a>", "_____no_output_____" ], [ "### Using Kubeflow Pipelines\n\nYou're now ready to create a full ML pipeline.\nThis is done by using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/) &mdash;\nan open-source framework for building and deploying portable, scalable machine-learning workflows based on Docker containers.\nMLRun leverages this framework to take your existing code and deploy it as steps in the pipeline.\n\n> **Note:** When using the Iguazio Data Science Platform, Kubeflow Pipelines is available as a default (pre-deployed) shared platform service.", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-step-pipeline-workflow-define-n-save\"></a>", "_____no_output_____" ], [ "## Step 3: Defining and Saving a Pipeline Workflow\n\nA pipeline is created by running an MLRun **\"workflow\"**.\nThe following code defines a workflow and writes it to a file in your local directory, with the file name **workflow.py**.\nThe workflow describes a directed acyclic graph (DAG) for execution using Kubeflow Pipelines, and depicts the connections between the functions and the data as part of an end-to-end pipeline.\nThe workflow file has two parts: initialization of the function objects, and definition of a pipeline DSL (domain-specific language) for connecting the function inputs and outputs.\nExamine the code to see how function objects 
are initialized and used (by name) within the workflow.\n\nThe defined pipeline includes the following steps:\n\n- Ingest the Iris flower data set (`ingest`).\n- Train the model (`train`).\n- Test the model with its test data set.\n- Deploy the model as a real-time serverless function (`deploy`).\n\n> **Note**: A pipeline can also include continuous build integration and deployment (CI/CD) steps, such as building container images and deploying models.", "_____no_output_____" ] ], [ [ "%%writefile './workflow.py'\n\nfrom kfp import dsl\nfrom mlrun import run_function, deploy_function\n\n\nDATASET = 'cleaned_data'\nMODEL = 'iris'\nLABELS = \"label\"\n\n# Create a Kubeflow Pipelines pipeline\[email protected](\n name=\"Getting-started-tutorial\",\n description=\"This tutorial is designed to demonstrate some of the main \"\n \"capabilities of the Iguazio Data Science Platform.\\n\"\n \"The tutorial uses the Iris flower data set.\"\n)\ndef kfpipeline(source_url):\n\n # Ingest the data set\n ingest = run_function(\n 'prep-data',\n handler='prep_data',\n inputs={'source_url': source_url},\n params={'label_column': LABELS},\n outputs=[DATASET])\n \n # Train a model \n train = run_function(\n \"train\",\n params={\"label_column\": LABELS},\n inputs={\"dataset\": ingest.outputs[DATASET]},\n outputs=['my_model', 'test_set'])\n \n # Test and visualize the model\n test = run_function(\n \"test\",\n params={\"label_column\": LABELS},\n inputs={\"models_path\": train.outputs['my_model'],\n \"test_set\": train.outputs['test_set']})\n \n # Deploy the model as a serverless function\n deploy = deploy_function(\"serving\", models={f\"{MODEL}_v1\": train.outputs['my_model']})", "Writing ./workflow.py\n" ] ], [ [ "<a id=\"gs-tutorial-4-step-register-workflow\"></a>", "_____no_output_____" ], [ "## Step 4: Registering the Workflow\n\nUse the `set_workflow` MLRun project method to register your workflow with MLRun.\nThe following code sets the `name` parameter to the selected workflow
 name (\"main\") and the `code` parameter to the name of the workflow file that is found in your project directory (**workflow.py**).", "_____no_output_____" ] ], [ [ "# Register the workflow file as \"main\"\nproject.set_workflow('main', 'workflow.py')", "_____no_output_____" ] ], [ [ "<a id=\"gs-tutorial-4-step-run-pipeline\"></a>", "_____no_output_____" ], [ "## Step 5: Running A Pipeline\n\nFirst run the following code to save your project:", "_____no_output_____" ] ], [ [ "project.save()", "_____no_output_____" ] ], [ [ "Use the `run` MLRun project method to execute your workflow pipeline with Kubeflow Pipelines.\nThe tutorial code sets the following method parameters; (for the full parameters list, see the [MLRun documentation](../api/mlrun.run.html#mlrun.run.run_pipeline) or embedded help):\n\n- **`name`** &mdash; the workflow name (in this case, \"main\" &mdash; see the previous step).\n- **`arguments`** &mdash; A dictionary of Kubeflow Pipelines arguments (parameters).\n The tutorial code sets this parameter to an empty arguments list (`{}`), but you can edit the code to add arguments.\n- **`artifact_path`** &mdash; a path or URL that identifies a location for storing the workflow artifacts.\n You can use `{{workflow.uid}}` in the path to signify the ID of the current workflow run iteration.\n The tutorial code sets the artifacts path to a **&lt;worker ID&gt;** directory (`{{workflow.uid}}`) in a **pipeline** directory under the projects container (**/v3io/projects/getting-started-tutorial-project name/pipeline/&lt;worker ID&gt;**).\n- **`dirty`** &mdash; set to `True` to allow running the workflow also when the project's Git repository is dirty (i.e., contains uncommitted changes).\n (When the notebook that contains the execution code is in the same Git directory as the executed workflow, the directory will always be dirty during the execution.)\n- **`watch`** &mdash; set to `True` to wait for the pipeline to complete and output the execution graph as it
 updates.\n\nThe `run` method returns the ID of the executed workflow, which the code stores in a `run_id` variable.\nYou can use this ID to track the progress of your workflow, as demonstrated in the following sections.\n\n> **Note**: You can also run the workflow from a command-line shell by using the `mlrun` CLI.\n> The following CLI command defines a similar execution logic as that of the `run` call in the tutorial:\n> ```\n> mlrun project /User/getting-started-tutorial/conf -r main -p \"$V3IO_HOME_URL/getting-started-tutorial/pipeline/{{workflow.uid}}/\"\n> ```", "_____no_output_____" ] ], [ [ "source_url = mlrun.get_sample_path(\"data/iris/iris.data.raw.csv\")", "_____no_output_____" ], [ "import os\npipeline_path = mlrun.mlconf.artifact_path\n\nrun_id = project.run(\n 'main',\n arguments={'source_url' : source_url}, \n artifact_path=os.path.join(pipeline_path, \"pipeline\", '{{workflow.uid}}'),\n dirty=True,\n watch=True)", "_____no_output_____" ] ], [ [ "<a id=\"gs-tutorial-4-step-ui-pipeline-view\"></a>", "_____no_output_____" ], [ "## Step 6: Viewing the Pipeline on the Dashboard (UI)\n\nIn the **Projects > Jobs and Workflows > Monitor Workflows** tab, press the workflow name to view a graph of the workflow. Press any step to open another pane with full details of the step: either the job's overview, inputs, artifacts, etc.; or the deploy / build function's overview, code, and log.
\n\n\nAfter the pipeline's execution completes, you should be able to view the pipeline and see its functions: \n\n- `prep-data`\n- `train`\n- `test`\n- `deploy-serving`\n\nThe graph is refreshed while the pipeline is running.", "_____no_output_____" ], [ "<img src=\"../_static/images/job_pipeline.png\" alt=\"pipeline\" width=\"700\"/>", "_____no_output_____" ], [ "<a id=\"gs-tutorial-4-step-invoke-model\"></a>", "_____no_output_____" ], [ "## Step 7: Invoking the Model\n\nNow that your model is deployed using the pipeline, you can invoke it as usual:", "_____no_output_____" ] ], [ [ "serving_func = project.func('serving')\nmy_data = {'inputs': [[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}\nserving_func.invoke('/v2/models/iris_v1/infer', my_data)", "> 2021-11-15 15:09:51,721 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-getting-started-admin-v2-model-server.default-tenant.svc.cluster.local:8080/v2/models/iris_v1/infer'}\n" ] ], [ [ "You can also make an HTTP call directly:", "_____no_output_____" ] ], [ [ "import requests\nimport json\npredict_url = f'http://{serving_func.status.address}/v2/models/iris_v1/predict'\nresp = requests.put(predict_url, json=json.dumps(my_data))\nprint(resp.json())", "{'id': '2f050b24-0b61-4697-9260-859f16afe094', 'model_name': 'iris_v1', 'outputs': [0, 2]}\n" ] ], [ [ "<a id=\"gs-tutorial-4-done\"></a>", "_____no_output_____" ], [ "## Done!\n\nCongratulations!
 You've completed the getting started tutorial.\n\nYou might also want to explore the following demos:\n\n- For an example of distributed training of an image-classification pipeline using TensorFlow (versions 1 or 2), Keras, and Horovod, see the [**image-classification with distributed training demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/image-classification-with-distributed-training).\n- To learn more about deploying live endpoints and concept drift, see the [**network-operations (NetOps) demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/network-operations).\n- To learn how to deploy your model with streaming information, see the [**model-deployment pipeline demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/model-deployment-pipeline).\n\nFor additional information and guidelines, see the MLRun [**How-To Guides and Demos**](../howto/index.md).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
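The V2 serving calls in the MLRun notebook above send a JSON body of the form `{'inputs': [...]}` to `/v2/models/<name>/infer` and receive class indices back in `outputs`. The helper below is a dependency-free sketch of that request/response handling; the endpoint path and payload shape come from the notebook, while the index-to-species mapping and the helper names are assumptions (the mapping follows the conventional iris label encoding).

```python
# Sketch of V2 serving request building and response decoding, based on the
# payloads shown in the notebook above. IRIS_CLASSES is the conventional
# iris label encoding and is an assumption, not something the notebook states.
import json

IRIS_CLASSES = {0: "setosa", 1: "versicolor", 2: "virginica"}  # assumed encoding

def build_infer_request(model_name, samples):
    """Build the URL path and JSON body for a V2 infer call."""
    path = f"/v2/models/{model_name}/infer"
    body = json.dumps({"inputs": samples})
    return path, body

def decode_infer_response(raw):
    """Map the class indices in a V2 infer response to species names."""
    resp = json.loads(raw)
    return [IRIS_CLASSES.get(i, "unknown") for i in resp["outputs"]]

path, body = build_infer_request("iris_v1", [[5.1, 3.5, 1.4, 0.2], [7.7, 3.8, 6.7, 2.2]])
print(path)  # → /v2/models/iris_v1/infer

# The notebook's HTTP call returned {'outputs': [0, 2]} for these two samples.
labels = decode_infer_response('{"id": "example-id", "model_name": "iris_v1", "outputs": [0, 2]}')
print(labels)  # → ['setosa', 'virginica']
```

The same decoding applies whether the response comes from `serving_func.invoke` or from the direct `requests.put` call shown in the notebook.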
cbb5fc1aa12733bc850d1fc6733baf7f3900205e
70,551
ipynb
Jupyter Notebook
examples/GC SAFT-Gamma-Mie Examples/13. SGT for fluid mixtures (beta != 0).ipynb
MatKie/SGTPy
8e98d92fedd2b07d834e547e5154ec8f70d80728
[ "MIT" ]
12
2020-12-27T17:04:33.000Z
2021-07-19T06:28:28.000Z
examples/GC SAFT-Gamma-Mie Examples/13. SGT for fluid mixtures (beta != 0).ipynb
MatKie/SGTPy
8e98d92fedd2b07d834e547e5154ec8f70d80728
[ "MIT" ]
2
2021-05-15T14:27:57.000Z
2021-08-19T15:42:24.000Z
examples/GC SAFT-Gamma-Mie Examples/13. SGT for fluid mixtures (beta != 0).ipynb
MatKie/SGTPy
8e98d92fedd2b07d834e547e5154ec8f70d80728
[ "MIT" ]
5
2021-02-21T01:33:29.000Z
2021-07-26T15:11:08.000Z
141.102
28,724
0.87857
[ [ [ "# SGT ($\\beta \\neq 0 $) calculations for fluid mixtures with SAFT-$\\gamma$-Mie\n\nIn this notebook, the SGT ($\\beta \\neq 0 $) calculations for fluid mixtures with ```saftgammamie``` EoS are illustrated.\n\nWhen using $\\beta \\neq 0 $, the cross-influence parameters are computed as $c_{ij} = (1-\\beta_{ij})\\sqrt{c_{ii}c_{jj}}$.\n\nFirst, all the needed modules are imported.\n\n- numpy: numerical computing and array operations\n- matplotlib: to plot results\n- sgtpy: package with SAFT-$\\gamma$-Mie EoS and SGT functions.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sgtpy import component, mixture, saftgammamie", "_____no_output_____" ] ], [ [ "Now, pure components are configured and created with the ```component``` function. To use SGT, it is required to set the influence parameter (```cii```) for the pure fluids. Then, a mixture is created with them using the ```mixture``` function or by adding (`+`) pure components. The interaction parameters are set up with the ```mixture.saftgammamie``` method. Finally, the ```eos``` object is created with the ```saftgammamie``` function.\n\nThe ```eos``` object includes all the necessary methods to compute phase equilibria and interfacial properties using SAFT-$\\gamma$-Mie EoS.\n\nFor this notebook, the calculations are exemplified for the mixture of ethanol + water and the mixture of hexane + ethanol.", "_____no_output_____" ] ], [ [ "ethanol = component(GC={'CH3':1, 'CH2OH':1}, cii=4.1388468864244875e-20)\nwater = component(GC={'H2O':1}, cii=1.6033244745871344e-20)\n\n# creating mixture with mixture class function\nmix1 = mixture(ethanol, water)\n# or creating mixture by adding pure components\nmix1 = ethanol + water\n\nmix1.saftgammamie()\neos1 = saftgammamie(mix1)", "_____no_output_____" ] ], [ [ "Now, it is required to compute the phase equilibria (VLE, LLE or VLLE). 
See Notebooks 5 to 10 for more information about phase equilibria computation.\n\nIn this example, the bubble point of the mixture of ethanol and water at $x_1=0.2$ and 298.15K is computed.", "_____no_output_____" ] ], [ [ "from sgtpy.equilibrium import bubblePy\n\nT = 298.15 # K\n# liquid composition\nx = np.array([0.2, 0.8])\n# initial guesses\nP0 = 1e4 # Pa\ny0 = np.array([0.8, 0.2])\nsol = bubblePy(y0, P0, x, T, eos1, full_output=True)\ny, P = sol.Y, sol.P\nvl, vv = sol.v1, sol.v2\nrhol = x/vl\nrhov = y/vv", "_____no_output_____" ] ], [ [ "In order to set the $\\beta$ correction, it is necessary to create the symmetric matrix of shape (`nc, nc`) and then use it with the ```eos.beta_sgt``` method from the eos. The $\\beta_{ij}$ correction is computed as follows:\n\n$$ \\beta_{ij} = \\beta_{ij,0} + \\beta_{ij,1} \\cdot T + \\beta_{ij,2} \\cdot T^2 + \\frac{\\beta_{ij,3}}{T} $$\n\nAlternatively, you can modify just the pair $ij$ using the `eos.set_betaijsgt` method. In both methods, by default only the $\\beta_{ij,0}$ is required. The temperature-dependent parameters are optional; if they are not provided, they are assumed to be zero.\n\nThe function ```sgt_mix_beta0``` is used to study the interfacial behavior with SGT and $\\beta=0$. 
As shown in Notebook 12, the Liang method can compute the density paths correctly.", "_____no_output_____" ] ], [ [ "from sgtpy.sgt import sgt_mix_beta0\nbij = 0.0\nbeta = np.array([[0, bij], [bij, 0]])\neos1.beta_sgt(beta)\n# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)\neos1.set_betaijsgt(i=0, j=1, beta0=bij)\n\nsoll = sgt_mix_beta0(rhov, rhol, T, P, eos1, n=300, method='liang', full_output=True)", "_____no_output_____" ] ], [ [ "When using $\\beta \\neq 0$ two options are available to solve SGT.\n\n- ```sgt_mix```: solves SGT system as a boundary value problem using orthogonal collocation (increasing interfacial length).\n- ```msgt_mix```: solves a stabilized SGT system as a boundary value problem using orthogonal collocation (fixed interfacial length).", "_____no_output_____" ] ], [ [ "from sgtpy.sgt import sgt_mix\n\nbij = 0.2\nbeta = np.array([[0, bij], [bij, 0]])\neos1.beta_sgt(beta)\n# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)\neos1.set_betaijsgt(i=0, j=1, beta0=bij)\n\nsolbeta = sgt_mix(rhov, rhol, T, P, eos1, full_output=True)", "_____no_output_____" ], [ "from sgtpy.sgt import msgt_mix\nbij = 0.5\nbeta = np.array([[0, bij], [bij, 0]])\neos1.beta_sgt(beta)\n# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)\neos1.set_betaijsgt(i=0, j=1, beta0=bij)\n\nmsolbeta = msgt_mix(rhov, rhol, T, P, eos1, rho0 = solbeta, full_output=True)", "_____no_output_____" ] ], [ [ "The interfacial tension results are shown below.", "_____no_output_____" ] ], [ [ "print('Liang path Function: ', soll.tension, 'mN/m')\nprint('SGT BVP: ', solbeta.tension, 'mN/m')\nprint('Modified SGT BVP: ', msolbeta.tension, 'mN/m')", "Liang path Function: 35.70200225949396 mN/m\nSGT BVP: 35.50582328545892 mN/m\nModified SGT BVP: 34.356558230557376 mN/m\n" ] ], [ [ "The density profiles are plotted below. 
It can be seen that using a $\\beta$ correction smooths the density profiles.", "_____no_output_____" ] ], [ [ "rhobeta = solbeta.rho / 1000 # kmol/m3\nmrhobeta = msolbeta.rho / 1000 # kmol/m3\n\nrholiang = soll.rho / 1000 # kmol/m3\nalphas = soll.alphas\npath = soll.path\n\nfig = plt.figure(figsize = (10, 4))\nfig.subplots_adjust( wspace=0.3)\nax1 = fig.add_subplot(121)\nax1.plot(rholiang[0], rholiang[1], color = 'red')\nax1.plot(rhobeta[0], rhobeta[1], 's', color = 'blue')\nax1.plot(mrhobeta[0], mrhobeta[1], '--', color = 'black')\nax1.plot(rhov[0]/1000, rhov[1]/1000, 'o', color = 'k')\nax1.plot(rhol[0]/1000, rhol[1]/1000, 'o', color = 'k')\nax1.set_xlabel(r'$\\rho_1$ / kmol m$^{-3}$')\nax1.set_ylabel(r'$\\rho_2$ / kmol m$^{-3}$')\n\nax2 = fig.add_subplot(122)\nax2.plot(path/1000, alphas)\nax2.axhline(y = 0, linestyle = '--',color = 'r')\nax2.set_ylabel(r'$\\alpha$')\nax2.set_xlabel(r'path function / 1000')", "_____no_output_____" ] ], [ [ "## Hexane - Ethanol\n\nThe interfacial behavior of this mixture is well known to be difficult to study, as it displays multiple stationary points in the inhomogeneous zone. ", "_____no_output_____" ] ], [ [ "hexane = component(GC={'CH3':2, 'CH2':4}, cii=3.288396028761707e-19)\n\nmix2 = mixture(hexane, ethanol)\nmix2.saftgammamie()\neos2 = saftgammamie(mix2)", "_____no_output_____" ] ], [ [ "In this example, the bubble point of the mixture at $x_1=0.3$ and 298.15K is computed with the ```bubblePy``` function.", "_____no_output_____" ] ], [ [ "T = 298.15 # K\nx = np.array([0.3, 0.7])\ny0 = 1.*x\nP0 = 8000. # Pa\nsol = bubblePy(y0, P0, x, T, eos2, full_output=True)\ny, P = sol.Y, sol.P\nvl, vv = sol.v1, sol.v2\nrhox = x/vl\nrhoy = y/vv\nsol", "_____no_output_____" ] ], [ [ "The function ```sgt_mix_beta0``` is used to study the interfacial behavior with SGT and $\\beta=0$. 
As shown in Notebook 12, the Liang method can compute the density paths correctly.", "_____no_output_____" ] ], [ [ "soll2 = sgt_mix_beta0(rhoy, rhox, T, P, eos2, n=300, method='liang', full_output=True)", "c:\\users\\gusta\\documents\\sgtpy\\sgtpy\\gammamie_mixtures\\ahs_monomer.py:121: RuntimeWarning: invalid value encountered in log\n log3 = np.log(xhi3_1)\nc:\\users\\gusta\\documents\\sgtpy\\sgtpy\\gammamie_mixtures\\gdHS_chain.py:134: RuntimeWarning: invalid value encountered in log\n k0 = -np.log(xhix_1) + (42*xhix - 39*xhix2 + 9*xhix3 - 2*xhix4)/(6*xhix_13)\nc:\\users\\gusta\\documents\\sgtpy\\sgtpy\\gammamie_mixtures\\ares.py:926: RuntimeWarning: invalid value encountered in log\n aux1 = np.log(Xass) - Xass/2 + 1/2\n" ] ], [ [ "SGT is solved with $\\beta = 0.2$ and $\\beta = 0.5$ using the ```sgt_mix``` and ```msgt_mix``` functions.", "_____no_output_____" ] ], [ [ "bij = 0.2\nbeta = np.array([[0, bij], [bij, 0]])\neos2.beta_sgt(beta)\n# or by setting the beta correction by pair i=0 (hexane), j=1 (ethanol)\neos2.set_betaijsgt(i=0, j=1, beta0=bij)\n\nsolbeta = sgt_mix(rhoy, rhox, T, P, eos2, full_output=True)", "_____no_output_____" ], [ "bij = 0.5\nbeta = np.array([[0, bij], [bij, 0]])\neos2.beta_sgt(beta)\n# or by setting the beta correction by pair i=0 (hexane), j=1 (ethanol)\neos2.set_betaijsgt(i=0, j=1, beta0=bij)\n\nmsolbeta = msgt_mix(rhoy, rhox, T, P, eos2, rho0=solbeta, full_output=True)", "_____no_output_____" ] ], [ [ "The interfacial tension results are shown below.", "_____no_output_____" ] ], [ [ "print('Liang path Function: ', soll2.tension, 'mN/m')\nprint('SGT BVP: ', solbeta.tension, 'mN/m')\nprint('Modified SGT BVP: ', msolbeta.tension, 'mN/m')", "Liang path Function: 16.353369314683754 mN/m\nSGT BVP: 16.771852771674972 mN/m\nModified SGT BVP: 16.006470308753357 mN/m\n" ] ], [ [ "The density profiles are plotted below. 
It can be seen that using a $\\beta$ correction smooths the density profiles and reduces the number of stationary points.", "_____no_output_____" ] ], [ [ "rhobeta = solbeta.rho / 1000 # kmol/m3\nmrhobeta = msolbeta.rho / 1000 # kmol/m3\n\nrholiang = soll2.rho / 1000 # kmol/m3\nalphas = soll2.alphas\npath = soll2.path\n\nfig = plt.figure(figsize = (10, 4))\nfig.subplots_adjust( wspace=0.3)\nax1 = fig.add_subplot(121)\nax1.plot(rholiang[0], rholiang[1], color = 'red')\nax1.plot(rhobeta[0], rhobeta[1], 's', color = 'blue')\nax1.plot(mrhobeta[0], mrhobeta[1], '--', color = 'black')\n\nax1.plot(rhoy[0]/1000, rhoy[1]/1000, 'o', color = 'k')\nax1.plot(rhox[0]/1000, rhox[1]/1000, 'o', color = 'k')\nax1.set_xlabel(r'$\\rho_1$ / kmol m$^{-3}$')\nax1.set_ylabel(r'$\\rho_2$ / kmol m$^{-3}$')\n\nax2 = fig.add_subplot(122)\nax2.plot(path/1000, alphas)\nax2.axhline(y = 0, linestyle = '--',color = 'r')\nax2.set_ylabel(r'$\\alpha$')\nax2.set_xlabel(r'path function / 1000')\n\nax1.tick_params(direction='in')\nax2.tick_params(direction='in')\n# fig.savefig('sgt_mix.pdf')", "_____no_output_____" ] ], [ [ "For further information on any of these functions, just run: ```function?```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
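The cross-influence rule used throughout the SGT notebook above, $c_{ij} = (1-\beta_{ij})\sqrt{c_{ii}c_{jj}}$, can be checked numerically. A minimal sketch with the `cii` values the notebook assigns to ethanol and water follows; the helper function name is ours, not part of the SGTPy API.

```python
# Numerical sketch of the cross-influence rule quoted at the top of the
# notebook: c_ij = (1 - beta_ij) * sqrt(c_ii * c_jj). The cii values are
# the ones set in the notebook; cross_influence() is an illustrative helper.
import math

def cross_influence(cii, cjj, beta_ij=0.0):
    """Cross-influence parameter c_ij with the beta correction applied."""
    return (1.0 - beta_ij) * math.sqrt(cii * cjj)

c_ethanol = 4.1388468864244875e-20  # cii of ethanol in the notebook
c_water = 1.6033244745871344e-20    # cii of water in the notebook

c12_beta0 = cross_influence(c_ethanol, c_water)        # beta = 0: plain geometric mean
c12_beta02 = cross_influence(c_ethanol, c_water, 0.2)  # beta = 0.2 lowers c_ij by 20%

print(c12_beta0, c12_beta02)
```

With beta = 0 the rule reduces to the geometric mean of the pure-component influence parameters, which is why `sgt_mix_beta0` needs no correction matrix.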
cbb5ffd309868f5280adfcb844f60d595e2ad945
86,499
ipynb
Jupyter Notebook
Sem5/MachineLearning/ANN/ann.ipynb
nsudhanva/mca-code
812348ce53edbe0f42f85a9c362bfc8aad64e1e7
[ "MIT" ]
null
null
null
Sem5/MachineLearning/ANN/ann.ipynb
nsudhanva/mca-code
812348ce53edbe0f42f85a9c362bfc8aad64e1e7
[ "MIT" ]
null
null
null
Sem5/MachineLearning/ANN/ann.ipynb
nsudhanva/mca-code
812348ce53edbe0f42f85a9c362bfc8aad64e1e7
[ "MIT" ]
2
2018-10-12T06:38:14.000Z
2019-01-30T04:38:03.000Z
85.47332
526
0.714575
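The wine-quality notebook serialized below contains a common preprocessing slip: `scaler.fit_transform(X_train, X_test)` silently accepts `X_test` as the unused `y` argument and its return value is discarded, so the `MLPClassifier` ends up training on unscaled features. The dependency-free sketch below shows the correct pattern — fit the scaling statistics on the training split only, then apply them to both splits. The toy data and helper names are illustrative assumptions.

```python
# Correct train/test scaling pattern: statistics come from the training
# split only, and the transformed arrays are actually used afterwards.
# fit_scaler/transform are illustrative stand-ins for StandardScaler.

def fit_scaler(rows):
    """Compute per-column mean and std from the training rows only."""
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    stds = []
    for j, m in enumerate(means):
        var = sum((r[j] - m) ** 2 for r in rows) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero-variance columns
    return means, stds

def transform(rows, means, stds):
    return [[(v - m) / s for v, m, s in zip(r, means, stds)] for r in rows]

X_train = [[1.0, 10.0], [3.0, 30.0]]  # hypothetical toy data
X_test = [[2.0, 20.0]]

means, stds = fit_scaler(X_train)           # statistics from the train split only
X_train_s = transform(X_train, means, stds)
X_test_s = transform(X_test, means, stds)   # test data reuses train statistics
print(X_train_s, X_test_s)  # → [[-1.0, -1.0], [1.0, 1.0]] [[0.0, 0.0]]
```

With scikit-learn the same pattern is `X_train = scaler.fit_transform(X_train)` followed by `X_test = scaler.transform(X_test)`, which also avoids leaking test-set statistics into training.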
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.neural_network import MLPClassifier", "_____no_output_____" ], [ "df = pd.read_csv('wine.csv', sep=';')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "X = df.iloc[:, 0:11].values\ny = df['quality'].values", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "y.shape", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "scaler = StandardScaler()\nscaler.fit_transform(X_train, X_test)", "_____no_output_____" ], [ "mlp = MLPClassifier()", "_____no_output_____" ], [ "mlp.fit(X_train, y_train)", "/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n" ], [ "y_pred = mlp.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score, confusion_matrix, classification_report", "_____no_output_____" ], [ "accuracy_score(y_test, y_pred)", "_____no_output_____" ], [ "confusion_matrix(y_test, y_pred)", "_____no_output_____" ], [ "print(classification_report(y_test, y_pred))", " precision recall f1-score support\n\n 3 0.00 0.00 0.00 1\n 4 0.00 0.00 0.00 10\n 5 0.63 0.71 0.67 130\n 6 0.55 0.63 0.58 132\n 7 0.45 0.24 0.31 42\n 8 0.00 0.00 0.00 5\n\n accuracy 0.58 320\n macro avg 0.27 0.26 0.26 320\nweighted avg 0.54 0.58 0.55 320\n\n" ], [ "from sklearn.model_selection import GridSearchCV", "_____no_output_____" ], [ "g = {'alpha': [1, 0.1, 0.01, 0.001],\n 'activation': ['relu', 'tanh'],\n 'hidden_layer_sizes': ((15, 12), (15, 15)),\n 'max_iter': [200, 
100, 150]}", "_____no_output_____" ], [ "g_mlp = GridSearchCV(mlp, param_grid = g, cv = 5)", "_____no_output_____" ], [ "g_mlp.fit(X_train, y_train)", "/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum 
iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (150) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, 
ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/neural_network/multilayer_perceptron.py:566: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.\n % self.max_iter, ConvergenceWarning)\n/home/sudhanva/anaconda3/envs/tf114gpu/lib/python3.7/site-packages/sklearn/model_selection/_search.py:813: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.\n DeprecationWarning)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb603af11d600d3c8567e03460f0e7f520a3ddf
102,802
ipynb
Jupyter Notebook
1-Data-Wrangling.ipynb
jmlasalle/water-risk-classification
ff0f38b01d43dc2609ae26db6e64b252d8842fe1
[ "MIT" ]
2
2020-03-20T22:56:30.000Z
2022-03-23T06:56:01.000Z
1-Data-Wrangling.ipynb
leonardoharth/water-risk-classification
ff0f38b01d43dc2609ae26db6e64b252d8842fe1
[ "MIT" ]
null
null
null
1-Data-Wrangling.ipynb
leonardoharth/water-risk-classification
ff0f38b01d43dc2609ae26db6e64b252d8842fe1
[ "MIT" ]
1
2020-01-26T21:59:22.000Z
2020-01-26T21:59:22.000Z
140.439891
66,556
0.845655
[ [ [ "# Water Risk Classification: Data Wrangling", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport geopandas as gpd\nimport requests, zipfile, io, os, tarfile\n\nimport rasterio as rio\nfrom rasterio import plot\nfrom rasterstats import zonal_stats\nimport rasterio.warp, rasterio.shutil\nimport rioxarray # for the extension to load\nimport xarray\nimport missingno as msno\nfrom shapely.geometry import Polygon\n\nfrom matplotlib import pyplot\nimport folium\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n", "_____no_output_____" ] ], [ [ "## Download Data\n**ONLY RUN IF YOU DON'T HAVE THE DATA FOLDER YET. IT WILL TAKE A LONG TIME.**\n\nDownload and unzip all the datasets. ", "_____no_output_____" ] ], [ [ "# create data folder\nos.mkdir('./data')", "_____no_output_____" ] ], [ [ "### Aqueduct Database", "_____no_output_____" ] ], [ [ "# download and extract\n# DON'T RUN IF DATA IS IN ./DATA FOLDER\nurl_aq = 'https://wri-projects.s3.amazonaws.com/Aqueduct30/finalData/Y2019M07D12_Aqueduct30_V01.zip'\n\nr = requests.get(url_aq) # download zipped directory\nz = zipfile.ZipFile(io.BytesIO(r.content)) # create zipfile object\nz.extractall(path='data') # unzip into data subdirectory", "_____no_output_____" ] ], [ [ "### Global Human Settlements Layer", "_____no_output_____" ] ], [ [ "# download and extract\n# DON'T RUN IF DATA IS IN ./DATA FOLDER\nurl_ghs = 'http://cidportal.jrc.ec.europa.eu/ftp/jrc-opendata/GHSL/GHS_POP_MT_GLOBE_R2019A/GHS_POP_E2015_GLOBE_R2019A_54009_1K/V1-0/GHS_POP_E2015_GLOBE_R2019A_54009_1K_V1_0.zip'\n\nr = requests.get(url_ghs) # download zipped directory\nz = zipfile.ZipFile(io.BytesIO(r.content)) # create zipfile object\nz.extractall(path='data/ghs') # unzip into data subdirectory", "_____no_output_____" ] ], [ [ "### Infant Mortality\n\nThe SEDAC infant mortality data requires user authentication so we did not download it programmatically. 
The data is available for download [here](https://sedac.ciesin.columbia.edu/downloads/data/povmap/povmap-global-subnational-infant-mortality-rates-v2/povmap-global-subnational-infant-mortality-rates-v2-geotiff.zip) and is unzipped to `./data/sedac/`.", "_____no_output_____" ] ], [ [ "# This download requires user authentication and isn't currently working\n# DON'T RUN IF DATA IS IN ./DATA FOLDER\nurl_inf_mort = 'https://sedac.ciesin.columbia.edu/downloads/data/povmap/povmap-global-subnational-infant-mortality-rates-v2/povmap-global-subnational-infant-mortality-rates-v2-geotiff.zip'\n\nr = requests.get(url_inf_mort) # download zipped directory\nz = zipfile.ZipFile(\"./data/povmap-global-subnational-infant-mortality-rates-v2-geotiff.zip\") # create zipfile object\nz.extractall(path='./data/sedac') # unzip into data subdirectory\nz.close()", "_____no_output_____" ] ], [ [ "### Nighttime Light", "_____no_output_____" ] ], [ [ "# download and extract\n# DON'T RUN IF DATA IS IN ./DATA FOLDER\nurl_light = 'https://ngdc.noaa.gov/eog/data/web_data/v4avg_lights_x_pct/F182013.v4c.avg_lights_x_pct.tgz'\n\nr = requests.get(url_light, allow_redirects=True)\nopen('./data/F182013.v4c.avg_lights_x_pct.tgz', 'wb').write(r.content)", "_____no_output_____" ], [ "temp_path = './data/F182013.v4c.avg_lights_x_pct.tgz'\n\nz = tarfile.open(temp_path, mode='r:gz') # create tarfile object\nz.extractall(path='data/light') # unzip into data subdirectory\nz.close()\n\nos.remove(temp_path)", "_____no_output_____" ] ], [ [ "## Load Data\n\nWRI Aqueduct metadata with column name explanations is available [here](https://github.com/wri/aqueduct30_data_download/blob/master/metadata.md).", "_____no_output_____" ] ], [ [ "path_aq = './data/Y2019M07D12_Aqueduct30_V01/baseline/annual/y2019m07d11_aqueduct30_annual_v01.gpkg'\n\naq = gpd.read_file(path_aq, layer='y2019m07d11_aqueduct30_annual_v01')", "_____no_output_____" ], [ "# Select just the columns that will be used for the analysis\ndata_cols = 
['string_id', 'geometry','bws_raw', 'bwd_raw', 'iav_raw', 'sev_raw', 'gtd_raw', 'rfr_raw', 'cfr_raw', 'drr_raw', 'ucw_raw', 'udw_raw', 'usa_raw']\n\ndata = aq.loc[aq['gid_0'] != 'GRL'].copy()\ndata = data.loc[data['gid_0'].notnull()]\ndata = data[data_cols].copy()\ndata.shape", "_____no_output_____" ], [ "path_ghs = './data/ghs/GHS_POP_E2015_GLOBE_R2019A_54009_1K_V1_0.tif'\n\nghs_meta = None\nghs_t = None\nwith rio.open(path_ghs) as tif:\n ghs_meta = tif.meta\n ghs_t = tif.transform\n oviews = tif.overviews(1) # list of overviews from biggest to smallest\n oview = oviews[-1]\n thumbnail = tif.read(1, out_shape=(1, int(tif.height // oview), int(tif.width // oview)))", "_____no_output_____" ], [ "ghs_meta['width'], ghs_meta['height'], ghs_meta['width']*ghs_meta['height']", "_____no_output_____" ], [ "path_inf_mort = './data/sedac/povmap_global_subnational_infant_mortality_rates_v2.tif'\n\nwith rio.open(path_inf_mort) as tif:\n inf_mort = tif", "_____no_output_____" ], [ "path_light = './data/light/F182013.v4c.avg_lights_x_pct.tif'\n\nwith rio.open(path_light) as tif:\n light = tif\n", "_____no_output_____" ] ], [ [ "## Explore Data", "_____no_output_____" ] ], [ [ "with rio.open(path_ghs) as im:\n rio.plot.show_hist(im, bins=50, lw=0.0, stacked=False, alpha=1, histtype='stepfilled', title=\"Population\")", "_____no_output_____" ], [ "ghs_meta", "_____no_output_____" ], [ "# plot of infant mortality\nwith rio.open(path_inf_mort) as im:\n rio.plot.show_hist(im, bins=50, lw=0.0, stacked=False, alpha=1, histtype='stepfilled', title=\"Infant Mortality\")", "_____no_output_____" ], [ "inf_mort.meta", "_____no_output_____" ], [ "# plot of nighttime light\nwith rio.open(path_light) as im:\n rio.plot.show_hist(im, bins=50, lw=0.0, stacked=False, alpha=1, histtype='stepfilled', title=\"Nighttime Light\")", "_____no_output_____" ], [ "light.meta", "_____no_output_____" ], [ "# check crs\naq.crs == ghs_meta['crs'], aq.crs == inf_mort.crs, aq.crs == 
light.crs", "_____no_output_____" ] ], [ [ "## Join Data", "_____no_output_____" ] ], [ [ "with rio.open(path_inf_mort) as inf_mort:\n inf_mort_array = inf_mort.read(1)", "_____no_output_____" ], [ "# Reclassify from two values for no data to just one\ninf_mort_array[inf_mort_array<0] = -999", "_____no_output_____" ], [ "# Need to set nodata value explicitly\nmortality_stats = zonal_stats(data, inf_mort_array, \n affine = inf_mort.transform, \n stats = ['mean', 'median'],\n nodata = -999)", "_____no_output_____" ], [ "data['mean_infant_mort'] = [s['mean'] for s in mortality_stats]\n#aq_join['median_infant_mort'] = [t['median'] for t in mortality_stats]\n\n# aq_join.loc[np.isinf(aq_join['mean_infant_mort']) == True, 'mean_infant_mort'] = float('NaN')\n# aq_join.loc[np.isinf(aq_join['median_infant_mort']) == True, 'median_infant_mort'] = float('NaN')\n# aq_join.loc[np.isinf(aq_join['sum_infant_mort']) == True, 'sum_infant_mort'] = float('NaN')", "_____no_output_____" ], [ "with rio.open(path_light) as light:\n light_array = light.read(1)", "_____no_output_____" ], [ "light_stats = zonal_stats(data, light_array, \n affine = light.transform, \n stats = ['mean'],\n nodata = -999)", "_____no_output_____" ], [ "data['mean_light'] = [s['mean'] for s in light_stats]\n\ndata.loc[np.isnan(data['mean_light']) == True, 'mean_light'] = 0", "_____no_output_____" ], [ "with rio.open(path_ghs) as ghs:\n ghs_array = ghs.read(1)", "_____no_output_____" ], [ "ghs_stats = zonal_stats(data.to_crs(ghs.crs.data), ghs_array, \n affine = ghs.transform, \n stats = ['sum'],\n nodata = -200.0)", "_____no_output_____" ], [ "data['sum_pop'] = [u['sum'] for u in ghs_stats]", "_____no_output_____" ], [ "data['sum_pop'].isna().sum()\n\ns = data.loc[data['sum_pop'] < 0]\nt = data.loc[np.isnan(data['sum_pop'])]\n\ns.shape, t.shape", "_____no_output_____" ] ], [ [ "## Engineer Features", "_____no_output_____" ] ], [ [ "# Calculate the area of each Aqueduct polygon\n# project to equal area projection 
to calculate densities\n# NSIDC EASE-Grid 2.0 Global Equal area to calculate densities, https://epsg.io/6933\n\ndata['area_sqkm'] = data.to_crs({'init':'epsg:6933'}).area/1000000 # m^2 to km^2\n\n# Calculate population density\ndata['pop_density'] = data['sum_pop']/data['area_sqkm']\n\n# reclassify NAs as zero\ndata.loc[np.isnan(data['pop_density']) == True, 'pop_density'] = 0", "_____no_output_____" ], [ "data.sum_pop.min()", "_____no_output_____" ], [ "msno.bar(data)", "_____no_output_____" ] ], [ [ "## Save GeoJSON for Modeling", "_____no_output_____" ] ], [ [ "export_cols = ['string_id', 'geometry', 'bws_raw', 'bwd_raw', 'iav_raw', 'sev_raw', 'gtd_raw', 'rfr_raw', 'cfr_raw', \n 'drr_raw', 'ucw_raw', 'udw_raw', 'usa_raw',\n 'mean_infant_mort', 'mean_light', 'pop_density'\n ]\n# select columns to export and project back to WGS84\ndata_export = data[export_cols].to_crs(aq.crs)", "_____no_output_____" ], [ "# Save data\ndata_export.to_file(\"./data/data.geojson\", driver='GeoJSON')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb608ac90c44d1442f56e734a6c24b580260661
20,757
ipynb
Jupyter Notebook
tutorial/lambda.ipynb
crisperdue/holpy
fe88eb91a8db8386184329e3f51a80d11ecdb316
[ "BSD-3-Clause" ]
22
2021-06-15T00:01:27.000Z
2022-03-15T11:22:25.000Z
tutorial/lambda.ipynb
crisperdue/holpy
fe88eb91a8db8386184329e3f51a80d11ecdb316
[ "BSD-3-Clause" ]
null
null
null
tutorial/lambda.ipynb
crisperdue/holpy
fe88eb91a8db8386184329e3f51a80d11ecdb316
[ "BSD-3-Clause" ]
2
2021-11-30T08:56:03.000Z
2022-01-24T10:46:39.000Z
30.213974
1,087
0.566797
[ [ [ "$\\newcommand{\\To}{\\Rightarrow}$", "_____no_output_____" ] ], [ [ "import os\nos.chdir('..')", "_____no_output_____" ], [ "from kernel.type import TFun, BoolType, NatType\nfrom kernel import term\nfrom kernel.term import Term, Var, Const, Lambda, Abs, Bound, Nat, Or, Eq, Forall, Exists, Implies, And\nfrom data import nat\nfrom logic import basic\nfrom syntax.settings import settings\n\nbasic.load_theory('nat')", "_____no_output_____" ] ], [ [ "## Lambda calculus", "_____no_output_____" ], [ "In the previous section, we discussed how to construct terms consisting of variables, constants, and function application. The relevant constructors are `Var`, `Const`, and `Comb`. In this section, we discuss construction of *lambda terms*, which completes the representation of terms in *lambda calculus*.\n\nThe motivation is as follows: we have already noted that terms can have function type. For example, in the previous section, we can declare a variable $f$ of type $nat \\To nat$ by `Var(\"f\", TFun(NatType, NatType))`. We have also encountered constants that have function type, for example the addition operator. However, we have not said anything about how to construct new examples of such functions.\n\nIn principle, any well-defined rule for computing the output from the input should be representable as a function. For example, there should be a function that takes as input a natural number $n$, and outputs $n+2$. In higher-order logic (also known as *simply-typed lambda calculus*), we can represent such functions as *lambda terms*. The above function can be written (in mathematical notation) as:\n\n$$ \\lambda n. n + 2 $$\n\nHere $n$ (the variable right after $\\lambda$) is known as a *bound variable*, in the sense that it is associated to the $\\lambda$ sign directly in front of it, and is valid only in the scope of that $\\lambda$ sign. It is important to note that *the name of the bound variable does not matter*. The expression $\\lambda n. 
n + 2$ means the same thing as the expression $\\lambda m. m + 2$. Both represent functions that add 2 to their input. We say that two terms are *$\\alpha$-equivalent* if one can be changed to the other by changing the names of some bound variables.", "_____no_output_____" ], [ "We can construct a function term using `Lambda`.", "_____no_output_____" ] ], [ [ "n = Var('n', NatType)\nf = Lambda(n, n + 2)\nprint(f)", "%n::nat. n + 2\n" ] ], [ [ "Note $\\lambda$ is printed in ASCII using `%`. As before, we turn on unicode printing, so the Greek letter $\\lambda$ is printed properly.", "_____no_output_____" ] ], [ [ "settings.unicode = True\nprint(f)", "λn::nat. n + 2\n" ] ], [ [ "We can test that the name of the bound variable does not matter by constructing $f$ in another way:", "_____no_output_____" ] ], [ [ "m = Var('m', NatType)\nf2 = Lambda(m, m + 2)\nprint(f2)\nassert f == f2", "λm::nat. m + 2\n" ] ], [ [ "Functions taking several arguments can be constructed using multiple Lambdas. The following constructs a function that takes two natural numbers $x$ and $y$ as input, and returns $x + 2y$.", "_____no_output_____" ] ], [ [ "x = Var('x', NatType)\ny = Var('y', NatType)\ng = Lambda(x, Lambda(y, x + 2 * y))\nprint(g)", "λx::nat. λy. x + 2 * y\n" ] ], [ [ "This can be written more simply as follows:", "_____no_output_____" ] ], [ [ "g2 = Lambda(x, y, x + 2 * y)\nprint(g2)", "λx::nat. λy. x + 2 * y\n" ] ], [ [ "The types of $f$ and $g$ are as expected (recall `checked_get_type` will perform type-checking on the term, in addition to returning the type of the term).", "_____no_output_____" ] ], [ [ "print(f.checked_get_type())\nprint(g.checked_get_type())", "nat ⇒ nat\nnat ⇒ nat ⇒ nat\n" ] ], [ [ "`Lambda` can also be used to construct predicates or binary relations.", "_____no_output_____" ] ], [ [ "P = Lambda(x, Or(Eq(x, 0), Eq(x, 2)))\nprint(P)\n\nR = Lambda(x, y, Eq(x, y + 2))\nprint(R)", "λx::nat. x = 0 ∨ x = 2\nλx::nat. λy. 
x = y + 2\n" ] ], [ [ "## $\\beta$-conversion", "_____no_output_____" ], [ "In the previous section, we constructed lambda terms using the `Lambda` constructor. These are supposed to represent functions. What happens when we apply such functions to an argument? Well, initially nothing happens:", "_____no_output_____" ] ], [ [ "print(f)\nt = f(Nat(3))\nprint(t)", "λn::nat. n + 2\n(λn::nat. n + 2) 3\n" ] ], [ [ "The `Comb` constructor (invoked through the `__call__` method of $f$) simply combines its two arguments, performing no function evaluation. To actually evaluate a function application, we need to use the `beta_conv` method, so named because function evaluation in lambda calculus is called *$\\beta$-conversion*.", "_____no_output_____" ] ], [ [ "t2 = t.beta_conv()\nprint(t2)", "(3::nat) + 2\n" ] ], [ [ "Now, the argument 3 is substituted into the function. More precisely, the function `beta_conv` assumes the input term is in the form `f x`, where `f` is a lambda term, and substitutes `x` for the bound variable of `f`. The addition $3+2$ is still not evaluated: the general rule is that no evaluation is performed unless explicitly called for. We will discuss evaluation of arithmetic on natural numbers in a later section.\n\nLet's see a more complicated example:", "_____no_output_____" ], [ "Oops... Here `beta_conv` failed because the function part of $t_3$ is not a lambda term: it is a lambda term applied to 3. To fully evaluate $g$ on two arguments 3 and 4, we need to apply them one at a time, performing $\\beta$-conversion:", "_____no_output_____" ] ], [ [ "print('g: ', g)\nt3 = g(Nat(3), Nat(4))\nprint('t3:', t3)\n\nt4 = t3.beta_conv() # raises TermException", "g: λx::nat. λy. x + 2 * y\nt3: (λx::nat. λy. x + 2 * y) 3 4\n" ], [ "t3 = g(Nat(3)).beta_conv()\nprint('t3:', t3)\nt4 = t3(Nat(4)).beta_conv()\nprint('t4:', t4)", "t3: λy::nat. 
3 + 2 * y\nt4: (3::nat) + 2 * 4\n" ] ], [ [ "A more convenient method is `beta_norm`, which performs all $\\beta$-conversions on subterms:", "_____no_output_____" ] ], [ [ "t5 = g(Nat(3),Nat(4)).beta_norm()\nprint(t5)", "(3::nat) + 2 * 4\n" ] ], [ [ "## Quantifiers in predicate logic", "_____no_output_____" ], [ "Predicate logic extends propositional logic by adding two quantifiers: forall ($\\forall$) and exists ($\\exists$). In higher-order logic, both operators are represented as constants of type $('a \\To bool) \\To bool$. This can be explained as follows, taking the forall quantifier as an example. A forall expression in mathematics has the form\n\n$$ \\forall x. P(x) $$\n\nHere $x$ is a bound variable. In (untyped) first-order logic, there are only two types of terms: objects and propositions, and $x$ can only range over objects. The main distinction between higher-order and first-order logic is that in higher-order logic, the bound variable of quantifiers can be of any type, including function types. Hence, we designate the type of the bound variable by the type variable $'a$. Then, the predicate $P$ has type $'a \\To bool$. The forall operator is a function that takes a predicate $P$ of type $'a \\To bool$ as input, and outputs a boolean value (whether $P$ is true on all of $'a$). Hence, its type must be $('a \\To bool) \\To bool$.\n\nForall and exists expressions are constructed as follows.", "_____no_output_____" ] ], [ [ "x = Var(\"x\", NatType)\nt1 = Forall(x, Implies(x > 2, x > 1))\nprint('t1:', t1)\nt2 = Exists(x, And(x > 2, x < 4))\nprint('t2:', t2)", "t1: ∀x::nat. x > 2 ⟶ x > 1\nt2: ∃x::nat. 
x > 2 ∧ x < 4\n" ] ], [ [ "The types of $t_1$ and $t_2$ are boolean, as expected.", "_____no_output_____" ] ], [ [ "print(t1.checked_get_type())\nprint(t2.checked_get_type())", "bool\nbool\n" ] ], [ [ "Forall and exists can take more than two arguments as well:", "_____no_output_____" ] ], [ [ "print(Forall(x, y, Implies(x < y, x < y + 1)))\nprint(Exists(x, y, x < y))", "∀x::nat. ∀y. x < y ⟶ x < y + 1\n∃x::nat. ∃y. x < y\n" ] ], [ [ "Forall and exists can alternate in a term. Make sure you understand the difference between the two propositions below. The first statement says for any natural number, there is a greater natural number (which is true). The second says there exists a natural number that is greater than all natural numbers (which is false).", "_____no_output_____" ] ], [ [ "print('P1:', Forall(x, Exists(y, x < y)))\nprint('P2:', Exists(y, Forall(x, x < y)))", "P1: ∀x::nat. ∃y. x < y\nP2: ∃y::nat. ∀x. x < y\n" ] ], [ [ "## de Bruijn indices", "_____no_output_____" ], [ "When representing terms in higher-order logic, we would like to be able to quickly tell whether two terms are $\\alpha$-equivalent. This motivates the use of *de Bruijn indices* (named after Dutch mathematician Nicolaas Govert de Bruijn). Following this method, the bound variables are (in principle) unnamed, and whenever one needs to refer to a bound variable, one uses a sign $B_i$ where $i$ counts the depth of the location of reference with respect to the lambda sign of that variable. We follow the convention that the counting begins at 0. For example, the above function is represented using de Bruijn indices as:\n\n$$ \\lambda\\_. B_0 + 2 $$\n\nHere we use an underscore to denote a bound variable that is unnamed. Another example: the expression $\\lambda x. \\lambda y. x + y$ is represented as $\\lambda\\_. \\lambda\\_. B_1 + B_0$ using de Bruijn indices. 
This is because the location where $x$ occurs is separated from the $\\lambda$ sign that binds it (the first $\\lambda$ sign) by one $\\lambda$ sign in the middle, while the location where $y$ occurs is directly after the $\\lambda$ sign that binds it (the second $\\lambda$ sign).", "_____no_output_____" ], [ "The use of de Bruijn indices is revealed by looking at the `repr` of a lambda term:", "_____no_output_____" ] ], [ [ "x = Var('x', NatType)\nt = Lambda(x, x + 1)\nprint(repr(t))", "Abs(x, nat, Comb(Comb(Const(plus, nat ⇒ nat ⇒ nat), Bound(0)), Const(one, nat)))\n" ] ], [ [ "Here, `Abs` is the constructor for a lambda term. The first argument is the *suggested* name of the bound variable. It is used for printing only (and perhaps as a starting point when names of new variables need to be invented during proof). The second argument is the type of the bound variable, which *is* significant (different types of bound variables give different terms). The third argument is the body of the lambda term. In the body, bound variables are referred to by `Bound(n)`, where $n$ is a natural number.\n\nLet us examine a more complex lambda expression:", "_____no_output_____" ] ], [ [ "x = Var('x', NatType)\ny = Var('y', NatType)\nt = Lambda(x, Lambda(y, x + y))\nprint(t)\nprint(repr(t))", "λx::nat. λy. 
x + y\nAbs(x, nat, Abs(y, nat, Comb(Comb(Const(plus, nat ⇒ nat ⇒ nat), Bound(1)), Bound(0))))\n" ] ], [ [ "While we are at it, let us also examine the representation of forall and exists terms:", "_____no_output_____" ] ], [ [ "print(repr(Forall(x, x >= 0)))", "Comb(Const(all, (nat ⇒ bool) ⇒ bool), Abs(x, nat, Comb(Comb(Const(greater_eq, nat ⇒ nat ⇒ bool), Bound(0)), Const(zero, nat))))\n" ], [ "print(repr(Exists(x, x < 1)))", "Comb(Const(exists, (nat ⇒ bool) ⇒ bool), Abs(x, nat, Comb(Comb(Const(less, nat ⇒ nat ⇒ bool), Bound(0)), Const(one, nat))))\n" ] ], [ [ "After understanding the de Bruijn representation, we can also create lambda terms directly using the `Abs` and `Bound` constructors. This is seldom necessary, but we show it here to illustrate the concepts:", "_____no_output_____" ] ], [ [ "t = Abs('x', NatType, nat.plus(Bound(0), nat.one))\nprint(t)\nassert t == Lambda(x, x + 1)", "λx::nat. x + 1\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb619760b9a51d002ee8b93f85ae4560ae86f6b
483,401
ipynb
Jupyter Notebook
Copy_of_xray_classification_with_tpus.ipynb
arpit1920/Pneumonia-XRay-detection
623b0c38f8834fa12f918e14074535d6ae1692c9
[ "MIT" ]
null
null
null
Copy_of_xray_classification_with_tpus.ipynb
arpit1920/Pneumonia-XRay-detection
623b0c38f8834fa12f918e14074535d6ae1692c9
[ "MIT" ]
null
null
null
Copy_of_xray_classification_with_tpus.ipynb
arpit1920/Pneumonia-XRay-detection
623b0c38f8834fa12f918e14074535d6ae1692c9
[ "MIT" ]
null
null
null
364.830943
324,478
0.91976
[ [ [ "import re\nimport os\nimport random\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\ntry:\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver()\n print(\"Device:\", tpu.master())\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\nexcept:\n strategy = tf.distribute.get_strategy()\nprint(\"Number of replicas:\", strategy.num_replicas_in_sync)", "Device: grpc://10.50.169.106:8470\nINFO:tensorflow:Initializing the TPU system: grpc://10.50.169.106:8470\n" ] ], [ [ "We need a Google Cloud link to our data to load the data using a TPU.\nBelow, we define key configuration parameters we'll use in this example.\nTo run on TPU, this example must be on Colab with the TPU runtime selected.", "_____no_output_____" ] ], [ [ "AUTOTUNE = tf.data.experimental.AUTOTUNE\nBATCH_SIZE = 25 * strategy.num_replicas_in_sync\nIMAGE_SIZE = [180, 180]\nCLASS_NAMES = [\"NORMAL\", \"PNEUMONIA\"]", "_____no_output_____" ] ], [ [ "## Load the data\n\nThe Chest X-ray data we are using from\n[*Cell*](https://www.cell.com/cell/fulltext/S0092-8674(18)30154-5) divides the data into\ntraining and test files. 
Let's first load in the training TFRecords.", "_____no_output_____" ] ], [ [ "train_images = tf.data.TFRecordDataset(\n \"gs://download.tensorflow.org/data/ChestXRay2017/train/images.tfrec\"\n)\ntrain_paths = tf.data.TFRecordDataset(\n \"gs://download.tensorflow.org/data/ChestXRay2017/train/paths.tfrec\"\n)\n\nds = tf.data.Dataset.zip((train_images, train_paths))", "_____no_output_____" ] ], [ [ "Let's count how many healthy/normal chest X-rays we have and how many\npneumonia chest X-rays we have:", "_____no_output_____" ] ], [ [ "COUNT_NORMAL = len(\n [\n filename\n for filename in train_paths\n if \"NORMAL\" in filename.numpy().decode(\"utf-8\")\n ]\n)\nprint(\"Normal images count in training set: \" + str(COUNT_NORMAL))\n\nCOUNT_PNEUMONIA = len(\n [\n filename\n for filename in train_paths\n if \"PNEUMONIA\" in filename.numpy().decode(\"utf-8\")\n ]\n)\nprint(\"Pneumonia images count in training set: \" + str(COUNT_PNEUMONIA))", "Normal images count in training set: 1349\nPneumonia images count in training set: 3883\n" ] ], [ [ "Notice that there are way more images that are classified as pneumonia than normal. This\nshows that we have an imbalance in our data. We will correct for this imbalance later on\nin our notebook.", "_____no_output_____" ], [ "We want to map each filename to the corresponding (image, label) pair. 
The following\nmethods will help us do that.\n\nAs we only have two labels, we will encode the label so that `1` or `True` indicates\npneumonia and `0` or `False` indicates normal.", "_____no_output_____" ] ], [ [ "\ndef get_label(file_path):\n # convert the path to a list of path components\n parts = tf.strings.split(file_path, \"/\")\n # The second to last is the class-directory\n return parts[-2] == \"PNEUMONIA\"\n\n\ndef decode_img(img):\n # convert the compressed string to a 3D uint8 tensor\n img = tf.image.decode_jpeg(img, channels=3)\n # resize the image to the desired size.\n return tf.image.resize(img, IMAGE_SIZE)\n\n\ndef process_path(image, path):\n label = get_label(path)\n # load the raw data from the file as a string\n img = decode_img(image)\n return img, label\n\n\nds = ds.map(process_path, num_parallel_calls=AUTOTUNE)", "_____no_output_____" ] ], [ [ "Let's split the data into a training and validation datasets.", "_____no_output_____" ] ], [ [ "ds = ds.shuffle(10000)\ntrain_ds = ds.take(4200)\nval_ds = ds.skip(4200)", "_____no_output_____" ] ], [ [ "Let's visualize the shape of an (image, label) pair.", "_____no_output_____" ] ], [ [ "for image, label in train_ds.take(1):\n print(\"Image shape: \", image.numpy().shape)\n print(\"Label: \", label.numpy())", "Image shape: (180, 180, 3)\nLabel: True\n" ] ], [ [ "Load and format the test data as well.", "_____no_output_____" ] ], [ [ "test_images = tf.data.TFRecordDataset(\n \"gs://download.tensorflow.org/data/ChestXRay2017/test/images.tfrec\"\n)\ntest_paths = tf.data.TFRecordDataset(\n \"gs://download.tensorflow.org/data/ChestXRay2017/test/paths.tfrec\"\n)\ntest_ds = tf.data.Dataset.zip((test_images, test_paths))\n\ntest_ds = test_ds.map(process_path, num_parallel_calls=AUTOTUNE)\ntest_ds = test_ds.batch(BATCH_SIZE)", "_____no_output_____" ] ], [ [ "## Visualize the dataset\n\nFirst, let's use buffered prefetching so we can yield data from disk without having I/O\nbecome blocking.\n\nPlease note that 
large image datasets should not be cached in memory. We do it here\nbecause the dataset is not very large and we want to train on TPU.", "_____no_output_____" ] ], [ [ "\ndef prepare_for_training(ds, cache=True):\n # This is a small dataset, only load it once, and keep it in memory.\n # use `.cache(filename)` to cache preprocessing work for datasets that don't\n # fit in memory.\n if cache:\n if isinstance(cache, str):\n ds = ds.cache(cache)\n else:\n ds = ds.cache()\n\n ds = ds.batch(BATCH_SIZE)\n\n # `prefetch` lets the dataset fetch batches in the background while the model\n # is training.\n ds = ds.prefetch(buffer_size=AUTOTUNE)\n\n return ds\n", "_____no_output_____" ] ], [ [ "Call the next batch iteration of the training data.", "_____no_output_____" ] ], [ [ "train_ds = prepare_for_training(train_ds)\nval_ds = prepare_for_training(val_ds)\n\nimage_batch, label_batch = next(iter(train_ds))", "_____no_output_____" ] ], [ [ "Define the method to show the images in the batch.", "_____no_output_____" ] ], [ [ "\ndef show_batch(image_batch, label_batch):\n plt.figure(figsize=(10, 10))\n for n in range(25):\n ax = plt.subplot(5, 5, n + 1)\n plt.imshow(image_batch[n] / 255)\n if label_batch[n]:\n plt.title(\"PNEUMONIA\")\n else:\n plt.title(\"NORMAL\")\n plt.axis(\"off\")\n", "_____no_output_____" ] ], [ [ "As the method takes in NumPy arrays as its parameters, call the numpy function on the\nbatches to return the tensor in NumPy array form.", "_____no_output_____" ] ], [ [ "show_batch(image_batch.numpy(), label_batch.numpy())", "_____no_output_____" ] ], [ [ "## Build the CNN\n\nTo make our model more modular and easier to understand, let's define some blocks. 
As\nwe're building a convolutional neural network, we'll create a convolution block and a dense\nlayer block.\n\nThe architecture for this CNN has been inspired by this\n[article](https://towardsdatascience.com/deep-learning-for-detecting-pneumonia-from-x-ray-images-fc9a3d9fdba8).", "_____no_output_____" ] ], [ [ "from tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.layers.experimental import preprocessing\n\n\ndef conv_block(filters, inputs):\n x = layers.SeparableConv2D(filters, 3, activation=\"relu\", padding=\"same\")(inputs)\n x = layers.SeparableConv2D(filters, 3, activation=\"relu\", padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n outputs = layers.MaxPool2D()(x)\n\n return outputs\n\n\ndef dense_block(units, dropout_rate, inputs):\n x = layers.Dense(units, activation=\"relu\")(inputs)\n x = layers.BatchNormalization()(x)\n outputs = layers.Dropout(dropout_rate)(x)\n\n return outputs\n", "_____no_output_____" ] ], [ [ "The following method will define the function to build our model for us.\n\nThe images originally have values that range from [0, 255]. CNNs work better with smaller\nnumbers so we will scale this down for our input.\n\nThe Dropout layers are important, as they\nreduce the likelihood of the model overfitting. 
We want to end the model with a `Dense`\nlayer with one node, as this will be the binary output that determines if an X-ray shows\npresence of pneumonia.", "_____no_output_____" ] ], [ [ "\ndef build_model():\n inputs = keras.Input(shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3))\n x = preprocessing.Rescaling(1.0 / 255)(inputs)\n x = layers.Conv2D(16, 3, activation=\"relu\", padding=\"same\")(x)\n x = layers.Conv2D(16, 3, activation=\"relu\", padding=\"same\")(x)\n x = layers.MaxPool2D()(x)\n\n x = conv_block(32, x)\n x = conv_block(64, x)\n\n x = conv_block(128, x)\n x = layers.Dropout(0.2)(x)\n\n x = conv_block(256, x)\n x = layers.Dropout(0.2)(x)\n\n x = layers.Flatten()(x)\n x = dense_block(512, 0.7, x)\n x = dense_block(128, 0.5, x)\n x = dense_block(64, 0.3, x)\n\n outputs = layers.Dense(1, activation=\"sigmoid\")(x)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n return model\n", "_____no_output_____" ] ], [ [ "## Correct for data imbalance\n\nWe saw earlier in this example that the data was imbalanced, with more images classified\nas pneumonia than normal. We will correct for that by using class weighting:", "_____no_output_____" ] ], [ [ "initial_bias = np.log([COUNT_PNEUMONIA / COUNT_NORMAL])\nprint(\"Initial bias: {:.5f}\".format(initial_bias[0]))\n\nTRAIN_IMG_COUNT = COUNT_NORMAL + COUNT_PNEUMONIA\nweight_for_0 = (1 / COUNT_NORMAL) * (TRAIN_IMG_COUNT) / 2.0\nweight_for_1 = (1 / COUNT_PNEUMONIA) * (TRAIN_IMG_COUNT) / 2.0\n\nclass_weight = {0: weight_for_0, 1: weight_for_1}\n\nprint(\"Weight for class 0: {:.2f}\".format(weight_for_0))\nprint(\"Weight for class 1: {:.2f}\".format(weight_for_1))", "Initial bias: 1.05724\nWeight for class 0: 1.94\nWeight for class 1: 0.67\n" ] ], [ [ "The weight for class `0` (Normal) is a lot higher than the weight for class `1`\n(Pneumonia). 
Because there are fewer normal images, each normal image will be weighted\nmore to balance the data as the CNN works best when the training data is balanced.", "_____no_output_____" ], [ "## Train the model", "_____no_output_____" ], [ "### Defining callbacks\n\nThe checkpoint callback saves the best weights of the model, so next time we want to use\nthe model, we do not have to spend time training it. The early stopping callback stops\nthe training process when the model starts becoming stagnant, or even worse, when the\nmodel starts overfitting.", "_____no_output_____" ] ], [ [ "checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\"xray_model.h5\", save_best_only=True)\n\nearly_stopping_cb = tf.keras.callbacks.EarlyStopping(\n patience=10, restore_best_weights=True\n)", "_____no_output_____" ] ], [ [ "We also want to tune our learning rate. Too high of a learning rate will cause the model\nto diverge. Too small of a learning rate will cause the model to be too slow. We\nimplement the exponential learning rate scheduling method below.", "_____no_output_____" ] ], [ [ "initial_learning_rate = 0.015\nlr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True\n)", "_____no_output_____" ] ], [ [ "### Fit the model\n\nFor our metrics, we want to include precision and recall as they will provide us with a\nmore informed picture of how good our model is. Accuracy tells us what fraction of the\nlabels is correct. Since our data is not balanced, accuracy might give a skewed sense of\na good model (i.e. a model that always predicts PNEUMONIA will be 74% accurate but is not\na good model).\n\nPrecision is the number of true positives (TP) over the sum of TP and false positives\n(FP). It shows what fraction of labeled positives are actually correct.\n\nRecall is the number of TP over the sum of TP and false negatives (FN). 
It shows what\nfraction of actual positives are correct.\n\nSince there are only two possible labels for the image, we will be using the\nbinary crossentropy loss. When we fit the model, remember to specify the class weights,\nwhich we defined earlier. Because we are using a TPU, training will be quick - less than\n2 minutes.", "_____no_output_____" ] ], [ [ "with strategy.scope():\n model = build_model()\n\n METRICS = [\n tf.keras.metrics.BinaryAccuracy(),\n tf.keras.metrics.Precision(name=\"precision\"),\n tf.keras.metrics.Recall(name=\"recall\"),\n ]\n model.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),\n loss=\"binary_crossentropy\",\n metrics=METRICS,\n )\n\nhistory = model.fit(\n train_ds,\n epochs=100,\n validation_data=val_ds,\n class_weight=class_weight,\n callbacks=[checkpoint_cb, early_stopping_cb],\n)", "Epoch 1/100\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Iterator.get_next_as_optional()` instead.\n" ] ], [ [ "## Visualizing model performance\n\nLet's plot the model accuracy and loss for the training and the validating set. Note that\nno random seed is specified for this notebook. 
For your notebook, there might be slight\nvariance.", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(1, 4, figsize=(20, 3))\nax = ax.ravel()\n\nfor i, met in enumerate([\"precision\", \"recall\", \"binary_accuracy\", \"loss\"]):\n ax[i].plot(history.history[met])\n ax[i].plot(history.history[\"val_\" + met])\n ax[i].set_title(\"Model {}\".format(met))\n ax[i].set_xlabel(\"epochs\")\n ax[i].set_ylabel(met)\n ax[i].legend([\"train\", \"val\"])", "_____no_output_____" ] ], [ [ "We see that the accuracy for our model is around 95%.", "_____no_output_____" ], [ "## Predict and evaluate results\n\nLet's evaluate the model on our test data!", "_____no_output_____" ] ], [ [ "model.evaluate(test_ds, return_dict=True)", "4/4 [==============================] - 3s 710ms/step - loss: 1.0220 - binary_accuracy: 0.7468 - precision: 0.7132 - recall: 0.9949\n" ] ], [ [ "We see that our accuracy on our test data is lower than the accuracy for our validating\nset. This may indicate overfitting.\n\nOur recall is greater than our precision, indicating that almost all pneumonia images are\ncorrectly identified but some normal images are falsely identified. We should aim to\nincrease our precision.", "_____no_output_____" ] ], [ [ "for image, label in test_ds.take(1):\n plt.imshow(image[0] / 255.0)\n plt.title(CLASS_NAMES[label[0].numpy()])\n\nprediction = model.predict(test_ds.take(1))[0]\nscores = [1 - prediction, prediction]\n\nfor score, name in zip(scores, CLASS_NAMES):\n print(\"This image is %.2f percent %s\" % ((100 * score), name))", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:3: DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index\n This is separate from the ipykernel package so we can avoid doing imports until\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb62e01dfb4a78b040a089cf9dce30a6b77fb67
8,284
ipynb
Jupyter Notebook
Notebooks/LAB-Maps-Data/PHYS3070-LabMD.1.3.ipynb
ANU-RSES-Education/PHYS-3070
59d357eb5b594147b77230c84167aa4be188e79c
[ "MIT" ]
null
null
null
Notebooks/LAB-Maps-Data/PHYS3070-LabMD.1.3.ipynb
ANU-RSES-Education/PHYS-3070
59d357eb5b594147b77230c84167aa4be188e79c
[ "MIT" ]
null
null
null
Notebooks/LAB-Maps-Data/PHYS3070-LabMD.1.3.ipynb
ANU-RSES-Education/PHYS-3070
59d357eb5b594147b77230c84167aa4be188e79c
[ "MIT" ]
null
null
null
39.636364
974
0.631096
[ [ [ "# LAB 5 - Maps\n\nHere we will jump in and make some maps based upon what you have learned making the volcano map and the topography maps of where you grew up in the previous two steps of this Lab.\n\n\n## Navigation\n\n - [Maps 1.1](PHYS3070-LabMD.1.1.ipynb)\n - [Maps 1.2](PHYS3070-LabMD.1.2.ipynb)\n - [Maps 1.3](PHYS3070-LabMD.1.3.ipynb)\n - [Maps 2.1](PHYS3070-LabMD.2.1.ipynb)\n - [Maps 2.2](PHYS3070-LabMD.2.2.ipynb)\n - [Maps 2.3](PHYS3070-LabMD.2.3.ipynb)\n - [Maps 2.4](PHYS3070-LabMD.2.4.ipynb)\n - [Maps 2.5](PHYS3070-LabMD.2.5.ipynb)\n\n \n---\n\n\nRemember how we set up our codes by importing what we need in python.\n\n```python\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n```\nThe second line here is, of course, just our familiar way of importing `matplotlib` for plotting.\n\nNow, making a map is exactly the same as plotting in polar coordinates: we simply need to specify `projection=Projection()` when creating the plot. Here, `Projection()` is one of the Cartopy-supported [map projections](https://en.wikipedia.org/wiki/Map_projection). For example,\n```python\nax = plt.subplot(111,projection=ccrs.PlateCarree())\nax.set_global()\nax.coastlines()\nplt.show()\n```\nThe `ax.set_global()` command informs Cartopy that we want to make a map of the whole globe, while the `ax.coastlines()` command draws coastline information onto it.\n\n", "_____no_output_____" ] ], [ [ "# Try it here!\n\n\n", "_____no_output_____" ], [ "# Specify the projection then show the plot\n\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\nax = plt.subplot(111,projection=ccrs.PlateCarree())\nax.set_global()\nax.coastlines()\nplt.show()\n", "_____no_output_____" ] ], [ [ "By default, the map is centred on the 0 longitude meridian. 
To change this, we can pass `central_longitude=longitude` to `ccrs.PlateCarree()`.\n\nMake a single figure that consists of multiple subplots, illustrating the following projections:\n- `ccrs.PlateCarree`\n- `ccrs.Mercator`\n- `ccrs.Mollweide`\n- `ccrs.Robinson`\n- `ccrs.InterruptedGoodeHomolosine`\n- `ccrs.NearsidePerspective`", "_____no_output_____" ], [ "The maps we have made so far are global. If you only want to work with a subset of the globe, you can specify a different region (after creating the axes, but before doing any plotting) by removing `ax.set_global()` and instead calling `ax.set_extent((llon,rlon,llat,ulat))` where `llon` and `rlon` are the longitudes of the left- and right-hand sides of the region, and `llat` and `ulat` are the lower and upper latitudes of the region. Remember to set `central_longitude` appropriately, or you may get surprising results.\n\nMake a map of the area around your hometown. You may wish to pass `resolution='50m'` or `resolution='10m'` to `ax.coastlines()` to obtain a better-looking result.", "_____no_output_____" ] ], [ [ "# Try it here!\n", "_____no_output_____" ] ], [ [ "To add features such as rivers, national boundaries and so on to your map, we must import another submodule of Cartopy:\n```python\nimport cartopy.feature as cf\n```\nThis provides immediate access to several low-resolution feature datasets, including:\n- `cf.BORDERS`\n- `cf.COASTLINE`\n- `cf.LAND`\n- `cf.OCEAN`\n- `cf.LAKES`\n- `cf.RIVERS`\n\nEach of these can be added to your plot by calling `ax.add_feature(feature)`; additional arguments such as `color=colorname` can be used to control how they are displayed.", "_____no_output_____" ] ], [ [ "# Try it here!\n\n", "_____no_output_____" ] ], [ [ "For higher-resolution data, and for other features, Cartopy allows you to make use of data from [Natural Earth](https://www.naturalearthdata.com/). 
Unfortunately, the documentation for doing so is currently rather poor, and getting everything to work can require some amount of trial and error. The basic syntax is:\n```python\nax.add_feature(cf.NaturalEarthFeature(category, name, scale),\n edgecolor=color,facecolor=color)\n```\nwhere `category` is either `'physical'` or `'cultural'`, `scale` is `'10m'`, `'50m'` or `'110m'`, and `'name'` is the name of the appropriate dataset. It seems this has to be inferred from the 'download' links on the Natural Earth website. The color options, `edgecolor` and `facecolor`, specify the colour used to draw outlines and fills; for some reason, it is necessary to pass the text string `'none'` (and not, as is more usual, the Python object `None`) if one does not want an object to be filled. Thus, for example, high-resolution rivers can be drawn by the command\n```python\nax.add_feature(cf.NaturalEarthFeature('physical','rivers_lake_centerlines','10m'),\n edgecolor='blue',facecolor='none')\n```\nAdd rivers and country boundaries (as appropriate) to your hometown map.", "_____no_output_____" ] ], [ [ "# Try it here!\n", "_____no_output_____" ] ], [ [ "There is also an interface to [GSHHS coastline data](https://www.ngdc.noaa.gov/mgg/shorelines/).\n\nBecause Cartopy is built as an add-on to `matplotlib.pyplot`, you can use all of the standard plotting tools to add data to your map. For example, point data and lines can be added using `plt.plot()`, contours can be drawn using `plt.contour()`, and gridded data using `plt.imshow()`. However, it is critically important that you include `transform = <something appropriate>` in each plotting command, to ensure that Cartopy correctly interprets the data you provide. For most common cases, where your data is expressed in terms of latitude and longitude, the most appropriate choice will be `transform=ccrs.Geodetic()`. 
When a geodetic coordinate system is used, using `plt.plot()` to draw a line between two points will result in a geodesic curve (the great-circle path representing the shortest distance on the surface of the spherical Earth). If you instead wish to draw a line that appears straight in the 2-D plane, you can use `transform=ccrs.PlateCarree()`.\n\nCreate a global map and plot a line joining your hometown and Canberra using both `transform=ccrs.PlateCarree()` and `transform=ccrs.Geodetic()`. Satisfy yourself that you understand the difference. Plot (and label) the locations of your hometown, the capital city of your country, and Canberra.", "_____no_output_____" ] ], [ [ "# Try it here!\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb633bd98502773a79617d7c470d50338331303
34,439
ipynb
Jupyter Notebook
titanic_survival_exploration/.ipynb_checkpoints/Lesson 1st-checkpoint.ipynb
nehajaiswal5/Udacity-s-ML-Nanodegree-
606c77e53b7e4ed1d63fbff09060ca77cb2312ca
[ "MIT" ]
null
null
null
titanic_survival_exploration/.ipynb_checkpoints/Lesson 1st-checkpoint.ipynb
nehajaiswal5/Udacity-s-ML-Nanodegree-
606c77e53b7e4ed1d63fbff09060ca77cb2312ca
[ "MIT" ]
null
null
null
titanic_survival_exploration/.ipynb_checkpoints/Lesson 1st-checkpoint.ipynb
nehajaiswal5/Udacity-s-ML-Nanodegree-
606c77e53b7e4ed1d63fbff09060ca77cb2312ca
[ "MIT" ]
null
null
null
64.251866
1,592
0.437237
[ [ [ "import numpy as np\nimport pandas as pd\n\n# Load the dataset\nX = pd.read_csv('titanic_data.csv')\nprint X", " PassengerId Survived Pclass \\\n0 1 0 3 \n1 2 1 1 \n2 3 1 3 \n3 4 1 1 \n4 5 0 3 \n5 6 0 3 \n6 7 0 1 \n7 8 0 3 \n8 9 1 3 \n9 10 1 2 \n10 11 1 3 \n11 12 1 1 \n12 13 0 3 \n13 14 0 3 \n14 15 0 3 \n15 16 1 2 \n16 17 0 3 \n17 18 1 2 \n18 19 0 3 \n19 20 1 3 \n20 21 0 2 \n21 22 1 2 \n22 23 1 3 \n23 24 1 1 \n24 25 0 3 \n25 26 1 3 \n26 27 0 3 \n27 28 0 1 \n28 29 1 3 \n29 30 0 3 \n.. ... ... ... \n861 862 0 2 \n862 863 1 1 \n863 864 0 3 \n864 865 0 2 \n865 866 1 2 \n866 867 1 2 \n867 868 0 1 \n868 869 0 3 \n869 870 1 3 \n870 871 0 3 \n871 872 1 1 \n872 873 0 1 \n873 874 0 3 \n874 875 1 2 \n875 876 1 3 \n876 877 0 3 \n877 878 0 3 \n878 879 0 3 \n879 880 1 1 \n880 881 1 2 \n881 882 0 3 \n882 883 0 3 \n883 884 0 2 \n884 885 0 3 \n885 886 0 3 \n886 887 0 2 \n887 888 1 1 \n888 889 0 3 \n889 890 1 1 \n890 891 0 3 \n\n Name Sex Age SibSp \\\n0 Braund, Mr. Owen Harris male 22.0 1 \n1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1 \n2 Heikkinen, Miss. Laina female 26.0 0 \n3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 \n4 Allen, Mr. William Henry male 35.0 0 \n5 Moran, Mr. James male NaN 0 \n6 McCarthy, Mr. Timothy J male 54.0 0 \n7 Palsson, Master. Gosta Leonard male 2.0 3 \n8 Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg) female 27.0 0 \n9 Nasser, Mrs. Nicholas (Adele Achem) female 14.0 1 \n10 Sandstrom, Miss. Marguerite Rut female 4.0 1 \n11 Bonnell, Miss. Elizabeth female 58.0 0 \n12 Saundercock, Mr. William Henry male 20.0 0 \n13 Andersson, Mr. Anders Johan male 39.0 1 \n14 Vestrom, Miss. Hulda Amanda Adolfina female 14.0 0 \n15 Hewlett, Mrs. (Mary D Kingcome) female 55.0 0 \n16 Rice, Master. Eugene male 2.0 4 \n17 Williams, Mr. Charles Eugene male NaN 0 \n18 Vander Planke, Mrs. Julius (Emelia Maria Vande... female 31.0 1 \n19 Masselmani, Mrs. Fatima female NaN 0 \n20 Fynney, Mr. Joseph J male 35.0 0 \n21 Beesley, Mr. 
Lawrence male 34.0 0 \n22 McGowan, Miss. Anna \"Annie\" female 15.0 0 \n23 Sloper, Mr. William Thompson male 28.0 0 \n24 Palsson, Miss. Torborg Danira female 8.0 3 \n25 Asplund, Mrs. Carl Oscar (Selma Augusta Emilia... female 38.0 1 \n26 Emir, Mr. Farred Chehab male NaN 0 \n27 Fortune, Mr. Charles Alexander male 19.0 3 \n28 O'Dwyer, Miss. Ellen \"Nellie\" female NaN 0 \n29 Todoroff, Mr. Lalio male NaN 0 \n.. ... ... ... ... \n861 Giles, Mr. Frederick Edward male 21.0 1 \n862 Swift, Mrs. Frederick Joel (Margaret Welles Ba... female 48.0 0 \n863 Sage, Miss. Dorothy Edith \"Dolly\" female NaN 8 \n864 Gill, Mr. John William male 24.0 0 \n865 Bystrom, Mrs. (Karolina) female 42.0 0 \n866 Duran y More, Miss. Asuncion female 27.0 1 \n867 Roebling, Mr. Washington Augustus II male 31.0 0 \n868 van Melkebeke, Mr. Philemon male NaN 0 \n869 Johnson, Master. Harold Theodor male 4.0 1 \n870 Balkic, Mr. Cerin male 26.0 0 \n871 Beckwith, Mrs. Richard Leonard (Sallie Monypeny) female 47.0 1 \n872 Carlsson, Mr. Frans Olof male 33.0 0 \n873 Vander Cruyssen, Mr. Victor male 47.0 0 \n874 Abelson, Mrs. Samuel (Hannah Wizosky) female 28.0 1 \n875 Najib, Miss. Adele Kiamie \"Jane\" female 15.0 0 \n876 Gustafsson, Mr. Alfred Ossian male 20.0 0 \n877 Petroff, Mr. Nedelio male 19.0 0 \n878 Laleff, Mr. Kristo male NaN 0 \n879 Potter, Mrs. Thomas Jr (Lily Alexenia Wilson) female 56.0 0 \n880 Shelley, Mrs. William (Imanita Parrish Hall) female 25.0 0 \n881 Markun, Mr. Johann male 33.0 0 \n882 Dahlberg, Miss. Gerda Ulrika female 22.0 0 \n883 Banfield, Mr. Frederick James male 28.0 0 \n884 Sutehall, Mr. Henry Jr male 25.0 0 \n885 Rice, Mrs. William (Margaret Norton) female 39.0 0 \n886 Montvila, Rev. Juozas male 27.0 0 \n887 Graham, Miss. Margaret Edith female 19.0 0 \n888 Johnston, Miss. Catherine Helen \"Carrie\" female NaN 1 \n889 Behr, Mr. Karl Howell male 26.0 0 \n890 Dooley, Mr. 
Patrick male 32.0 0 \n\n Parch Ticket Fare Cabin Embarked \n0 0 A/5 21171 7.2500 NaN S \n1 0 PC 17599 71.2833 C85 C \n2 0 STON/O2. 3101282 7.9250 NaN S \n3 0 113803 53.1000 C123 S \n4 0 373450 8.0500 NaN S \n5 0 330877 8.4583 NaN Q \n6 0 17463 51.8625 E46 S \n7 1 349909 21.0750 NaN S \n8 2 347742 11.1333 NaN S \n9 0 237736 30.0708 NaN C \n10 1 PP 9549 16.7000 G6 S \n11 0 113783 26.5500 C103 S \n12 0 A/5. 2151 8.0500 NaN S \n13 5 347082 31.2750 NaN S \n14 0 350406 7.8542 NaN S \n15 0 248706 16.0000 NaN S \n16 1 382652 29.1250 NaN Q \n17 0 244373 13.0000 NaN S \n18 0 345763 18.0000 NaN S \n19 0 2649 7.2250 NaN C \n20 0 239865 26.0000 NaN S \n21 0 248698 13.0000 D56 S \n22 0 330923 8.0292 NaN Q \n23 0 113788 35.5000 A6 S \n24 1 349909 21.0750 NaN S \n25 5 347077 31.3875 NaN S \n26 0 2631 7.2250 NaN C \n27 2 19950 263.0000 C23 C25 C27 S \n28 0 330959 7.8792 NaN Q \n29 0 349216 7.8958 NaN S \n.. ... ... ... ... ... \n861 0 28134 11.5000 NaN S \n862 0 17466 25.9292 D17 S \n863 2 CA. 2343 69.5500 NaN S \n864 0 233866 13.0000 NaN S \n865 0 236852 13.0000 NaN S \n866 0 SC/PARIS 2149 13.8583 NaN C \n867 0 PC 17590 50.4958 A24 S \n868 0 345777 9.5000 NaN S \n869 1 347742 11.1333 NaN S \n870 0 349248 7.8958 NaN S \n871 1 11751 52.5542 D35 S \n872 0 695 5.0000 B51 B53 B55 S \n873 0 345765 9.0000 NaN S \n874 0 P/PP 3381 24.0000 NaN C \n875 0 2667 7.2250 NaN C \n876 0 7534 9.8458 NaN S \n877 0 349212 7.8958 NaN S \n878 0 349217 7.8958 NaN S \n879 1 11767 83.1583 C50 C \n880 1 230433 26.0000 NaN S \n881 0 349257 7.8958 NaN S \n882 0 7552 10.5167 NaN S \n883 0 C.A./SOTON 34068 10.5000 NaN S \n884 0 SOTON/OQ 392076 7.0500 NaN S \n885 5 382652 29.1250 NaN Q \n886 0 211536 13.0000 NaN S \n887 0 112053 30.0000 B42 S \n888 2 W./C. 
6607 23.4500 NaN S \n889 0 111369 30.0000 C148 C \n890 0 370376 7.7500 NaN Q \n\n[891 rows x 12 columns]\n" ], [ "X = X.select_dtypes(include=[object])\n\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import OneHotEncoder\n", "_____no_output_____" ], [ "print X", " Name Sex \\\n0 Braund, Mr. Owen Harris male \n1 Cumings, Mrs. John Bradley (Florence Briggs Th... female \n2 Heikkinen, Miss. Laina female \n3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female \n4 Allen, Mr. William Henry male \n5 Moran, Mr. James male \n6 McCarthy, Mr. Timothy J male \n7 Palsson, Master. Gosta Leonard male \n8 Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg) female \n9 Nasser, Mrs. Nicholas (Adele Achem) female \n10 Sandstrom, Miss. Marguerite Rut female \n11 Bonnell, Miss. Elizabeth female \n12 Saundercock, Mr. William Henry male \n13 Andersson, Mr. Anders Johan male \n14 Vestrom, Miss. Hulda Amanda Adolfina female \n15 Hewlett, Mrs. (Mary D Kingcome) female \n16 Rice, Master. Eugene male \n17 Williams, Mr. Charles Eugene male \n18 Vander Planke, Mrs. Julius (Emelia Maria Vande... female \n19 Masselmani, Mrs. Fatima female \n20 Fynney, Mr. Joseph J male \n21 Beesley, Mr. Lawrence male \n22 McGowan, Miss. Anna \"Annie\" female \n23 Sloper, Mr. William Thompson male \n24 Palsson, Miss. Torborg Danira female \n25 Asplund, Mrs. Carl Oscar (Selma Augusta Emilia... female \n26 Emir, Mr. Farred Chehab male \n27 Fortune, Mr. Charles Alexander male \n28 O'Dwyer, Miss. Ellen \"Nellie\" female \n29 Todoroff, Mr. Lalio male \n.. ... ... \n861 Giles, Mr. Frederick Edward male \n862 Swift, Mrs. Frederick Joel (Margaret Welles Ba... female \n863 Sage, Miss. Dorothy Edith \"Dolly\" female \n864 Gill, Mr. John William male \n865 Bystrom, Mrs. (Karolina) female \n866 Duran y More, Miss. Asuncion female \n867 Roebling, Mr. Washington Augustus II male \n868 van Melkebeke, Mr. Philemon male \n869 Johnson, Master. Harold Theodor male \n870 Balkic, Mr. 
Cerin male \n871 Beckwith, Mrs. Richard Leonard (Sallie Monypeny) female \n872 Carlsson, Mr. Frans Olof male \n873 Vander Cruyssen, Mr. Victor male \n874 Abelson, Mrs. Samuel (Hannah Wizosky) female \n875 Najib, Miss. Adele Kiamie \"Jane\" female \n876 Gustafsson, Mr. Alfred Ossian male \n877 Petroff, Mr. Nedelio male \n878 Laleff, Mr. Kristo male \n879 Potter, Mrs. Thomas Jr (Lily Alexenia Wilson) female \n880 Shelley, Mrs. William (Imanita Parrish Hall) female \n881 Markun, Mr. Johann male \n882 Dahlberg, Miss. Gerda Ulrika female \n883 Banfield, Mr. Frederick James male \n884 Sutehall, Mr. Henry Jr male \n885 Rice, Mrs. William (Margaret Norton) female \n886 Montvila, Rev. Juozas male \n887 Graham, Miss. Margaret Edith female \n888 Johnston, Miss. Catherine Helen \"Carrie\" female \n889 Behr, Mr. Karl Howell male \n890 Dooley, Mr. Patrick male \n\n Ticket Cabin Embarked \n0 A/5 21171 NaN S \n1 PC 17599 C85 C \n2 STON/O2. 3101282 NaN S \n3 113803 C123 S \n4 373450 NaN S \n5 330877 NaN Q \n6 17463 E46 S \n7 349909 NaN S \n8 347742 NaN S \n9 237736 NaN C \n10 PP 9549 G6 S \n11 113783 C103 S \n12 A/5. 2151 NaN S \n13 347082 NaN S \n14 350406 NaN S \n15 248706 NaN S \n16 382652 NaN Q \n17 244373 NaN S \n18 345763 NaN S \n19 2649 NaN C \n20 239865 NaN S \n21 248698 D56 S \n22 330923 NaN Q \n23 113788 A6 S \n24 349909 NaN S \n25 347077 NaN S \n26 2631 NaN C \n27 19950 C23 C25 C27 S \n28 330959 NaN Q \n29 349216 NaN S \n.. ... ... ... \n861 28134 NaN S \n862 17466 D17 S \n863 CA. 
2343 NaN S \n864 233866 NaN S \n865 236852 NaN S \n866 SC/PARIS 2149 NaN C \n867 PC 17590 A24 S \n868 345777 NaN S \n869 347742 NaN S \n870 349248 NaN S \n871 11751 D35 S \n872 695 B51 B53 B55 S \n873 345765 NaN S \n874 P/PP 3381 NaN C \n875 2667 NaN C \n876 7534 NaN S \n877 349212 NaN S \n878 349217 NaN S \n879 11767 C50 C \n880 230433 NaN S \n881 349257 NaN S \n882 7552 NaN S \n883 C.A./SOTON 34068 NaN S \n884 SOTON/OQ 392076 NaN S \n885 382652 NaN Q \n886 211536 NaN S \n887 112053 B42 S \n888 W./C. 6607 NaN S \n889 111369 C148 C \n890 370376 NaN Q \n\n[891 rows x 5 columns]\n" ], [ "le = X[2].LabelEncoder()", "_____no_output_____" ], [ "import pandas as pd\n\n# Load the dataset\nfrom sklearn import datasets\n\nX = pd.read_csv('titanic_data.csv')\n\nX = X._get_numeric_data()\ny = X['Survived']\ndel X['Age'], X['Survived']\n\n\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.naive_bayes import GaussianNB\n\n# TODO: split the data into training and testing sets,\n# using the standard settings for train_test_split.\n# Then, train and test the classifiers with your newly split data instead of X and y.\n\nclf1 = DecisionTreeClassifier()\nclf1.fit(X,y)\nprint \"Confusion matrix for this Decision Tree:\\n\",confusion_matrix(y,clf1.predict(X))\n\nclf2 = GaussianNB()\nclf2.fit(X,y)\nprint \"GaussianNB confusion matrix:\\n\",confusion_matrix(y,clf2.predict(X))\n", "Confusion matrix for this Decision Tree:\n[[549 0]\n [ 0 342]]\nGaussianNB confusion matrix:\n[[467 82]\n [205 137]]\n" ], [ "import numpy as np\nimport pandas as pd\nfrom sklearn import cross_validation\n# Load the dataset\nX = pd.read_csv('titanic_data.csv')\n\nX = X._get_numeric_data()\ny = X['Survived']\ndel X['Age'], X['Survived']\n\n\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import recall_score as recall\nfrom sklearn.metrics import precision_score as precision\nfrom sklearn.naive_bayes import GaussianNB\n\n# TODO: 
split the data into training and testing sets,\n# using the standard settings for train_test_split.\n# Then, train and test the classifiers with your newly split data instead of X and y.\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.40, random_state=42)\n\nclf1 = DecisionTreeClassifier()\nclf1.fit(X_train, y_train)\nprint \"Decision Tree recall: {:.2f} and precision: {:.2f}\".format(recall(y_test,clf1.predict(X_test)),precision(y_test,clf1.predict(X_test)))\n\nclf2 = GaussianNB()\nclf2.fit(X_train, y_train)\nprint \"GaussianNB recall: {:.2f} and precision: {:.2f}\".format(recall(y_train,clf2.predict(X_train)),precision(y_train,clf2.predict(X_train)))\n", "Decision Tree recall: 0.48 and precision: 0.54\nGaussianNB recall: 0.37 and precision: 0.60\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cbb635b4bd86c3f4c3f7d76fbf3c91c2873343cc
27,750
ipynb
Jupyter Notebook
site/en/tutorials/generative/dcgan.ipynb
jonathanking/docs-1
3082041fb5ef2b29217584659bc43d89602d57cf
[ "Apache-2.0" ]
3
2020-04-05T20:42:50.000Z
2020-04-17T07:47:54.000Z
site/en/tutorials/generative/dcgan.ipynb
jonathanking/docs-1
3082041fb5ef2b29217584659bc43d89602d57cf
[ "Apache-2.0" ]
null
null
null
site/en/tutorials/generative/dcgan.ipynb
jonathanking/docs-1
3082041fb5ef2b29217584659bc43d89602d57cf
[ "Apache-2.0" ]
1
2020-05-31T15:11:45.000Z
2020-05-31T15:11:45.000Z
31.896552
435
0.517045
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Deep Convolutional Generative Adversarial Network", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/generative/dcgan\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/dcgan.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "This tutorial demonstrates how to generate images of handwritten digits using a [Deep Convolutional Generative Adversarial Network](https://arxiv.org/pdf/1511.06434.pdf) (DCGAN). 
The code is written using the [Keras Sequential API](https://www.tensorflow.org/guide/keras) with a `tf.GradientTape` training loop.", "_____no_output_____" ], [ "## What are GANs?\n[Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A *generator* (\"the artist\") learns to create images that look real, while a *discriminator* (\"the art critic\") learns to tell real images apart from fakes.\n\n![A diagram of a generator and discriminator](./images/gan1.png)\n\nDuring training, the *generator* progressively becomes better at creating images that look real, while the *discriminator* becomes better at telling them apart. The process reaches equilibrium when the *discriminator* can no longer distinguish real images from fakes.\n\n![A second diagram of a generator and discriminator](./images/gan2.png)\n\nThis notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the *generator* as it was trained for 50 epochs. 
The images begin as random noise, and increasingly resemble hand written digits over time.\n\n![sample output](https://tensorflow.org/images/gan/dcgan.gif)\n\nTo learn more about GANs, we recommend MIT's [Intro to Deep Learning](http://introtodeeplearning.com/) course.", "_____no_output_____" ], [ "### Import TensorFlow and other libraries", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "_____no_output_____" ], [ "tf.__version__", "_____no_output_____" ], [ "# To generate GIFs\n!pip install imageio", "_____no_output_____" ], [ "import glob\nimport imageio\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport PIL\nfrom tensorflow.keras import layers\nimport time\n\nfrom IPython import display", "_____no_output_____" ] ], [ [ "### Load and prepare the dataset\n\nYou will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.", "_____no_output_____" ] ], [ [ "(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()", "_____no_output_____" ], [ "train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')\ntrain_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]", "_____no_output_____" ], [ "BUFFER_SIZE = 60000\nBATCH_SIZE = 256", "_____no_output_____" ], [ "# Batch and shuffle the data\ntrain_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)", "_____no_output_____" ] ], [ [ "## Create the models\n\nBoth the generator and discriminator are defined using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model).", "_____no_output_____" ], [ "### The Generator\n\nThe generator uses `tf.keras.layers.Conv2DTranspose` (upsampling) layers to produce an image from a seed (random noise). Start with a `Dense` layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. 
Notice the `tf.keras.layers.LeakyReLU` activation for each layer, except the output layer which uses tanh.", "_____no_output_____" ] ], [ [ "def make_generator_model():\n model = tf.keras.Sequential()\n model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Reshape((7, 7, 256)))\n assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size\n\n model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))\n assert model.output_shape == (None, 7, 7, 128)\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))\n assert model.output_shape == (None, 14, 14, 64)\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))\n assert model.output_shape == (None, 28, 28, 1)\n\n return model", "_____no_output_____" ] ], [ [ "Use the (as yet untrained) generator to create an image.", "_____no_output_____" ] ], [ [ "generator = make_generator_model()\n\nnoise = tf.random.normal([1, 100])\ngenerated_image = generator(noise, training=False)\n\nplt.imshow(generated_image[0, :, :, 0], cmap='gray')", "_____no_output_____" ] ], [ [ "### The Discriminator\n\nThe discriminator is a CNN-based image classifier.", "_____no_output_____" ] ], [ [ "def make_discriminator_model():\n model = tf.keras.Sequential()\n model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',\n input_shape=[28, 28, 1]))\n model.add(layers.LeakyReLU())\n model.add(layers.Dropout(0.3))\n\n model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))\n model.add(layers.LeakyReLU())\n model.add(layers.Dropout(0.3))\n\n model.add(layers.Flatten())\n model.add(layers.Dense(1))\n\n return model", 
"_____no_output_____" ] ], [ [ "Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.", "_____no_output_____" ] ], [ [ "discriminator = make_discriminator_model()\ndecision = discriminator(generated_image)\nprint (decision)", "_____no_output_____" ] ], [ [ "## Define the loss and optimizers\n\nDefine loss functions and optimizers for both models.\n", "_____no_output_____" ] ], [ [ "# This method returns a helper function to compute cross entropy loss\ncross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)", "_____no_output_____" ] ], [ [ "### Discriminator loss\n\nThis method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.", "_____no_output_____" ] ], [ [ "def discriminator_loss(real_output, fake_output):\n real_loss = cross_entropy(tf.ones_like(real_output), real_output)\n fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)\n total_loss = real_loss + fake_loss\n return total_loss", "_____no_output_____" ] ], [ [ "### Generator loss\nThe generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). 
Here, we will compare the discriminators decisions on the generated images to an array of 1s.", "_____no_output_____" ] ], [ [ "def generator_loss(fake_output):\n return cross_entropy(tf.ones_like(fake_output), fake_output)", "_____no_output_____" ] ], [ [ "The discriminator and the generator optimizers are different since we will train two networks separately.", "_____no_output_____" ] ], [ [ "generator_optimizer = tf.keras.optimizers.Adam(1e-4)\ndiscriminator_optimizer = tf.keras.optimizers.Adam(1e-4)", "_____no_output_____" ] ], [ [ "### Save checkpoints\nThis notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.", "_____no_output_____" ] ], [ [ "checkpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,\n discriminator_optimizer=discriminator_optimizer,\n generator=generator,\n discriminator=discriminator)", "_____no_output_____" ] ], [ [ "## Define the training loop\n", "_____no_output_____" ] ], [ [ "EPOCHS = 50\nnoise_dim = 100\nnum_examples_to_generate = 16\n\n# We will reuse this seed overtime (so it's easier)\n# to visualize progress in the animated GIF)\nseed = tf.random.normal([num_examples_to_generate, noise_dim])", "_____no_output_____" ] ], [ [ "The training loop begins with generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fakes images (produced by the generator). 
The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.", "_____no_output_____" ] ], [ [ "# Notice the use of `tf.function`\n# This annotation causes the function to be \"compiled\".\[email protected]\ndef train_step(images):\n noise = tf.random.normal([BATCH_SIZE, noise_dim])\n\n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n generated_images = generator(noise, training=True)\n\n real_output = discriminator(images, training=True)\n fake_output = discriminator(generated_images, training=True)\n\n gen_loss = generator_loss(fake_output)\n disc_loss = discriminator_loss(real_output, fake_output)\n\n gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)\n gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)\n\n generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))\n discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))", "_____no_output_____" ], [ "def train(dataset, epochs):\n for epoch in range(epochs):\n start = time.time()\n\n for image_batch in dataset:\n train_step(image_batch)\n\n # Produce images for the GIF as we go\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epoch + 1,\n seed)\n\n # Save the model every 15 epochs\n if (epoch + 1) % 15 == 0:\n checkpoint.save(file_prefix = checkpoint_prefix)\n\n print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))\n\n # Generate after the final epoch\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epochs,\n seed)", "_____no_output_____" ] ], [ [ "**Generate and save images**\n", "_____no_output_____" ] ], [ [ "def generate_and_save_images(model, epoch, test_input):\n # Notice `training` is set to False.\n # This is so all layers run in inference mode (batchnorm).\n predictions = model(test_input, 
training=False)\n\n fig = plt.figure(figsize=(4,4))\n\n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i+1)\n plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')\n plt.axis('off')\n\n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()", "_____no_output_____" ] ], [ [ "## Train the model\nCall the `train()` method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).\n\nAt the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one minute / epoch with the default settings on Colab.", "_____no_output_____" ] ], [ [ "train(train_dataset, EPOCHS)", "_____no_output_____" ] ], [ [ "Restore the latest checkpoint.", "_____no_output_____" ] ], [ [ "checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))", "_____no_output_____" ] ], [ [ "## Create a GIF\n", "_____no_output_____" ] ], [ [ "# Display a single image using the epoch number\ndef display_image(epoch_no):\n return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))", "_____no_output_____" ], [ "display_image(EPOCHS)", "_____no_output_____" ] ], [ [ "Use `imageio` to create an animated gif using the images saved during training.", "_____no_output_____" ] ], [ [ "anim_file = 'dcgan.gif'\n\nwith imageio.get_writer(anim_file, mode='I') as writer:\n filenames = glob.glob('image*.png')\n filenames = sorted(filenames)\n last = -1\n for i,filename in enumerate(filenames):\n frame = 2*(i**0.5)\n if round(frame) > round(last):\n last = frame\n else:\n continue\n image = imageio.imread(filename)\n writer.append_data(image)\n image = imageio.imread(filename)\n writer.append_data(image)\n\nimport IPython\nif IPython.version_info > 
(6,2,0,''):\n display.Image(filename=anim_file)", "_____no_output_____" ] ], [ [ "If you're working in Colab you can download the animation with the code below:", "_____no_output_____" ] ], [ [ "try:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download(anim_file)", "_____no_output_____" ] ], [ [ "## Next steps\n", "_____no_output_____" ], [ "This tutorial has shown the complete code necessary to write and train a GAN. As a next step, you might like to experiment with a different dataset, for example the Large-scale Celeb Faces Attributes (CelebA) dataset [available on Kaggle](https://www.kaggle.com/jessicali9530/celeba-dataset). To learn more about GANs we recommend the [NIPS 2016 Tutorial: Generative Adversarial Networks](https://arxiv.org/abs/1701.00160).\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbb640f2af2ef41c57dd7142c874898e7c2babb5
724,243
ipynb
Jupyter Notebook
Muriel/old notebooks/Exploring Model Data.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
Muriel/old notebooks/Exploring Model Data.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
Muriel/old notebooks/Exploring Model Data.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
83.99942
331
0.829472
[ [ [ "This notebook is used for some basic exploration on modelled data.", "_____no_output_____" ], [ "#Importing the data\nThe following code imports the necessary packages and creates convinient shortcut functions. As well as assures the plots will appear in this notebook within the Out.", "_____no_output_____" ] ], [ [ "from __future__ import division, print_function\n\nimport netCDF4 as nc\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom salishsea_tools import (nc_tools, viz_tools)\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "This will upload the nowcast grid T data from May 4th 2015; temperature, salinity and sea surface height.", "_____no_output_____" ] ], [ [ "filename='/data/dlatorne/MEOPAR/SalishSea/nowcast/04may15/SalishSea_1d_20150504_20150504_grid_T.nc'\nf=nc.Dataset(filename)", "_____no_output_____" ] ], [ [ "Below I will create my own shorter variable names to facilitate accessing these variables.", "_____no_output_____" ] ], [ [ "nc_tools.show_variables(f)\nlons=f.variables['nav_lon']\nlats=f.variables['nav_lat']\nsal=f.variables['vosaline']\ntemp=f.variables['votemper']\nssh=f.variables['sossheig']\ndep=f.variables['deptht']", "[u'deptht', u'nav_lat', u'nav_lon', u'rain_rate', u'snow_rate', u'sossheig', u'time_counter', u'time_counter_bnds', u'vosaline', u'votemper']\n" ] ], [ [ "Making a plot of the (average) sea surface height of May 4th as a colour mesh", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\nmesh = ax.pcolormesh(ssh[0])\nfig.colorbar(mesh)", "_____no_output_____" ] ], [ [ "##Making a Complete Colour Mesh Plot \n\nMasking the land by creating a numpy masked array which masks the values that are zero within the ssh array and adding labels and titles.", "_____no_output_____" ] ], [ [ "ssh1=np.ma.masked_values(ssh[0],0)\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\ncmap = plt.get_cmap('jet')\ncmap.set_bad('burlywood')\nmesh = 
ax.pcolormesh(ssh1, cmap=cmap)\ncbar=fig.colorbar(mesh)\n\nax.set_xlabel('x')\nax.set_ylabel('y')\ncbar.set_label('SSH [{units}]'.format(units=ssh.units))\n", "_____no_output_____" ] ], [ [ "From this map we notice the sea surface height is similar, approx. 0m everywhere but at the Fraser River where it is up to approximate .60m higher. The sea surface height is slightly lower in the Juan da Fuca straight and as we move south. \n\nNow I will try with temperature at the surface and I will use the longitude and latitude instead of indices.", "_____no_output_____" ] ], [ [ "\ntemp1=np.ma.masked_values(temp[0,0,:,:],0)\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\ncmap = plt.get_cmap('jet')\ncmap.set_bad('burlywood')\nmesh = ax.pcolormesh(lons[:], lats[:], temp1, cmap=cmap)\ncbar=fig.colorbar(mesh)\n\n\nax.set_xlabel('{lon} [{units}]'.format(lon=lons.long_name, units=lons.units))\nax.set_ylabel('{lat} [{units}]'.format(lat=lats.long_name, units=lats.units))\ncbar.set_label('{label} [{units}]'.format(label=temp.long_name.title(), units=temp.units))\nax.set_title(u'May 4th 2015 Mean, depth \\u2248 {d:.2f}{z.units}'.format(d=dep[0], z=dep))", "_____no_output_____" ] ], [ [ "##Velocity QuiverPlots\n", "_____no_output_____" ], [ "I will upload the datasets of May 4th, 2015 with the u, v and w velocity components. The tracers, u, v and w all have different grid box sizes in x, y and depth. 
In order to align the u and v velocities with the tracers grod point as a reference we will use the unstagger function within viz_tools.", "_____no_output_____" ] ], [ [ "filenameu='/data/dlatorne/MEOPAR/SalishSea/nowcast/04may15/SalishSea_1d_20150504_20150504_grid_U.nc'\nfilenamev='/data/dlatorne/MEOPAR/SalishSea/nowcast/04may15/SalishSea_1d_20150504_20150504_grid_V.nc'\nfilenamew='/data/dlatorne/MEOPAR/SalishSea/nowcast/04may15/SalishSea_1d_20150504_20150504_grid_W.nc'\nbathy=nc.Dataset('/data/dlatorne/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')\n\nuvel=nc.Dataset(filenameu)\nvvel=nc.Dataset(filenamev)\nwvel=nc.Dataset(filenamew)", "_____no_output_____" ], [ "ugrid= uvel.variables['vozocrtx']\nvgrid= vvel.variables['vomecrty']\ndepv= vvel.variables['depthv']\ndepu=uvel.variables['depthu']\ndepw=wvel.variables['depthw']\n\nvgrid_mask=np.ma.masked_values(vgrid[0,0,:,:],0)\nugrid_mask=np.ma.masked_values(ugrid[0,0,:,:],0)\nu, v=viz_tools.unstagger(ugrid_mask,vgrid_mask)\n\ndepv[:]==depw[:]", "_____no_output_____" ], [ "dep[:]", "_____no_output_____" ], [ "ugrid.coordinates", "_____no_output_____" ] ], [ [ "For the next plot I will use the plot_land_mask function within viz_tools in order to fill the land area as solid polygons. This uses the bathymetry file. 
Quiver auto-scaling capability increases the size of the arrows when you diminish the amount of vectors you are plotting.", "_____no_output_____" ] ], [ [ "zlevel = 0\ny_slice = np.arange(200, 320)\nx_slice = np.arange(200, 320)\n\narrow_step = 4\ny_slice_a = y_slice[::arrow_step]\nx_slice_a = x_slice[::arrow_step]\n\nugrid_tzyx = np.ma.masked_values(ugrid[0, zlevel, y_slice_a, x_slice_a], 0)\nvgrid_tzyx = np.ma.masked_values(vgrid[0, zlevel, y_slice_a, x_slice_a], 0)\nu, v = viz_tools.unstagger(ugrid_tzyx, vgrid_tzyx)\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\n#\nax.quiver(x_slice_a[1:], y_slice_a[1:], u,v, pivot='mid')\nviz_tools.plot_land_mask(ax, bathy , xslice=x_slice, yslice=y_slice)\n\nax.set_xlim(x_slice[0], x_slice[-1])\nax.set_ylim(y_slice[0], y_slice[-1])\nax.grid()\n", "_____no_output_____" ], [ "x_slice=np.arange(ugrid.shape[3])\ny_slice=np.arange(ugrid.shape[2])\n\nugrid_tzyx = np.ma.masked_values(ugrid[0,0,:,:], 0)\nvgrid_tzyx = np.ma.masked_values(vgrid[0,0,:,:], 0)\nu, v = viz_tools.unstagger(ugrid_tzyx, vgrid_tzyx)\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\n\nax.quiver(x_slice[1::3], y_slice[1::3], u[::3,::3],v[::3,::3], pivot='mid')\nviz_tools.plot_land_mask(ax, bathy, xslice=x_slice, yslice=y_slice, color='burlywood')", "_____no_output_____" ] ], [ [ "The quiver of the daily average shows the currents moving alot faster though Haro Stait and near the Fraser River. 
However, the arrows and the detailed flow are not visible in this plot.", "_____no_output_____" ], [ "#Velocity Quiver with Colour map\nWe will now apply a colour map that corresponds to the velocity magnitude.", "_____no_output_____" ] ], [ [ "y_slice_zoom=np.arange(350,470)\nx_slice_zoom=np.arange(250,360)\n\nu_zoom=u[350:470,250:360]\nv_zoom=v[350:470,250:360]\n\nspeed=np.sqrt(np.square(u_zoom)+np.square(v_zoom))\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\nviz_tools.set_aspect(ax)\n\nax.quiver(x_slice_zoom[::3], y_slice_zoom[::3], u_zoom[::3,::3],v_zoom[::3,::3], speed[::3,::3], pivot='mid', cmap='Reds', width=0.005)\nviz_tools.plot_land_mask(ax, bathy, xslice=x_slice_zoom, yslice=y_slice_zoom, color='burlywood')\nax.set_xlim(x_slice_zoom[0],x_slice_zoom[-1])\nax.set_ylim(y_slice_zoom[0],y_slice_zoom[-1])", "_____no_output_____" ] ], [ [ "#Vertical Plane Plots\n\n##Velocities on Vertical Plane\nThe figure will contain two subplots. One is a vertical depth profile which will demonstrate the average daily velocity on May 4th 2015 through the water column across a cross sectional area of the Strait of Georgia. The second will show where the transect was taken and display the surface current of the surrounding area. ", "_____no_output_____" ] ], [ [ "fig, (axl, axr)= plt.subplots(1,2,figsize=(16,8))\nlcolour='burlywood'\n\nt=0 #The daily average only has one time\nzmax=37 #The vertical profile will go down to the 41st depth bin... 
it only has 40...\nycross=416 #The cross sectional area will be taken along the 416th index of y\n\nx_slice_cross=np.arange(200, 320)\ntimestamp=nc_tools.timestamp(vvel,t)\n\n#Mask and slice\nv_vel=np.ma.masked_values(vgrid[t, :zmax, ycross,x_slice_cross],0)\n\ncmap=plt.get_cmap('bwr')\ncmap.set_bad(lcolour)\nmesh= axl.pcolormesh(x_slice_cross[:],depv[:zmax], v_vel,cmap=cmap, vmin=-0.1, vmax=1)\naxl.invert_yaxis()\ncbar=fig.colorbar(mesh, ax=axl)\n\naxl.set_xlim(x_slice_cross[0], x_slice_cross[-1])\naxl.set_ylim(depv[zmax-2] +10,0)\naxl.grid()\n\n\n#Second subplot\ny_slice=np.arange(350,470)\nx_slice=np.arange(180,360)\n\nu_mask=np.ma.masked_values(ugrid[t,0,y_slice, x_slice],0)\nv_mask=np.ma.masked_values(vgrid[t,0,y_slice, x_slice],0)\n\nu_unstag,v_unstag=viz_tools.unstagger(u_mask,v_mask)\nspeeds=np.sqrt(np.square(u_unstag)+np.square(v_unstag))\nmax_speed=viz_tools.calc_abs_max(speeds)\n\nviz_tools.set_aspect(axr)\naxr.streamplot(x_slice[1:], y_slice[1:], u_unstag, v_unstag, linewidth=6*speeds/max_speed)\nviz_tools.plot_land_mask(axr,bathy,xslice=x_slice, yslice=y_slice,color=lcolour)\naxr.plot(x_slice_cross, ycross*np.ones_like(x_slice_cross), linestyle='solid', linewidth=3, color='black', label='Section Line')\naxr.legend(loc='best', fancybox=True)", "_____no_output_____" ] ], [ [ "##Salinity Contour Mesh\nThe last plot I will create in this notebook will be of the vertical salinity all along the Georgia Strait following the deepest cross-section.", "_____no_output_____" ] ], [ [ "#Loading the array with the deepest point of the strait.\nthalweg=np.loadtxt('/data/dlatorne/MEOPAR/tools/analysis_tools/thalweg.txt',dtype=int, unpack=True)\nfig,(axl,axcb,axr)=plt.subplots(1,3,figsize=(16,8))\n\naxl.set_axis_bgcolor(lcolour)\naxr.set_axis_bgcolor(lcolour)\naxl.set_position((0.125, 0.125, 0.6, 0.775))\naxcb.set_position((0.73, 0.125, 0.02, 0.775))\naxr.set_position((0.83, 0.125, 0.2, 0.775))\n\n#First we will do the map with the thalweg line on 
it\nviz_tools.set_aspect(axr)\ncmap=plt.get_cmap('winter_r')\ncmap.set_bad(lcolour)\nbathym=bathy.variables['Bathymetry']\nx_slice=np.arange(bathym.shape[1])\ny_slice=np.arange(200,800)\naxr.pcolormesh(x_slice, y_slice, bathym[y_slice, x_slice], cmap=cmap)\naxr.plot(thalweg[1], thalweg[0], linestyle='-', marker='+', color='red', label='Thalweg Points')\naxr.legend(loc='best', fancybox=True)\naxr.grid()\n\n#Salinity contour plot\nsalmin=26\nsalmax=34\ndels=0.5\n\ncmap=plt.get_cmap('rainbow')\ncmap.set_bad(lcolour)\nsal_0=sal[0,:,thalweg[0],thalweg[1]]\nsal_mask=np.ma.masked_values(sal_0,0)\nx,z=np.meshgrid(np.arange(thalweg.shape[1]), dep)\nmesh=axl.pcolormesh(x,z,sal_mask, cmap=cmap, vmin=salmin, vmax=salmax)\ncbar=plt.colorbar(mesh,cax=axcb)\nclines=axl.contour(x,z,sal_mask,np.arange(salmin,salmax,dels), colors='black')\naxl.clabel(clines, fmt='%1.1f', inline=True)\naxl.invert_yaxis()\naxl.set_xlim(0,thalweg[0][-1])\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb644c28163afa67fc6630215868ba2070e67ac
608,869
ipynb
Jupyter Notebook
experiments/pert_cost_trajectories_gaussian_vs_uniform_noise.ipynb
anonymous2398384/adversarial-examples-and-where-to-find-them
11a17e603e014ea638c2136c550b67d2f5543827
[ "BSD-3-Clause" ]
6
2020-04-24T11:20:19.000Z
2020-06-28T21:50:13.000Z
experiments/pert_cost_trajectories_gaussian_vs_uniform_noise.ipynb
anonymous2398384/adversarial-examples-and-where-to-find-them
11a17e603e014ea638c2136c550b67d2f5543827
[ "BSD-3-Clause" ]
null
null
null
experiments/pert_cost_trajectories_gaussian_vs_uniform_noise.ipynb
anonymous2398384/adversarial-examples-and-where-to-find-them
11a17e603e014ea638c2136c550b67d2f5543827
[ "BSD-3-Clause" ]
4
2020-04-27T12:45:33.000Z
2020-06-04T06:05:42.000Z
2,166.793594
598,468
0.957052
[ [ [ "# Perturbation cost trajectories for gaussian noise of different sizes vs uniform noise of different sizes", "_____no_output_____" ] ], [ [ "import os\nos.chdir(\"../\")\nimport sys\nimport json\nfrom argparse import Namespace\nimport numpy as np\nfrom sklearn import metrics\nfrom sklearn.metrics import pairwise_distances as dist\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(context='paper')\n\nimport provable_robustness_max_linear_regions.data as dt\nfrom generate_perturbation_cost_trajectories import calculate_perturbation_cost_data\nfrom utils import NumpyEncoder", "_____no_output_____" ] ], [ [ "## Plot settings:", "_____no_output_____" ] ], [ [ "SMALL_SIZE = 14\nMEDIUM_SIZE = 18\nBIGGER_SIZE = 26\n\nplt.rc('font', size=SMALL_SIZE) # controls default text sizes\nplt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title\nplt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels\nplt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize\nplt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title\nplt.rc('text', usetex=True)\n\n# dictionary that maps color string to 'good looking' seaborn colors that are easily distinguishable\ncolors = {\n \"orange\": sns.xkcd_rgb[\"yellowish orange\"],\n \"red\": sns.xkcd_rgb[\"pale red\"],\n \"green\": sns.xkcd_rgb[\"medium green\"],\n \"blue\": sns.xkcd_rgb[\"denim blue\"],\n \"yellow\": sns.xkcd_rgb[\"amber\"],\n \"purple\": sns.xkcd_rgb[\"dusty purple\"],\n \"cyan\": sns.xkcd_rgb[\"cyan\"]\n}", "_____no_output_____" ] ], [ [ "## Calculate perturbation cost data:\nEstimated runtime (if no file with data is present): 12 hours", "_____no_output_____" ] ], [ [ "def load_from_json(file_name):\n\n if not os.path.exists(\"res/\" + file_name + \".json\"):\n return None\n else:\n with open(\"res/\" + file_name + \".json\", 'r') as fp:\n 
return json.load(fp)\n \ndef save_to_json(dictionary, file_name):\n \n if not os.path.exists(\"res\"):\n os.makedirs(\"res\")\n\n with open(\"res/\" + file_name + \".json\", 'w') as fp:\n json.dump(dictionary, fp, cls = NumpyEncoder)\n\nn_points = 1000\nperturbation_cost_data = load_from_json(\"pc_croce_adversarial_noise_plus_gaussian_noise_n_points={}\".format(n_points))\n\ncroce_model_paths = [\"provable_robustness_max_linear_regions/models/plain/2019-02-24 00:50:45 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.0 lr=0.001 epoch=100.mat\",\"provable_robustness_max_linear_regions/models/mmr+at/2019-02-17 01:54:16 dataset=mnist nn_type=cnn_lenet_small p_norm=inf lmbd=0.5 gamma_rb=0.2 gamma_db=0.2 ae_frac=0.5 epoch=100.mat\", \"provable_robustness_max_linear_regions/models/mmr+at/2019-02-24 00:04:27 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=6.0 gamma_rb=0.45 gamma_db=0.45 ae_frac=0.5 lr=5e-05 epoch=50.mat\"]\n\nif not perturbation_cost_data:\n \n perturbation_cost_data = dict()\n \n for model_path in croce_model_paths:\n args = Namespace()\n\n args.dataset = \"mnist\"\n args.n_points = n_points\n args.model_path = model_path\n args.adversarial_model_paths = [model_path]\n args.nn_type = \"cnn\"\n args.norms = [\"inf\", \"2\"]\n args.noise_types = [\"gaussian\", \"uniform\"]\n args.noise_sizes = [0.1, 0.3, 0.6]\n args.splits = [{\"inf\": [0.0, np.inf], \"2\": [0.0, np.inf]}]\n args.save = False\n args.plot = False\n \n file_name = model_path.split(\"/\")[3]\n model_name = file_name.split(\".mat\")[0]\n perturbation_cost_data[model_name] = calculate_perturbation_cost_data(args)\n \n save_to_json(perturbation_cost_data, \"pc_croce_adversarial_noise_plus_gaussian_noise_n_points={}\".format(n_points))", "_____no_output_____" ] ], [ [ "## Plot:", "_____no_output_____" ] ], [ [ "# name to save the plot\nsave_name = \"fig_pc_comparing_noise_levels\"\n\nmodel_names = [\n \"2019-02-17 01:54:16 dataset=mnist 
nn_type=cnn_lenet_small p_norm=inf lmbd=0.5 gamma_rb=0.2 gamma_db=0.2 ae_frac=0.5 epoch=100\",\n \"2019-02-24 00:04:27 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=6.0 gamma_rb=0.45 gamma_db=0.45 ae_frac=0.5 lr=5e-05 epoch=50\"\n]\nmodel_name_dict = {\n \"2019-02-24 00:50:45 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.0 lr=0.001 epoch=100\":\n \"Training: Standard Training\",\n \"2019-02-17 01:54:16 dataset=mnist nn_type=cnn_lenet_small p_norm=inf lmbd=0.5 gamma_rb=0.2 gamma_db=0.2 ae_frac=0.5 epoch=100\":\n \"Training: MMR+AT\\nThreat Model: $\\ell_\\infty(\\epsilon=0.1)$\",\n \"2019-02-24 00:04:27 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=6.0 gamma_rb=0.45 gamma_db=0.45 ae_frac=0.5 lr=5e-05 epoch=50\":\n \"Training: MMR+AT\\nThreat Model: $\\ell_2(\\epsilon=0.3)$\"\n}\nbase_color_dict_for_noise_type = {\n \"noise\": {\n \"inf\": {\n \"gaussian\": colors[\"red\"],\n \"uniform\": colors[\"blue\"]\n },\n \"2\": {\n \"gaussian\": colors[\"red\"],\n \"uniform\": colors[\"blue\"]\n }\n }\n}\n\n# number of model types and parameter combinations\nn_cols = 2\nn_rows = 1\n\nfig, ax = plt.subplots(n_rows,\n n_cols,\n figsize=(6 * n_cols, 5 * n_rows),\n dpi=400)\n\nlinestyle = \"-\"\nbase_color_dict = {\n \"adv\": {\n \"inf\": colors[\"blue\"],\n \"2\": colors[\"green\"]\n },\n \"noise\": {\n \"inf\": colors[\"red\"],\n \"2\": colors[\"yellow\"]\n }\n}\nnorm_to_latex = {\"inf\": \"\\infty\", \"2\": \"2\"}\n\nmodel_name = \"2019-02-24 00:50:45 dataset=mnist nn_type=cnn_lenet_small p_norm=2 lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.0 lr=0.001 epoch=100\"\npert_costs_data = perturbation_cost_data[model_name][model_name]\n\nfor i, norm in enumerate([\"inf\", \"2\"]):\n pert_cost_norm = norm\n perturbation_norm = norm\n noise_types = [\"uniform\", \"gaussian\"]\n noise_sizes = [\"0.1\", \"0.3\", \"0.6\"]\n split = json.dumps({\"inf\": [0.0, np.inf], \"2\": [0.0, np.inf]})\n\n for noise_type in noise_types:\n for 
noise_size in noise_sizes:\n pert_costs_noise = np.array(\n pert_costs_data[pert_cost_norm][perturbation_norm]\n [noise_type][noise_size][split][\"pert_costs_2\"])\n\n linestyle = \"-\"\n color = base_color_dict_for_noise_type[\"noise\"][\n perturbation_norm][noise_type]\n ax[i].plot(np.mean(pert_costs_noise, axis=0),\n c=color,\n linestyle=linestyle,\n label=\"{} noise, size $\\ell_{}={}$\".format(\n noise_type, norm_to_latex[perturbation_norm],\n noise_size),\n alpha=0.4 + float(noise_size))\n\n ax[i].set_xlim(0.0, 8.0)\n\n ax[i].set_xticks(np.arange(0, 9.0, 1.0))\n ax[i].set_xticklabels([\n \"INPUT\", \"CONV\", \"RELU\", \"CONV\", \"RELU\", \"FC\", \"RELU\", \"FC\",\n \"ARGMAX\"\n ])\n\n ax[i].set_xlabel(\"layer\")\n\n ax[i].legend()\n\nfig.suptitle(\n \"mean perturbation cost curves for different noise sizes in $\\ell_\\infty$ and $\\ell_2$ norm\"\n)\n\nax[0].set_ylabel(\"perturbation costs\")\nax[0].set_ylim(0.0, 2.0)\nax[1].set_ylim(0.0, 0.1)\n\nfig.tight_layout()\nfig.subplots_adjust(top=0.88)\nfig.savefig('res/{}.pdf'.format(save_name))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb6560b562b4845164b3310f78b9083aaae993b
43,628
ipynb
Jupyter Notebook
line-follower/src/test_junk_lstm/random_lstm_data_formatting.ipynb
project-memory/line-follower
df03947b90b351f62c46ca59767bc234a73c3776
[ "MIT" ]
null
null
null
line-follower/src/test_junk_lstm/random_lstm_data_formatting.ipynb
project-memory/line-follower
df03947b90b351f62c46ca59767bc234a73c3776
[ "MIT" ]
3
2017-04-13T20:50:07.000Z
2017-04-17T20:11:43.000Z
line-follower/src/test_junk_lstm/random_lstm_data_formatting.ipynb
no-fire/line-follower
df03947b90b351f62c46ca59767bc234a73c3776
[ "MIT" ]
null
null
null
107.458128
16,764
0.8575
[ [ [ "## Imports", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport seaborn as sns\nimport numpy as np\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout, Reshape\nfrom keras.layers.recurrent import LSTM\nfrom keras.layers import Convolution2D, MaxPooling2D", "_____no_output_____" ] ], [ [ "## Constants", "_____no_output_____" ] ], [ [ "img_rows, img_cols = 28, 28\nin_shape = (img_rows, img_cols, 1)\nbatch_size = 256\nnb_epoch = 3", "_____no_output_____" ] ], [ [ "## Data", "_____no_output_____" ] ], [ [ "X_train = np.random.rand(10000, 28, 28, 1)\nY_train = np.random.rand(10000, 1)\nX_test = np.random.rand(1000, 28, 28, 1)", "_____no_output_____" ], [ "sns.heatmap(X_train[0].reshape(28,28), cmap='gray')", "_____no_output_____" ] ], [ [ "## Train Data", "_____no_output_____" ] ], [ [ "X_train = np.random.rand(10000, 28, 28, 1)\nsns.heatmap(X_train[0].reshape(28, 28), cmap='gray')", "_____no_output_____" ], [ "Y_train = np.random.rand(10000, 1)\nY_train[0]", "_____no_output_____" ] ], [ [ "## Test Data", "_____no_output_____" ] ], [ [ "X_test = np.random.rand(1024, 28, 28, 1)", "_____no_output_____" ] ], [ [ "# Model", "_____no_output_____" ] ], [ [ "model = Sequential()\nmodel.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu', input_shape=in_shape))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Flatten())", "_____no_output_____" ], [ "model.add(Reshape((1, -1)))\nmodel.add(LSTM(100)) #return_sequences=True\nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nmodel.summary()", "____________________________________________________________________________________________________\nLayer (type) Output Shape Param # 
Connected to \n====================================================================================================\nconvolution2d_1 (Convolution2D) (None, 28, 28, 32) 320 convolution2d_input_1[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_1 (MaxPooling2D) (None, 14, 14, 32) 0 convolution2d_1[0][0] \n____________________________________________________________________________________________________\nconvolution2d_2 (Convolution2D) (None, 14, 14, 64) 18496 maxpooling2d_1[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_2 (MaxPooling2D) (None, 7, 7, 64) 0 convolution2d_2[0][0] \n____________________________________________________________________________________________________\nconvolution2d_3 (Convolution2D) (None, 7, 7, 128) 73856 maxpooling2d_2[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_3 (MaxPooling2D) (None, 3, 3, 128) 0 convolution2d_3[0][0] \n____________________________________________________________________________________________________\nflatten_1 (Flatten) (None, 1152) 0 maxpooling2d_3[0][0] \n____________________________________________________________________________________________________\nreshape_1 (Reshape) (None, 1, 1152) 0 flatten_1[0][0] \n____________________________________________________________________________________________________\nlstm_1 (LSTM) (None, 100) 501200 reshape_1[0][0] \n____________________________________________________________________________________________________\ndense_1 (Dense) (None, 1) 101 lstm_1[0][0] \n====================================================================================================\nTotal params: 593,973\nTrainable params: 593,973\nNon-trainable params: 0\n____________________________________________________________________________________________________\n" ] ], [ 
[ "## Train model\nI think it is just learning to output .5 to minimize loss as we are using completely random training data", "_____no_output_____" ] ], [ [ "model.fit(X_train, Y_train, # specify training data\n batch_size=1, # use this many images per mini-batch - memory dependent - 256\n nb_epoch=1, # go through my training data this number of times - 3\n shuffle=False,\n verbose=True # please print things \n )", "Epoch 1/1\n10000/10000 [==============================] - 179s - loss: 0.0864 \n" ] ], [ [ "### Predictions", "_____no_output_____" ] ], [ [ "predictions = model.predict(X_test)", "_____no_output_____" ], [ "predictions[0]", "_____no_output_____" ], [ "predictions.mean()", "_____no_output_____" ], [ "Y_train.mean()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cbb669a0ae32cb8dd405f9ea3dfcc0744e4c30b5
264,150
ipynb
Jupyter Notebook
src/image_distort.ipynb
BRIQUE-Inc/tiny_imagenet
566378fac2dfedbc719df7403f42ffd92f1340f3
[ "MIT" ]
53
2017-09-13T14:54:44.000Z
2022-02-17T12:18:19.000Z
src/image_distort.ipynb
BRIQUE-Inc/tiny_imagenet
566378fac2dfedbc719df7403f42ffd92f1340f3
[ "MIT" ]
1
2019-01-03T07:33:57.000Z
2019-11-22T03:50:57.000Z
src/image_distort.ipynb
BRIQUE-Inc/tiny_imagenet
566378fac2dfedbc719df7403f42ffd92f1340f3
[ "MIT" ]
26
2017-10-16T10:39:51.000Z
2021-03-01T16:15:11.000Z
2,079.92126
261,084
0.949688
[ [ [ "# View TensorFlow Image Distortions\n\nIt is standard practice to \"augment\" training data with distorted images: stretching, cropping, flipping, saturation and hue. TensorFlow has several built-in image distortions. This is a short notebook to view images with these distortions. This is useful for selecting reasonable ranges for random distortions.\n\n\nPython Notebook by Patrick Coady: [Learning Artificial Intelligence](https://learningai.io/)", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport glob\nimport matplotlib.pyplot as plt\nimport random\nimport scipy.ndimage\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10, 6)", "_____no_output_____" ], [ "def distort(filename):\n \"\"\"Apply image distortions\"\"\"\n with tf.Graph().as_default():\n file = tf.read_file(filename)\n img = tf.image.decode_jpeg(file, 3)\n img = tf.image.adjust_saturation(img, 0.5)\n# img = tf.image.adjust_hue(img, -0.05)\n with tf.Session() as sess:\n dist_img = sess.run(img)\n \n return dist_img", "_____no_output_____" ], [ "filenames = glob.glob('../tiny-imagenet-200/test/images/*.JPEG')\npick_8 = random.sample(filenames, 8)", "_____no_output_____" ], [ "count = 0\nfor filename in pick_8:\n count += 1\n plt.subplot(4, 4, count)\n img = scipy.ndimage.imread(filename)\n plt.imshow(img)\n plt.axis('off')\n img_distort = distort(filename)\n count += 1\n plt.subplot(4, 4, count)\n plt.imshow(img_distort)\n plt.axis('off')\n\nplt.show() ", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
cbb6a5585735f2f8b9dd0d0351fddeb3f8b7fbdd
3,498
ipynb
Jupyter Notebook
Advanced Data Science with IBM Specialization/Applied AI with DL/warm up code.ipynb
ngngocsonan2610/coursera
2b813d7514ccb2bf931d8b030dcb779f7ac746d3
[ "MIT" ]
null
null
null
Advanced Data Science with IBM Specialization/Applied AI with DL/warm up code.ipynb
ngngocsonan2610/coursera
2b813d7514ccb2bf931d8b030dcb779f7ac746d3
[ "MIT" ]
null
null
null
Advanced Data Science with IBM Specialization/Applied AI with DL/warm up code.ipynb
ngngocsonan2610/coursera
2b813d7514ccb2bf931d8b030dcb779f7ac746d3
[ "MIT" ]
null
null
null
53
718
0.532304
[ [ [ "# Warmup Assignment\n\nIn this exercise you will just submit a ping to the grader in order to make sure the exercise and grading environment is set up correctly.", "_____no_output_____" ], [ "\n\nWe have to install a little library in order to submit to Coursera\n", "_____no_output_____" ] ], [ [ "!rm -f rklib.py\n!wget https://raw.githubusercontent.com/IBM/coursera/master/rklib.py", "Waiting for a Spark session to start...\nSpark Initialization Done! ApplicationId = app-20200302031439-0000\nKERNEL_ID = a6622b3f-be5e-42ec-b31c-45fa8131d118\n--2020-03-02 03:14:42-- https://raw.githubusercontent.com/IBM/coursera/master/rklib.py\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.232.8.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.232.8.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 2540 (2.5K) [text/plain]\nSaving to: 'rklib.py'\n\n100%[======================================>] 2,540 --.-K/s in 0s \n\n2020-03-02 03:14:42 (42.2 MB/s) - 'rklib.py' saved [2540/2540]\n\n" ] ], [ [ "Please provide your email address and obtain a submission token on the grader’s submission page on Coursera, then execute the cell", "_____no_output_____" ] ], [ [ "from rklib import submit\nimport json\n\nkey = \"tlOUM_XREeerCBIEgYad1g\"\npart = \"HA0XG\"\nemail = \"[email protected]\"###_YOUR_CODE_GOES_HERE_###\ntoken = \"8ovjUS5lJwOss5wH\"###_YOUR_CODE_GOES_HERE_### #you can obtain it from the grader page on Coursera (have a look here if you need more information on how to obtain the token https://youtu.be/GcDo0Rwe06U?t=276)\n\n\nsubmit(email, token, key, part, [part], json.dumps(23))", "Submission successful, please check on the coursera grader page for the status\n-------------------------\n{\"elements\":[{\"itemId\":\"sLXV2\",\"id\":\"tE4j0qhMEeecqgpT6QjMdA~sLXV2~iqUR0Vw0EeqfDhLKHH-DlQ\",\"courseId\":\"tE4j0qhMEeecqgpT6QjMdA\"}],\"paging\":{},\"linked\":{}}\n-------------------------\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb6aeba3076ff5886a80dcb0e8260c7e530ea33
70,472
ipynb
Jupyter Notebook
ETL-cna_matrix.ipynb
sinnamone/PCAprofiler
5ccbfbf8e3859da9c2c52fbc48ced05d3bc68840
[ "MIT" ]
null
null
null
ETL-cna_matrix.ipynb
sinnamone/PCAprofiler
5ccbfbf8e3859da9c2c52fbc48ced05d3bc68840
[ "MIT" ]
null
null
null
ETL-cna_matrix.ipynb
sinnamone/PCAprofiler
5ccbfbf8e3859da9c2c52fbc48ced05d3bc68840
[ "MIT" ]
null
null
null
38.635965
138
0.290811
[ [ [ "!ls", "ETL-mRNA_vst_normalized.txt .ipynb\r\nETL-mutmatrix.txt.ipynb\r\nExport_fromR.R\r\nGSETS.RData\r\nGSETS_RES.RData\r\n\u001b[34mGSETS_RES_split_object\u001b[m\u001b[m\r\n\u001b[34mGSET_split_object\u001b[m\u001b[m\r\nUntitled.ipynb\r\ncna_matrix.txt\r\nmRNA_vst_normalized.rar\r\nmRNA_vst_normalized.txt\r\nmRNA_vst_normalized_genesubset.rar\r\nmRNA_vst_normalized_genesubset.txt\r\nmRNA_vst_normalized_transposed.txt\r\nmutmatrix.txt\r\nmutmatrix_transposed.txt\r\nsample_pca_and_annotations.xlsx\r\nsample_pca_annotations_sheet_gene2ptime_correlations.txt\r\nsample_pca_annotations_sheet_gene_annotations.txt\r\nsample_pca_annotations_sheet_pca_and_pseudotime.txt\r\nsample_pca_annotations_sheet_sample_annotations.txt\r\n" ], [ "import pandas as pd", "_____no_output_____" ], [ "df = pd.read_csv(\"cna_matrix.txt\",sep=\"\\t\",header=0,index_col=0)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df2 = df.T", "_____no_output_____" ], [ "df2.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 587 entries, SRR3018011 to 140815_UNC9-SN296_0431_BC5AG8ACXX_CTTGTA_L002\nColumns: 24939 entries, AR to RN7SL91P\ndtypes: float64(24939)\nmemory usage: 111.7+ MB\n" ], [ "df2.isna()", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.shape[0] - df.dropna().shape[0]", "_____no_output_____" ], [ "df.dropna().shape", "_____no_output_____" ], [ "df2 = df2.fillna(-1)", "_____no_output_____" ], [ "df2 = df2.astype(\"int\")", "_____no_output_____" ], [ "df2.to_csv(\"cna_matrix_transposed.txt\",index=True,header=True,sep=\"\\t\") ", "_____no_output_____" ], [ "df2.shape", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "df3 = pd.read_csv(\"/Users/simone/Documents/dashboard/GSETS_RES_split_object/GSETS_RES_merge.tsv\",sep=\"\\t\",header=0,index_col=1)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb6bc860c1db8614be2231da57f83ab9420ea6a
278,016
ipynb
Jupyter Notebook
notebooks/outdated/Writing Patents Revised-Copy1.ipynb
bonchae/recurrent-neural-networks
111cbf2c685b1633a343d52e59f5bf6fa2fc6ff9
[ "MIT" ]
363
2018-10-27T15:39:58.000Z
2022-03-29T09:57:36.000Z
notebooks/outdated/Writing Patents Revised-Copy1.ipynb
aawanRahman/recurrent-neural-networks
8da0e37a2360ba281a07c65f438eb5ccd8a930e0
[ "MIT" ]
9
2018-11-14T13:29:02.000Z
2021-09-22T11:36:56.000Z
notebooks/outdated/Writing Patents Revised-Copy1.ipynb
aawanRahman/recurrent-neural-networks
8da0e37a2360ba281a07c65f438eb5ccd8a930e0
[ "MIT" ]
297
2018-10-27T17:50:48.000Z
2022-03-28T23:51:04.000Z
92.456269
86,844
0.796573
[ [ [ "# Introduction: Writing Patent Abstracts with a Recurrent Neural Network\n\nThe purpose of this notebook is to develop a recurrent neural network using LSTM cells that can generate patent abstracts. We will look at using a _word level_ recurrent neural network and _embedding_ the vocab, both with pre-trained vectors and training our own embeddings. We will train the model by feeding in as the features a long sequence of words (for example 50 words) and then using the next word as the label. Over time, the network will (hopefully) learn to predict the next word in a given sequence and we can use the model predictions to generate entirely novel patent abstracts.\n\n## Approach \n\nThe approach to solving this problem is:\n\n1. Read in training data: thousands of \"neural network\" patents\n2. Convert patents to integer sequences: `tokenization`\n3. Create training dataset using next word following a sequence as label\n4. Build a recurrent neural network using word embeddings and LSTM cells\n5. Load in pre-trained embeddings\n6. Train network to predict next word from sequence\n7. Generate new abstracts by feeding network a seed sequence\n8. Repeat steps 2 - 7 using pre-trained embeddings\n9. Try different model architecture to see if performance improves\n10. For fun, create a simple game where we must guess if the output is human or computer! \n\nEach of these steps is relatively simple by itself, so don't be intimidated. We'll walk through the entire process and at the end will be able to have a working application of deep learning! 
", "_____no_output_____" ] ], [ [ "# Set up IPython to show all outputs from a cell\nfrom IPython.core.interactiveshell import InteractiveShell\n\nInteractiveShell.ast_node_interactivity = 'all'\n\nimport warnings\nwarnings.filterwarnings('ignore', category = RuntimeWarning)\n\nRANDOM_STATE = 50\nEPOCHS = 150\nBATCH_SIZE = 2048\nTRAINING_LENGTH = 50\nTRAIN_FRACTION = 0.7\nVERBOSE = 0\nSAVE_MODEL = True", "_____no_output_____" ], [ "from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())", "[name: \"/device:CPU:0\"\ndevice_type: \"CPU\"\nmemory_limit: 268435456\nlocality {\n}\nincarnation: 6469279316292172199\n, name: \"/device:GPU:0\"\ndevice_type: \"GPU\"\nmemory_limit: 11281553818\nlocality {\n bus_id: 1\n links {\n }\n}\nincarnation: 8713867755173837630\nphysical_device_desc: \"device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7\"\n]\n" ] ], [ [ "## Read in Data \n\nOur data consists of patent abstracts obtained by searching for the term \"neural networks\" on the [patentsview query](http://www.patentsview.org/querydev) web interface. The data can be downloaded in a number of formats and can include a number of patent attributes (I only kept 4). ", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\n# Read in data\ndata = pd.read_csv('../data/neural_network_patent_query.csv', parse_dates = ['patent_date'])\n\n# Extract abstracts\noriginal_abstracts = list(data['patent_abstract'])\nlen(original_abstracts)\n\ndata.head()", "_____no_output_____" ] ], [ [ "### Brief Data Exploration\n\nThis data is extremely clean, which means we don't need to do any manual munging. We can still make a few simple plots out of curiosity though! 
", "_____no_output_____" ] ], [ [ "data['patent_abstract'][100]", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.style.use('fivethirtyeight')\n\ndata['year-month'] = [pd.datetime(year, month, 1) for year, month in zip(data['patent_date'].dt.year,\n data['patent_date'].dt.month)]\n\nmonthly = data.groupby('year-month')['patent_number'].count().reset_index()\n\nmonthly.set_index('year-month')['patent_number'].plot(figsize = (16, 8))\nplt.ylabel('Number of Patents'); plt.xlabel('Date'); \nplt.title('Neural Network Patents over Time');", "_____no_output_____" ], [ "monthly.groupby(monthly['year-month'].dt.year)['patent_number'].sum().plot.bar(color = 'red', edgecolor = 'k',\n figsize = (12, 6))\nplt.xlabel('Year'); plt.ylabel('Number of Patents'); plt.title('Neural Network Patents by Year');", "_____no_output_____" ] ], [ [ "The distribution of patents over time is interesting. I would expect 2018 to come out on top once the patents have been accepted. ", "_____no_output_____" ], [ "## Data Cleaning \n\nOur preprocessing is going to involve using a `Tokenizer` to convert the patents from sequences of words (strings) into sequences of integers. We'll get to that in a bit, but even with neural networks, having a clean dataset is paramount. The data quality is already high, but there are some idiosyncrasies of patents as well as general text improvements to make. For example, let's consider the following two sentences.\n\n>'This is a short sentence (1) with one reference to an image. This next sentence, while non-sensical, does not have an image and has two commas.'\n\nIf we choose to remove all punctuation with the default Tokenizer settings, we get the following.", "_____no_output_____" ] ], [ [ "from keras.preprocessing.text import Tokenizer\n\nexample = 'This is a short sentence (1) with one reference to an image. 
This next sentence, while non-sensical, does not have an image and has two commas.'\ntokenizer = Tokenizer(filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n')\ntokenizer.fit_on_texts([example])\ns = tokenizer.texts_to_sequences([example])[0]\n' '.join(tokenizer.index_word[i] for i in s)", "Using TensorFlow backend.\n" ] ], [ [ "This removes all the punctuation and now we have a random number in the sentence. If we choose to not remove the punctuation, the sentence looks better, but then we have some interesting words in the vocabulary.", "_____no_output_____" ] ], [ [ "tokenizer = Tokenizer(filters='\"#$%&*+/:;<=>?@[\\\\]^_`{|}~\\t\\n')\ntokenizer.fit_on_texts([example])\ns = tokenizer.texts_to_sequences([example])[0]\n' '.join(tokenizer.index_word[i] for i in s)\ntokenizer.word_index.keys()", "_____no_output_____" ] ], [ [ "Notice that `image` and `image.` are classified as distinct words. This is because the period is attached to one and not the other and the same with `sentence` and `sentence,`. To alleviate this issue, we can add spaces around the punctuation using regular expressions. 
We will also remove the image references.", "_____no_output_____" ] ], [ [ "import re\n\ndef format_patent(patent):\n \"\"\"Add spaces around punctuation and remove references to images/citations.\"\"\"\n \n # Add spaces around punctuation\n patent = re.sub(r'(?<=[^\\s0-9])(?=[.,;?])', r' ', patent)\n \n # Remove references to figures\n patent = re.sub(r'\\((\\d+)\\)', r'', patent)\n \n # Remove double spaces\n patent = re.sub(r'\\s\\s', ' ', patent)\n return patent\n\nf = format_patent(example)\nf", "_____no_output_____" ] ], [ [ "Now when we do the tokenization, we get separate entries in the vocab for the punctuation, but _not_ for words with punctuation attached.", "_____no_output_____" ] ], [ [ "tokenizer = Tokenizer(filters='\"#$%&*+/:;<=>?@[\\\\]^_`{|}~\\t\\n')\ntokenizer.fit_on_texts([f])\ns = tokenizer.texts_to_sequences([f])[0]\n' '.join(tokenizer.index_word[i] for i in s)\ntokenizer.word_index.keys()", "_____no_output_____" ] ], [ [ "We no longer have the `image` and `image.` problem but we do have separate symbols for `.` and `,`. This means the network will be forced to learn a representation for these punctuation marks (they are also in the pre-trained embeddings). When we want to get back to the original sentence (without image references) we simply have to remove the spaces.", "_____no_output_____" ] ], [ [ "def remove_spaces(patent):\n \"\"\"Remove spaces around punctuation\"\"\"\n patent = re.sub(r'\\s+([.,;?])', r'\\1', patent)\n \n return patent\n\nremove_spaces(' '.join(tokenizer.index_word[i] for i in s))", "_____no_output_____" ] ], [ [ "We can apply this operation to all of the original abstracts.", "_____no_output_____" ] ], [ [ "formatted = []\n\n# Iterate through all the original abstracts\nfor a in original_abstracts:\n formatted.append(format_patent(a))\n \nlen(formatted)", "_____no_output_____" ] ], [ [ "# Convert Text to Sequences\n\nA neural network cannot process words, so we must convert the patent abstracts into integers. 
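As a quick illustration of this encoding idea, here is a hand-rolled sketch on a made-up five-word text (not the notebook's actual pipeline):

```python
# Hand-rolled sketch of word-to-integer conversion (not the Keras implementation)
text = 'the network learns the mapping'

vocab = {}
for word in text.split():
    # Assign each unseen word the next free integer; 0 stays reserved for padding
    vocab.setdefault(word, len(vocab) + 1)

sequence = [vocab[word] for word in text.split()]
print(vocab)     # {'the': 1, 'network': 2, 'learns': 3, 'mapping': 4}
print(sequence)  # [1, 2, 3, 1, 4]
```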
This is done using the Keras utility `Tokenizer`. By default, this will convert all words to lowercase and remove punctuation. Therefore, our model will not be able to write complete sentences. However, this can be beneficial for a first model because it limits the size of the vocabulary and means that more of the words (converted into tokens) will have pre-trained embeddings.\n\nLater, we will not remove the capitalization and punctuation when we train our own embeddings.\n\n## Features and Labels\n\nThis function takes a few parameters including a training length which is the number of words we will feed into the network as features with the next word the label. For example, if we set `training_length = 50`, then the model will take in 50 words as features and the 51st word as the label. \n\nFor each abstract, we can make multiple training examples by slicing at different points. We can use the first 50 words as features with the 51st as a label, then the 2nd through 51st word as features and the 52nd as the label, then 3rd - 52nd with 53rd as label and so on. 
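The slicing pattern described above can be sketched with a toy sequence and a hypothetical `training_length` of 3 (instead of 50):

```python
# Toy sliding-window slicing: 3 tokens as features, the next token as the label
seq = [10, 11, 12, 13, 14, 15]
training_length = 3

examples = [(seq[i - training_length:i], seq[i])
            for i in range(training_length, len(seq))]
print(examples)
# [([10, 11, 12], 13), ([11, 12, 13], 14), ([12, 13, 14], 15)]
```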
This gives us much more data to train on and the performance of the model is proportional to the amount of training data.", "_____no_output_____" ] ], [ [ "def make_sequences(texts, training_length = 50,\n lower = True, filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n'):\n \"\"\"Turn a set of texts into sequences of integers\"\"\"\n \n # Create the tokenizer object and train on texts\n tokenizer = Tokenizer(lower=lower, filters=filters)\n tokenizer.fit_on_texts(texts)\n \n # Create look-up dictionaries and reverse look-ups\n word_idx = tokenizer.word_index\n idx_word = tokenizer.index_word\n num_words = len(word_idx) + 1\n word_counts = tokenizer.word_counts\n \n print(f'There are {num_words} unique words.')\n \n # Convert text to sequences of integers\n sequences = tokenizer.texts_to_sequences(texts)\n \n # Limit to sequences with more than training length tokens\n seq_lengths = [len(x) for x in sequences]\n over_idx = [i for i, l in enumerate(seq_lengths) if l > (training_length + 20)]\n \n new_texts = []\n new_sequences = []\n \n # Only keep sequences with more than training length tokens\n for i in over_idx:\n new_texts.append(texts[i])\n new_sequences.append(sequences[i])\n \n \n training_seq = []\n labels = []\n \n # Iterate through the sequences of tokens\n for seq in new_sequences:\n \n # Create multiple training examples from each sequence\n for i in range(training_length, len(seq)):\n # Extract the features and label\n extract = seq[i - training_length: i + 1]\n \n # Set the features and label\n training_seq.append(extract[:-1])\n labels.append(extract[-1])\n \n print(f'There are {len(training_seq)} training sequences.')\n \n # Return everything needed for setting up the model\n return word_idx, idx_word, num_words, word_counts, new_texts, new_sequences, training_seq, labels", "_____no_output_____" ] ], [ [ "Now let's see how our function generates data. 
For using pre-trained embeddings, we'll remove a fair amount of the punctuation and lowercase all letters but leave in periods and commas. This is because there are no capitalized words in the pre-trained embeddings but there is some punctuation. Our model will not learn how to capitalize words, but it may learn how to end a sentence and insert commas.", "_____no_output_____" ] ], [ [ "filters = '!\"#$%&()*+/:<=>@[\\\\]^_`{|}~\\t\\n'\nword_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted, \n TRAINING_LENGTH,\n lower = True,\n filters = filters)", "There are 13677 unique words.\nThere are 320881 training sequences.\n" ] ], [ [ "Each patent is now represented as a sequence of integers. Let's look at an example of a few features and the corresponding labels. The label is the next word in the sequence after the first 50 words.", "_____no_output_____" ] ], [ [ "n = 3\nfeatures[n][:10]", "_____no_output_____" ], [ "def find_answer(index):\n \"\"\"Find label corresponding to features for index in training data\"\"\"\n \n # Find features and label\n feats = ' '.join(idx_word[i] for i in features[index])\n answer = idx_word[labels[index]]\n \n print('Features:', feats)\n print('\\nLabel: ', answer)", "_____no_output_____" ], [ "find_answer(n)", "Features: enhances stability in a neural network system that , when used as a track-while-scan system , assigns sensor plots to predicted track positions in a plot track association situation . the barometer neuron functions as a bench-mark or reference system node that equates a superimposed plot and track to a\n\nLabel: zero\n" ], [ "original_abstracts[0]", "_____no_output_____" ], [ "find_answer(100)", "Features: it comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem , such as the number of cities to be visited . the array is prompted by analog voltages representing variables such as distances . 
the processor incorporates two interconnected feedback\n\nLabel:  networks\n" ] ], [ [ "Our patents are no longer correct English, but, by removing capital letters, we do reduce the size of the vocabulary. \n\n__Deciding which pre-processing steps to take in general is the most important aspect of a machine learning project.__", "_____no_output_____" ] ], [ [ "sorted(word_counts.items(), key = lambda x: x[1], reverse = True)[:15]", "_____no_output_____" ] ], [ [ "The most common words make sense in the context of the patents we are using and the general English language.", "_____no_output_____" ], [ "## Training Data\n\nNext we need to take the features and labels and convert them into training and validation data. The following function does this by splitting the data - after random shuffling because the features were made in sequential order - based on the `train_fraction` specified. All the inputs are converted into numpy arrays which is the correct input to a keras neural network.\n\n### Encoding of Labels\n\nOne important step is to convert the labels to one hot encoded vectors because our network will be trained using `categorical_crossentropy` and makes a prediction for each word in the vocabulary (we can train with the labels represented as simple integers, but I found performance was better and training faster when using a one-hot representation of the labels). 
This is done by creating an array of rows of all zeros except for the index of the word which we want to predict - the label - which gets a 1.", "_____no_output_____" ] ], [ [ "from sklearn.utils import shuffle\n\ndef create_train_valid(features, labels, num_words, train_fraction = TRAIN_FRACTION):\n \"\"\"Create training and validation features and labels.\"\"\"\n \n # Randomly shuffle features and labels\n features, labels = shuffle(features, labels, random_state = RANDOM_STATE)\n\n # Decide on number of samples for training\n train_end = int(train_fraction * len(labels))\n \n train_features = np.array(features[:train_end])\n valid_features = np.array(features[train_end:])\n\n train_labels = labels[:train_end]\n valid_labels = labels[train_end:]\n \n # Convert to arrays\n X_train, X_valid = np.array(train_features), np.array(valid_features)\n \n # Using int8 for memory savings\n y_train = np.zeros((len(train_labels), num_words), dtype = np.int8)\n y_valid = np.zeros((len(valid_labels), num_words), dtype = np.int8)\n \n # One hot encoding of labels\n for example_index, word_index in enumerate(train_labels):\n y_train[example_index, word_index] = 1\n \n for example_index, word_index in enumerate(valid_labels):\n y_valid[example_index, word_index] = 1\n \n # Memory management\n import gc\n gc.enable()\n del features, labels, train_features, valid_features, train_labels, valid_labels\n gc.collect()\n \n return X_train, X_valid, y_train, y_valid", "_____no_output_____" ], [ "X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)\nX_train.shape\ny_train.shape", "_____no_output_____" ] ], [ [ "We do want to be careful about using up too much memory. One hot encoding the labels creates massive numpy arrays so I took care to delete the un-used objects from the workspace. 
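A back-of-the-envelope estimate (using the training-split sizes from this notebook) shows why the one-hot matrix is so large:

```python
# One int8 byte per (example, vocabulary-word) cell of the one-hot label matrix
n_examples = 224616  # training examples after the 0.7 split
vocab_size = 13677   # unique words (plus one for padding)

bytes_needed = n_examples * vocab_size
print(f'{bytes_needed / 1e9:.2f} GB')  # 3.07 GB
```

This lines up with the roughly 3 GB that `sys.getsizeof` reports for `y_train` below.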
", "_____no_output_____" ] ], [ [ "import sys\nsys.getsizeof(y_train) / 1e9", "_____no_output_____" ], [ "def check_sizes(gb_min = 1):\n for x in globals():\n size = sys.getsizeof(eval(x))/1e9\n if size > gb_min:\n print(f'Object: {x:10}\\tSize: {size} GB.')\n \ncheck_sizes(gb_min = 1)", "Object: y_train \tSize: 3.072073144 GB.\nObject: y_valid \tSize: 1.316616517 GB.\n" ] ], [ [ "# Pre-Trained Embeddings\n\nRather than training our own word embeddings, a very expensive operation, we can use word embeddings that were trained on a large corpus of words. The hope is that these embeddings will generalize from the training corpus to our needs.\n\nThis code downloads 100-dimensional word embeddings if you don't already have them. There are a number of different pre-trained word embeddings you can find from [Stanford online](https://nlp.stanford.edu/data/).", "_____no_output_____" ] ], [ [ "import os\nfrom keras.utils import get_file\n\n# Vectors to use\nglove_vectors = '/home/ubuntu/.keras/datasets/glove.6B.zip'\n\n# Download word embeddings if they are not present\nif not os.path.exists(glove_vectors):\n glove_vectors = get_file('glove.6B.zip', 'http://nlp.stanford.edu/data/glove.6B.zip')\n os.system(f'unzip {glove_vectors}')\n \n# Load in unzipped file \nglove_vectors = '/home/ubuntu/.keras/datasets/glove.6B.100d.txt'\nglove = np.loadtxt(glove_vectors, dtype='str', comments=None)\nglove.shape", "_____no_output_____" ] ], [ [ "Now we separated into the words and the vectors.", "_____no_output_____" ] ], [ [ "vectors = glove[:, 1:].astype('float')\nwords = glove[:, 0]\n\ndel glove\n\nvectors[100], words[100]", "_____no_output_____" ] ], [ [ "Next we want to keep only those words that appear in our vocabulary. 
For words that are in our vocabulary but don't have an embedding, they will be represented as all 0s (a shortcoming we can address by training our own embeddings).", "_____no_output_____" ] ], [ [ "vectors.shape", "_____no_output_____" ], [ "word_lookup = {word: vector for word, vector in zip(words, vectors)}\n\nembedding_matrix = np.zeros((num_words, vectors.shape[1]))\n\nnot_found = 0\n\nfor i, word in enumerate(word_idx.keys()):\n    # Look up the word embedding\n    vector = word_lookup.get(word, None)\n    \n    # Record in matrix\n    if vector is not None:\n        embedding_matrix[i + 1, :] = vector\n    else:\n        not_found += 1\n    \nprint(f'There were {not_found} words without pre-trained embeddings.')", "There were 2941 words without pre-trained embeddings.\n" ], [ "import gc\ngc.enable()\ndel vectors\ngc.collect()", "_____no_output_____" ] ], [ [ "Each word is represented by 100 numbers, although a number of words have no pre-trained embedding. We can find the closest words to a given word in embedding space using the cosine distance. 
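On a pair of toy 3-d vectors (not the real embeddings), the computation looks like this:

```python
import numpy as np

# Cosine similarity between two toy vectors
a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 1.0, 2.0])

cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(cos, 4))  # 0.8889

# After unit-normalizing, a plain dot product gives the same value
a_hat, b_hat = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(bool(np.isclose(cos, a_hat @ b_hat)))  # True
```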
This requires first normalizing the vectors to have a magnitude of 1.", "_____no_output_____" ] ], [ [ "# Normalize and convert nan to 0\nembedding_matrix = embedding_matrix / np.linalg.norm(embedding_matrix, axis = 1).reshape((-1, 1))\nembedding_matrix = np.nan_to_num(embedding_matrix)", "_____no_output_____" ], [ "def find_closest(query, embedding_matrix, word_idx, idx_word, n = 10):\n \"\"\"Find closest words to a query word in embeddings\"\"\"\n \n idx = word_idx.get(query, None)\n # Handle case where query is not in vocab\n if idx is None:\n print(f'{query} not found in vocab.')\n return\n else:\n vec = embedding_matrix[idx]\n # Handle case where word doesn't have an embedding\n if np.all(vec == 0):\n print(f'{query} has no pre-trained embedding.')\n return\n else:\n # Calculate distance between vector and all others\n dists = np.dot(embedding_matrix, vec)\n \n # Sort indexes in reverse order\n idxs = np.argsort(dists)[::-1][:n]\n sorted_dists = dists[idxs]\n closest = [idx_word[i] for i in idxs]\n \n print(f'Query: {query}\\n')\n max_len = max([len(i) for i in closest])\n # Print out the word and cosine distances\n for word, dist in zip(closest, sorted_dists):\n print(f'Word: {word:15} Cosine Similarity: {round(dist, 4)}')", "_____no_output_____" ], [ "find_closest('the', embedding_matrix, word_idx, idx_word)", "Query: the\n\nWord: the Cosine Similarity: 1.0\nWord: this Cosine Similarity: 0.8573\nWord: part Cosine Similarity: 0.8508\nWord: one Cosine Similarity: 0.8503\nWord: of Cosine Similarity: 0.8329\nWord: same Cosine Similarity: 0.8325\nWord: first Cosine Similarity: 0.821\nWord: on Cosine Similarity: 0.82\nWord: its Cosine Similarity: 0.8169\nWord: as Cosine Similarity: 0.8128\n" ], [ "find_closest('neural', embedding_matrix, word_idx, idx_word, 10)", "Query: neural\n\nWord: neural Cosine Similarity: 1.0\nWord: neuronal Cosine Similarity: 0.6841\nWord: cortical Cosine Similarity: 0.676\nWord: plasticity Cosine Similarity: 0.6625\nWord: pathways Cosine 
Similarity: 0.6534\nWord: neurons Cosine Similarity: 0.6485\nWord: sensory Cosine Similarity: 0.6391\nWord: cognitive Cosine Similarity: 0.6125\nWord: brain Cosine Similarity: 0.6082\nWord: physiological Cosine Similarity: 0.6022\n" ], [ "find_closest('.', embedding_matrix, word_idx, idx_word, 10)", "Query: .\n\nWord: . Cosine Similarity: 1.0\nWord: but Cosine Similarity: 0.9049\nWord: although Cosine Similarity: 0.8812\nWord: however Cosine Similarity: 0.8778\nWord: , Cosine Similarity: 0.8756\nWord: when Cosine Similarity: 0.8729\nWord: and Cosine Similarity: 0.8717\nWord: though Cosine Similarity: 0.8691\nWord: it Cosine Similarity: 0.8654\nWord: this Cosine Similarity: 0.8653\n" ], [ "find_closest('wonder', embedding_matrix, word_idx, idx_word)", "wonder not found in vocab.\n" ], [ "find_closest('dnn', embedding_matrix, word_idx, idx_word)", "dnn has no pre-trained embedding.\n" ] ], [ [ "# Build Model\n\nWith data encoded as integers and an embedding matrix of pre-trained word vectors, we're ready to build the recurrent neural network. This model is relatively simple and uses an LSTM cell as the heart of the network. After converting the words into embeddings, we pass them through a single LSTM layer, then into a fully connected layer with `relu` activation before the final output layer with a `softmax` activation. The final layer produces a probability for every word in the vocab. \n\nWhen training, these predictions are compared to the actual label using the `categorical_crossentropy` to calculate a loss. The parameters (weights) in the network are then updated using the Adam optimizer (a variant on Stochastic Gradient Descent) with gradients calculated through backpropagation. Fortunately, Keras handles all of this behind the scenes, so we just have to set up the network and then start the training. 
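For intuition about the final layer, the softmax that turns raw scores into per-word probabilities can be computed directly (toy logits for an assumed 4-word vocabulary, not real model outputs):

```python
import numpy as np

# Softmax over toy logits
logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()

print(probs.round(3))         # the highest logit gets the highest probability
print(round(probs.sum(), 6))  # 1.0 -- a valid probability distribution
```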
The most difficult part is figuring out the correct shapes for the inputs and outputs into the model.", "_____no_output_____" ] ], [ [ "from keras.models import Sequential, load_model\nfrom keras.layers import LSTM, Dense, Dropout, Embedding, Masking, Bidirectional\nfrom keras.optimizers import Adam\n\nfrom keras.utils import plot_model", "_____no_output_____" ], [ "def make_word_level_model(num_words, embedding_matrix, bi_directional = False, \n trainable = False, lstm_cells = 128, lstm_layers = 1):\n \"\"\"Make a word level recurrent neural network with option for pretrained embeddings\n and varying numbers of LSTM cell layers.\"\"\"\n\n model = Sequential()\n \n # Map words to an embedding\n if not trainable:\n model.add(Embedding(input_dim=num_words, \n output_dim=embedding_matrix.shape[1],\n weights = [embedding_matrix], trainable = False,\n mask_zero = True))\n model.add(Masking())\n else:\n model.add(Embedding(input_dim = num_words, \n output_dim = embedding_matrix.shape[1], \n weights = [embedding_matrix],\n trainable = True))\n \n # If want to add multiple LSTM layers\n if lstm_layers > 1:\n for i in range(lstm_layers - 1): \n \n model.add(LSTM(128, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))\n \n # Add final LSTM cell layer\n if bi_directional:\n model.add(Bidirectional(LSTM(lstm_cells, return_sequences = False, dropout = 0.1, recurrent_dropout=0.1)))\n else:\n \n model.add(LSTM(lstm_cells, return_sequences=False, dropout=0.1))\n model.add(Dense(128, activation = 'relu'))\n # Dropout for regularization\n model.add(Dropout(0.5))\n \n # Output layer\n model.add(Dense(num_words, activation = 'softmax'))\n \n # Compile the model\n model.compile(optimizer = 'adam', loss = 'categorical_crossentropy',\n metrics = ['accuracy'])\n return model\n\nmodel = make_word_level_model(num_words, embedding_matrix = embedding_matrix, bi_directional = True,\n trainable = False, lstm_layers = 1, lstm_cells = 64)\nmodel.summary()", 
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_5 (Embedding) (None, None, 100) 1367700 \n_________________________________________________________________\nmasking_5 (Masking) (None, None, 100) 0 \n_________________________________________________________________\nbidirectional_4 (Bidirection (None, 128) 84480 \n_________________________________________________________________\ndense_9 (Dense) (None, 128) 16512 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_10 (Dense) (None, 13677) 1764333 \n=================================================================\nTotal params: 3,233,025\nTrainable params: 1,865,325\nNon-trainable params: 1,367,700\n_________________________________________________________________\n" ] ], [ [ "The model needs a loss to minimize (`categorical_crossentropy`) as well as a method for updating the weights using the gradients (`Adam`). We will also monitor accuracy which is not a good loss but can give us a more interpretable measure of the model performance.", "_____no_output_____" ], [ "Using pre-trained embeddings means we have about half the parameters to train. However, this also means that the embeddings might not be the best for our data, and there are a number of words with no embeddings.", "_____no_output_____" ] ], [ [ "model_name = 'pre-trained-bi-directional-rnn'\nmodel_dir = '../models/'\n\nplot_model(model, to_file = f'{model_dir}{model_name}.png', show_shapes = True)\nfrom IPython.display import Image\n\nImage(f'{model_dir}{model_name}.png')", "_____no_output_____" ] ], [ [ "# Train Model\n\nWe can now train the model on our training examples. 
We'll make sure to use early stopping with a validation set to stop the training when the loss on the validation set is no longer decreasing. Also, we'll save the best model every time the validation loss decreases so we can then load in the best model to generate predictions.", "_____no_output_____" ], [ "### Callbacks\n\n* Early Stopping: Stop training when validation loss no longer decreases\n* Model Checkpoint: Save the best model on disk", "_____no_output_____" ] ], [ [ "from keras.callbacks import EarlyStopping, ModelCheckpoint\n\nBATCH_SIZE = 2048\n\ndef make_callbacks(model_name, save = SAVE_MODEL):\n \"\"\"Make list of callbacks for training\"\"\"\n callbacks = [EarlyStopping(monitor = 'val_loss', patience = 5)]\n \n if save:\n callbacks.append(ModelCheckpoint(f'{model_dir}{model_name}.h5', \n save_best_only = True, save_weights_only = False))\n return callbacks\n\ncallbacks = make_callbacks(model_name)", "_____no_output_____" ], [ "def load_and_evaluate(model_name, return_model = False):\n \"\"\"Load in a trained model and evaluate with log loss and accuracy\"\"\"\n \n model = load_model(f'{model_dir}{model_name}.h5')\n r = model.evaluate(X_valid, y_valid, batch_size = 2048, verbose = 1)\n\n valid_crossentropy = r[0]\n valid_accuracy = r[1]\n\n print(f'Cross Entropy: {round(valid_crossentropy, 4)}')\n print(f'Accuracy: {round(100 * valid_accuracy, 2)}%')\n \n if return_model:\n return model", "_____no_output_____" ] ], [ [ "__Depending on your machine, this may take several hours to run.__", "_____no_output_____" ] ], [ [ "history = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = 1,\n callbacks=callbacks, \n validation_data = (X_valid, y_valid))", "Train on 224616 samples, validate on 96265 samples\nEpoch 1/150\n224616/224616 [==============================] - 63s 282us/step - loss: 6.8688 - acc: 0.0695 - val_loss: 6.2372 - val_acc: 0.0872\nEpoch 2/150\n224616/224616 [==============================] - 61s 271us/step - loss: 
6.2438 - acc: 0.0863 - val_loss: 6.2225 - val_acc: 0.0872\nEpoch 3/150\n224616/224616 [==============================] - 61s 271us/step - loss: 6.2086 - acc: 0.0863 - val_loss: 6.1968 - val_acc: 0.0872\nEpoch 4/150\n224616/224616 [==============================] - 61s 271us/step - loss: 6.1465 - acc: 0.0888 - val_loss: 6.0872 - val_acc: 0.0979\nEpoch 5/150\n224616/224616 [==============================] - 61s 271us/step - loss: 6.0366 - acc: 0.1087 - val_loss: 5.9838 - val_acc: 0.1199\nEpoch 6/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.9316 - acc: 0.1293 - val_loss: 5.8665 - val_acc: 0.1442\nEpoch 7/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.8291 - acc: 0.1414 - val_loss: 5.7835 - val_acc: 0.1499\nEpoch 8/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.7618 - acc: 0.1457 - val_loss: 5.7334 - val_acc: 0.1518\nEpoch 9/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.7124 - acc: 0.1487 - val_loss: 5.6930 - val_acc: 0.1538\nEpoch 10/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.6673 - acc: 0.1518 - val_loss: 5.6563 - val_acc: 0.1572\nEpoch 11/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.6289 - acc: 0.1538 - val_loss: 5.6223 - val_acc: 0.1614\nEpoch 12/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.5927 - acc: 0.1566 - val_loss: 5.5904 - val_acc: 0.1629\nEpoch 13/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.5525 - acc: 0.1579 - val_loss: 5.5546 - val_acc: 0.1653\nEpoch 14/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.5169 - acc: 0.1599 - val_loss: 5.5204 - val_acc: 0.1667\nEpoch 15/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.4810 - acc: 0.1613 - val_loss: 5.4882 - val_acc: 0.1681\nEpoch 16/150\n224616/224616 [==============================] - 
61s 272us/step - loss: 5.4461 - acc: 0.1632 - val_loss: 5.4536 - val_acc: 0.1693\nEpoch 17/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.4101 - acc: 0.1649 - val_loss: 5.4263 - val_acc: 0.1714\nEpoch 18/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.3788 - acc: 0.1656 - val_loss: 5.3994 - val_acc: 0.1726\nEpoch 19/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.3470 - acc: 0.1678 - val_loss: 5.3685 - val_acc: 0.1750\nEpoch 20/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.3118 - acc: 0.1689 - val_loss: 5.3429 - val_acc: 0.1766\nEpoch 21/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.2841 - acc: 0.1707 - val_loss: 5.3182 - val_acc: 0.1779\nEpoch 22/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.2492 - acc: 0.1727 - val_loss: 5.2897 - val_acc: 0.1822\nEpoch 23/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.2202 - acc: 0.1748 - val_loss: 5.2651 - val_acc: 0.1839\nEpoch 24/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.1891 - acc: 0.1765 - val_loss: 5.2427 - val_acc: 0.1851\nEpoch 25/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.1632 - acc: 0.1784 - val_loss: 5.2199 - val_acc: 0.1878\nEpoch 26/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.1338 - acc: 0.1797 - val_loss: 5.1987 - val_acc: 0.1890\nEpoch 27/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.0599 - acc: 0.1830 - val_loss: 5.1435 - val_acc: 0.1929\nEpoch 30/150\n224616/224616 [==============================] - 61s 272us/step - loss: 5.0335 - acc: 0.1840 - val_loss: 5.1262 - val_acc: 0.1931\nEpoch 31/150\n224616/224616 [==============================] - 61s 271us/step - loss: 5.0088 - acc: 0.1849 - val_loss: 5.1069 - val_acc: 0.1944\nEpoch 32/150\n224616/224616 
[==============================] - 61s 272us/step - loss: 4.9855 - acc: 0.1853 - val_loss: 5.0902 - val_acc: 0.1964\nEpoch 33/150\n224616/224616 [==============================] - 61s 271us/step - loss: 4.9615 - acc: 0.1863 - val_loss: 5.0741 - val_acc: 0.1973\nEpoch 34/150\n224616/224616 [==============================] - 61s 271us/step - loss: 4.9392 - acc: 0.1875 - val_loss: 5.0573 - val_acc: 0.1975\nEpoch 35/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.9178 - acc: 0.1876 - val_loss: 5.0431 - val_acc: 0.1988\nEpoch 36/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.8973 - acc: 0.1878 - val_loss: 5.0280 - val_acc: 0.2001\nEpoch 37/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.8746 - acc: 0.1892 - val_loss: 5.0130 - val_acc: 0.2004\nEpoch 38/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.8534 - acc: 0.1893 - val_loss: 5.0021 - val_acc: 0.2016\nEpoch 39/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.8338 - acc: 0.1897 - val_loss: 4.9900 - val_acc: 0.2022\nEpoch 40/150\n224616/224616 [==============================] - 61s 271us/step - loss: 4.8182 - acc: 0.1904 - val_loss: 4.9796 - val_acc: 0.2033\nEpoch 41/150\n224616/224616 [==============================] - 61s 271us/step - loss: 4.8001 - acc: 0.1916 - val_loss: 4.9724 - val_acc: 0.2035\nEpoch 42/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.7830 - acc: 0.1918 - val_loss: 4.9558 - val_acc: 0.2041\nEpoch 43/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.7646 - acc: 0.1916 - val_loss: 4.9478 - val_acc: 0.2051\nEpoch 44/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.7462 - acc: 0.1928 - val_loss: 4.9359 - val_acc: 0.2055\nEpoch 45/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.7247 - acc: 0.1935 - val_loss: 4.9253 - val_acc: 
0.2068\nEpoch 46/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.7096 - acc: 0.1937 - val_loss: 4.9223 - val_acc: 0.2071\nEpoch 47/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6932 - acc: 0.1942 - val_loss: 4.9088 - val_acc: 0.2079\nEpoch 48/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6760 - acc: 0.1953 - val_loss: 4.8968 - val_acc: 0.2088\nEpoch 49/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6600 - acc: 0.1957 - val_loss: 4.8904 - val_acc: 0.2094\nEpoch 50/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6419 - acc: 0.1961 - val_loss: 4.8835 - val_acc: 0.2108\nEpoch 51/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6288 - acc: 0.1961 - val_loss: 4.8741 - val_acc: 0.2106\nEpoch 52/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.6107 - acc: 0.1979 - val_loss: 4.8728 - val_acc: 0.2101\nEpoch 53/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.5971 - acc: 0.1992 - val_loss: 4.8605 - val_acc: 0.2128\nEpoch 54/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.5845 - acc: 0.1984 - val_loss: 4.8539 - val_acc: 0.2120\nEpoch 55/150\n224616/224616 [==============================] - 61s 272us/step - loss: 4.5729 - acc: 0.1988 - val_loss: 4.8473 - val_acc: 0.2138\nEpoch 56/150\n 32768/224616 [===>..........................] 
- ETA: 42s - loss: 4.5451 - acc: 0.1987" ], [ "model = load_and_evaluate(model_name, return_model = True)", "96265/96265 [==============================] - 12s 123us/step\nCross Entropy: 4.6629\nAccuracy: 24.52%\n" ], [ "model = make_word_level_model(num_words, embedding_matrix = embedding_matrix, bi_directional = False,\n trainable = False, lstm_layers = 1, lstm_cells = 64)\nmodel.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_6 (Embedding) (None, None, 100) 1367700 \n_________________________________________________________________\nmasking_6 (Masking) (None, None, 100) 0 \n_________________________________________________________________\nlstm_6 (LSTM) (None, 64) 42240 \n_________________________________________________________________\ndense_11 (Dense) (None, 128) 8320 \n_________________________________________________________________\ndropout_6 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_12 (Dense) (None, 13677) 1764333 \n=================================================================\nTotal params: 3,182,593\nTrainable params: 1,814,893\nNon-trainable params: 1,367,700\n_________________________________________________________________\n" ], [ "model_name = 'pre-trained-nonbi-directional-rnn'\ncallbacks = make_callbacks(model_name)\n\nhistory = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = 1,\n callbacks=callbacks, \n validation_data = (X_valid, y_valid))", "Train on 224616 samples, validate on 96265 samples\nEpoch 1/150\n224616/224616 [==============================] - 45s 201us/step - loss: 6.9667 - acc: 0.0677 - val_loss: 6.2318 - val_acc: 0.0872\nEpoch 2/150\n224616/224616 [==============================] - 43s 191us/step - loss: 6.2390 - acc: 0.0863 - val_loss: 6.2190 - val_acc: 0.0872\nEpoch 3/150\n224616/224616 
[==============================] - 43s 191us/step - loss: 6.2129 - acc: 0.0863 - val_loss: 6.2084 - val_acc: 0.0872\nEpoch 4/150\n224616/224616 [==============================] - 43s 193us/step - loss: 6.1837 - acc: 0.0865 - val_loss: 6.1658 - val_acc: 0.0872\nEpoch 5/150\n224616/224616 [==============================] - 43s 191us/step - loss: 6.1135 - acc: 0.0962 - val_loss: 6.0724 - val_acc: 0.1121\nEpoch 6/150\n224616/224616 [==============================] - 43s 191us/step - loss: 6.0454 - acc: 0.1104 - val_loss: 6.0242 - val_acc: 0.1191\nEpoch 7/150\n224616/224616 [==============================] - 43s 193us/step - loss: 6.0006 - acc: 0.1172 - val_loss: 5.9862 - val_acc: 0.1239\nEpoch 8/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.9500 - acc: 0.1232 - val_loss: 5.9170 - val_acc: 0.1342\nEpoch 9/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.8821 - acc: 0.1330 - val_loss: 5.8448 - val_acc: 0.1467\nEpoch 10/150\n224616/224616 [==============================] - 43s 193us/step - loss: 5.8224 - acc: 0.1406 - val_loss: 5.7955 - val_acc: 0.1493\nEpoch 11/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.7793 - acc: 0.1441 - val_loss: 5.7545 - val_acc: 0.1506\nEpoch 12/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.7370 - acc: 0.1460 - val_loss: 5.7179 - val_acc: 0.1515\nEpoch 13/150\n224616/224616 [==============================] - 43s 193us/step - loss: 5.6958 - acc: 0.1491 - val_loss: 5.6821 - val_acc: 0.1564\nEpoch 14/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.6584 - acc: 0.1516 - val_loss: 5.6541 - val_acc: 0.1601\nEpoch 15/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.6227 - acc: 0.1537 - val_loss: 5.6178 - val_acc: 0.1613\nEpoch 16/150\n224616/224616 [==============================] - 43s 192us/step - loss: 5.5843 - acc: 0.1561 - val_loss: 5.5822 - val_acc: 
0.1629\nEpoch 17/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.5466 - acc: 0.1577 - val_loss: 5.5527 - val_acc: 0.1644\nEpoch 18/150\n224616/224616 [==============================] - 43s 193us/step - loss: 5.5101 - acc: 0.1597 - val_loss: 5.5197 - val_acc: 0.1661\nEpoch 19/150\n224616/224616 [==============================] - 43s 192us/step - loss: 5.4786 - acc: 0.1609 - val_loss: 5.4901 - val_acc: 0.1674\nEpoch 20/150\n224616/224616 [==============================] - 43s 191us/step - loss: 5.4450 - acc: 0.1624 - val_loss: 5.4637 - val_acc: 0.1685\nEpoch 21/150\n 16384/224616 [=>............................] - ETA: 31s - loss: 5.4155 - acc: 0.1631" ], [ "model = load_and_evaluate(model_name, return_model = True)", "95569/95569 [==============================] - 12s 129us/step\nCross Entropy: 4.6428\nAccuracy: 26.75%\n" ] ], [ [ "The accuracy - both training and validation - increase over time and the loss decreases over time which gives us indication that our model is getting better with training. \n\nWe can load back in the model so we don't need to repeat the training.", "_____no_output_____" ] ], [ [ "def load_and_evaluate(model_name, return_model = False):\n \"\"\"Load in a trained model and evaluate with log loss and accuracy\"\"\"\n \n model = load_model(f'{model_dir}{model_name}.h5')\n r = model.evaluate(X_valid, y_valid, batch_size = 2048, verbose = 1)\n\n valid_crossentropy = r[0]\n valid_accuracy = r[1]\n\n print(f'Cross Entropy: {round(valid_crossentropy, 4)}')\n print(f'Accuracy: {round(100 * valid_accuracy, 2)}%')\n \n if return_model:\n return model", "_____no_output_____" ], [ "model = load_and_evaluate(model_name, return_model = True)", "_____no_output_____" ] ], [ [ "To check how the model compares to just using the word frequencies to make predictions, we can compute the accuracy if we were to use the most frequent word for every guess. 
We can also choose from a multinomial distribution using the word frequencies as probabilities.", "_____no_output_____" ] ], [ [ "np.random.seed(40)\n\n# Number of all words\ntotal_words = sum(word_counts.values())\n\n# Compute frequency of each word in vocab\nfrequencies = [word_counts[word]/total_words for word in word_idx.keys()]\nfrequencies.insert(0, 0)", "_____no_output_____" ], [ "frequencies[1:10], list(word_idx.keys())[0:9]", "_____no_output_____" ] ], [ [ "The most common word is 'the'. Let's see the accuracy of guessing this for every validation example.", "_____no_output_____" ] ], [ [ "print(f'The accuracy is {round(100 * np.mean(np.argmax(y_valid, axis = 1) == 1), 4)}%.')", "_____no_output_____" ] ], [ [ "Now we make a guess for each of the sequences in the validation set using the frequencies as probabilities. This is in some sense informed, but the multinomial also has randomness. ", "_____no_output_____" ] ], [ [ "random_guesses = []\n\n# Make a prediction based on frequencies for each example in validation data\nfor i in range(len(y_valid)):\n random_guesses.append(np.argmax(np.random.multinomial(1, frequencies, size = 1)[0]))", "_____no_output_____" ], [ "from collections import Counter\n\n# Create a counter from the guesses\nc = Counter(random_guesses)\n\n# Iterate through the 10 most common guesses\nfor i in c.most_common(10):\n word = idx_word[i[0]]\n word_count = word_counts[word]\n print(f'Word: {word} \\tCount: {word_count} \\tPercentage: {round(100 * word_count / total_words, 2)}% \\tPredicted: {i[1]}')", "_____no_output_____" ], [ "accuracy = np.mean(random_guesses == np.argmax(y_valid, axis = 1))\nprint(f'Random guessing accuracy: {100 * round(accuracy, 4)}%')", "_____no_output_____" ] ], [ [ "We can see that our model easily outperforms both guessing the most common word - 7.76% accuracy - as well as using relative word frequencies to guess the next word - 1.46% accuracy. Therefore, we can say that our model has learned something! 
", "_____no_output_____" ], [ "# Generating Output\n\nNow for the fun part: we get to use our model to generate new abstracts. To do this, we feed the network a seed sequence, have it make a prediction, add the predicted word to the sequence, and make another prediction for the next word. We continue this for the number of words that we want. We compare the generated output to the actual abstract to see if we can tell the difference!", "_____no_output_____" ] ], [ [ "from IPython.display import HTML\n\ndef header(text, color = 'black'):\n raw_html = f'<h1 style=\"color: {color};\"><center>' + str(text) + '</center></h1>'\n return raw_html\n\ndef box(text):\n raw_html = '<div style=\"border:1px inset black;padding:1em;font-size: 20px;\">'+str(text)+'</div>'\n return raw_html\n\ndef addContent(old_html, raw_html):\n old_html += raw_html\n return old_html", "_____no_output_____" ], [ "import random\n\ndef generate_output(model, sequences, training_length = 50, new_words = 50, diversity = 1,\n return_output = False, n_gen = 1):\n \"\"\"Generate `new_words` words of output from a trained model and format into HTML.\"\"\"\n \n # Choose a random sequence\n seq = random.choice(sequences)\n \n # Choose a random starting point\n seed_idx = random.randint(0, len(seq) - training_length - 10)\n # Ending index for seed\n end_idx = seed_idx + training_length\n \n gen_list = []\n \n for n in range(n_gen):\n # Extract the seed sequence\n seed = seq[seed_idx:end_idx] \n original_sequence = [idx_word[i] for i in seed]\n generated = seed[:] + ['#']\n\n # Find the actual entire sequence\n actual = generated[:] + seq[end_idx:end_idx + new_words]\n\n # Keep adding new words \n for i in range(new_words):\n\n # Make a prediction from the seed\n preds = model.predict(np.array(seed).reshape(1, -1))[0].astype(np.float64)\n\n # Diversify\n preds = np.log(preds) / diversity\n exp_preds = np.exp(preds)\n\n # Softmax\n preds = exp_preds / sum(exp_preds)\n\n # Choose the next word\n probas = 
np.random.multinomial(1, preds, 1)[0]\n\n next_idx = np.argmax(probas)\n\n # New seed adds on old word\n seed = seed[1:] + [next_idx]\n generated.append(next_idx)\n\n # Showing generated and actual abstract\n n = []\n\n for i in generated:\n n.append(idx_word.get(i, '< --- >'))\n \n gen_list.append(n)\n\n a = []\n \n for i in actual:\n a.append(idx_word.get(i, '< --- >'))\n\n a = a[training_length:]\n\n gen_list = [gen[training_length:training_length + len(a)] for gen in gen_list]\n\n if return_output:\n return original_sequence, gen_list, a\n \n # HTML formatting\n seed_html = ''\n seed_html = addContent(seed_html, header('Seed Sequence', color = 'darkblue'))\n seed_html = addContent(seed_html, box(remove_spaces(' '.join(original_sequence))))\n \n gen_html = ''\n gen_html = addContent(gen_html, header('RNN Generated', color = 'darkred'))\n gen_html = addContent(gen_html, box(remove_spaces(' '.join(gen_list[0]))))\n \n a_html = ''\n a_html = addContent(a_html, header('Actual', color = 'darkgreen'))\n a_html = addContent(a_html, box(remove_spaces(' '.join(a))))\n \n return seed_html, gen_html, a_html\n\n", "_____no_output_____" ] ], [ [ "The `diversity` parameter determines how much randomness is added to the predictions. If we just use the most likely word for each prediction, the output sometimes gets stuck in loops. The diversity means the predicted text has a little more variation. 
", "_____no_output_____" ] ], [ [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 1)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ] ], [ [ "Increasing the diversity seems to increase the plausibility of the output. However, that could be becuase the patents themselves don't sound that realistic. This is especially true when we remove the punctuation. We'll fix that in the next section by keeping the punctuation and training our own embeddings.", "_____no_output_____" ], [ "# Training Own Embeddings\n\nIf we aren't happy with the output, especially the lack of punctuation, we can try training our own embeddings. This means the model will adapt the embeddings by itself to get better at the problem of predicting the next output. The final embeddings should place words that are more similar closer together in embedding space. The advantage of training our own embeddings are that they might be more relevant to the task. However, the downside is that training will take longer because the number of parameters significantly increases.", "_____no_output_____" ] ], [ [ "def clear_memory():\n import gc\n gc.enable()\n for i in ['model', 'X', 'y', 'word_idx', 'idx_word', 'X_train', 'X_valid,' 'y_train', 'y_valid', 'embedding_matrix',\n 'words', 'vectors', 'labels', 'random_guesses', 'training_seq', 'word_counts', 'data', 'frequencies']:\n if i in dir():\n del globals()[i]\n gc.collect()\n \nclear_memory()", "_____no_output_____" ] ], [ [ "Now when we create the training data, we do not remove the punctuation or convert the words to lowercase. 
", "_____no_output_____" ] ], [ [ "TRAINING_LENGTH = 50\n\nfilters = '!\"%;[\\\\]^_`{|}~\\t\\n'\nword_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted, \n TRAINING_LENGTH,\n lower = False,\n filters = filters)", "_____no_output_____" ], [ "embedding_matrix = np.zeros((num_words, len(word_lookup['the'])))\n\nnot_found = 0\n\nfor i, word in enumerate(word_idx.keys()):\n # Look up the word embedding\n vector = word_lookup.get(word, None)\n \n # Record in matrix\n if vector is not None:\n embedding_matrix[i + 1, :] = vector\n else:\n not_found += 1\n \nprint(f'There were {not_found} words without pre-trained embeddings.')\nembedding_matrix.shape", "_____no_output_____" ], [ "# Split into training and validation\nX_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)\nX_train.shape, y_train.shape", "_____no_output_____" ], [ "check_sizes(gb_min = 1)", "_____no_output_____" ] ], [ [ "Let's create a model with 100 dimensional embeddings, input sequences of length 50, and 1 LSTM layer as before.", "_____no_output_____" ] ], [ [ "model = make_word_level_model(num_words, embedding_matrix, trainable = True, bi_directional = True,\n lstm_layers = 1, lstm_cells = 64)\nmodel.summary()", "_____no_output_____" ], [ "model_name = 'training-rnn-bi-directional'\n\ncallbacks = make_callbacks(model_name)", "_____no_output_____" ], [ "model.compile(optimizer = Adam(), loss = 'categorical_crossentropy', metrics = ['accuracy'])\n\nhistory = model.fit(X_train, y_train, batch_size = BATCH_SIZE, verbose = VERBOSE, epochs = EPOCHS, callbacks=callbacks, \n validation_data = (X_valid, y_valid))", "_____no_output_____" ], [ "import json\n\nwith open('training-rnn.json', 'w') as f:\n f.write(json.dumps(word_idx))", "_____no_output_____" ] ], [ [ "As before we load in the model and have it generate output.", "_____no_output_____" ] ], [ [ "model_dir = '../models/'\nfrom keras.models import load_model\nmodel = 
load_and_evaluate(model_name, return_model=True)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ] ], [ [ "The most realistic output seems to occur when the diversity is between 0.5 and 1.0. Sometimes it's difficult to tell the generated from the actual, a trial we'll look at a little later! ", "_____no_output_____", "_____no_output_____" ], [ "## Inspect Embeddings\n\nWe can take a look at our trained embeddings to figure out the closest words in the embedding space. These embeddings are trained for our task, which means they may differ slightly from the pre-trained versions.", "_____no_output_____" ] ], [ [ "model.summary()", "_____no_output_____" ], [ "def get_embeddings(model):\n embedding_layer = model.get_layer(index = 0) \n embedding_matrix = embedding_layer.get_weights()[0]\n embedding_matrix = embedding_matrix / np.linalg.norm(embedding_matrix, axis = 1).reshape((-1, 1))\n embedding_matrix = np.nan_to_num(embedding_matrix)\n return embedding_matrix\nembedding_matrix = get_embeddings(model)\nembedding_matrix.shape", "_____no_output_____" ], [ "find_closest('the', embedding_matrix, word_idx, idx_word)", "_____no_output_____" ], [ "find_closest('neural', embedding_matrix, word_idx, idx_word)", "_____no_output_____" ], [ "find_closest('computer', embedding_matrix, word_idx, idx_word)", "_____no_output_____" ] ], [ [ "# Change Parameters of Network\n\nNext, we can try to generate more accurate predictions by altering the network parameters. Primarily, we will increase the number of LSTM layers to 2. The first LSTM layer returns the sequences - the entire output for each input sequence instead of only the final one - before passing it on to the second. 
Training may take a little longer, but performance could also improve. There's no guarantee this model is better because we could just end up overfitting on the training data. There is no downside to trying though. ", "_____no_output_____" ] ], [ [ "model = make_word_level_model(num_words, embedding_matrix, trainable = True, lstm_layers = 2)\nmodel.summary()", "_____no_output_____" ], [ "model_name = 'training-rnn-2_layers'\n\ncallbacks = make_callbacks(model_name)", "_____no_output_____" ], [ "history = model.fit(X_train, y_train, batch_size = BATCH_SIZE, verbose = VERBOSE, epochs = EPOCHS, callbacks=callbacks, \n validation_data = (X_valid, y_valid))", "_____no_output_____" ], [ "model = load_and_evaluate(model_name, return_model = True)\nembedding_matrix = get_embeddings(model)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ] ], [ [ "# Change Training Length\n\nAnother option to try and improve the model is to change the length of the training sequences. The idea here is using more previous words will give the network more context for predicting the next word. However, it could also be that including more words _hurts_ the model because some of them are irrelevant! 
", "_____no_output_____" ] ], [ [ "clear_memory()", "_____no_output_____" ], [ "TRAINING_LENGTH = 100\n\nfilters = '!\"%;[\\\\]^_`{|}~\\t\\n'\nword_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted, \n TRAINING_LENGTH,\n lower = False,\n filters = filters)", "_____no_output_____" ], [ "X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)\nX_train.shape, y_train.shape", "_____no_output_____" ], [ "check_sizes()", "_____no_output_____" ], [ "model = make_word_level_model(num_words, embedding_matrix, trainable = True)\nmodel.summary()", "_____no_output_____" ], [ "model_name = 'training-len100'\ncallbacks = make_callbacks(model_name)", "_____no_output_____" ], [ "history = model.fit(X_train, y_train, epochs = EPOCHS, callbacks=callbacks, batch_size = BATCH_SIZE, verbose = VERBOSE, \n validation_data = (X_valid, y_valid))", "_____no_output_____" ], [ "model = load_and_evaluate(model_name, return_model=True)\nembedding_matrix = get_embeddings(model)", "_____no_output_____" ], [ "word_lookup = {word: embedding_matrix[i] for i, word in zip(idx_word.keys(), idx_word.values())}\nlen(word_lookup)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 1.5)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ] ], [ [ "# Reduce Training Length", "_____no_output_____" ] ], [ [ "clear_memory()\nTRAINING_LENGTH = 20\n\nfilters = '!\"%[\\\\]^_`{|}~\\t\\n'\nword_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted, \n TRAINING_LENGTH,\n lower = False,\n filters = filters)", "_____no_output_____" ], [ "embedding_matrix = np.zeros((num_words, len(word_lookup['the'])))\n\nnot_found = 0\n\nfor i, word in enumerate(word_idx.keys()):\n # Look up the word embedding\n vector = word_lookup.get(word, None)\n \n # Record in matrix\n if vector is not None:\n 
embedding_matrix[i + 1, :] = vector\n else:\n not_found += 1\n \nprint(f'There were {not_found} words without pre-trained embeddings.')", "_____no_output_____" ], [ "X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)\nX_train.shape, y_train.shape", "_____no_output_____" ], [ "check_sizes()", "_____no_output_____" ], [ "model = make_word_level_model(num_words, embedding_matrix, trainable = True, lstm_layers = 1)\n\nmodel_name = 'training-len20'\ncallbacks = make_callbacks(model_name)", "_____no_output_____" ], [ "history = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = VERBOSE,\n callbacks=callbacks, \n validation_data = (X_valid, y_valid))", "_____no_output_____" ], [ "model = load_and_evaluate(model_name, return_model = True)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ], [ "seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.8)\nHTML(seed_html)\nHTML(gen_html)\nHTML(a_html)", "_____no_output_____" ] ], [ [ "# Is Output from a human or machine?", "_____no_output_____" ] ], [ [ "def guess_human(model, sequences, training_length=50, new_words=50):\n \"\"\"Produce 2 RNN sequences and play a game to compare to the actual.\n Diversity is randomly set between 0.5 and 1.25\"\"\"\n \n diversity = np.random.uniform(0.5, 1.25)\n sequence, gen_list, actual = generate_output(model, sequences, training_length, \n diversity=diversity, return_output=True, n_gen = 2)\n gen_0, gen_1 = gen_list\n \n output = {'sequence': remove_spaces(' '.join(sequence)),\n 'c0': remove_spaces(' '.join(gen_0)),\n 'c1': remove_spaces(' '.join(gen_1)),\n 'h': remove_spaces(' '.join(actual))}\n \n print(f\"Seed Sequence: {output['sequence']}\\n\")\n \n choices = ['h', 'c0', 'c1']\n \n selected = []\n i = 0\n while len(selected) < 3:\n choice = 
random.choice(choices)\n selected.append(choice)\n print('\\n')\n print(f'Option {i + 1} {output[choice]}')\n choices.remove(selected[-1])\n i += 1\n \n print('\\n')\n guess = int(input('Enter option you think is human (1-3): ')) - 1\n print('\\n')\n \n if guess == np.where(np.array(selected) == 'h')[0][0]:\n print('Correct')\n print('Correct Ordering', selected)\n else:\n print('Incorrect')\n print('Correct Ordering', selected)\n \n print('Diversity', round(diversity, 2))", "_____no_output_____" ], [ "guess_human(model, sequences)", "_____no_output_____" ], [ "guess_human(model, sequences)", "_____no_output_____" ] ], [ [ "# Conclusions\n\nIn this notebook, we saw how to build a recurrent neural network and used it to generate patent abstracts. Although the output is not always believable, this project gives us practice handling text sequences with neural networks. Deep learning has some advantages compared to traditional machine learning, especially in areas of computer vision and natural language processing. Hopefully you are now confident harnessing these powerful techniques to solve your own text problems! \n\nThis project covered a number of steps for working with text data including:\n\n1. Cleaning data using regular expressions\n2. Preparing data for a neural network\n * Converting text strings to integers (tokenization)\n * Encoding labels using one-hot encoding\n * Building training and validation sets\n3. Building a recurrent neural network using LSTM cells\n4. Using pre-trained word embeddings and training our own embeddings\n5. Adjusting model parameters to improve performance\n6. Inspecting model results\n\nAlthough we didn't cover the theory in depth, we did see the implementation, which means we now have a framework to fit the concepts we study. 
Technical topics are best learned through practice, and this project gave us a great opportunity to explore the frontiers of natural language processing with deep learning.", "_____no_output_____" ], [ "# Appendix I: Training with A Data Generator", "_____no_output_____" ] ], [ [ "def data_gen(sequences, labels, batch_size, num_words):\n \"\"\"Yield batches for training\"\"\"\n i = 0\n while True:\n \n # Reset once all examples have been used\n if i + batch_size > len(labels):\n i = 0\n \n X = np.array(sequences[i: i + batch_size])\n \n # Create array of zeros for labels\n y = np.zeros((BATCH_SIZE, num_words))\n # Extract integer labels\n ys = labels[i: i + batch_size]\n \n # Convert to one hot representation\n for example_num, word_num in enumerate(ys):\n y[example_num, word_num] = 1\n yield X, y\n \n i += batch_size\n gc.collect()\n \ndef create_train_valid_gen(features, labels, batch_size, num_words):\n \"\"\"Create training and validation generators for training\"\"\"\n \n # Randomly shuffle features and labels\n features, labels = shuffle(features, labels, random_state = RANDOM_STATE)\n\n # Decide on number of samples for training\n train_end = int(0.7 * len(labels))\n \n train_features = np.array(features[:train_end])\n valid_features = np.array(features[train_end:])\n\n train_labels = labels[:train_end]\n valid_labels = labels[train_end:]\n \n # Make training and validation generators\n train_gen = data_gen(train_features, train_labels, batch_size, num_words)\n valid_gen = data_gen(valid_features, valid_labels, batch_size, num_words)\n \n return train_gen, valid_gen, train_end\n\nBATCH_SIZE = 2048\n\ntrain_gen, valid_gen, train_len = create_train_valid_gen(features, labels, BATCH_SIZE, num_words)\nX, y = next(train_gen)\n\ntrain_steps = train_len // BATCH_SIZE\nvalid_steps = (len(labels) - train_len) // BATCH_SIZE\n\nX.shape\ny.shape\n\ntrain_steps\nvalid_steps", "_____no_output_____" ], [ "history = model.fit_generator(train_gen, steps_per_epoch= 
train_steps, epochs = 2,\n callbacks=None, \n validation_data = valid_gen,\n validation_steps = valid_steps)", "_____no_output_____" ] ], [ [ "# Appendix II: Using a Keras Sequence for Training", "_____no_output_____" ] ], [ [ "from keras.utils import Sequence\n\nclass textSequence(Sequence):\n \"\"\"Keras Sequence for training with a generator.\"\"\"\n\n def __init__(self, x_set, y_set, batch_size, num_words):\n self.x, self.y = x_set, y_set\n self.batch_size = batch_size\n self.num_words = num_words\n\n def __len__(self):\n return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n def __getitem__(self, idx):\n batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n X = np.array(batch_x)\n y = np.zeros((len(batch_y), self.num_words))\n \n for example_idx, word_idx in enumerate(batch_y):\n y[example_idx, word_idx] = 1\n \n return X, y", "_____no_output_____" ], [ "# Decide on number of samples for training\ntrain_end = int(TRAIN_FRACTION * len(labels))\n\ntrain_features = np.array(features[:train_end])\nvalid_features = np.array(features[train_end:])\n\ntrain_labels = labels[:train_end]\nvalid_labels = labels[train_end:]\n\ntrain_sequence = textSequence(train_features, train_labels, 2048, num_words)\nvalid_sequence = textSequence(valid_features, valid_labels, 2048, num_words)", "_____no_output_____" ], [ "history = model.fit_generator(train_sequence, epochs = 2,\n callbacks=None, \n validation_data = valid_sequence,\n workers = 20)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbb6c40c82489bbd50801b756cd0f3141e396705
8,120
ipynb
Jupyter Notebook
MC/Blackjack Playground.ipynb
andytwigg/reinforcement-learning
b71371312fa18814547f7b12a19085dd73b383eb
[ "MIT" ]
85
2017-03-13T12:27:38.000Z
2021-08-03T23:00:48.000Z
MC/Blackjack Playground.ipynb
ketronmw/reinforcement-learning
762f34c98f6c1978c97d258beee3c20ff57537a4
[ "MIT" ]
5
2019-11-15T02:00:26.000Z
2021-01-06T04:26:40.000Z
MC/Blackjack Playground.ipynb
ketronmw/reinforcement-learning
762f34c98f6c1978c97d258beee3c20ff57537a4
[ "MIT" ]
25
2017-04-15T01:40:33.000Z
2021-11-23T02:54:10.000Z
35.614035
83
0.549877
[ [ [ "import numpy as np\nimport sys\nif \"../\" not in sys.path:\n sys.path.append(\"../\") \nfrom lib.envs.blackjack import BlackjackEnv", "_____no_output_____" ], [ "env = BlackjackEnv()", "_____no_output_____" ], [ "def print_observation(observation):\n score, dealer_score, usable_ace = observation\n print(\"Player Score: {} (Usable Ace: {}), Dealer Score: {}\".format(\n score, usable_ace, dealer_score))\n\ndef strategy(observation):\n score, dealer_score, usable_ace = observation\n # Stick (action 0) if the score is > 20, hit (action 1) otherwise\n return 0 if score >= 20 else 1\n\nfor i_episode in range(20):\n observation = env.reset()\n for t in range(100):\n print_observation(observation)\n action = strategy(observation)\n print(\"Taking action: {}\".format( [\"Stick\", \"Hit\"][action]))\n observation, reward, done, _ = env.step(action)\n if done:\n print_observation(observation)\n print(\"Game end. Reward: {}\\n\".format(float(reward)))\n break", "Player Score: 17 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 18 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 28 (Usable Ace: False), Dealer Score: 10\nGame end. Reward: -1.0\n\nPlayer Score: 6 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 16 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 26 (Usable Ace: False), Dealer Score: 9\nGame end. Reward: -1.0\n\nPlayer Score: 12 (Usable Ace: False), Dealer Score: 6\nTaking action: Hit\nPlayer Score: 21 (Usable Ace: False), Dealer Score: 6\nTaking action: Stick\nPlayer Score: 21 (Usable Ace: False), Dealer Score: 6\nGame end. Reward: 1.0\n\nPlayer Score: 17 (Usable Ace: True), Dealer Score: 8\nTaking action: Hit\nPlayer Score: 17 (Usable Ace: False), Dealer Score: 8\nTaking action: Hit\nPlayer Score: 22 (Usable Ace: False), Dealer Score: 8\nGame end. 
Reward: -1.0\n\nPlayer Score: 17 (Usable Ace: False), Dealer Score: 8\nTaking action: Hit\nPlayer Score: 27 (Usable Ace: False), Dealer Score: 8\nGame end. Reward: -1.0\n\nPlayer Score: 16 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 28 (Usable Ace: False), Dealer Score: 10\nGame end. Reward: -1.0\n\nPlayer Score: 13 (Usable Ace: False), Dealer Score: 7\nTaking action: Hit\nPlayer Score: 14 (Usable Ace: False), Dealer Score: 7\nTaking action: Hit\nPlayer Score: 24 (Usable Ace: False), Dealer Score: 7\nGame end. Reward: -1.0\n\nPlayer Score: 17 (Usable Ace: False), Dealer Score: 5\nTaking action: Hit\nPlayer Score: 25 (Usable Ace: False), Dealer Score: 5\nGame end. Reward: -1.0\n\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 5\nTaking action: Stick\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 5\nGame end. Reward: 1.0\n\nPlayer Score: 12 (Usable Ace: True), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 20 (Usable Ace: True), Dealer Score: 10\nTaking action: Stick\nPlayer Score: 20 (Usable Ace: True), Dealer Score: 10\nGame end. Reward: 0.0\n\nPlayer Score: 12 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 24 (Usable Ace: False), Dealer Score: 10\nGame end. Reward: -1.0\n\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 4\nTaking action: Hit\nPlayer Score: 22 (Usable Ace: False), Dealer Score: 4\nGame end. Reward: -1.0\n\nPlayer Score: 16 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 10\nTaking action: Stick\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 10\nGame end. 
Reward: 0.0\n\nPlayer Score: 4 (Usable Ace: False), Dealer Score: 3\nTaking action: Hit\nPlayer Score: 14 (Usable Ace: False), Dealer Score: 3\nTaking action: Hit\nPlayer Score: 24 (Usable Ace: False), Dealer Score: 3\nGame end. Reward: -1.0\n\nPlayer Score: 21 (Usable Ace: True), Dealer Score: 10\nTaking action: Stick\nPlayer Score: 21 (Usable Ace: True), Dealer Score: 10\nGame end. Reward: 1.0\n\nPlayer Score: 16 (Usable Ace: True), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 12 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 10\nTaking action: Stick\nPlayer Score: 20 (Usable Ace: False), Dealer Score: 10\nGame end. Reward: 1.0\n\nPlayer Score: 9 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 10\nTaking action: Hit\nPlayer Score: 26 (Usable Ace: False), Dealer Score: 10\nGame end. Reward: -1.0\n\nPlayer Score: 12 (Usable Ace: False), Dealer Score: 5\nTaking action: Hit\nPlayer Score: 15 (Usable Ace: False), Dealer Score: 5\nTaking action: Hit\nPlayer Score: 21 (Usable Ace: False), Dealer Score: 5\nTaking action: Stick\nPlayer Score: 21 (Usable Ace: False), Dealer Score: 5\nGame end. Reward: 1.0\n\nPlayer Score: 11 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 13 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 17 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 9\nTaking action: Hit\nPlayer Score: 29 (Usable Ace: False), Dealer Score: 9\nGame end. Reward: -1.0\n\nPlayer Score: 14 (Usable Ace: False), Dealer Score: 7\nTaking action: Hit\nPlayer Score: 19 (Usable Ace: False), Dealer Score: 7\nTaking action: Hit\nPlayer Score: 29 (Usable Ace: False), Dealer Score: 7\nGame end. Reward: -1.0\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbb6ce77586560fe3f9bc983ea8c45fc1bec4ae9
34,144
ipynb
Jupyter Notebook
notebooks/workwithtestdata.ipynb
VUIIS/CuBIDS
6a4be059b29eb27174cfd16c6c64aa840801ccfa
[ "MIT" ]
2
2021-03-05T20:12:15.000Z
2021-03-25T21:41:51.000Z
notebooks/workwithtestdata.ipynb
VUIIS/CuBIDS
6a4be059b29eb27174cfd16c6c64aa840801ccfa
[ "MIT" ]
48
2020-10-30T17:09:37.000Z
2021-06-15T16:53:59.000Z
notebooks/workwithtestdata.ipynb
VUIIS/CuBIDS
6a4be059b29eb27174cfd16c6c64aa840801ccfa
[ "MIT" ]
3
2021-07-28T14:13:55.000Z
2021-12-22T19:09:35.000Z
53.85489
1,716
0.601131
[ [ [ "from pathlib import Path\nimport os\nimport os.path as op\nfrom pkg_resources import resource_filename as pkgrf\nimport shutil\nimport cubids\nTEST_DATA = pkgrf(\"cubids\", \"testdata\")\n\ndef test_data(tmp_path):\n data_root = tmp_path / \"testdata\"\n shutil.copytree(TEST_DATA, str(data_root))\n assert len(list(data_root.rglob(\"*\"))) > 5\n return data_root\n\nworkdir = os.getcwd()\n\ndef copy_testing_data(dirname):\n newdir = op.join(workdir, dirname)\n os.makedirs(newdir)\n data_dir = test_data(Path(newdir))\n return data_dir\n\n# copy the data \ndata_root = copy_testing_data(\"test1\")", "_____no_output_____" ], [ "!rm -rf test1", "_____no_output_____" ] ], [ [ "# Test the key / param groups\n\nThis test copies the data and makes sure we get the correct number of key and parameter groups out of it\n", "_____no_output_____" ] ], [ [ "from cubids import CuBIDS\n\nbod = CuBIDS(str(first_test / \"complete\"))\nbod._cache_fieldmaps()", "100%|██████████| 6/6 [00:00<00:00, 268.30it/s]\n" ], [ "key_groups = bod.get_key_groups()\nprint(key_groups)", "['acquisition-HASC55AP_datatype-dwi_suffix-dwi', 'acquisition-v4_datatype-fmap_fmap-magnitude1_suffix-magnitude1', 'acquisition-v4_datatype-fmap_fmap-magnitude2_suffix-magnitude2', 'acquisition-v4_datatype-fmap_fmap-phasediff_suffix-phasediff', 'datatype-anat_suffix-T1w', 'datatype-fmap_direction-PA_fmap-epi_suffix-epi', 'datatype-func_suffix-bold_task-rest']\n" ], [ "ibod = CuBIDS(str(first_test / \"inconsistent\"))\nmisfits = ibod._cache_fieldmaps()\nlen(misfits)", "100%|██████████| 6/6 [00:00<00:00, 267.86it/s]\n" ], [ "ikey_groups = ibod.get_key_groups()", "_____no_output_____" ], [ "ikey_groups == key_groups", "_____no_output_____" ] ], [ [ "# Working with datalad\n\nHere we try to initialize a datalad repo on the test data", "_____no_output_____" ] ], [ [ "import datalad.api as dlapi\n\ndl = dlapi.create(path=first_test / \"inconsistent\", force=True)", "[INFO] Creating a new annex repo at 
/Users/mcieslak/projects/CuBIDS/notebooks/test1/testdata/inconsistent \n" ], [ "files_df, summary_df = bod.get_param_groups_dataframes()", "_____no_output_____" ], [ "%qtconsole", "_____no_output_____" ], [ "summary_df[[\"key_group\", \"ParamGroup\", \"Count\"]]", "_____no_output_____" ], [ "import pandas as pd\nparam_group_cols = list(set(df.columns.to_list()) - set([\"FilePath\"]))\nuniques = df.drop_duplicates(param_group_cols, ignore_index=True)\nprint(uniques.shape)\ncounts = df.groupby([\"key_group\", \"ParamGroup\"]).size().reset_index(name='Count')\nprint(counts.shape)\n\nparams_and_counts = pd.merge(uniques, counts)\nprint(params_and_counts.shape)", "_____no_output_____" ], [ "no_paths[[\"key_group\", \"ParamGroup\"]].groupby([\"key_group\", \"ParamGroup\"]).count()", "_____no_output_____" ], [ "keyparam_df.groupby([\"key_group\", \"ParamGroup\"]).size().reset_index(name='Count')", "_____no_output_____" ], [ "fname = 'sub-NDARAT581NDH/ses-HBNsiteRU/dwi/sub-NDARAT581NDH_ses-HBNsiteRU_acq-64dir_dwi.nii.gz'", "_____no_output_____" ], [ "bod.get_key_groups()", "_____no_output_____" ], [ "self = bod\n", "_____no_output_____" ], [ "from cubids.cubids import *\nsuffix = '(phase1|phasediff|epi|fieldmap)'\nfmap_files = self.layout.get(suffix=suffix, regex_search=True,\n extension=['.nii.gz', '.nii'])\n\nfiles_to_fmaps = defaultdict(list)\n\nprint(\"\\n\".join([f.path for f in fmap_files]))", "_____no_output_____" ], [ "\"\"\"\nfor fmap_file in tqdm(fmap_files):\n intentions = listify(fmap_file.get_metadata().get(\"IntendedFor\"))\n subject_prefix = \"sub-%s/\" % fmap_file.entities['subject']\n for intended_for in intentions:\n subject_relative_path = subject_prefix + intended_for\n files_to_fmaps[subject_relative_path].append(fmap_file)\n\"\"\"\nfmap_file = fmap_files[0]\nintentions = listify(fmap_file.get_metadata().get(\"IntendedFor\"))\nprint(\"intentions:\", intentions)\nsubject_prefix = \"sub-%s/\" % fmap_file.entities['subject']\nprint(subject_prefix)", 
"_____no_output_____" ], [ "suffix = '(phase1|phasediff|epi|fieldmap)'\nfmap_files = self.layout.get(suffix=suffix, regex_search=True,\n extension=['.nii.gz', '.nii'])\n\nfiles_to_fmaps = defaultdict(list)\nfor fmap_file in tqdm(fmap_files):\n intentions = listify(fmap_file.get_metadata().get(\"IntendedFor\"))\n subject_prefix = \"sub-%s\" % fmap_file.entities['subject']\n for intended_for in intentions:\n full_path = Path(self.path) / subject_prefix / intended_for\n files_to_fmaps[str(full_path)].append(fmap_file)", "_____no_output_____" ], [ "for data_file, fmap_files in bod.fieldmap_lookup.items():\n print(data_file[44:])\n for fmap_file in fmap_files:\n print(\" \", fmap_file.path[44:])", "_____no_output_____" ], [ "files_to_fmaps.keys()", "_____no_output_____" ], [ "from cubids.cubids import *\nfiles = [\n '/Users/mcieslak/projects/test_bids_data/HBN/sub-NDARAT581NDH/ses-HBNsiteRU/dwi/sub-NDARAT581NDH_ses-HBNsiteRU_acq-64dir_dwi.nii.gz', \n '/Users/mcieslak/projects/test_bids_data/HBN/sub-NDARRP384BVX/ses-HBNsiteRU/dwi/sub-NDARRP384BVX_ses-HBNsiteRU_acq-64dir_dwi.nii.gz']\n\ndfs = []\nfieldmap_lookup = bod.fieldmap_lookup\nkey_group_name = \"test\"\n# path needs to be relative to the root with no leading prefix\nfor path in files:\n metadata = bod.layout.get_metadata(path)\n wanted_keys = metadata.keys() & IMAGING_PARAMS\n example_data = {key: metadata[key] for key in wanted_keys}\n example_data[\"key_group\"] = key_group_name\n\n # Get the fieldmaps out and add their types\n print(fieldmap_lookup[path])\n fieldmap_types = sorted([fmap.entities['fmap'] for fmap in fieldmap_lookup[path]])\n for fmap_num, fmap_type in enumerate(fieldmap_types):\n example_data['fieldmap_type%02d' % fmap_num] = fmap_type \n\n # Expand slice timing to multiple columns\n SliceTime = example_data.get('SliceTiming')\n if SliceTime:\n # round each slice time to one place after the decimal\n for i in range(len(SliceTime)):\n SliceTime[i] = round(SliceTime[i], 1)\n example_data.update(\n 
{\"SliceTime%03d\" % SliceNum: time for\n SliceNum, time in enumerate(SliceTime)})\n del example_data['SliceTiming']\n\n dfs.append(example_data)", "_____no_output_____" ], [ "example_data", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb6e3b38a378bee4beb460528e6e9748217df69
1,696
ipynb
Jupyter Notebook
downloaded_kernels/loan_data/kernel_262.ipynb
josepablocam/common-code-extraction
a6978fae73eee8ece6f1db09f2f38cf92f03b3ad
[ "MIT" ]
null
null
null
downloaded_kernels/loan_data/kernel_262.ipynb
josepablocam/common-code-extraction
a6978fae73eee8ece6f1db09f2f38cf92f03b3ad
[ "MIT" ]
null
null
null
downloaded_kernels/loan_data/kernel_262.ipynb
josepablocam/common-code-extraction
a6978fae73eee8ece6f1db09f2f38cf92f03b3ad
[ "MIT" ]
2
2021-07-12T00:48:08.000Z
2021-08-11T12:53:05.000Z
84.8
1,307
0.689858
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nfrom collections import Counter\nimport matplotlib.pyplot as plt\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"../input\"]).decode(\"utf8\"))\n\nloan = pd.read_csv('../input/loan.csv')\n\n#loan.info()\nloan=loan[loan.loan_status!='Current']\nc=Counter(list(loan.loan_status))\nmmp={x[0]:1 for x in c.most_common(20)}\nmmp['Fully Paid']=0\nmmp['Does not meet the credit policy. Status:Fully Paid']=0\nmmp['Issued']=0\nloan['target']=loan['loan_status'].map(mmp)\n\ncl2=['term','grade','sub_grade','purpose']\n\nn=1\nfor i in cl2:\n plt.subplot(2,2,n)\n pd.pivot_table(loan, values='target', index=i).plot(kind='barh',alpha=0.5, figsize=(15, 10))\n n+=1 \n\n\n\n# Any results you write to the current directory are saved as output.", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cbb6f0aedecdbf252a02e10c07c5a9e8f29ace4c
68,306
ipynb
Jupyter Notebook
p3_submit/Tennis.ipynb
sajjad-ahmed/Udacity-DeepRL-Project-3-Collaboration-and-Competition
4d2f7a0417b9040eb6fb4f24388e611d892cc386
[ "MIT" ]
1
2021-03-16T17:16:40.000Z
2021-03-16T17:16:40.000Z
p3_submit/Tennis.ipynb
sajjad-ahmed/Udacity-DeepRL-Project-3-Collaboration-and-Competition
4d2f7a0417b9040eb6fb4f24388e611d892cc386
[ "MIT" ]
null
null
null
p3_submit/Tennis.ipynb
sajjad-ahmed/Udacity-DeepRL-Project-3-Collaboration-and-Competition
4d2f7a0417b9040eb6fb4f24388e611d892cc386
[ "MIT" ]
null
null
null
160.72
54,996
0.885398
[ [ [ "# Collaboration and Competition\n\n---\n\nYou are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!\n\n### 1. Start the Environment\n\nRun the next code cell to install a few packages. This line will take a few minutes to run!", "_____no_output_____" ] ], [ [ "!pip -q install ./python", "\u001b[31mtensorflow 1.7.1 has requirement numpy>=1.13.3, but you'll have numpy 1.12.1 which is incompatible.\u001b[0m\r\n\u001b[31mipython 6.5.0 has requirement prompt-toolkit<2.0.0,>=1.0.15, but you'll have prompt-toolkit 3.0.5 which is incompatible.\u001b[0m\r\n" ] ], [ [ "The environment is already saved in the Workspace and can be accessed at the file path provided below. ", "_____no_output_____" ] ], [ [ "from unityagents import UnityEnvironment\nimport numpy as np\n\nenv = UnityEnvironment(file_name=\"/data/Tennis_Linux_NoVis/Tennis\")", "INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: TennisBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 8\n Number of stacked Vector Observation: 3\n Vector Action space type: continuous\n Vector Action space size (per agent): 2\n Vector Action descriptions: , \n" ] ], [ [ "Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.", "_____no_output_____" ] ], [ [ "# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]", "_____no_output_____" ] ], [ [ "### 2. 
Examine the State and Action Spaces\n\nRun the code cell below to print some information about the environment.", "_____no_output_____" ] ], [ [ "# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents \nnum_agents = len(env_info.agents)\nprint('Number of agents:', num_agents)\n\n# size of each action\naction_size = brain.vector_action_space_size\nprint('Size of each action:', action_size)\n\n# examine the state space \nstates = env_info.vector_observations\nstate_size = states.shape[1]\nprint('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))\nprint('The state for the first agent looks like:', states[0])", "Number of agents: 2\nSize of each action: 2\nThere are 2 agents. Each observes a state with length: 24\nThe state for the first agent looks like: [ 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. -6.65278625 -1.5 -0. 0.\n 6.83172083 6. -0. 0. ]\n" ] ], [ [ "### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.\n\nNote that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.", "_____no_output_____" ] ], [ [ "for i in range(5): # play game for 5 episodes\n env_info = env.reset(train_mode=False)[brain_name] # reset the environment \n states = env_info.vector_observations # get the current state (for each agent)\n scores = np.zeros(num_agents) # initialize the score (for each agent)\n while True:\n actions = np.random.randn(num_agents, action_size) # select an action (for each agent)\n actions = np.clip(actions, -1, 1) # all actions between -1 and 1\n env_info = env.step(actions)[brain_name] # send all actions to tne environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get 
reward (for each agent)\n dones = env_info.local_done # see if episode finished\n scores += env_info.rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n if np.any(dones): # exit loop if episode finished\n break\n print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))", "Total score (averaged over agents) this episode: -0.004999999888241291\nTotal score (averaged over agents) this episode: -0.004999999888241291\nTotal score (averaged over agents) this episode: -0.004999999888241291\nTotal score (averaged over agents) this episode: 0.04500000085681677\nTotal score (averaged over agents) this episode: -0.004999999888241291\n" ] ], [ [ "When finished, you can close the environment.", "_____no_output_____" ], [ "### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! A few **important notes**:\n- When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```\n- To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.\n- In this coding environment, you will not be able to watch the agents while they are training. However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine! 
", "_____no_output_____" ], [ "# Import necessart packages", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nimport time, os\nfrom collections import deque\n", "_____no_output_____" ], [ "import torch\n\nfrom maddpg import MADDPG", "_____no_output_____" ] ], [ [ "# Instantiate agent", "_____no_output_____" ] ], [ [ "agent = MADDPG(seed=2, noise_start=0.5, update_every=2, gamma=1, t_stop_noise=30000)", "_____no_output_____" ], [ "episode_num = 6000\nmax_t = 1000\nscores = []\nscores_deque = deque(maxlen=100)\nscores_avg = []\n\nfor i_episode in range(1, episode_num + 1):\n rewards = []\n env_info = env.reset(train_mode=False)[brain_name]\n state = env_info.vector_observations\n\n for t in range(max_t):\n action = agent.act(state)\n env_info = env.step(action)[brain_name]\n next_state = env_info.vector_observations\n rewards_vec = env_info.rewards\n done = env_info.local_done\n agent.step(state, action, rewards_vec, next_state, done)\n state = next_state\n rewards.append(rewards_vec)\n if any(done):\n break\n\n episode_reward = np.max(np.sum(np.array(rewards), axis=0))\n scores.append(episode_reward)\n scores_deque.append(episode_reward)\n current_avg_score = np.mean(scores_deque)\n scores_avg.append(current_avg_score)\n print('\\rEpisode {}\\tAverage Score: {:.3f}'.format(i_episode, current_avg_score), end=\"\")\n\n if i_episode % 200 == 0:\n print('\\rEpisode {}\\tAverage Score: {:.3f}'.format(i_episode, current_avg_score))\n agent.save_agents()\n\n if np.mean(scores_deque) >= .5:\n print('\\nEnvironment solved in {:d} episodes!\\tAverage Score: {:.3f}'.format(i_episode, np.mean(scores_deque)))\n agent.save_agents()\n break\n", "Episode 200\tAverage Score: 0.020\nEpisode 400\tAverage Score: 0.000\nEpisode 600\tAverage Score: 0.043\nEpisode 800\tAverage Score: 0.006\nEpisode 1000\tAverage Score: 0.049\nEpisode 1200\tAverage Score: 0.080\nEpisode 1400\tAverage Score: 0.120\nEpisode 1600\tAverage Score: 0.191\nEpisode 
1800\tAverage Score: 0.257\nEpisode 2000\tAverage Score: 0.471\nEpisode 2200\tAverage Score: 0.387\nEpisode 2307\tAverage Score: 0.513\nEnvironment solved in 2307 episodes!\tAverage Score: 0.513\n" ] ], [ [ "# Training", "_____no_output_____" ] ], [ [ "%%time\nimport pandas as pd\npd.DataFrame({\"scores\":scores,\"scores_avg\":scores_avg}).to_csv(\"p3_score.csv\",index=False)", "CPU times: user 16.1 ms, sys: 3.57 ms, total: 19.7 ms\nWall time: 28.5 ms\n" ] ], [ [ "# Plotting", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(16,8))\nax = fig.add_subplot(111)\nplt.plot(np.arange(1, len(scores)+1), scores,'b',label='Episode Scores')\nplt.plot(np.arange(1, len(scores)+1), scores_avg,'y',\\\n linewidth=5,label='Avg. score of last 100 episodes')\nplt.ylabel('Score', fontsize=18)\nplt.xlabel('Episode no', fontsize=18)\nax.legend(fontsize=14)\nplt.show()", "_____no_output_____" ], [ "!tar -zcvf p3_.tar.gz * \n", "_____no_output_____" ] ], [ [ "# Closing env", "_____no_output_____" ] ], [ [ "env.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb70f415dbcb20120e711c5728915111b6564da
29,957
ipynb
Jupyter Notebook
podcast/tfidf.ipynb
undeadsushi/springboard_capstone
3bda587ef3ad34094cb10b7c20504280c401ff94
[ "MIT" ]
3
2016-06-19T13:20:34.000Z
2021-05-25T18:14:17.000Z
podcast/tfidf.ipynb
sheldonsmickley/springboard_capstone
3bda587ef3ad34094cb10b7c20504280c401ff94
[ "MIT" ]
null
null
null
podcast/tfidf.ipynb
sheldonsmickley/springboard_capstone
3bda587ef3ad34094cb10b7c20504280c401ff94
[ "MIT" ]
null
null
null
50.43266
155
0.652201
[ [ [ "import pandas as pd\nimport numpy as np\nimport os\nimport glob\nimport nltk.data\nfrom __future__ import division # Python 2 users only\nimport nltk, re, pprint\nfrom nltk import word_tokenize\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.metrics.pairwise import linear_kernel\nfrom nltk.corpus import stopwords\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom collections import Counter\n%matplotlib inline", "_____no_output_____" ], [ "translations = glob.glob('/Users/sheldon/completed_podcasts/*/*.txt')", "_____no_output_____" ], [ "translations = filter(lambda x: 'DONE' not in x, translations)\ntranslations = filter(lambda x: 'speech_notebook' not in x, translations)", "_____no_output_____" ], [ "translations", "_____no_output_____" ], [ "episode = [i.split('/')[5] for i in translations]\nseries = [i.split('/')[4] for i in translations]\nlocations = translations\ntranscribed = [open(i).read() for i in translations]", "_____no_output_____" ], [ "df = pd.DataFrame(data={'episode':episode,'series':series,'locations':locations,'transcribed':transcribed})", "_____no_output_____" ], [ "df['id'] = df.index", "_____no_output_____" ], [ "stop = set(stopwords.words('english'))\n\ndef tokenize_and_lower(textfile):\n tokens = word_tokenize(textfile)\n lower = [w.lower() for w in tokens]\n filtered_words = [word for word in lower if word not in stop]\n remove_contractions = [word for word in filtered_words if \"'\" not in word]\n remove_periods = [word for word in remove_contractions if \".\" not in word]\n count = Counter(remove_periods)\n return count\n \n#df['trans_token'] = df.transcribed.apply(tokenize_and_lower)\ndf['removed_stop_transcribed'] = df.transcribed.apply(tokenize_and_lower)\ntf = TfidfVectorizer(stop_words=stop)\ntfidf_matrix = tf.fit_transform(df['transcribed'])", "_____no_output_____" ], [ "tfidf_matrix", "_____no_output_____" ], [ "from sklearn.metrics.pairwise import linear_kernel\ncosine_similarities 
= linear_kernel(tfidf_matrix, tfidf_matrix)\n", "_____no_output_____" ], [ "def get_related_podcasts(podcast_number,number_of_similarities):\n cosine_similarities = linear_kernel(tfidf_matrix, tfidf_matrix)\n related_pod_index = cosine_similarities.argsort()[podcast_number][::-1]\n pod_dict = dict(zip(range(0, len(related_pod_index)),related_pod_index))\n pod_dict = pd.DataFrame({'rank':pod_dict.keys()},index=pod_dict.values())\n related_podcasts_df = pd.DataFrame.join(pod_dict, df, how='inner')\n final_df = related_podcasts_df.sort_values('rank')[0:number_of_similarities+1][['rank','episode','series']]\n return final_df\n\ndef get_related_podcasts_query(query, number_of_similarities):\n query = query.lower()\n query = query.split()\n tfidf_matrix_test = tf.fit_transform(query)\n tfidf_matrix_train = tf.transform(df['transcribed'])\n tfidf_matrix_train.todense()\n tfidf_matrix_test.todense()\n query_similarities = linear_kernel(tfidf_matrix_test, tfidf_matrix_train)\n query_similarities = query_similarities.argsort()[0][::-1]\n pod_dict = dict(zip(range(0, len(query_similarities)),query_similarities))\n pod_dict = pd.DataFrame({'rank':pod_dict.keys()},index=pod_dict.values())\n related_podcasts_df = pd.DataFrame.join(pod_dict, df, how='inner')\n final_df = related_podcasts_df.sort_values('rank')[0:number_of_similarities+1][['rank','episode','series']]\n return final_df", "_____no_output_____" ], [ "get_related_podcasts_query('economics math statistics',5)", "_____no_output_____" ], [ "get_related_podcasts(17,5)", "_____no_output_____" ] ], [ [ "## Compute for queries", "_____no_output_____" ] ], [ [ "query = ['python tim ferris']\nvectorizer = TfidfVectorizer(stop_words='english')\ntfidf_matrix_test = tf.fit_transform(query)\ntfid_matrix_train = tfidf_matrix.todense()\ntfidf_matrix_test.todense()\ncosine_similarities = linear_kernel(tfidf_matrix_test, tfidf_matrix_train)\ncosine_similarities= cosine_similarities.argsort()[::-1]\ncosine_similarities", 
"_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb7271decc857edcdfee2569ead9b85029f8a1f
3,096
ipynb
Jupyter Notebook
driveway_classifier.ipynb
davecampbell/fastai-binder-app-template
7419566265359553b261f2f6ab1c532640e963e9
[ "MIT" ]
null
null
null
driveway_classifier.ipynb
davecampbell/fastai-binder-app-template
7419566265359553b261f2f6ab1c532640e963e9
[ "MIT" ]
null
null
null
driveway_classifier.ipynb
davecampbell/fastai-binder-app-template
7419566265359553b261f2f6ab1c532640e963e9
[ "MIT" ]
null
null
null
21.957447
120
0.545543
[ [ [ "from fastai.vision.all import *\nfrom fastai.vision.widgets import *", "_____no_output_____" ] ], [ [ "# The Amazing Driveway Classifier!", "_____no_output_____" ], [ "yes, a driveway classifier.\n\n----", "_____no_output_____" ] ], [ [ "path = Path()\nlearn_inf = load_learner(path/'export.pkl', cpu=True)\nbtn_upload = widgets.FileUpload()\nout_pl = widgets.Output()\nlbl_pred = widgets.Label()", "_____no_output_____" ], [ "def on_data_change(change):\n lbl_pred.value = ''\n img = PILImage.create(btn_upload.data[-1])\n out_pl.clear_output()\n with out_pl: display(img.to_thumb(128,128))\n pred,pred_idx,probs = learn_inf.predict(img)\n lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'", "_____no_output_____" ], [ "btn_upload.observe(on_data_change, names=['data'])", "_____no_output_____" ], [ "display(VBox([widgets.Label('Select your driveway!'), btn_upload, out_pl, lbl_pred]))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
cbb72b1b47ac71b6df7a9c4ea442a70414ab164f
187,194
ipynb
Jupyter Notebook
Advanced House Price Prediction/House_Price_Prediction_using_PyTorch.ipynb
aryamanak10/Data-Science-Learning
d74ae72149d300f80e47ffdcb76caa5fed02338b
[ "MIT" ]
1
2020-08-14T07:17:54.000Z
2020-08-14T07:17:54.000Z
Advanced House Price Prediction/House_Price_Prediction_using_PyTorch.ipynb
aryamanak10/Data-Science-Learning
d74ae72149d300f80e47ffdcb76caa5fed02338b
[ "MIT" ]
null
null
null
Advanced House Price Prediction/House_Price_Prediction_using_PyTorch.ipynb
aryamanak10/Data-Science-Learning
d74ae72149d300f80e47ffdcb76caa5fed02338b
[ "MIT" ]
null
null
null
38.132817
16,554
0.411146
[ [ [ "## Kaggle Advance House Price Prediction Using PyTorch\n\n* https://docs.fast.ai/tabular.html\n\n* https://www.fast.ai/2018/04/29/categorical-embeddings/\n\n* https://yashuseth.blog/2018/07/22/pytorch-neural-network-for-tabular-data-with-categorical-embeddings/", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "### Importing the Dataset", "_____no_output_____" ] ], [ [ "df=pd.read_csv('houseprice.csv',usecols=[\"SalePrice\", \"MSSubClass\", \"MSZoning\", \"LotFrontage\", \"LotArea\",\n \"Street\", \"YearBuilt\", \"LotShape\", \"1stFlrSF\", \"2ndFlrSF\"]).dropna()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 1201 entries, 0 to 1459\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 MSSubClass 1201 non-null int64 \n 1 MSZoning 1201 non-null object \n 2 LotFrontage 1201 non-null float64\n 3 LotArea 1201 non-null int64 \n 4 Street 1201 non-null object \n 5 LotShape 1201 non-null object \n 6 YearBuilt 1201 non-null int64 \n 7 1stFlrSF 1201 non-null int64 \n 8 2ndFlrSF 1201 non-null int64 \n 9 SalePrice 1201 non-null int64 \ndtypes: float64(1), int64(6), object(3)\nmemory usage: 103.2+ KB\n" ] ], [ [ "### Unique Values in the Columns", "_____no_output_____" ] ], [ [ "for i in df.columns:\n print(\"Column name {} and unique values are {}\".format(i,len(df[i].unique())))", "Column name MSSubClass and unique values are 15\nColumn name MSZoning and unique values are 5\nColumn name LotFrontage and unique values are 110\nColumn name LotArea and unique values are 869\nColumn name Street and unique values are 2\nColumn name LotShape and unique values are 4\nColumn name YearBuilt and unique values are 112\nColumn name 1stFlrSF and unique values are 678\nColumn name 2ndFlrSF and unique values are 368\nColumn name SalePrice and unique values are 597\n" ] ], [ 
[ "### Derived Features", "_____no_output_____" ] ], [ [ "import datetime\ndatetime.datetime.now().year", "_____no_output_____" ], [ "df['Total Years']=datetime.datetime.now().year-df['YearBuilt']", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.drop(\"YearBuilt\",axis=1,inplace=True)", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ] ], [ [ "### Creating my Categorical Features", "_____no_output_____" ] ], [ [ "cat_features=[\"MSSubClass\", \"MSZoning\", \"Street\", \"LotShape\"]\nout_feature=\"SalePrice\"", "_____no_output_____" ], [ "df[\"MSSubClass\"].unique()", "_____no_output_____" ] ], [ [ "### Converting the categorical feature", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelEncoder\nlbl_encoders={}\nlbl_encoders[\"MSSubClass\"]=LabelEncoder()\nlbl_encoders[\"MSSubClass\"].fit_transform(df[\"MSSubClass\"])", "_____no_output_____" ], [ "lbl_encoders", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelEncoder\nlbl_encoders={}\nfor feature in cat_features:\n lbl_encoders[feature]=LabelEncoder()\n df[feature]=lbl_encoders[feature].fit_transform(df[feature])", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "### Stacking and Converting Into Tensors", "_____no_output_____" ] ], [ [ "import numpy as np\n\ncat_features=np.stack([df['MSSubClass'],df['MSZoning'],df['Street'],df['LotShape']],1)\ncat_features", "_____no_output_____" ] ], [ [ "### Convert numpy to Tensors", "_____no_output_____" ], [ "**Note: CATEGORICAL FEATURES CAN NEVER BY CONVERTED TO FLOAT**", "_____no_output_____" ] ], [ [ "import torch\ncat_features=torch.tensor(cat_features,dtype=torch.int64)\ncat_features", "_____no_output_____" ] ], [ [ "### Creating continuous variables", "_____no_output_____" ] ], [ [ "cont_features=[]\nfor i in df.columns:\n if i in [\"MSSubClass\", \"MSZoning\", \"Street\", \"LotShape\",\"SalePrice\"]:\n pass\n else:\n cont_features.append(i)", "_____no_output_____" ], 
[ "cont_features", "_____no_output_____" ] ], [ [ "### Stacking continuous variables to a tensor", "_____no_output_____" ] ], [ [ "cont_values=np.stack([df[i].values for i in cont_features],axis=1)\ncont_values=torch.tensor(cont_values,dtype=torch.float)\ncont_values", "_____no_output_____" ], [ "cont_values.dtype", "_____no_output_____" ] ], [ [ "### Dependent Feature", "_____no_output_____" ] ], [ [ "y=torch.tensor(df['SalePrice'].values,dtype=torch.float).reshape(-1,1)\ny", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 1201 entries, 0 to 1459\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 MSSubClass 1201 non-null int64 \n 1 MSZoning 1201 non-null int64 \n 2 LotFrontage 1201 non-null float64\n 3 LotArea 1201 non-null int64 \n 4 Street 1201 non-null int64 \n 5 LotShape 1201 non-null int64 \n 6 1stFlrSF 1201 non-null int64 \n 7 2ndFlrSF 1201 non-null int64 \n 8 SalePrice 1201 non-null int64 \n 9 Total Years 1201 non-null int64 \ndtypes: float64(1), int64(9)\nmemory usage: 103.2 KB\n" ], [ "cat_features.shape,cont_values.shape,y.shape", "_____no_output_____" ], [ "len(df['MSSubClass'].unique())", "_____no_output_____" ] ], [ [ "## Embedding Size For Categorical columns", "_____no_output_____" ] ], [ [ "cat_dims=[len(df[col].unique()) for col in [\"MSSubClass\", \"MSZoning\", \"Street\", \"LotShape\"]]", "_____no_output_____" ], [ "cat_dims", "_____no_output_____" ] ], [ [ "### Dimension of Output from the Embedding Layer\n\n* Output dimension should be set based on the input dimension\n\n* Should be min(50, feature dimension/2)\n\n* **Not more than 50 categorical values can be used**", "_____no_output_____" ] ], [ [ "embedding_dim= [(x, min(50, (x + 1) // 2)) for x in cat_dims]", "_____no_output_____" ], [ "embedding_dim", "_____no_output_____" ] ], [ [ "## Creating an Embedding Layer inside the Neural Network", "_____no_output_____" ], [ "* ModuleList is used 
because the four embedding layers (one per categorical feature) must be registered as submodules so that their parameters are tracked.\n\n* nn.Embedding creates each embedding layer inside the list comprehension", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nembed_representation=nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim])\nembed_representation", "_____no_output_____" ], [ "cat_features", "_____no_output_____" ], [ "cat_featuresz=cat_features[:4]\ncat_featuresz", "_____no_output_____" ], [ "pd.set_option('display.max_rows', 500)\nembedding_val=[]\nfor i,e in enumerate(embed_representation):\n    embedding_val.append(e(cat_features[:,i]))", "_____no_output_____" ], [ "embedding_val", "_____no_output_____" ], [ "len(embedding_val[0][0])", "_____no_output_____" ] ], [ [ "### Stacking the embedded values column-wise", "_____no_output_____" ] ], [ [ "z = torch.cat(embedding_val, 1)\nz", "_____no_output_____" ] ], [ [ "### Implement dropout - Regularization Method (Prevents Overfitting)", "_____no_output_____" ] ], [ [ "# 40% of the values are dropped out.\n\ndropout=nn.Dropout(.4)", "_____no_output_____" ], [ "final_embed=dropout(z)\nfinal_embed", "_____no_output_____" ] ], [ [ "## Create a Feed Forward Neural Network", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nclass FeedForwardNN(nn.Module):\n\n    def __init__(self, embedding_dim, n_cont, out_sz, layers, p=0.5):\n        super().__init__()\n        self.embeds = nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim])\n        self.emb_drop = nn.Dropout(p)\n        self.bn_cont = nn.BatchNorm1d(n_cont)\n        \n        layerlist = []\n        n_emb = sum((out for inp,out in embedding_dim))\n\n        # Input features = embedding outputs + continuous variables\n        n_in = n_emb + n_cont\n        \n        for i in layers:\n            layerlist.append(nn.Linear(n_in,i)) \n            layerlist.append(nn.ReLU(inplace=True))\n            layerlist.append(nn.BatchNorm1d(i))\n            layerlist.append(nn.Dropout(p))\n            n_in = i\n        
layerlist.append(nn.Linear(layers[-1],out_sz))\n        \n        self.layers = nn.Sequential(*layerlist)\n        \n    def forward(self, x_cat, x_cont):\n        embeddings = []\n        for i,e in enumerate(self.embeds):\n            embeddings.append(e(x_cat[:,i]))\n        x = torch.cat(embeddings, 1)\n        x = self.emb_drop(x)\n        \n        x_cont = self.bn_cont(x_cont)\n        x = torch.cat([x, x_cont], 1)\n        x = self.layers(x)\n        return x", "_____no_output_____" ], [ "len(cont_features)", "_____no_output_____" ], [ "torch.manual_seed(100)\nmodel=FeedForwardNN(embedding_dim,len(cont_features),1,[100,50],p=0.4)", "_____no_output_____" ] ], [ [ "* ReLU is used as the activation in the hidden layers; no activation is applied to the output layer because this is a regression problem.", "_____no_output_____" ] ], [ [ "model", "_____no_output_____" ] ], [ [ "### Define Loss And Optimizer", "_____no_output_____" ] ], [ [ "model.parameters", "_____no_output_____" ], [ "# MSE is later converted to Root Mean Squared Error\n\nloss_function=nn.MSELoss()\noptimizer=torch.optim.Adam(model.parameters(),lr=0.01)", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "cont_values", "_____no_output_____" ], [ "cont_values.shape", "_____no_output_____" ], [ "batch_size=1200\ntest_size=int(batch_size*0.15)\ntrain_categorical=cat_features[:batch_size-test_size]\ntest_categorical=cat_features[batch_size-test_size:batch_size]\ntrain_cont=cont_values[:batch_size-test_size]\ntest_cont=cont_values[batch_size-test_size:batch_size]\ny_train=y[:batch_size-test_size]\ny_test=y[batch_size-test_size:batch_size]", "_____no_output_____" ], [ "len(train_categorical),len(test_categorical),len(train_cont),len(test_cont),len(y_train),len(y_test)", "_____no_output_____" ], [ "epochs=5000\nfinal_losses=[]\nfor i in range(epochs):\n    i=i+1\n    y_pred=model(train_categorical,train_cont)\n\n    # RMSE; store the scalar via .item() so the autograd graph is not kept around\n    loss=torch.sqrt(loss_function(y_pred,y_train))\n    final_losses.append(loss.item())\n    if i%10==1:\n        print(\"Epoch number: {} and the loss : {}\".format(i,loss.item()))\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()", "Epoch number: 1 and 
the loss : 200496.78125\nEpoch number: 11 and the loss : 200493.4375\nEpoch number: 21 and the loss : 200489.15625\nEpoch number: 31 and the loss : 200482.609375\nEpoch number: 41 and the loss : 200473.265625\nEpoch number: 51 and the loss : 200461.359375\nEpoch number: 61 and the loss : 200446.390625\nEpoch number: 71 and the loss : 200429.359375\nEpoch number: 81 and the loss : 200408.0\nEpoch number: 91 and the loss : 200383.421875\nEpoch number: 101 and the loss : 200355.3125\nEpoch number: 111 and the loss : 200322.125\nEpoch number: 121 and the loss : 200291.40625\nEpoch number: 131 and the loss : 200252.0\nEpoch number: 141 and the loss : 200206.625\nEpoch number: 151 and the loss : 200162.25\nEpoch number: 161 and the loss : 200112.25\nEpoch number: 171 and the loss : 200059.796875\nEpoch number: 181 and the loss : 200005.8125\nEpoch number: 191 and the loss : 199946.1875\nEpoch number: 201 and the loss : 199881.546875\nEpoch number: 211 and the loss : 199815.828125\nEpoch number: 221 and the loss : 199737.25\nEpoch number: 231 and the loss : 199669.640625\nEpoch number: 241 and the loss : 199588.796875\nEpoch number: 251 and the loss : 199505.765625\nEpoch number: 261 and the loss : 199411.03125\nEpoch number: 271 and the loss : 199326.203125\nEpoch number: 281 and the loss : 199244.265625\nEpoch number: 291 and the loss : 199140.5\nEpoch number: 301 and the loss : 199026.96875\nEpoch number: 311 and the loss : 198931.984375\nEpoch number: 321 and the loss : 198844.59375\nEpoch number: 331 and the loss : 198694.125\nEpoch number: 341 and the loss : 198605.078125\nEpoch number: 351 and the loss : 198504.15625\nEpoch number: 361 and the loss : 198383.453125\nEpoch number: 371 and the loss : 198243.875\nEpoch number: 381 and the loss : 198103.96875\nEpoch number: 391 and the loss : 198017.0\nEpoch number: 401 and the loss : 197889.40625\nEpoch number: 411 and the loss : 197730.28125\nEpoch number: 421 and the loss : 197594.484375\nEpoch number: 431 and the 
loss : 197419.1875\nEpoch number: 441 and the loss : 197284.4375\nEpoch number: 451 and the loss : 197177.65625\nEpoch number: 461 and the loss : 196967.453125\nEpoch number: 471 and the loss : 196880.796875\nEpoch number: 481 and the loss : 196704.875\nEpoch number: 491 and the loss : 196503.734375\nEpoch number: 501 and the loss : 196432.296875\nEpoch number: 511 and the loss : 196208.109375\nEpoch number: 521 and the loss : 196045.890625\nEpoch number: 531 and the loss : 195880.671875\nEpoch number: 541 and the loss : 195663.84375\nEpoch number: 551 and the loss : 195497.546875\nEpoch number: 561 and the loss : 195290.671875\nEpoch number: 571 and the loss : 195074.1875\nEpoch number: 581 and the loss : 194839.453125\nEpoch number: 591 and the loss : 194691.296875\nEpoch number: 601 and the loss : 194530.453125\nEpoch number: 611 and the loss : 194272.6875\nEpoch number: 621 and the loss : 194122.125\nEpoch number: 631 and the loss : 193830.890625\nEpoch number: 641 and the loss : 193685.484375\nEpoch number: 651 and the loss : 193475.65625\nEpoch number: 661 and the loss : 193223.625\nEpoch number: 671 and the loss : 193058.03125\nEpoch number: 681 and the loss : 192778.78125\nEpoch number: 691 and the loss : 192498.359375\nEpoch number: 701 and the loss : 192299.3125\nEpoch number: 711 and the loss : 192207.765625\nEpoch number: 721 and the loss : 192026.625\nEpoch number: 731 and the loss : 191636.6875\nEpoch number: 741 and the loss : 191512.1875\nEpoch number: 751 and the loss : 191196.5625\nEpoch number: 761 and the loss : 190986.65625\nEpoch number: 771 and the loss : 190729.53125\nEpoch number: 781 and the loss : 190464.515625\nEpoch number: 791 and the loss : 190280.671875\nEpoch number: 801 and the loss : 189874.28125\nEpoch number: 811 and the loss : 189753.3125\nEpoch number: 821 and the loss : 189661.359375\nEpoch number: 831 and the loss : 189288.84375\nEpoch number: 841 and the loss : 188941.234375\nEpoch number: 851 and the loss : 
188843.421875\nEpoch number: 861 and the loss : 188393.609375\nEpoch number: 871 and the loss : 188171.140625\nEpoch number: 881 and the loss : 187892.8125\nEpoch number: 891 and the loss : 187477.875\nEpoch number: 901 and the loss : 187439.59375\nEpoch number: 911 and the loss : 187233.203125\nEpoch number: 921 and the loss : 186804.46875\nEpoch number: 931 and the loss : 186380.671875\nEpoch number: 941 and the loss : 186142.625\nEpoch number: 951 and the loss : 185960.546875\nEpoch number: 961 and the loss : 185525.21875\nEpoch number: 971 and the loss : 185286.046875\nEpoch number: 981 and the loss : 185261.734375\nEpoch number: 991 and the loss : 184791.046875\nEpoch number: 1001 and the loss : 184303.5625\nEpoch number: 1011 and the loss : 184003.046875\nEpoch number: 1021 and the loss : 183797.140625\nEpoch number: 1031 and the loss : 183369.8125\nEpoch number: 1041 and the loss : 183233.171875\nEpoch number: 1051 and the loss : 182897.15625\nEpoch number: 1061 and the loss : 182756.75\nEpoch number: 1071 and the loss : 182305.390625\nEpoch number: 1081 and the loss : 181908.6875\nEpoch number: 1091 and the loss : 181493.75\nEpoch number: 1101 and the loss : 181296.46875\nEpoch number: 1111 and the loss : 180749.3125\nEpoch number: 1121 and the loss : 180474.96875\nEpoch number: 1131 and the loss : 180411.40625\nEpoch number: 1141 and the loss : 179917.796875\nEpoch number: 1151 and the loss : 179520.40625\nEpoch number: 1161 and the loss : 179362.703125\nEpoch number: 1171 and the loss : 178925.71875\nEpoch number: 1181 and the loss : 178540.375\nEpoch number: 1191 and the loss : 178103.1875\nEpoch number: 1201 and the loss : 178084.953125\nEpoch number: 1211 and the loss : 177440.71875\nEpoch number: 1221 and the loss : 177459.90625\nEpoch number: 1231 and the loss : 176569.65625\nEpoch number: 1241 and the loss : 176320.421875\nEpoch number: 1251 and the loss : 175505.015625\nEpoch number: 1261 and the loss : 175565.609375\nEpoch number: 1271 and the 
loss : 175246.015625\nEpoch number: 1281 and the loss : 174914.109375\nEpoch number: 1291 and the loss : 174471.265625\nEpoch number: 1301 and the loss : 174210.421875\nEpoch number: 1311 and the loss : 173658.546875\nEpoch number: 1321 and the loss : 173111.765625\nEpoch number: 1331 and the loss : 173405.9375\nEpoch number: 1341 and the loss : 172628.484375\nEpoch number: 1351 and the loss : 172276.0625\nEpoch number: 1361 and the loss : 172004.640625\nEpoch number: 1371 and the loss : 171816.234375\nEpoch number: 1381 and the loss : 171322.734375\nEpoch number: 1391 and the loss : 170380.109375\nEpoch number: 1401 and the loss : 170471.953125\nEpoch number: 1411 and the loss : 169653.84375\nEpoch number: 1421 and the loss : 169844.546875\nEpoch number: 1431 and the loss : 169650.515625\nEpoch number: 1441 and the loss : 168659.609375\nEpoch number: 1451 and the loss : 168564.0625\nEpoch number: 1461 and the loss : 167951.859375\nEpoch number: 1471 and the loss : 167613.015625\nEpoch number: 1481 and the loss : 167345.875\nEpoch number: 1491 and the loss : 166591.0625\nEpoch number: 1501 and the loss : 166339.859375\nEpoch number: 1511 and the loss : 165579.453125\nEpoch number: 1521 and the loss : 165820.1875\nEpoch number: 1531 and the loss : 165515.5\nEpoch number: 1541 and the loss : 164851.640625\nEpoch number: 1551 and the loss : 164743.109375\nEpoch number: 1561 and the loss : 163687.5625\nEpoch number: 1571 and the loss : 163462.453125\nEpoch number: 1581 and the loss : 163085.703125\nEpoch number: 1591 and the loss : 162952.78125\nEpoch number: 1601 and the loss : 162482.375\nEpoch number: 1611 and the loss : 161681.515625\nEpoch number: 1621 and the loss : 160775.25\nEpoch number: 1631 and the loss : 160810.6875\nEpoch number: 1641 and the loss : 160633.109375\nEpoch number: 1651 and the loss : 160130.4375\nEpoch number: 1661 and the loss : 160152.375\nEpoch number: 1671 and the loss : 159077.078125\nEpoch number: 1681 and the loss : 158377.4375\nEpoch 
number: 1691 and the loss : 158333.84375\nEpoch number: 1701 and the loss : 157657.8125\nEpoch number: 1711 and the loss : 157348.734375\nEpoch number: 1721 and the loss : 157138.125\nEpoch number: 1731 and the loss : 156711.859375\nEpoch number: 1741 and the loss : 156259.5\nEpoch number: 1751 and the loss : 156171.8125\nEpoch number: 1761 and the loss : 154753.453125\nEpoch number: 1771 and the loss : 154650.75\nEpoch number: 1781 and the loss : 154439.578125\nEpoch number: 1791 and the loss : 153716.875\nEpoch number: 1801 and the loss : 153738.609375\nEpoch number: 1811 and the loss : 152830.1875\nEpoch number: 1821 and the loss : 152309.46875\nEpoch number: 1831 and the loss : 152448.359375\nEpoch number: 1841 and the loss : 151628.09375\nEpoch number: 1851 and the loss : 150717.875\nEpoch number: 1861 and the loss : 150833.5\nEpoch number: 1871 and the loss : 150570.0625\nEpoch number: 1881 and the loss : 149636.09375\nEpoch number: 1891 and the loss : 148813.921875\nEpoch number: 1901 and the loss : 148658.140625\nEpoch number: 1911 and the loss : 148903.59375\nEpoch number: 1921 and the loss : 147781.328125\nEpoch number: 1931 and the loss : 147365.109375\nEpoch number: 1941 and the loss : 146713.953125\nEpoch number: 1951 and the loss : 146620.515625\nEpoch number: 1961 and the loss : 146007.59375\nEpoch number: 1971 and the loss : 145559.03125\nEpoch number: 1981 and the loss : 145807.78125\nEpoch number: 1991 and the loss : 144714.609375\nEpoch number: 2001 and the loss : 144213.703125\nEpoch number: 2011 and the loss : 143075.03125\nEpoch number: 2021 and the loss : 143022.25\nEpoch number: 2031 and the loss : 142567.484375\nEpoch number: 2041 and the loss : 142025.078125\nEpoch number: 2051 and the loss : 141205.09375\nEpoch number: 2061 and the loss : 140984.875\nEpoch number: 2071 and the loss : 140998.0625\nEpoch number: 2081 and the loss : 139757.03125\nEpoch number: 2091 and the loss : 139764.6875\nEpoch number: 2101 and the loss : 
139943.484375\nEpoch number: 2111 and the loss : 138724.109375\nEpoch number: 2121 and the loss : 137914.53125\nEpoch number: 2131 and the loss : 137977.15625\nEpoch number: 2141 and the loss : 137237.765625\nEpoch number: 2151 and the loss : 136609.03125\nEpoch number: 2161 and the loss : 136455.21875\nEpoch number: 2171 and the loss : 136887.25\nEpoch number: 2181 and the loss : 135967.78125\nEpoch number: 2191 and the loss : 134839.515625\nEpoch number: 2201 and the loss : 134191.625\nEpoch number: 2211 and the loss : 133769.5625\nEpoch number: 2221 and the loss : 132845.96875\nEpoch number: 2231 and the loss : 132616.359375\nEpoch number: 2241 and the loss : 131460.5\nEpoch number: 2251 and the loss : 131620.890625\nEpoch number: 2261 and the loss : 130279.0234375\nEpoch number: 2271 and the loss : 130363.8515625\nEpoch number: 2281 and the loss : 130361.09375\nEpoch number: 2291 and the loss : 129453.7890625\nEpoch number: 2301 and the loss : 128888.765625\nEpoch number: 2311 and the loss : 129747.296875\nEpoch number: 2321 and the loss : 128437.3125\nEpoch number: 2331 and the loss : 128694.9921875\nEpoch number: 2341 and the loss : 127686.5546875\nEpoch number: 2351 and the loss : 126193.5\nEpoch number: 2361 and the loss : 127119.4453125\nEpoch number: 2371 and the loss : 125700.7734375\nEpoch number: 2381 and the loss : 124824.9765625\nEpoch number: 2391 and the loss : 124761.6328125\nEpoch number: 2401 and the loss : 124421.25\nEpoch number: 2411 and the loss : 123205.4765625\nEpoch number: 2421 and the loss : 123201.140625\nEpoch number: 2431 and the loss : 122450.0546875\nEpoch number: 2441 and the loss : 122385.296875\nEpoch number: 2451 and the loss : 121547.5390625\nEpoch number: 2461 and the loss : 120443.25\nEpoch number: 2471 and the loss : 120723.7265625\nEpoch number: 2481 and the loss : 121227.4609375\nEpoch number: 2491 and the loss : 120644.890625\nEpoch number: 2501 and the loss : 120985.8984375\nEpoch number: 2511 and the loss : 
118626.0625\nEpoch number: 2521 and the loss : 117707.78125\nEpoch number: 2531 and the loss : 119102.0234375\nEpoch number: 2541 and the loss : 117476.2734375\nEpoch number: 2551 and the loss : 117166.21875\nEpoch number: 2561 and the loss : 116365.6875\nEpoch number: 2571 and the loss : 115621.234375\nEpoch number: 2581 and the loss : 116025.671875\nEpoch number: 2591 and the loss : 115601.046875\nEpoch number: 2601 and the loss : 115459.921875\nEpoch number: 2611 and the loss : 114371.171875\nEpoch number: 2621 and the loss : 114658.8125\nEpoch number: 2631 and the loss : 112764.296875\nEpoch number: 2641 and the loss : 112165.3359375\nEpoch number: 2651 and the loss : 112221.5390625\nEpoch number: 2661 and the loss : 111420.2890625\nEpoch number: 2671 and the loss : 110858.0\nEpoch number: 2681 and the loss : 110071.796875\nEpoch number: 2691 and the loss : 109593.453125\nEpoch number: 2701 and the loss : 109912.453125\nEpoch number: 2711 and the loss : 109199.765625\nEpoch number: 2721 and the loss : 109115.9140625\nEpoch number: 2731 and the loss : 108504.6875\nEpoch number: 2741 and the loss : 107224.859375\nEpoch number: 2751 and the loss : 105984.953125\nEpoch number: 2761 and the loss : 106299.8359375\nEpoch number: 2771 and the loss : 106751.71875\nEpoch number: 2781 and the loss : 105410.2109375\nEpoch number: 2791 and the loss : 104705.96875\nEpoch number: 2801 and the loss : 103795.6328125\nEpoch number: 2811 and the loss : 104117.9140625\nEpoch number: 2821 and the loss : 102839.7421875\nEpoch number: 2831 and the loss : 103165.75\nEpoch number: 2841 and the loss : 102487.3359375\nEpoch number: 2851 and the loss : 101247.03125\nEpoch number: 2861 and the loss : 101842.5546875\nEpoch number: 2871 and the loss : 100967.5390625\nEpoch number: 2881 and the loss : 101029.6875\nEpoch number: 2891 and the loss : 100507.0234375\nEpoch number: 2901 and the loss : 99015.9140625\nEpoch number: 2911 and the loss : 98136.328125\nEpoch number: 2921 and the loss : 
98212.53125\nEpoch number: 2931 and the loss : 98896.171875\nEpoch number: 2941 and the loss : 97147.25\nEpoch number: 2951 and the loss : 97494.5625\nEpoch number: 2961 and the loss : 96472.6875\nEpoch number: 2971 and the loss : 95379.125\nEpoch number: 2981 and the loss : 95148.2421875\nEpoch number: 2991 and the loss : 94356.8046875\nEpoch number: 3001 and the loss : 93826.9921875\nEpoch number: 3011 and the loss : 94425.828125\nEpoch number: 3021 and the loss : 93151.0546875\nEpoch number: 3031 and the loss : 92648.140625\nEpoch number: 3041 and the loss : 92689.53125\nEpoch number: 3051 and the loss : 91518.484375\nEpoch number: 3061 and the loss : 92024.7578125\nEpoch number: 3071 and the loss : 91083.59375\nEpoch number: 3081 and the loss : 90851.0390625\nEpoch number: 3091 and the loss : 89949.125\nEpoch number: 3101 and the loss : 89197.6796875\nEpoch number: 3111 and the loss : 89732.2421875\nEpoch number: 3121 and the loss : 92186.265625\nEpoch number: 3131 and the loss : 87971.8046875\nEpoch number: 3141 and the loss : 87727.921875\nEpoch number: 3151 and the loss : 87765.984375\nEpoch number: 3161 and the loss : 85952.8125\nEpoch number: 3171 and the loss : 87067.953125\nEpoch number: 3181 and the loss : 85912.21875\nEpoch number: 3191 and the loss : 86808.1875\nEpoch number: 3201 and the loss : 84680.015625\nEpoch number: 3211 and the loss : 84347.046875\nEpoch number: 3221 and the loss : 83062.5390625\nEpoch number: 3231 and the loss : 84105.4375\nEpoch number: 3241 and the loss : 82194.40625\nEpoch number: 3251 and the loss : 81605.9140625\nEpoch number: 3261 and the loss : 81483.9453125\nEpoch number: 3271 and the loss : 81877.9921875\nEpoch number: 3281 and the loss : 80955.0390625\nEpoch number: 3291 and the loss : 81005.2109375\nEpoch number: 3301 and the loss : 80098.671875\nEpoch number: 3311 and the loss : 79212.5703125\nEpoch number: 3321 and the loss : 81050.5390625\nEpoch number: 3331 and the loss : 78464.671875\nEpoch number: 3341 and 
the loss : 77863.9375\nEpoch number: 3351 and the loss : 77110.4609375\nEpoch number: 3361 and the loss : 76913.75\nEpoch number: 3371 and the loss : 76559.1875\nEpoch number: 3381 and the loss : 76211.1796875\nEpoch number: 3391 and the loss : 74934.3125\nEpoch number: 3401 and the loss : 74475.8203125\nEpoch number: 3411 and the loss : 74441.8046875\nEpoch number: 3421 and the loss : 75650.7265625\nEpoch number: 3431 and the loss : 73757.765625\nEpoch number: 3441 and the loss : 73054.59375\nEpoch number: 3451 and the loss : 72988.640625\nEpoch number: 3461 and the loss : 72257.4453125\nEpoch number: 3471 and the loss : 71155.390625\nEpoch number: 3481 and the loss : 71685.140625\nEpoch number: 3491 and the loss : 70960.1796875\nEpoch number: 3501 and the loss : 68513.53125\nEpoch number: 3511 and the loss : 69469.2109375\nEpoch number: 3521 and the loss : 69065.6484375\nEpoch number: 3531 and the loss : 68595.703125\nEpoch number: 3541 and the loss : 68254.40625\nEpoch number: 3551 and the loss : 68390.3984375\nEpoch number: 3561 and the loss : 66304.9140625\nEpoch number: 3571 and the loss : 66203.359375\nEpoch number: 3581 and the loss : 66169.828125\nEpoch number: 3591 and the loss : 66314.0390625\nEpoch number: 3601 and the loss : 64764.75\nEpoch number: 3611 and the loss : 65285.8828125\nEpoch number: 3621 and the loss : 65437.234375\nEpoch number: 3631 and the loss : 63149.47265625\nEpoch number: 3641 and the loss : 64540.6171875\nEpoch number: 3651 and the loss : 62474.8515625\nEpoch number: 3661 and the loss : 63414.953125\nEpoch number: 3671 and the loss : 62719.6953125\nEpoch number: 3681 and the loss : 60876.28125\nEpoch number: 3691 and the loss : 61419.63671875\nEpoch number: 3701 and the loss : 61455.25\nEpoch number: 3711 and the loss : 61459.8828125\nEpoch number: 3721 and the loss : 59544.375\nEpoch number: 3731 and the loss : 57562.61328125\nEpoch number: 3741 and the loss : 59007.7734375\nEpoch number: 3751 and the loss : 59072.84765625\nEpoch 
number: 3761 and the loss : 58844.49609375\nEpoch number: 3771 and the loss : 56971.41796875\nEpoch number: 3781 and the loss : 56132.5703125\nEpoch number: 3791 and the loss : 58002.2890625\nEpoch number: 3801 and the loss : 56292.3046875\nEpoch number: 3811 and the loss : 55786.0859375\nEpoch number: 3821 and the loss : 56860.53125\nEpoch number: 3831 and the loss : 54475.88671875\nEpoch number: 3841 and the loss : 53451.21875\nEpoch number: 3851 and the loss : 55285.80078125\nEpoch number: 3861 and the loss : 54659.76171875\nEpoch number: 3871 and the loss : 52517.7734375\nEpoch number: 3881 and the loss : 51582.9296875\nEpoch number: 3891 and the loss : 50352.66796875\nEpoch number: 3901 and the loss : 56014.62890625\nEpoch number: 3911 and the loss : 52436.9296875\nEpoch number: 3921 and the loss : 51133.046875\nEpoch number: 3931 and the loss : 50135.44921875\nEpoch number: 3941 and the loss : 49889.265625\nEpoch number: 3951 and the loss : 49500.12890625\nEpoch number: 3961 and the loss : 50650.328125\nEpoch number: 3971 and the loss : 49051.6640625\nEpoch number: 3981 and the loss : 52412.765625\nEpoch number: 3991 and the loss : 49647.2734375\nEpoch number: 4001 and the loss : 49774.72265625\nEpoch number: 4011 and the loss : 46375.9921875\nEpoch number: 4021 and the loss : 48124.73046875\nEpoch number: 4031 and the loss : 45388.02734375\nEpoch number: 4041 and the loss : 47241.85546875\nEpoch number: 4051 and the loss : 47614.4375\nEpoch number: 4061 and the loss : 44883.97265625\nEpoch number: 4071 and the loss : 46024.9921875\nEpoch number: 4081 and the loss : 44898.46484375\nEpoch number: 4091 and the loss : 44908.6953125\nEpoch number: 4101 and the loss : 44221.41796875\nEpoch number: 4111 and the loss : 43736.64453125\nEpoch number: 4121 and the loss : 43915.40625\nEpoch number: 4131 and the loss : 43497.2734375\nEpoch number: 4141 and the loss : 45308.48046875\nEpoch number: 4151 and the loss : 43324.375\nEpoch number: 4161 and the loss : 
42992.95703125\nEpoch number: 4171 and the loss : 41931.42578125\nEpoch number: 4181 and the loss : 42532.87109375\nEpoch number: 4191 and the loss : 42519.1796875\nEpoch number: 4201 and the loss : 41382.046875\nEpoch number: 4211 and the loss : 40172.4375\nEpoch number: 4221 and the loss : 40009.13671875\nEpoch number: 4231 and the loss : 41727.59375\nEpoch number: 4241 and the loss : 40630.00390625\nEpoch number: 4251 and the loss : 40144.41015625\nEpoch number: 4261 and the loss : 40974.46484375\nEpoch number: 4271 and the loss : 40714.97265625\nEpoch number: 4281 and the loss : 42204.8671875\nEpoch number: 4291 and the loss : 39406.7890625\nEpoch number: 4301 and the loss : 39050.87109375\nEpoch number: 4311 and the loss : 39831.6328125\nEpoch number: 4321 and the loss : 40008.68359375\nEpoch number: 4331 and the loss : 38729.32421875\nEpoch number: 4341 and the loss : 38553.38671875\nEpoch number: 4351 and the loss : 41735.484375\nEpoch number: 4361 and the loss : 40164.03125\nEpoch number: 4371 and the loss : 37602.3984375\nEpoch number: 4381 and the loss : 38815.9765625\nEpoch number: 4391 and the loss : 36606.5078125\nEpoch number: 4401 and the loss : 37687.90625\nEpoch number: 4411 and the loss : 38632.71484375\nEpoch number: 4421 and the loss : 39022.90625\nEpoch number: 4431 and the loss : 38177.91015625\nEpoch number: 4441 and the loss : 38113.75\nEpoch number: 4451 and the loss : 38052.16796875\nEpoch number: 4461 and the loss : 38121.625\nEpoch number: 4471 and the loss : 37924.62109375\nEpoch number: 4481 and the loss : 37076.8828125\nEpoch number: 4491 and the loss : 40067.48046875\nEpoch number: 4501 and the loss : 37945.56640625\nEpoch number: 4511 and the loss : 37815.00390625\nEpoch number: 4521 and the loss : 36596.2109375\nEpoch number: 4531 and the loss : 37756.984375\nEpoch number: 4541 and the loss : 35360.93359375\nEpoch number: 4551 and the loss : 35767.78515625\nEpoch number: 4561 and the loss : 36125.71484375\nEpoch number: 4571 and 
the loss : 39856.75390625\nEpoch number: 4581 and the loss : 35291.75390625\nEpoch number: 4591 and the loss : 34204.62890625\nEpoch number: 4601 and the loss : 37450.2109375\nEpoch number: 4611 and the loss : 35435.8125\nEpoch number: 4621 and the loss : 34542.1328125\nEpoch number: 4631 and the loss : 35792.6953125\nEpoch number: 4641 and the loss : 35763.5546875\nEpoch number: 4651 and the loss : 35811.00390625\nEpoch number: 4661 and the loss : 34760.703125\nEpoch number: 4671 and the loss : 37238.1015625\nEpoch number: 4681 and the loss : 36655.11328125\nEpoch number: 4691 and the loss : 38700.04296875\nEpoch number: 4701 and the loss : 34532.3671875\nEpoch number: 4711 and the loss : 33980.35546875\nEpoch number: 4721 and the loss : 40729.3125\nEpoch number: 4731 and the loss : 35026.5234375\nEpoch number: 4741 and the loss : 35284.7265625\nEpoch number: 4751 and the loss : 36004.40234375\nEpoch number: 4761 and the loss : 35814.234375\nEpoch number: 4771 and the loss : 35639.62890625\nEpoch number: 4781 and the loss : 34802.21875\nEpoch number: 4791 and the loss : 36565.3984375\nEpoch number: 4801 and the loss : 34606.37109375\nEpoch number: 4811 and the loss : 35034.109375\nEpoch number: 4821 and the loss : 34988.4453125\nEpoch number: 4831 and the loss : 35977.9765625\nEpoch number: 4841 and the loss : 35544.25\nEpoch number: 4851 and the loss : 39623.90234375\nEpoch number: 4861 and the loss : 36200.89453125\nEpoch number: 4871 and the loss : 35670.47265625\nEpoch number: 4881 and the loss : 41166.66796875\nEpoch number: 4891 and the loss : 36249.1796875\nEpoch number: 4901 and the loss : 36768.77734375\nEpoch number: 4911 and the loss : 34041.66015625\nEpoch number: 4921 and the loss : 34517.1171875\nEpoch number: 4931 and the loss : 36179.01171875\nEpoch number: 4941 and the loss : 37503.05078125\nEpoch number: 4951 and the loss : 34548.72265625\nEpoch number: 4961 and the loss : 34638.06640625\nEpoch number: 4971 and the loss : 36708.17578125\nEpoch 
number: 4981 and the loss : 34645.5078125\nEpoch number: 4991 and the loss : 33835.1796875\n" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(range(epochs), final_losses)\nplt.ylabel('RMSE Loss')\nplt.xlabel('Epoch')", "_____no_output_____" ] ], [ [ "### Validate the Test Data", "_____no_output_____" ] ], [ [ "y_pred=\"\"\nwith torch.no_grad():\n y_pred=model(test_categorical,test_cont)\n loss=torch.sqrt(loss_function(y_pred,y_test))\nprint('RMSE: {}'.format(loss))", "RMSE: 42693.4296875\n" ], [ "data_verify=pd.DataFrame(y_test.tolist(),columns=[\"Test\"])", "_____no_output_____" ], [ "data_verify", "_____no_output_____" ], [ "data_predicted=pd.DataFrame(y_pred.tolist(),columns=[\"Prediction\"])", "_____no_output_____" ], [ "data_predicted", "_____no_output_____" ], [ "final_output=pd.concat([data_verify,data_predicted],axis=1)\nfinal_output['Difference']=final_output['Test']-final_output['Prediction']\nfinal_output.head()", "_____no_output_____" ] ], [ [ "## Save the model", "_____no_output_____" ] ], [ [ "torch.save(model,'HousePrice.pt')", "/usr/local/lib/python3.6/dist-packages/torch/serialization.py:402: UserWarning: Couldn't retrieve source code for container of type FeedForwardNN. It won't be checked for correctness upon loading.\n \"type \" + obj.__name__ + \". It won't be checked \"\n" ], [ "torch.save(model.state_dict(),'HouseWeights.pt')", "_____no_output_____" ] ], [ [ "### Loading the saved Model", "_____no_output_____" ] ], [ [ "embs_size=[(15, 8), (5, 3), (2, 1), (4, 2)]\nmodel1=FeedForwardNN(embs_size,5,1,[100,50],p=0.4)", "_____no_output_____" ], [ "model1.load_state_dict(torch.load('HouseWeights.pt'))", "_____no_output_____" ], [ "model1.eval()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbb72b939f2ce3eadccc3b7f207510677f720766
50,304
ipynb
Jupyter Notebook
nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
bmmlab/nbgrader
e7fbb9dcbfb5939142a9bb2b2fb7c8d6ed758138
[ "BSD-3-Clause" ]
1,116
2015-01-20T19:22:24.000Z
2022-03-31T22:05:10.000Z
nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
bmmlab/nbgrader
e7fbb9dcbfb5939142a9bb2b2fb7c8d6ed758138
[ "BSD-3-Clause" ]
1,166
2015-01-08T21:50:31.000Z
2022-03-31T05:15:01.000Z
nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
AnotherCodeArtist/nbgrader
24b5683bb34d55f2f6dc8c640697ddc4557f3394
[ "BSD-3-Clause" ]
337
2015-02-06T01:28:00.000Z
2022-03-29T06:52:38.000Z
38.814815
612
0.649451
[ [ [ "# Creating and grading assignments", "_____no_output_____" ], [ "This guide walks an instructor through the workflow for generating an assignment and preparing it for release to students.", "_____no_output_____" ] ], [ [ ".. contents:: Table of Contents\n :depth: 2", "_____no_output_____" ], [ ".. versionadded:: 0.5.0\n\n Much of the core functionality of nbgrader can now be accessed through the \"formgrader\" extension.", "_____no_output_____" ] ], [ [ "## Accessing the formgrader extension", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`installation`\n Instructions on how to install the formgrader extension.", "_____no_output_____" ] ], [ [ "The formgrader extension provides the core access to nbgrader's instructor tools. After the extension has been installed, you can access it through the tab in the notebook list:\n\n![](images/formgrader_tab.png)", "_____no_output_____" ], [ "## Creating a new assignment", "_____no_output_____" ] ], [ [ ".. seealso::\n\n :doc:`managing_the_database`\n Instructions on how to manage assignments in the database from the command line\n\n :doc:`/command_line_tools/nbgrader-db-assignment-add`\n Command line options for ``nbgrader db assignment add``", "_____no_output_____" ] ], [ [ "### From the formgrader", "_____no_output_____" ], [ "To create a new assignment, open the formgrader extension and click the \"Add new assignment...\" button at the bottom of the page. This will ask you to provide some information such as the name of the assignment and its due date. Then, you can add files to the assignment and edit them by clicking the name of the assignment:\n\n![](images/manage_assignments1.png)", "_____no_output_____" ], [ "### From the command line", "_____no_output_____" ] ], [ [ "If you are not using the formgrader extension, you can add a new assignment simply by creating a folder in your course directory with the name of the assignment. 
You can specify the assignment metadata (such as the duedate) using the `nbgrader db assignment` command (see :doc:`managing_the_database`).

To simplify this example, two notebooks of the assignment have already been stored in the `source/ps1` folder:

* [source/ps1/problem1.ipynb](source/ps1/problem1.ipynb)
* [source/ps1/problem2.ipynb](source/ps1/problem2.ipynb)

## Developing assignments with the assignment toolbar

**Note**: As you are developing your assignments, you should save them into the `source/{assignment_id}/` folder of the nbgrader hierarchy, where `assignment_id` is the name of the assignment you are creating (e.g. "ps1").

.. seealso::

    :doc:`philosophy`
        More details on how the nbgrader hierarchy is structured.

    :doc:`/configuration/student_version`
        Instructions for customizing how the student version of the assignment looks.

Before you can begin developing assignments, you will need to actually install the nbgrader toolbar. If you do not have it installed, please first follow the :doc:`installation instructions <installation>`.

Once the toolbar has been installed, you should see it in the drop down "View -> Cell Toolbar" menu:

![](images/assignment_toolbar.png)

Selecting the "Create Assignment" toolbar will create a separate toolbar for each cell, which by default will be a dropdown menu with the "-" item selected.
For markdown cells, there are two additional options to choose from, either "Manually graded answer" or "Read-only":

![](images/markdown_cell.png)

For code cells, there are four options to choose from, including "Manually graded answer", "Autograded answer", "Autograder tests", and "Read-only":

![](images/code_cell.png)

The following sections go into detail about the different cell types, and show cells that are taken from a complete example of an assignment generated with the nbgrader toolbar extension:

- [source/ps1/problem1.ipynb](source/ps1/problem1.html)
- [source/ps1/problem2.ipynb](source/ps1/problem2.html)

.. _manually-graded-cells:

### "Manually graded answer" cells

If you select the "Manually graded answer" option (available for both markdown and code cells), the nbgrader extension will mark that cell as a cell that contains an answer that must be manually graded by a human grader. Here is an example of a manually graded answer cell:

![](images/manually_graded_answer.png)

The most common use case for this type of cell is for written free-response answers (for example, which interpret the results of code that may have been written and/or executed above).

When you specify a manually graded answer, you must additionally tell nbgrader how many points the answer is worth, and an id for the cell. Additionally, when creating the release version of the assignment (see :ref:`assign-and-release-an-assignment`), the bodies of answer cells will be replaced with a code or text stub indicating to the students that they should put their answer or solution there.
Please see :doc:`/configuration/student_version` for details on how to customize this behavior.

*Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.*

.. _manually-graded-task-cells:

### "Manually graded task" cells

.. versionadded:: 0.6.0

If you select the "Manually graded task" option (available for markdown cells), the nbgrader extension will mark that cell as a cell that contains the description of a task that students have to perform. These tasks must be manually graded by a human grader. Here is an example of a manually graded task cell:

![](images/task-cell-source.png)

The difference with a manually graded answer is that manually graded task cells are not edited by the student. A manually or automatically graded cell asks students to perform a task *in* one cell; a manually graded task asks students to perform a task *with* cells.

The common use case for this type of cell is for tasks that require the student to create several cells, such as "Process the data and create a plot to illustrate your results.", or to contain notebook-wide tasks such as "adhere to the PEP8 style convention."

*Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.*

.. _manually-graded-task-cell-mark-scheme:

### "Manually graded task" cells with mark scheme

.. versionadded:: 0.6.0

A mark scheme can be created through the use of a special syntax such as ``=== BEGIN MARK SCHEME ===`` and ``=== END MARK SCHEME ===``. The section of text between the two markers will be removed from the student version, but will be visible at the grading stage and in the feedback.
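For instance, a task cell with a hidden mark scheme might look like this (the task wording and point breakdown are illustrative, not taken from the shipped ps1 example):

```markdown
Process the data and create a plot to illustrate your results.

=== BEGIN MARK SCHEME ===
- 1 point: the data is processed correctly
- 1 point: the plot has labeled axes and a title
- 1 point: the plot supports the stated conclusion
=== END MARK SCHEME ===
```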
.. _autograded-answer-cells:

### "Autograded answer" cells

If you select the "Autograded answer" option (available only for code cells), the nbgrader extension will mark that cell as a cell that contains an answer which will be autograded. Here is an example of an autograded answer cell:

![](images/autograded_answer.png)

As shown in the image above, solutions can be specified inline, through the use of a special syntax such as ``### BEGIN SOLUTION`` and ``### END SOLUTION``. When creating the release version (see :ref:`assign-and-release-an-assignment`), the region between the special syntax lines will be replaced with a code stub. If this special syntax is not used, then the entire contents of the cell will be replaced with the code stub. Please see :doc:`/configuration/student_version` for details on how to customize this behavior.

Unlike manually graded answers, autograded answers aren't worth any points: instead, the points for autograded answers are specified for the particular tests that grade those answers. See the next section for further details.

*Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.*

### "Autograder tests" cells

If you select the "Autograder tests" option (available only for code cells), the nbgrader extension will mark that cell as a cell that contains tests to be run during autograding. Here is an example of two test cells:

![](images/autograder_tests.png)

Test cells should contain ``assert`` statements (or similar).
When run through ``nbgrader autograde`` (see :ref:`autograde-assignments`), the cell will pass if no errors are raised, and fail otherwise. You must specify the number of points that each test cell is worth; then, if the tests pass during autograding, students will receive the specified number of points, and otherwise will receive zero points.

The lock icon on the left side of the cell toolbar indicates that the tests are "read-only". See the "Read-only" cells section below for further details on what this means.

For tips on writing autograder tests, see :ref:`autograding-resources`.

*Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.*

.. _autograder-tests-cell-hidden-tests:

### "Autograder tests" cells with hidden tests

.. versionadded:: 0.5.0

Tests in "Autograder tests" cells can be hidden through the use of a special syntax such as ``### BEGIN HIDDEN TESTS`` and ``### END HIDDEN TESTS``, for example:

![](images/autograder_tests_hidden_tests.png)

When creating the release version (see :ref:`assign-and-release-an-assignment`), the region between the special syntax lines will be removed. If this special syntax is not used, then the contents of the cell will remain as is. Please see :doc:`/configuration/student_version` for details on how to customize this behavior.

.. note::

    Keep in mind that wrapping all tests (for an "Autograder tests" cell) in this special syntax will remove all these tests in the release version and the students will only see a blank cell. It is recommended to have at least one or more visible tests, or a comment in the cell for the students to see.

    These hidden tests are placed back into the "Autograder tests" cells when running ``nbgrader autograde`` (see :ref:`autograde-assignments`).
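To make the two cell types concrete, here is a sketch of an autograded answer cell and a matching test cell, loosely modeled on a hypothetical `squares` exercise (the function name, points, and tests are illustrative, not the shipped ps1 content):

```python
# Sketch of an "Autograded answer" cell. In the release version, the region
# between the markers is replaced with a code stub for the student to fill in.
def squares(n):
    """Return a list of the squares of the integers 1 through n."""
    ### BEGIN SOLUTION
    return [i ** 2 for i in range(1, n + 1)]
    ### END SOLUTION

# Sketch of a corresponding "Autograder tests" cell, worth some number of
# points. The hidden block is stripped from the release version and restored
# during autograding.
assert squares(1) == [1]
assert squares(3) == [1, 4, 9]
### BEGIN HIDDEN TESTS
assert squares(10)[-1] == 100
### END HIDDEN TESTS
```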
.. _read-only-cells:

### "Read-only" cells

If you select the "Read-only" option (available for both code and markdown cells), the nbgrader extension will mark that cell as one that cannot be modified. This is indicated by a lock icon on the left side of the cell toolbar:

![](images/read_only.png)

However, this doesn't actually mean that it is truly read-only when opened in the notebook. Instead, what it means is that during the ``nbgrader generate_assignment`` step (see :ref:`assign-and-release-an-assignment`), the source of these cells will be recorded into the database. Then, during the ``nbgrader autograde`` step (see :ref:`autograde-assignments`), nbgrader will check whether the source of the student's version of the cell has changed. If it has, it will replace the cell's source with the version in the database, thus effectively overwriting any changes the student made.

.. versionadded:: 0.4.0
    Read-only cells (and test cells) are now truly read-only! However, at the moment this functionality will only work on the master version of the notebook (5.0.0.dev).

This functionality is particularly important for test cells, which are always marked as read-only. Because the mechanism for autograding is that students receive full credit if the tests pass, an easy way to get around this would be to simply delete or comment out the tests. This read-only functionality will reverse any such changes made by the student.

## Validating the instructor version

.. seealso::

    :doc:`/command_line_tools/nbgrader-validate`
        Command line options for ``nbgrader validate``

### From the validate extension

Ideally, the solutions in the instructor version should be correct and pass all the test cases to ensure that you are giving your students tests that they can actually pass. To verify this is the case, you can use the validate extension:

![](images/validate_extension.png)

If your assignment passes all the tests, you'll get a success pop-up:

![](images/validate_success.png)

If it doesn't pass all the tests, you'll get a message telling you which cells failed:

![](images/validate_failed.png)

### From the command line

You can also validate assignments on the command line using the `nbgrader validate` command:

```bash
%%bash

nbgrader validate source/ps1/*.ipynb
```

Output:

```
Success! Your notebook passes all the tests.
Success! Your notebook passes all the tests.
```

.. _assign-and-release-an-assignment:

## Generate and release an assignment

.. seealso::

    :doc:`/command_line_tools/nbgrader-generate-assignment`
        Command line options for ``nbgrader generate_assignment``

    :doc:`philosophy`
        Details about how the directory hierarchy is structured

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``

### From the formgrader

After an assignment has been created with the assignment toolbar, you will want to generate the version that students will receive.
You can do this from the formgrader by clicking the "generate" button:

![](images/manage_assignments2.png)

This should succeed with a pop-up window containing log output:

![](images/generate_assignment.png)

### From the command line

As described in :doc:`philosophy`, you need to organize your files in a particular way. For releasing assignments, you should have the master copy of your files saved (by default) in the following source directory structure:

```
{course_directory}/source/{assignment_id}/{notebook_id}.ipynb
```

Note: The `student_id` is not included here because the source and release versions of the assignment are the same for all students.

After running `nbgrader generate_assignment`, the release version of the notebooks will be:

```
{course_directory}/release/{assignment_id}/{notebook_id}.ipynb
```

As a reminder, the instructor is responsible for distributing this release version to their students using their institution's existing student communication and document distribution infrastructure.

When running `nbgrader generate_assignment`, the assignment name (which is "ps1") is passed. We also specify a *header* notebook (`source/header.ipynb`) to prepend at the beginning of each notebook in the assignment.
By default, this command should be run from the root of the course directory:

```bash
%%bash

nbgrader generate_assignment "ps1" --IncludeHeaderFooter.header=source/header.ipynb --force
```

Output:

```
[GenerateAssignmentApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)
[GenerateAssignmentApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/source/./ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/release/./ps1/jupyter.png
[GenerateAssignmentApp | INFO] Updating/creating assignment 'ps1': {}
[GenerateAssignmentApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/source/./ps1/problem1.ipynb
[GenerateAssignmentApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/release/./ps1/problem1.ipynb
[GenerateAssignmentApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/source/./ps1/problem2.ipynb
[GenerateAssignmentApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/release/./ps1/problem2.ipynb
[GenerateAssignmentApp | INFO] Setting destination file permissions to 644
```

## Preview the student version

After generating the student version of the assignment, you should preview it to make sure that it looks correct. You can do this from the formgrader extension by clicking the "preview" button:

![](images/manage_assignments3.png)

Under the hood, there will be a new folder called `release` with the same structure as `source`. The `release` folder contains the actual release version of the assignment files:

* [release/ps1/problem1.ipynb](release/ps1/problem1.ipynb)
* [release/ps1/problem2.ipynb](release/ps1/problem2.ipynb)

If you are working on the command line, you may want to formally verify the student version as well.
Ideally, all the tests should fail in the student version if the student hasn't implemented anything. To verify that this is in fact the case, we can use the `nbgrader validate --invert` command:

```bash
%%bash

nbgrader validate --invert release/ps1/*.ipynb
```

Output:

```
Success! The notebook does not pass any tests.
Success! The notebook does not pass any tests.
```

If the notebook fails all the test cases, you should see the message "Success! The notebook does not pass any tests."

## Releasing files to students and collecting submissions

.. seealso::

    :doc:`managing_assignment_files`
        Guide to releasing and collecting submissions.

    :doc:`/command_line_tools/nbgrader-release-assignment`
        Command line options for ``nbgrader release_assignment``

    :doc:`/command_line_tools/nbgrader-collect`
        Command line options for ``nbgrader collect``

    :doc:`philosophy`
        Details about how the directory hierarchy is structured

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``

Note: the :doc:`Managing Assignment Files Guide <managing_assignment_files>` goes into greater depth on how to release and collect assignments, and the various options that are available for doing so.

At this point you will be able to take the files in the ``release`` folder and distribute them to students. If you are using nbgrader with JupyterHub, you can do this either with the formgrader extension or with the ``nbgrader release_assignment`` command (see :doc:`managing_assignment_files`). Otherwise, you will need to do this manually.

Similarly, you can collect submissions either with the formgrader extension or with the ``nbgrader collect`` command. Otherwise, you will need to manually place submitted files into the ``submitted`` directory.
As described in :doc:`philosophy`, you need to organize your files in a particular way. For submitted assignments, you should have the submitted versions of students' assignments organized as follows:

```
submitted/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

**Please note**: Students must use version 3 or greater of the IPython/Jupyter notebook for nbgrader to work properly. If they are not using version 3 or greater, it is possible for them to delete cells that contain important metadata for nbgrader. With version 3 or greater, there is a feature in the notebook that prevents cells from being deleted. See [this issue](https://github.com/jupyter/nbgrader/issues/424) for more details.

To ensure that students have a recent enough version of the notebook, you can include a cell such as the following in each notebook of the assignment:

```python
import IPython
assert IPython.version_info[0] >= 3, "Your version of IPython is too old, please update it."
```

.. _autograde-assignments:

## Autograde assignments

.. seealso::

    :doc:`/command_line_tools/nbgrader-autograde`
        Command line options for ``nbgrader autograde``

    :doc:`philosophy`
        Details about how the directory hierarchy is structured

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``

In the following example, we have an assignment with two notebooks.
There are two submissions of the assignment:

Submission 1:

* [submitted/bitdiddle/ps1/problem1.ipynb](submitted/bitdiddle/ps1/problem1.ipynb)
* [submitted/bitdiddle/ps1/problem2.ipynb](submitted/bitdiddle/ps1/problem2.ipynb)

Submission 2:

* [submitted/hacker/ps1/problem1.ipynb](submitted/hacker/ps1/problem1.ipynb)
* [submitted/hacker/ps1/problem2.ipynb](submitted/hacker/ps1/problem2.ipynb)

### From the formgrader

You can autograde individual submissions from the formgrader directly. To do so, click on the number of submissions in the "Manage Assignments" view:

![](images/manage_assignments4.png)

This will take you to a new page where you can see all the submissions. For a particular submission, click the "autograde" button to autograde it:

![](images/manage_submissions1.png)

After autograding completes, you will see a pop-up window with log output:

![](images/autograde_assignment.png)

And back on the submissions screen, you will see that the status of the submission has changed to "needs manual grading" and there is now a reported score as well:

![](images/manage_submissions2.png)

### From the command line

We can run the autograder for all students at once from the command line:

```bash
%%bash

nbgrader autograde "ps1" --force
```

Output:

```
[AutogradeApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/timestamp.txt -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/timestamp.txt
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png
[AutogradeApp | INFO] Creating/updating student with ID 'bitdiddle': {}
[AutogradeApp | INFO] SubmittedAssignment<ps1 for bitdiddle> submitted at [timestamp]
[AutogradeApp | INFO] Overwriting files with master versions from the source directory
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/source/./ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png
[AutogradeApp | INFO] Sanitizing [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | WARNING] Attribute 'checksum' for cell correct_squares has changed! (should be: 8f41dd0f9c8fd2da8e8708d73e506b3a, got: 845d4666cabb30b6c75fc534f7375bf5)
[AutogradeApp | WARNING] Attribute 'checksum' for cell squares_invalid_input has changed! (should be: 23c2b667d3b60eff3be46eb3290a6b4a, got: 123394e73f33a622ec057e2eae51a54a)
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | INFO] Autograding [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: python
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
[AutogradeApp | INFO] Sanitizing [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Autograding [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: python
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb
[AutogradeApp | INFO] Setting destination file permissions to 444
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/timestamp.txt -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/timestamp.txt
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png
[AutogradeApp | INFO] Creating/updating student with ID 'hacker': {}
[AutogradeApp | INFO] SubmittedAssignment<ps1 for hacker> submitted at [timestamp]
[AutogradeApp | INFO] Overwriting files with master versions from the source directory
[AutogradeApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/source/./ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png
[AutogradeApp | INFO] Sanitizing [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Autograding [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: python
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb
[AutogradeApp | INFO] Sanitizing [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/submitted/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Autograding [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: python
[AutogradeApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb
[AutogradeApp | INFO] Setting destination file permissions to 444
```

When grading the submission for `Bitdiddle`, you'll see some warnings that look like "Checksum for grade cell correct_squares has changed!". What's happening here is that nbgrader has recorded the *original* contents of the grade cell `correct_squares` (when `nbgrader generate_assignment` was run), and is checking the submitted version against this original version. It has found that the submitted version changed (perhaps this student tried to cheat by commenting out the failing tests), and has therefore overwritten the submitted version of the tests with the original version of the tests.

You may also notice that there is a note saying "ps1 for Bitdiddle is 21503.948203 seconds late".
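The arithmetic behind that lateness note is straightforward. As an illustrative sketch (not nbgrader's actual implementation, and with made-up datetimes), lateness is just the difference between the submitted timestamp and the assignment's due date:

```python
from datetime import datetime, timezone

# Hypothetical due date and a hypothetical timestamp read from timestamp.txt
duedate = datetime(2015, 2, 1, 17, 0, 0, tzinfo=timezone.utc)
timestamp = datetime(2015, 2, 1, 22, 58, 23, tzinfo=timezone.utc)

# A positive value means the submission is late by that many seconds
late_by = (timestamp - duedate).total_seconds()
print(late_by)  # 21503.0
```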
What is happening here is that nbgrader is detecting a file in Bitdiddle's submission called `timestamp.txt`, reading in that timestamp, and saving it into the database. From there, it can compare the timestamp to the duedate of the problem set, and compute whether the submission is at all late.

Once the autograding is complete, there will be new directories for the autograded versions of the submissions:

```
autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

Autograded submission 1:

* [autograded/bitdiddle/ps1/problem1.ipynb](autograded/bitdiddle/ps1/problem1.ipynb)
* [autograded/bitdiddle/ps1/problem2.ipynb](autograded/bitdiddle/ps1/problem2.ipynb)

Autograded submission 2:

* [autograded/hacker/ps1/problem1.ipynb](autograded/hacker/ps1/problem1.ipynb)
* [autograded/hacker/ps1/problem2.ipynb](autograded/hacker/ps1/problem2.ipynb)

## Manual grading

.. seealso::

    :doc:`philosophy`
        More details on how the nbgrader hierarchy is structured.

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``

After assignments have been autograded, they will be saved into an ``autograded`` directory (see :doc:`philosophy` for details). After running `nbgrader autograde`, the autograded version of the notebooks will be:

```
autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

We can manually grade assignments through the formgrader as well, by clicking on the "Manual Grading" navigation button. This will provide you with an interface for hand grading assignments that it finds in the directory listed above. Note that this applies to *all* assignments as well -- as long as the autograder has been run on the assignment, it will be available for manual grading via the formgrader.

## Generate feedback on assignments

.. seealso::

    :doc:`/command_line_tools/nbgrader-generate-feedback`
        Command line options for ``nbgrader generate_feedback``

    :doc:`/command_line_tools/nbgrader-release-feedback`
        Command line options for ``nbgrader release_feedback``

    :doc:`philosophy`
        Details about how the directory hierarchy is structured

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``

As mentioned before, after assignments have been autograded and/or manually graded, they will be located in the `autograded` directory (see :doc:`philosophy` for details):

```
autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

Creating feedback for students is divided into two parts:

* generate feedback
* release feedback

Generating feedback will create HTML files in the local instructor directory. Releasing feedback will copy those HTML files to the nbgrader exchange.

We can generate feedback based on the graded notebooks by running the `nbgrader generate_feedback` command, which will produce HTML versions of these notebooks at the following location:

```
feedback/{student_id}/{assignment_id}/{notebook_id}.html
```

The `nbgrader generate_feedback` command is available by clicking the Generate Feedback button on either the Manage Assignments view (to generate feedback for all graded submissions), or on the individual student's Manage Submission page (to generate feedback for that specific individual).

We can release the generated feedback by running the `nbgrader release_feedback` command, which will send the generated HTML files to the nbgrader exchange.

The `nbgrader release_feedback` command is available by clicking the Release Feedback button on either the Manage Assignments view (to release feedback for all generated feedback), or on the individual student's Manage Submission page (to release feedback for that specific individual).
], [ "### Workflow example: Instructor returning feedback to students", "_____no_output_____" ], [ "In some scenarios, you may not want to (or be able to) use the exchange to deliver student feedback. This sections describes a workflow for manually returning generated feedback.\n\nIn the following example, we have an assignment with two notebooks. There are two submissions of the assignment that have been graded:\n\nAutograded submission 1:\n\n* [autograded/bitdiddle/ps1/problem1.ipynb](autograded/bitdiddle/ps1/problem1.ipynb)\n* [autograded/bitdiddle/ps1/problem2.ipynb](autograded/bitdiddle/ps1/problem2.ipynb)\n\nAutograded submission 2:\n\n* [autograded/hacker/ps1/problem1.ipynb](autograded/hacker/ps1/problem1.ipynb)\n* [autograded/hacker/ps1/problem2.ipynb](autograded/hacker/ps1/problem2.ipynb)", "_____no_output_____" ], [ "Generating feedback is fairly straightforward (and as with the other nbgrader commands for instructors, this must be run from the root of the course directory):", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader generate_feedback \"ps1\" ", "[GenerateFeedbackApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)\n[GenerateFeedbackApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/timestamp.txt -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/timestamp.txt\n[GenerateFeedbackApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/jupyter.png\n[GenerateFeedbackApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb\n[GenerateFeedbackApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html\n[GenerateFeedbackApp | INFO] Converting notebook 
[NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb\n[GenerateFeedbackApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem2.html\n[GenerateFeedbackApp | INFO] Setting destination file permissions to 644\n[GenerateFeedbackApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/timestamp.txt -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/hacker/ps1/timestamp.txt\n[GenerateFeedbackApp | INFO] Copying [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png -> [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/hacker/ps1/jupyter.png\n[GenerateFeedbackApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb\n[GenerateFeedbackApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/hacker/ps1/problem1.html\n[GenerateFeedbackApp | INFO] Converting notebook [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb\n[GenerateFeedbackApp | INFO] Writing [size] bytes to [NB_GRADER_ROOT]/nbgrader/docs/source/user_guide/feedback/hacker/ps1/problem2.html\n[GenerateFeedbackApp | INFO] Setting destination file permissions to 644\n" ] ], [ [ "Once the feedback has been generated, there will be new directories and HTML files corresponding to each notebook in each submission:\n\nFeedback for submission 1:\n\n* [feedback/bitdiddle/ps1/problem1.html](feedback/bitdiddle/ps1/problem1.html)\n* [feedback/bitdiddle/ps1/problem2.html](feedback/bitdiddle/ps1/problem2.html)\n\nFeedback for submission 2:\n\n* [feedback/hacker/ps1/problem1.html](feedback/hacker/ps1/problem1.html)\n* [feedback/hacker/ps1/problem2.html](feedback/hacker/ps1/problem2.html)\n\nIf the exchange is available, one would of course use `nbgrader release_feedback`. 
However if not available, you can now deliver these generated HTML feedback files via whatever mechanism you wish.", "_____no_output_____" ], [ "## Getting grades from the database", "_____no_output_____" ] ], [ [ ".. versionadded:: 0.4.0\n\n.. seealso::\n\n :doc:`/command_line_tools/nbgrader-export`\n Command line options for ``nbgrader export``\n \n :doc:`/plugins/export-plugin`\n Details on how to write your own custom exporter.", "_____no_output_____" ] ], [ [ "In addition to creating feedback for the students, you may need to upload grades to whatever learning management system your school uses (e.g. Canvas, Blackboard, etc.). nbgrader provides a way to export grades to CSV out of the box, with the `nbgrader export` command:", "_____no_output_____" ] ], [ [ "%%bash\n\nnbgrader export", "[ExportApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)\n[ExportApp | INFO] Using exporter: CsvExportPlugin\n[ExportApp | INFO] Exporting grades to grades.csv\n" ] ], [ [ "After running `nbgrader export`, you will see the grades in a CSV file called `grades.csv`:", "_____no_output_____" ] ], [ [ "%%bash\n\ncat grades.csv", "assignment,duedate,timestamp,student_id,last_name,first_name,email,raw_score,late_submission_penalty,score,max_score\nps1,,[timestamp],bitdiddle,,,,1.5,0.0,1.5,13.0\nps1,,[timestamp],hacker,,,,3.0,0.0,3.0,13.0\n" ] ], [ [ "If you need to customize how the grades are exported, you can :doc:`write your own exporter </plugins/export-plugin>`.", "_____no_output_____" ] ] ]
[ "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "code", "raw", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "raw" ]
[ [ "markdown", "markdown" ], [ "raw", "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown" ], [ "raw", "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw", "raw", "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw", "raw" ], [ "markdown", "markdown" ], [ "raw", "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "raw" ] ]
cbb72d6f2d78538ccf7a2a53c77f426d2af15acf
3,657
ipynb
Jupyter Notebook
Notebooks/Scala/09 Creating a managed Spark Table.ipynb
revinjchalil/Synapse
d0fc2413217b7c3471bb93db0f348d933f4524ac
[ "MIT" ]
1
2020-08-27T15:05:35.000Z
2020-08-27T15:05:35.000Z
Notebooks/Scala/09 Creating a managed Spark Table.ipynb
AzureMentor/Synapse
9e4f66630013a3b7bf9bf194b05d6e104d463140
[ "MIT" ]
null
null
null
Notebooks/Scala/09 Creating a managed Spark Table.ipynb
AzureMentor/Synapse
9e4f66630013a3b7bf9bf194b05d6e104d463140
[ "MIT" ]
1
2021-09-07T07:49:33.000Z
2021-09-07T07:49:33.000Z
24.543624
166
0.423571
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbb758eba08f11611e326209bfa2f6293b393857
222,143
ipynb
Jupyter Notebook
CTG_RP_Train_Model.ipynb
williamsdoug/CTG_RP
e8fd03f04f0dd63dd31547538bbb2c3916df4401
[ "MIT" ]
4
2019-08-20T07:24:57.000Z
2021-11-15T14:14:14.000Z
CTG_RP_Train_Model.ipynb
williamsdoug/CTG_RP
e8fd03f04f0dd63dd31547538bbb2c3916df4401
[ "MIT" ]
1
2020-08-26T17:44:35.000Z
2020-08-26T17:44:35.000Z
CTG_RP_Train_Model.ipynb
williamsdoug/CTG_RP
e8fd03f04f0dd63dd31547538bbb2c3916df4401
[ "MIT" ]
null
null
null
48.333986
21,540
0.445708
[ [ [ "<a href=\"https://colab.research.google.com/github/williamsdoug/CTG_RP/blob/master/CTG_RP_Train_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Generate Datasets and Train Model", "_____no_output_____" ] ], [ [ "#! rm -R images\n! ls", "basic_denoise.py ctg_utils.py\t\t libRP.py\t test.py\ncompute_metadata.py ctu-uhb-ctgdb\t\t __pycache__\nconfig_local.py generate_recurrence_images.py sample_data\n" ], [ "%reload_ext autoreload\n%autoreload 2\n%matplotlib inline", "_____no_output_____" ], [ "import config_local\nfrom config_local import *", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport gc\n\nfrom fastai.vision import *\nfrom fastai.metrics import error_rate\n\nimport torch\nfrom torch import nn\n\n\nimport collections\nimport pprint\nimport random", "_____no_output_____" ], [ "from compute_metadata import get_splits, generate_label_file, generate_lists\nfrom generate_recurrence_images import generate_rp_images, gen_recurrence_params", "_____no_output_____" ] ], [ [ "## Code", "_____no_output_____" ], [ "## Config", "_____no_output_____" ] ], [ [ "np.random.seed(1234)\nrandom.seed(1234)\n\n# Configure Recurrent Plot Parameters\n\nPOLICY='early_valid' # 'best_quality', 'early_valid', 'late_valid'\n\nrp_params = gen_recurrence_params(dimensions=[2], time_delays=[1], percentages=[1,3, 10], use_clip_vals=[False])\nrp_params", "_____no_output_____" ], [ "tfms=[]\nsize=64\nbs=64\nworkers=4\n\npath = Path() / 'images'", "_____no_output_____" ] ], [ [ "## Generate Recurrence Images", "_____no_output_____" ] ], [ [ "generate_rp_images(RECORDINGS_DIR, images_dir=IMAGES_DIR, rp_params=rp_params[:1], \n policy=POLICY,\n show_signal=False, show_image=True, verbose=True, cmap='binary',\n limit=3,\n )", "\nRecord: 1001 Samples: 19200 Duration: 80.0 min Stage.II: 20 min\n" ], [ "generate_rp_images(RECORDINGS_DIR, 
images_dir=IMAGES_DIR, rp_params=rp_params, \n policy=POLICY,\n show_signal=False, show_image=False, verbose=True, cmap='binary',\n )", "_____no_output_____" ], [ "#!ls images", "_____no_output_____" ] ], [ [ "## Generate Train and Valid Label Files", "_____no_output_____" ] ], [ [ "train_valid_groups_full = get_splits(image_dir='images', image_file='rp_images_index.json', \n exclude=['_clipped'],\n thresh = 7.15)\n \n# Create valid_x.csv files for each split\nfor i in range(len(train_valid_groups_full)):\n generate_lists(train_valid_groups_full[i], train_file='train_{}.csv'.format(i),\n valid_file='valid_{}.csv'.format(i))", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "!ls images/*.csv", "images/train_0.csv images/train_3.csv\timages/valid_1.csv images/valid_4.csv\nimages/train_1.csv images/train_4.csv\timages/valid_2.csv\nimages/train_2.csv images/valid_0.csv\timages/valid_3.csv\n" ], [ "train = ImageList.from_csv(path, 'train_0.csv')\nvalid = ImageList.from_csv(path, 'valid_0.csv')\n\nlls = ItemLists(path, train, valid).label_from_df(cols=1).transform(tfms, size=size) \n#db = lls.databunch(bs=bs, num_workers=workers)#.normalize(binary_image_stats)\ndb = lls.databunch(bs=bs, num_workers=workers)\nmy_stats = db.batch_stats()\ndb = lls.databunch(bs=bs, num_workers=workers).normalize(my_stats)", "_____no_output_____" ], [ "db.batch_stats()", "_____no_output_____" ] ], [ [ "### Examine Results", "_____no_output_____" ] ], [ [ "print('nClass: {} classes: {}'.format(db.c, db.classes))\ndb", "nClass: 2 classes: [0, 1]\n" ], [ "im = train.get(-1)\nprint(len(train), im.size)\nim.show()", "492 torch.Size([599, 599])\n" ] ], [ [ "## Model", "_____no_output_____" ] ], [ [ "trial_model = nn.Sequential(\n nn.Sequential(\n nn.Conv2d(3,8,5), # 60 × 60 × 8\n nn.ReLU(),\n nn.AvgPool2d(3, stride=2), # 29 × 29 × 8\n \n #nn.Dropout(p=0.25),\n nn.Conv2d(8,8,5), # 25 × 25 × 8\n nn.ReLU(),\n nn.AvgPool2d(3, stride=2), # 12 × 12 × 8\n \n Flatten() # 1152\n ),\n # removed 
model head to compute flatten size\n)", "_____no_output_____" ], [ "trial_learn = Learner(db, trial_model, loss_func = nn.CrossEntropyLoss(), metrics=accuracy)\ntrial_learn.summary()", "_____no_output_____" ], [ "del trial_model\ntrial_learn.destroy()\ngc.collect()", "this Learner object self-destroyed - it still exists, but no longer usable\n" ], [ "mymodel = nn.Sequential(\n nn.Sequential(\n nn.Conv2d(3,8,5), # 60 × 60 × 8\n nn.ReLU(),\n nn.AvgPool2d(3, stride=2), # 29 × 29 × 8\n \n #nn.Dropout(p=0.25),\n nn.Conv2d(8,8,5), # 25 × 25 × 8\n nn.ReLU(),\n nn.AvgPool2d(3, stride=2), # 12 × 12 × 8\n \n Flatten() # 1152\n ),\n nn.Sequential(\n# nn.Dropout(p=0.25),\n nn.Linear(1152, 144),\n nn.ReLU(),\n nn.Dropout(p=0.8),\n nn.Linear(144, db.c) \n )\n)", "_____no_output_____" ], [ "learn = Learner(db, mymodel, loss_func = nn.CrossEntropyLoss(), metrics=accuracy)\nlearn.summary()\nlearn.save('initial')", "_____no_output_____" ] ], [ [ "# Train Model", "_____no_output_____" ] ], [ [ "learn.fit_one_cycle(1, 1e-6) # learn.fit_one_cycle(1, 0.01)\n\n# learn.save('save-1')", "_____no_output_____" ], [ "learn.lr_find(end_lr=1)\nlearn.recorder.plot()", "_____no_output_____" ], [ "learn.load('initial')\nlearn.fit_one_cycle(100, 3e-3) # learn.fit_one_cycle(1, 0.01)", "_____no_output_____" ], [ "learn.load('initial')\nlearn.fit_one_cycle(100, 1e-2) # learn.fit_one_cycle(1, 0.01)", "_____no_output_____" ], [ "learn.load('initial')\nlearn.fit_one_cycle(100, 1e-3) # learn.fit_one_cycle(1, 0.01)", "_____no_output_____" ], [ "learn.load('initial')\nlearn.fit_one_cycle(100, 1e-4) # learn.fit_one_cycle(1, 0.01)", "_____no_output_____" ], [ "#train an additional 100 epochs\nlearn.fit_one_cycle(100, 1e-4) # learn.fit_one_cycle(1, 0.01)", "_____no_output_____" ], [ "gc.collect()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb769b5a04e8afa9dc53866cb7c95e4e7590e0a
67,877
ipynb
Jupyter Notebook
TensorFlow_MIT/Music Generation.ipynb
Zchappie/DL_Garage
7884a1b5cf5748fbd5fb42e1ab677277bb7f8370
[ "MIT" ]
null
null
null
TensorFlow_MIT/Music Generation.ipynb
Zchappie/DL_Garage
7884a1b5cf5748fbd5fb42e1ab677277bb7f8370
[ "MIT" ]
null
null
null
TensorFlow_MIT/Music Generation.ipynb
Zchappie/DL_Garage
7884a1b5cf5748fbd5fb42e1ab677277bb7f8370
[ "MIT" ]
null
null
null
67,877
67,877
0.838988
[ [ [ "# Music Generation with RNNs ", "_____no_output_____" ] ], [ [ "# Import Tensorflow 2.0\n%tensorflow_version 2.x\nimport tensorflow as tf \n\n# Download and import the MIT 6.S191 package\n!pip install mitdeeplearning\nimport mitdeeplearning as mdl\n\n# Import all remaining packages\nimport numpy as np\nimport os\nimport time\nimport functools\nfrom IPython import display as ipythondisplay\nfrom tqdm import tqdm\n!apt-get install abcmidi timidity > /dev/null 2>&1\n\n# Check that we are using a GPU, if not switch runtimes\n# using Runtime > Change Runtime Type > GPU\nassert len(tf.config.list_physical_devices('GPU')) > 0", "Requirement already satisfied: mitdeeplearning in /usr/local/lib/python3.6/dist-packages (0.1.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)\nRequirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)\nRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)\nRequirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)\nRequirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)\n" ] ], [ [ "# Load dataset", "_____no_output_____" ] ], [ [ "songs = mdl.lab1.load_training_data()\nexample_song = songs[0]\nprint(\"\\nExample song: \")\nprint(example_song)", "Found 816 songs in text\n\nExample song: \nX:2\nT:An Buachaill Dreoite\nZ: id:dc-hornpipe-2\nM:C|\nL:1/8\nK:G 
Major\nGF|DGGB d2GB|d2GF Gc (3AGF|DGGB d2GB|dBcA F2GF|!\nDGGB d2GF|DGGF G2Ge|fgaf gbag|fdcA G2:|!\nGA|B2BG c2cA|d2GF G2GA|B2BG c2cA|d2DE F2GA|!\nB2BG c2cA|d^cde f2 (3def|g2gf gbag|fdcA G2:|!\n" ], [ "mdl.lab1.play_song(example_song)", "_____no_output_____" ], [ "songs_joined = \"\\n\\n\".join(songs) # join song list into one string\nvocab = sorted(songs_joined)\nprint(\"There are\", len(vocab), \"unique characters in the dataset\")", "There are 200425 unique characters in the dataset\n" ] ], [ [ "# Process the dataset for learning\nTwo maps are needed: from character to number; from number to character", "_____no_output_____" ] ], [ [ "char2idx = {u:i for i, u in enumerate(vocab)} #dict\nidx2char = np.array(vocab)\n# print(char2idx, idx2char)", "_____no_output_____" ], [ "print('{')\nfor char,_ in zip(char2idx, range(20)):\n print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))\nprint(' ...\\n}')", "{\n '\\n': 9425,\n ' ' : 28079,\n '!' : 31744,\n '\"' : 31752,\n '#' : 31753,\n \"'\" : 32054,\n '(' : 32311,\n ')' : 32312,\n ',' : 32876,\n '-' : 34545,\n '.' : 34586,\n '/' : 36695,\n '0' : 36932,\n '1' : 38624,\n '2' : 47838,\n '3' : 50578,\n '4' : 51248,\n '5' : 51550,\n '6' : 52105,\n '7' : 52377,\n ...\n}\n" ], [ "# convert songs string to a single vector\ndef vectorize_string(string):\n return np.array([char2idx[i] for i in string])\nvectorized_songs = vectorize_string(songs_joined)", "_____no_output_____" ], [ "print('{} --- characters mapped to int ----> {}'.format(repr(songs_joined[:10]), vectorized_songs[:10]))\nassert isinstance(vectorized_songs, np.ndarray), \"returned result should be a numpy array\"\nprint(vectorized_songs.shape)", "'X:2\\nT:An B' --- characters mapped to int ----> [114282 61308 47838 9425 113368 61308 74047 177741 28079 85361]\n(200425,)\n" ] ], [ [ "## Create training examples and targets\nE.g. 
input -> \"Hell\", output -> \"ello\", original text -> \"Hello\"", "_____no_output_____" ] ], [ [ "def get_batch(vectorized_songs, seq_length, batch_size):\n n = vectorized_songs.shape[0] - 1\n idx = np.random.choice(n-seq_length, batch_size)\n\n input_batch = np.array([vectorized_songs[i: i+seq_length] for i in idx])\n output_batch = np.array([vectorized_songs[i+1: i+1+seq_length] for i in idx])\n x_batch = np.reshape(input_batch, [batch_size, seq_length])\n y_batch = np.reshape(output_batch, [batch_size, seq_length])\n return x_batch, y_batch\n\ntest_args = (vectorized_songs, 10, 2)\nif not mdl.lab1.test_batch_func_types(get_batch, test_args) or \\\n not mdl.lab1.test_batch_func_shapes(get_batch, test_args) or \\\n not mdl.lab1.test_batch_func_next_step(get_batch, test_args):\n print(\"======\\n[FAIL] could not pass tests\")\nelse:\n print(\"======\\n[PASS] passed all tests!\")", "[PASS] test_batch_func_types\n[PASS] test_batch_func_shapes\n[PASS] test_batch_func_next_step\n======\n[PASS] passed all tests!\n" ], [ "x_batch, y_batch = get_batch(vectorized_songs, seq_length=5, batch_size=1)\nfor i, (input_idx, target_idx) in enumerate(zip(np.squeeze(x_batch), np.squeeze(y_batch))):\n print(\"Step {:3d}\".format(i))\n print(\" input: {} ({:s})\".format(input_idx, repr(idx2char[input_idx])))\n print(\" expected output: {} ({:s})\".format(target_idx, repr(idx2char[target_idx])))", "Step 0\n input: 164610 ('f')\n expected output: 171664 ('g')\nStep 1\n input: 171664 ('g')\n expected output: 28079 (' ')\nStep 2\n input: 28079 (' ')\n expected output: 121779 ('a')\nStep 3\n input: 121779 ('a')\n expected output: 47838 ('2')\nStep 4\n input: 47838 ('2')\n expected output: 122955 ('b')\n" ] ], [ [ "## RNN model", "_____no_output_____" ] ], [ [ "def LSTM(rnn_units):\n return tf.keras.layers.LSTM(\n rnn_units,\n return_sequences = True,\n recurrent_initializer = 'glorot_uniform',\n recurrent_activation = 'sigmoid',\n stateful = True\n )", "_____no_output_____" ], [ "def 
build_model(vocab_size, embedding_dim, rnn_units, batch_size):\n model = tf.keras.Sequential([\n tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),\n LSTM(rnn_units),\n tf.keras.layers.Dense(vocab_size)\n ])\n return model\n\nmodel = build_model(len(vocab), embedding_dim=256, rnn_units=1024, batch_size=32)", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (32, None, 256) 51308800 \n_________________________________________________________________\nlstm (LSTM) (32, None, 1024) 5246976 \n_________________________________________________________________\ndense (Dense) (32, None, 200425) 205435625 \n=================================================================\nTotal params: 261,991,401\nTrainable params: 261,991,401\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "x, y = get_batch(vectorized_songs, seq_length=100, batch_size=32)\npred = model(x)\nprint(\"Input shape: \", x.shape, \" # (batch_size, sequence_length\")\nprint(\"Output shape: \", pred.shape, \" # (batch_size, sequence_length, vocab_size)\")", "Input shape: (32, 100) # (batch_size, sequence_length\nOutput shape: (32, 100, 200425) # (batch_size, sequence_length, vocab_size)\n" ], [ "sample_indices = tf.random.categorical(pred[0], num_samples=1)\nsample_indices = tf.squeeze(sample_indices, axis=-1).numpy()\nsample_indices", "_____no_output_____" ], [ "print(\"Input: \\n\", repr(\"\".join(idx2char[x[0]])))\nprint()\nprint(\"Next Char Prediction: \\n\", repr(\"\".join(idx2char[sample_indices])))", "Input: \n ' BGG:|!\\nA|BABd g2bg|agef gedB|GABd g2bg|agef g2ga|!\\nbgg2 agef|gage dBGE|DEGA Bdge|dBAc BGG:|!\\n\\nX:12\\n'\n\nNext Char Prediction: \n 'g]||,28BAG f8Ddea 2A GFGhg-ed2|u-Gd-|r fMfB:!lGeEyFD|\\n 
dL2GdGBBFcB\\naeG e:g2! eGf \\nd f f1 f\\nd:d|:|Lf'\n" ], [ "## Define the loss ", "_____no_output_____" ], [ "def compute_loss(labels, logits):\n loss = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)\n return loss\n\nexample_batch_loss = compute_loss(y, pred)\nprint(\"Prediction shape: \", pred.shape, \" # (batch_size, sequence_length, vocab_size)\")\nprint(\"Scalar_loss: \", example_batch_loss.numpy().mean())", "Prediction shape: (32, 100, 200425) # (batch_size, sequence_length, vocab_size)\nScalar_loss: 12.208126\n" ] ], [ [ "## Hyperparameters ", "_____no_output_____" ] ], [ [ "num_training_iterations = 200\nbatch_size = 4 # Experiment between 1 and 64\nseq_length = 100 # Experiment between 50 and 500\nlearning_rate = 5e-3 # Experiment between 1e-5 and 1e-1\n\nvocab_size = len(vocab) \nembedding_dim = 256\nrnn_units = 1024 # Experiment between 1 and 2048\n\ncheckpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"my_ckpt\")", "_____no_output_____" ], [ "model = build_model(vocab_size, embedding_dim, rnn_units, batch_size)\noptimizer = tf.keras.optimizers.Adam(learning_rate)\n\n# model.trainable_variables\n\[email protected]\ndef train_step(x, y):\n with tf.GradientTape() as tape:\n y_hat = model(x)\n loss = compute_loss(y, y_hat)\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\n return loss\n \n##################\n# Begin training!#\n##################\n\nhistory = []\nplotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss')\nif hasattr(tqdm, '_instances'): tqdm._instances.clear()\n\nfor iter in tqdm(range(num_training_iterations)):\n x_batch, y_batch = get_batch(vectorized_songs, seq_length, batch_size)\n loss = train_step(x_batch, y_batch)\n \n history.append(loss.numpy().mean())\n plotter.plot(history)\n \n if iter % 100 == 0:\n 
model.save_weights(checkpoint_prefix)\n\nmodel.save_weights(checkpoint_prefix)", "_____no_output_____" ], [ "model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)\nmodel.load_weights(tf.train.latest_checkpoint(checkpoint_dir))\nmodel.build(tf.TensorShape([1, None]))\n\nmodel.summary()", "Model: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_2 (Embedding) (1, None, 256) 51308800 \n_________________________________________________________________\nlstm_2 (LSTM) (1, None, 1024) 5246976 \n_________________________________________________________________\ndense_2 (Dense) (1, None, 200425) 205435625 \n=================================================================\nTotal params: 261,991,401\nTrainable params: 261,991,401\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# prediction of a generated song\ndef generate_text(model, start_string, generation_length=1000):\n input_eval = [char2idx[s] for s in start_string]\n input_eval = tf.expand_dims(input_eval, 0)\n\n text_generated = []\n model.reset_states()\n tqdm._instances.clear()\n\n for i in tqdm(range(generation_length)):\n predictions = model(input_eval)\n predictions = tf.squeeze(predictions, 0) # remove batch dim\n predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()\n\n # Pass the prediction along with the previous hidden state\n # as the next inputs to the model\n input_eval = tf.expand_dims([predicted_id], 0) \n text_generated.append(idx2char[predicted_id])\n\n return (start_string + ''.join(text_generated))", "_____no_output_____" ], [ "generated_text = generate_text(model, 'X', generation_length=1000)", "100%|██████████| 1000/1000 [00:07<00:00, 127.28it/s]\n" ], [ "print(generated_text)", "Xne:l2ef3fAcFF\nc2B2F 8:efBFAeTegeea:Co\nFeBef\nfGr eE| 2\n 
d2A\n\n2ee\n\n|\ng|d:FX623E adgfAGA\nc\n,\nA AX\n2eB\n\n|f acAa2cc d2dTa\n|/ABdeB- \nc3ee 3\nem3\neg0f!ecE'f8\n93efCdfd 8efaB\nFG|\n\n\neef 3 BAe e|BXTe2f cFG' |\n\nfC2gf vdX'3d2G\ne\n\nedbT\n\ndA e^BD\n\n|egF83:a ee8e Ae c,f3B \n eBpd:::Xe82b fmceBe\ng!:\nXfZ2Adm3 T\n2fdXe:ed E\nKf:f\n2g|i38Ge:d3B|cngf d|\nT 2:e:AfKCB3e\nAe\n7fcA2\ng3f\nd\n| ||eBPF\n \neAe2 F efB2ecBA\n\n Xe!Gb:T c82dePf2!p3\n|a\nd8\nd| -FBfAT8ee3Fn\nd/dg dgG\n|g\n\n\ndm /2| 8c|edeeF:f3d\n\n nefff:ffFFf A!2 ee\nl3Ac\nb2ee2\n2e|e:eoGBXlaecdTaA Gedca 2b|e2Go l!> eAdd\n7g\n2b/!|AA3oeB 283e2eBeC:AXEGGe eZd e32dgeF-XgB\nGe:a23\n\nXge dKfa:3fe=\nce\n2 FceA 232ag:bff\neBFgf2f8ofr !dFgc HBml\n:BeG2a2da2f|:edc\n 'eaedTeBfdGBd 2\ne:\n7\nFf2Xee 2d22 eFEEee72meiFefBE:\ncGfBeaX2dF|c 2e:8\nFfBcA e\nAfl: e7ec e ee\ne3 aB 3e!23|ef|d gBd gfdd2\noCe eedgfa8eBede3AdBg Tp2d c:\nZedFmE\nA aZe \n\ndFeEEn e:ddjA4|2egf d\nZ|\nedEge\nXZfiFfG\neie eA mu2Gf\ne:cf3d deXeGcadedF\nfdCdf\n2dB3\nB\ncee |f : F c co\n'2dddf\n3e\ne:-3:\nlieT\nB 2f\nAteegEX!3 2fE3T de7XAF|nFcdg Fg2a\n" ], [ "generated_songs = mdl.lab1.extract_song_snippet(generated_text)\n\nfor i, song in enumerate(generated_songs):\n # mdl.lab1.play_song(song)\n waveform = mdl.lab1.play_song(song)\n\n if waveform:\n print(\"Generated song\", i)\n ipythondisplay.display(waveform)", "Found 14 songs in text\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb77ec1d32ee085fad56bf7b4942c2e350afc21
11,655
ipynb
Jupyter Notebook
datastructures/HashTable.ipynb
HalmonLui/coding-lessons
0579b9f661cfa9cee61bfbe95bbb52d3f6868073
[ "MIT" ]
1
2020-11-23T00:53:59.000Z
2020-11-23T00:53:59.000Z
datastructures/.ipynb_checkpoints/HashTable-checkpoint.ipynb
HalmonLui/coding-lessons
0579b9f661cfa9cee61bfbe95bbb52d3f6868073
[ "MIT" ]
null
null
null
datastructures/.ipynb_checkpoints/HashTable-checkpoint.ipynb
HalmonLui/coding-lessons
0579b9f661cfa9cee61bfbe95bbb52d3f6868073
[ "MIT" ]
null
null
null
38.213115
211
0.491892
[ [ [ "# Developer: Halmon Lui\n# Implement a Hash Table using Linear Probing from scratch\n\nclass HashTable:\n def __init__(self, length=11):\n self.hash_list = [None for _ in range(length)]\n self.length = length\n self.item_count = 0\n \n # hash key where m is size of table\n def _hash(self, k, m):\n return hash(k) % m\n \n # if key exists, update value\n def add(self, key, value):\n # If we are full return immediately\n if self.item_count==self.length:\n return 'Error: HashTable is full'\n \n hash_key = self._hash(key, self.length)\n # If there is something in the location make sure we aren't colliding\n if self.hash_list[hash_key]:\n # If the keys match update the value\n if self.hash_list[hash_key][0] == key:\n self.hash_list[hash_key] = (key, value)\n self.item_count += 1\n # Handle collision case\n elif self.item_count < self.length:\n # Linear probe for a free spot\n for count, location in enumerate(self.hash_list):\n if not location:\n self.hash_list[count] = (key, value)\n self.item_count += 1\n return\n # Slot is free to add the key, value pair\n else:\n self.hash_list[hash_key] = (key, value)\n self.item_count += 1\n \n # check if key exists\n def exists(self, key):\n hash_key = self._hash(key, self.length)\n if self.hash_list[hash_key]:\n # Return true if matching key\n if self.hash_list[hash_key][0] == key:\n return True\n # Handle collision case\n else:\n # Linear probe for matching key\n for item in self.hash_list:\n if item and item[0]==key:\n return True\n return False\n \n \n # get value from key\n def get(self, key):\n hash_key = self._hash(key, self.length)\n if self.hash_list[hash_key]:\n # Return value if matching key\n if self.hash_list[hash_key][0] == key:\n return self.hash_list[hash_key][1]\n # Handle collision case\n else:\n for item in self.hash_list:\n if item and item[0]==key:\n return item[1]\n return 'Error: Invalid Key'\n \n # remove value at key\n def remove(self, key):\n hash_key = self._hash(key, self.length)\n if 
self.hash_list[hash_key]:\n # Delete if key matches\n if self.hash_list[hash_key][0] == key:\n self.hash_list[hash_key] = None\n self.item_count -= 1\n # handle collision case\n else:\n for count, item in enumerate(self.hash_list):\n if item and item[0]==key:\n self.hash_list[count] = None\n return 'Error: Invalid Key'\n", "_____no_output_____" ], [ "# Test HashTable methods\n\n# Initialize HashTable object\nht = HashTable()\nprint('Created HashTable: ', ht.hash_list)\n\n# Add to table, check if exists and get it\nprint('Adding key1')\nht.add('key1', 'hello')\nprint('Check if key1 exists: ', ht.exists('key1'))\nprint('Get value of key1: ', ht.get('key1'))\nprint(ht.hash_list)\n\n# Remove key1 from table and get it\nprint('Removing key1')\nht.remove('key1')\nprint('Check if key1 exists: ', ht.exists('key1'))\nprint('Get value of key1: ', ht.get('key1'))\nprint(ht.hash_list)\n\nprint('###########################################')\n\n# Add to table, check if exists and get it\nht = HashTable()\nprint('Adding key1 and key2')\nht.add('key1', 'hello')\nht.add('key2', 'world')\n\nprint('Check if key1 exists: ', ht.exists('key1'))\nprint('Get value of key1: ', ht.get('key1'))\nprint('Check if key2 exists: ', ht.exists('key2'))\nprint('Get value of key2: ', ht.get('key2'))\nprint(ht.hash_list)\n\n# Remove key1 from table and get it\nprint('Removing key1')\nht.remove('key1')\nprint('Check if key1 exists: ', ht.exists('key1'))\nprint('Get value of key1: ', ht.get('key1'))\nprint('Check if key1 exists: ', ht.exists('key2'))\nprint('Get value of key1: ', ht.get('key2'))\nprint(ht.hash_list)\n\nprint('###########################################')\n\n# Add to table, check if exists and get it\nht = HashTable()\nprint('Fill up the table and check for collisions')\nht.add('key1', 'aaaa')\nht.add('key2', 'bbbb')\nht.add('key3', 'cccc')\nht.add('key4', 'dddd')\nht.add('key5', 'eeee')\nht.add('key6', 'ffff')\nht.add('key7', 'gggg')\nht.add('key8', 'hhhh')\nht.add('key9', 
'iiii')\nht.add('key10', 'jjjj')\nht.add('key11', 'kkkk')\n\nprint('Check if key1 exists: ', ht.exists('key1'))\nprint('Get value of key1: ', ht.get('key1'))\nprint('Check if key2 exists: ', ht.exists('key2'))\nprint('Get value of key2: ', ht.get('key2'))\nprint('Check if key3 exists: ', ht.exists('key3'))\nprint('Get value of key3: ', ht.get('key3'))\nprint('Check if key4 exists: ', ht.exists('key4'))\nprint('Get value of key4: ', ht.get('key4'))\nprint('Check if key5 exists: ', ht.exists('key5'))\nprint('Get value of key5: ', ht.get('key5'))\nprint('Check if key6 exists: ', ht.exists('key6'))\nprint('Get value of key6: ', ht.get('key6'))\nprint('Check if key7 exists: ', ht.exists('key7'))\nprint('Get value of key7: ', ht.get('key7'))\nprint('Check if key8 exists: ', ht.exists('key8'))\nprint('Get value of key8: ', ht.get('key8'))\nprint('Check if key9 exists: ', ht.exists('key9'))\nprint('Get value of key9: ', ht.get('key9'))\nprint('Check if key10 exists: ', ht.exists('key10'))\nprint('Get value of key10: ', ht.get('key10'))\nprint('Check if key11 exists: ', ht.exists('key11'))\nprint('Get value of key11: ', ht.get('key11'))\nprint(ht.hash_list)\n\nprint('test removing key11')\nht.remove('key11')\nprint(ht.hash_list)", "Created HashTable: [None, None, None, None, None, None, None, None, None, None, None]\nAdding key1\nCheck if key1 exists: True\nGet value of key1: hello\n[None, None, None, None, None, None, None, None, None, None, ('key1', 'hello')]\nRemoving key1\nCheck if key1 exists: False\nGet value of key1: Error: Invalid Key\n[None, None, None, None, None, None, None, None, None, None, None]\n###########################################\nAdding key1 and key2\nCheck if key1 exists: True\nGet value of key1: hello\nCheck if key2 exists: True\nGet value of key2: world\n[None, None, None, None, None, None, None, None, ('key2', 'world'), None, ('key1', 'hello')]\nRemoving key1\nCheck if key1 exists: False\nGet value of key1: Error: Invalid Key\nCheck if key1 
exists: True\nGet value of key1: world\n[None, None, None, None, None, None, None, None, ('key2', 'world'), None, None]\n###########################################\nFill up the table and check for collisions\nCheck if key1 exists: True\nGet value of key1: aaaa\nCheck if key2 exists: True\nGet value of key2: bbbb\nCheck if key3 exists: True\nGet value of key3: cccc\nCheck if key4 exists: True\nGet value of key4: dddd\nCheck if key5 exists: True\nGet value of key5: eeee\nCheck if key6 exists: True\nGet value of key6: ffff\nCheck if key7 exists: True\nGet value of key7: gggg\nCheck if key8 exists: True\nGet value of key8: hhhh\nCheck if key9 exists: True\nGet value of key9: iiii\nCheck if key10 exists: True\nGet value of key10: jjjj\nCheck if key11 exists: True\nGet value of key11: kkkk\n[('key3', 'cccc'), ('key5', 'eeee'), ('key6', 'ffff'), ('key7', 'gggg'), ('key4', 'dddd'), ('key8', 'hhhh'), ('key9', 'iiii'), ('key10', 'jjjj'), ('key2', 'bbbb'), ('key11', 'kkkk'), ('key1', 'aaaa')]\ntest removing key11\n[('key3', 'cccc'), ('key5', 'eeee'), ('key6', 'ffff'), ('key7', 'gggg'), ('key4', 'dddd'), ('key8', 'hhhh'), ('key9', 'iiii'), ('key10', 'jjjj'), ('key2', 'bbbb'), None, ('key1', 'aaaa')]\n" ], [ "# Test bad cases\n\n# Add to table, check if exists and get it\nht = HashTable()\nht.add('key1', 'aaaa')\nht.add('key2', 'bbbb')\nht.add('key3', 'cccc')\nht.add('key4', 'dddd')\nht.add('key5', 'eeee')\nht.add('key6', 'ffff')\nht.add('key7', 'gggg')\nht.add('key8', 'hhhh')\nht.add('key9', 'iiii')\nht.add('key10', 'jjjj')\nht.add('key11', 'kkkk')\n\nprint('Try adding over table size: ', ht.add('key12', 'no bueno'))\nprint('Try getting invalid key', ht.get('badkeyhere'))\nprint('Try removing invalid key', ht.get('notpossible'))", "Try adding over table size: Error: HashTable is full\nTry getting invalid key Error: Invalid Key\nTry removing invalid key Error: Invalid Key\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbb7959ab696077afd1d52f5c02e52967599c8e7
482,237
ipynb
Jupyter Notebook
xgboost_debugger_demo.ipynb
juliaangxy/amazon-sagemaker-immersion-day
bd42c73b901423331a114900f57fe1334d95e72e
[ "MIT-0" ]
1
2021-08-31T02:58:29.000Z
2021-08-31T02:58:29.000Z
xgboost_debugger_demo.ipynb
Napkin-DL/amazon-sagemaker-immersion-day
196d106b1f6dcdd4acd25a284c7fa10ffa5f515a
[ "MIT-0" ]
1
2021-12-12T00:05:42.000Z
2021-12-12T00:05:42.000Z
xgboost_debugger_demo.ipynb
Napkin-DL/amazon-sagemaker-immersion-day
196d106b1f6dcdd4acd25a284c7fa10ffa5f515a
[ "MIT-0" ]
1
2021-08-16T04:03:27.000Z
2021-08-16T04:03:27.000Z
285.010047
198,556
0.904097
[ [ [ "# Targeting Direct Marketing with Amazon SageMaker XGBoost\n_**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_\n\n\n## Background\nDirect marketing, either through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention is limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem.\n\nThis notebook presents an example problem to predict if a customer will enroll for a term deposit at a bank, after one or more phone calls. The steps include:\n\n* Preparing your Amazon SageMaker notebook\n* Downloading data from the internet into Amazon SageMaker\n* Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms\n* Estimating a model using the Gradient Boosting algorithm\n* Evaluating the effectiveness of the model\n* Setting the model up to make on-going predictions\n\n---\n\n## Preparation\n\n_This notebook was created and tested on an ml.m4.xlarge notebook instance._\n\nLet's start by specifying:\n\n- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.\n- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. 
Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s).", "_____no_output_____" ] ], [ [ "# Define IAM role\nimport boto3\nimport sagemaker\nimport re\nfrom sagemaker import get_execution_role\n\nregion = boto3.Session().region_name\nsession = sagemaker.Session()\nbucket = session.default_bucket()\nprefix = 'sagemaker/DEMO-xgboost-dm'\nrole = get_execution_role() ", "_____no_output_____" ] ], [ [ "Now let's bring in the Python libraries that we'll use throughout the analysis.", "_____no_output_____" ] ], [ [ "import numpy as np # For matrix operations and numerical processing\nimport pandas as pd # For munging tabular data\nimport matplotlib.pyplot as plt # For charts and visualizations\nfrom IPython.display import Image # For displaying images in the notebook\nfrom IPython.display import display # For displaying outputs in the notebook\nfrom time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.\nimport sys # For writing outputs to notebook\nimport math # For ceiling function\nimport json # For parsing hosting outputs\nimport os # For manipulating filepath names\nimport sagemaker # Amazon SageMaker's Python SDK provides many helper functions\nfrom sagemaker.predictor import csv_serializer # Converts strings for HTTP POST requests on inference\n! 
python -m pip install smdebug ", "Collecting smdebug\n Downloading smdebug-1.0.2-py2.py3-none-any.whl (260 kB)\n\u001b[K |████████████████████████████████| 260 kB 17.2 MB/s eta 0:00:01\n\u001b[?25hCollecting pyinstrument>=3.1.3\n Downloading pyinstrument-3.3.0-py2.py3-none-any.whl (85 kB)\n\u001b[K |████████████████████████████████| 85 kB 440 kB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from smdebug) (20.1)\nRequirement already satisfied: numpy<2.0.0,>1.16.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.18.1)\nRequirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (3.14.0)\nRequirement already satisfied: boto3>=1.10.32 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.16.56)\nCollecting pyinstrument-cext>=0.2.2\n Downloading pyinstrument_cext-0.2.3-cp37-cp37m-manylinux2010_x86_64.whl (20 kB)\nRequirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->smdebug) (2.4.6)\nRequirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from packaging->smdebug) (1.14.0)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.10.0)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.3.4)\nRequirement already satisfied: botocore<1.20.0,>=1.19.56 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (1.19.56)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/conda/lib/python3.7/site-packages (from botocore<1.20.0,>=1.19.56->boto3>=1.10.32->smdebug) (2.8.1)\nRequirement already satisfied: urllib3<1.27,>=1.25.4; python_version != \"3.4\" in /opt/conda/lib/python3.7/site-packages (from botocore<1.20.0,>=1.19.56->boto3>=1.10.32->smdebug) (1.25.8)\nInstalling collected packages: 
pyinstrument-cext, pyinstrument, smdebug\nSuccessfully installed pyinstrument-3.3.0 pyinstrument-cext-0.2.3 smdebug-1.0.2\n" ] ], [ [ "---\n\n## Data\nLet's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data s3 bucket. \n\n\\[Moro et al., 2014\\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014\n", "_____no_output_____" ] ], [ [ "!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n!conda install -y -c conda-forge unzip\n!unzip -o bank-additional.zip", "--2021-02-02 07:22:44-- https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\nResolving sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)... 52.218.221.89\nConnecting to sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)|52.218.221.89|:443... connected.\nHTTP request sent, awaiting response... 
200 OK\nLength: 432828 (423K) [application/zip]\nSaving to: ‘bank-additional.zip.1’\n\nbank-additional.zip 100%[===================>] 422.68K 864KB/s in 0.5s \n\n2021-02-02 07:22:45 (864 KB/s) - ‘bank-additional.zip.1’ saved [432828/432828]\n\nCollecting package metadata (current_repodata.json): done\nSolving environment: done\n\n# All requested packages already installed.\n\nArchive: bank-additional.zip\n inflating: bank-additional/bank-additional-names.txt \n inflating: bank-additional/bank-additional.csv \n inflating: bank-additional/bank-additional-full.csv \n" ] ], [ [ "Now lets read this into a Pandas data frame and take a look.", "_____no_output_____" ] ], [ [ "data = pd.read_csv('./bank-additional/bank-additional-full.csv')\npd.set_option('display.max_rows',10)\ndata", "_____no_output_____" ] ], [ [ "Let's talk about the data. At a high level, we can see:\n\n* We have a little over 40K customer records, and 20 features for each customer\n* The features are mixed; some numeric, some categorical\n* The data appears to be sorted, at least by `time` and `contact`, maybe more\n\n_**Specifics on each of the features:**_\n\n*Demographics:*\n* `age`: Customer's age (numeric)\n* `job`: Type of job (categorical: 'admin.', 'services', ...)\n* `marital`: Marital status (categorical: 'married', 'single', ...)\n* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)\n\n*Past customer events:*\n* `default`: Has credit in default? (categorical: 'no', 'unknown', ...)\n* `housing`: Has housing loan? (categorical: 'no', 'yes', ...)\n* `loan`: Has personal loan? (categorical: 'no', 'yes', ...)\n\n*Past direct marketing contacts:*\n* `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)\n* `month`: Last contact month of year (categorical: 'may', 'nov', ...)\n* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)\n* `duration`: Last contact duration, in seconds (numeric). 
Important note: If duration = 0 then `y` = 'no'.\n \n*Campaign information:*\n* `campaign`: Number of contacts performed during this campaign and for this client (numeric, includes last contact)\n* `pdays`: Number of days that passed by after the client was last contacted from a previous campaign (numeric)\n* `previous`: Number of contacts performed before this campaign and for this client (numeric)\n* `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent','success', ...)\n\n*External environment factors:*\n* `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)\n* `cons.price.idx`: Consumer price index - monthly indicator (numeric)\n* `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)\n* `euribor3m`: Euribor 3 month rate - daily indicator (numeric)\n* `nr.employed`: Number of employees - quarterly indicator (numeric)\n\n*Target variable:*\n* `y`: Has the client subscribed a term deposit? (binary: 'yes','no')", "_____no_output_____" ], [ "### Transformation\n\nCleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly and is one of the more subjective aspects in the process. Several common techniques include:\n\n* Handling missing values: Some machine learning algorithms are capable of handling missing values, but most would rather not. 
Options include:\n * Removing observations with missing values: This works well if only a very small fraction of observations have incomplete information.\n * Removing features with missing values: This works well if there are a small number of features which have a large number of missing values.\n * Imputing missing values: Entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.\n* Converting categorical to numeric: The most common method is one hot encoding, which for each feature maps every distinct value of that column to its own feature which takes a value of 1 when the categorical feature is equal to that value, and 0 otherwise.\n* Oddly distributed data: Although for non-linear models like Gradient Boosted Trees, this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. In some cases, simply taking the natural log of the features is sufficient to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model when one hot encoded.\n* Handling more complicated data types: Manipulating images, text, or data at varying grains is left for other notebook templates.\n\nLuckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well at handling sparse or oddly distributed data. 
Therefore, let's keep pre-processing simple.", "_____no_output_____" ] ], [ [ "data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999\ndata['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed\nmodel_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators", "_____no_output_____" ] ], [ [ "Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, then will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting weather far enough out to plan inventory on umbrellas is probably just as difficult as forecasting umbrella sales without knowledge of the weather. So, including this in your model may give you a false sense of precision.\n\nFollowing this logic, let's remove the economic features and `duration` from our data as they would need to be forecasted with high precision to use as inputs in future predictions.\n\nEven if we were to use values of the economic indicators from the previous quarter, this value is likely not as relevant for prospects contacted early in the next quarter as those contacted later on.", "_____no_output_____" ] ], [ [ "model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)", "_____no_output_____" ] ], [ [ "When building a model whose primary goal is to predict a target value on new data, it is important to understand overfitting. Supervised learning models are designed to minimize error between their predictions of the target value and actuals, in the data they are given. 
This last part is key, as frequently in their quest for greater accuracy, machine learning models bias themselves toward picking up on minor idiosyncrasies within the data they are shown. These idiosyncrasies then don't repeat themselves in subsequent data, meaning those predictions can actually be made less accurate, at the expense of more accurate predictions in the training phase.\n\nThe most common way of preventing this is to build models with the concept that a model shouldn't only be judged on its fit to the data it was trained on, but also on \"new\" data. There are several different ways of operationalizing this, holdout validation, cross-validation, leave-one-out validation, etc. For our purposes, we'll simply randomly split the data into 3 uneven groups. The model will be trained on 70% of data, it will then be evaluated on 20% of data to give us an estimate of the accuracy we hope to have on \"new\" data, and 10% will be held back as a final testing dataset which will be used later on.", "_____no_output_____" ] ], [ [ "train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%", "_____no_output_____" ] ], [ [ "Amazon SageMaker's XGBoost container expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive it's easiest to do this after the train|validation|test split rather than before. 
This avoids any misalignment issues due to random reordering.", "_____no_output_____" ] ], [ [ "pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)\npd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)", "_____no_output_____" ] ], [ [ "Now we'll copy the files to S3 for Amazon SageMaker's managed training to pick up.", "_____no_output_____" ] ], [ [ "boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')\nboto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')", "_____no_output_____" ] ], [ [ "---\n\n## Training\nNow we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm.\n\nThere are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate on gradient boosted trees further and how they differ from similar algorithms.\n\n`xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. 
Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework.\n\nFirst we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost.", "_____no_output_____" ] ], [ [ "from sagemaker.amazon.amazon_estimator import get_image_uri\ncontainer = sagemaker.image_uris.retrieve(region=boto3.Session().region_name, framework='xgboost', version='1.0-1')", "_____no_output_____" ] ], [ [ "Then, because we're training with the CSV file format, we'll create `s3_input`s that our training function can use as a pointer to the files in S3, which also specify that the content type is CSV.", "_____no_output_____" ] ], [ [ "s3_input_train = sagemaker.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')\ns3_input_validation = sagemaker.TrainingInput(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')", "_____no_output_____" ], [ "base_job_name = \"demo-smdebug-xgboost-regression\"\nbucket_path='s3://{}/{}/output'.format(bucket, prefix)", "_____no_output_____" ] ], [ [ "### Enabling Debugger in Estimator object\n\n\n#### DebuggerHookConfig\n\nEnabling Amazon SageMaker Debugger in training job can be accomplished by adding its configuration into Estimator object constructor:\n\n```python\nfrom sagemaker.debugger import DebuggerHookConfig, CollectionConfig\n\nestimator = Estimator(\n ...,\n debugger_hook_config = DebuggerHookConfig(\n s3_output_path=\"s3://{bucket_name}/{location_in_bucket}\", # Required\n collection_configs=[\n CollectionConfig(\n name=\"metrics\",\n parameters={\n \"save_interval\": \"10\"\n }\n )\n ]\n )\n)\n```\nHere, the `DebuggerHookConfig` object instructs `Estimator` what data we are interested in.\nTwo parameters are provided in the example:\n\n- `s3_output_path`: it points to S3 bucket/path where we intend to store our debugging tensors.\n Amount of data saved depends on multiple factors, major ones are: training job / 
data set / model / frequency of saving tensors.\n This bucket should be in your AWS account, and you should have full access control over it.\n **Important Note**: this s3 bucket should be originally created in the same region where your training job will be running, otherwise you might run into problems with cross region access.\n\n- `collection_configs`: it enumerates named collections of tensors we want to save.\n Collections are a convenient way to organize relevant tensors under the same umbrella to make it easy to navigate them during analysis.\n In this particular example, you are instructing Amazon SageMaker Debugger that you are interested in a single collection named `metrics`.\n We also instructed Amazon SageMaker Debugger to save metrics every 10 iterations.\n See [Collection](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md#collection) documentation for all parameters that are supported by Collections and DebuggerConfig documentation for more details about all parameters DebuggerConfig supports.\n \n#### Rules\n\nEnabling rules in a training job can be accomplished by adding the `rules` configuration to the Estimator object constructor.\n\n- `rules`: This new parameter will accept a list of rules you wish to evaluate against the tensors output by this training job.\n For rules, Amazon SageMaker Debugger supports two types:\n - SageMaker Rules: These are rules specially curated by the data science and engineering teams in Amazon SageMaker which you can opt to evaluate against your training job.\n - Custom Rules: You can optionally choose to write your own rule as a Python source file and have it evaluated against your training job.\n To enable Amazon SageMaker Debugger to evaluate this rule, you would have to provide the S3 location of the rule source and the evaluator image.\n\nIn this example, you will use Amazon SageMaker's LossNotDecreasing rule, which helps you identify if you are running into a situation where the training loss is 
not going down.\n\n```python\nfrom sagemaker.debugger import rule_configs, Rule\n\nestimator = Estimator(\n ...,\n rules=[\n Rule.sagemaker(\n rule_configs.loss_not_decreasing(),\n rule_parameters={\n \"collection_names\": \"metrics\",\n \"num_steps\": \"10\",\n },\n ),\n ],\n)\n```\n\n- `rule_parameters`: In this parameter, you provide the runtime values of the parameters in your constructor.\n You can still choose to pass in other values which may be necessary for your rule to be evaluated.\n In this example, you will use Amazon SageMaker's LossNotDecreasing rule to monitor the `metrics` collection.\n The rule will alert you if the tensors in `metrics` have not decreased for more than 10 steps.", "_____no_output_____" ], [ "First we'll need to specify training parameters to the estimator. This includes:\n1. The `xgboost` algorithm container\n1. The IAM role to use\n1. Training instance type and count\n1. S3 location for output data\n1. Algorithm hyperparameters\n\nAnd then a `.fit()` function which specifies:\n1. S3 location for output data. 
In this case we have both a training and validation set which are passed in.", "_____no_output_____" ] ], [ [ "from sagemaker.debugger import rule_configs, Rule, DebuggerHookConfig, CollectionConfig\nfrom sagemaker.estimator import Estimator\nsess = sagemaker.Session()\n\nsave_interval = 5 \n\nxgboost_estimator = Estimator(\n role=role,\n base_job_name=base_job_name,\n instance_count=1,\n instance_type='ml.m5.4xlarge',\n image_uri=container,\n max_run=1800,\n sagemaker_session=sess,\n debugger_hook_config=DebuggerHookConfig(\n s3_output_path=bucket_path, # Required\n collection_configs=[\n CollectionConfig(\n name=\"metrics\",\n parameters={\n \"save_interval\": str(save_interval)\n }\n ),\n CollectionConfig(\n name=\"predictions\",\n parameters={\n \"save_interval\": str(save_interval)\n }\n ),\n CollectionConfig(\n name=\"feature_importance\",\n parameters={\n \"save_interval\": str(save_interval)\n }\n ),\n CollectionConfig(\n name=\"average_shap\",\n parameters={\n \"save_interval\": str(save_interval)\n }\n )\n ],\n )\n)", "_____no_output_____" ], [ "xgboost_estimator.set_hyperparameters(max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.8,\n silent=0,\n objective='binary:logistic',\n num_round=100)", "_____no_output_____" ], [ "\nxgboost_estimator.fit(\n {\"train\": s3_input_train, \"validation\": s3_input_validation},\n # This is a fire and forget event. By setting wait=False, you submit the job to run in the background.\n # Amazon SageMaker starts one training job and release control to next cells in the notebook.\n # Follow this notebook to see status of the training job.\n wait=False\n)", "_____no_output_____" ] ], [ [ "### Result\n\nAs a result of the above command, Amazon SageMaker starts one training job and one rule job for you. The first one is the job that produces the tensors to be analyzed. 
The second one analyzes the tensors to check if `train-rmse` and `validation-rmse` are not decreasing at any point during training.\n\nCheck the status of the training job below.\nAfter your training job is started, Amazon SageMaker starts a rule-execution job to run the LossNotDecreasing rule.\n\n**Note that the next cell blocks until the rule execution job ends. You can stop it at any point to proceed to the rest of the notebook. Once it says Rule Evaluation Status is Started, and shows the `RuleEvaluationJobArn`, you can look at the status of the rule being monitored.**", "_____no_output_____" ] ], [ [ "import time\nfrom time import gmtime, strftime\n\n\n# Below command will give the status of training job\njob_name = xgboost_estimator.latest_training_job.name\nclient = xgboost_estimator.sagemaker_session.sagemaker_client\ndescription = client.describe_training_job(TrainingJobName=job_name)\nprint('Training job name: ' + job_name)\nprint(description['TrainingJobStatus'])\n\nif description['TrainingJobStatus'] != 'Completed':\n while description['SecondaryStatus'] not in ['Training', 'Completed']:\n description = client.describe_training_job(TrainingJobName=job_name)\n primary_status = description['TrainingJobStatus']\n secondary_status = description['SecondaryStatus']\n print(\"{}: {}, {}\".format(strftime('%X', gmtime()), primary_status, secondary_status))\n time.sleep(15)", "Training job name: demo-smdebug-xgboost-regression-2021-02-02-07-23-14-125\nInProgress\n07:23:16: InProgress, Starting\n07:23:31: InProgress, Starting\n07:23:46: InProgress, Starting\n07:24:01: InProgress, Starting\n07:24:16: InProgress, Starting\n07:24:31: InProgress, Starting\n07:24:46: InProgress, Starting\n07:25:01: InProgress, Starting\n07:25:16: InProgress, Downloading\n07:25:31: InProgress, Training\n" ] ], [ [ "## Data Analysis - Manual\n\nNow that you've trained the system, analyze the data.\nHere, you focus on after-the-fact analysis.\n\nYou import a basic analysis library, which 
defines the concept of trial, which represents a single training run.", "_____no_output_____" ] ], [ [ "from smdebug.trials import create_trial\n\ndescription = client.describe_training_job(TrainingJobName=job_name)\ns3_output_path = xgboost_estimator.latest_job_debugger_artifacts_path()\n\n# This is where we create a Trial object that allows access to saved tensors.\ntrial = create_trial(s3_output_path)", "[2021-02-02 07:35:06.811 datascience-1-0-ml-t3-medium-5b87494b3efe79ac159474c0d6df:3491 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None\n[2021-02-02 07:35:06.875 datascience-1-0-ml-t3-medium-5b87494b3efe79ac159474c0d6df:3491 INFO s3_trial.py:42] Loading trial debug-output at path s3://sagemaker-eu-west-1-802173394839/sagemaker/DEMO-xgboost-dm/output/demo-smdebug-xgboost-regression-2021-02-02-07-23-14-125/debug-output\n" ] ], [ [ "You can list all the tensors that you know something about. Each one of these names is the name of a tensor. The name is a combination of the feature name, which in these cases, is auto-assigned by XGBoost, and whether it's an evaluation metric, feature importance, or SHAP value.", "_____no_output_____" ] ], [ [ "trial.tensor_names()", "[2021-02-02 07:35:15.034 datascience-1-0-ml-t3-medium-5b87494b3efe79ac159474c0d6df:3491 INFO trial.py:198] Training has ended, will refresh one final time in 1 sec.\n[2021-02-02 07:35:16.053 datascience-1-0-ml-t3-medium-5b87494b3efe79ac159474c0d6df:3491 INFO trial.py:210] Loaded all steps\n" ] ], [ [ "For each tensor, ask for the steps where you have data. 
In this case, every five steps", "_____no_output_____" ] ], [ [ "trial.tensor(\"predictions\").values()", "_____no_output_____" ] ], [ [ "You can obtain each tensor at each step as a NumPy array.", "_____no_output_____" ] ], [ [ "type(trial.tensor(\"predictions\").value(10))", "_____no_output_____" ] ], [ [ "### Performance metrics\n\nYou can also create a simple function that visualizes the training and validation errors as the training progresses.\nEach gradient should get smaller over time, as the system converges to a good solution.\nRemember that this is an interactive analysis. You are showing these tensors to give an idea of the data.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport seaborn as sns\nimport re\n\n\ndef get_data(trial, tname):\n \"\"\"\n For the given tensor name, walks though all the iterations\n for which you have data and fetches the values.\n Returns the set of steps and the values.\n \"\"\"\n tensor = trial.tensor(tname)\n steps = tensor.steps()\n vals = [tensor.value(s) for s in steps]\n return steps, vals\n\ndef plot_collection(trial, collection_name, regex='.*', figsize=(8, 6)):\n \"\"\"\n Takes a `trial` and a collection name, and \n plots all tensors that match the given regex.\n \"\"\"\n fig, ax = plt.subplots(figsize=figsize)\n sns.despine()\n\n tensors = trial.collection(collection_name).tensor_names\n\n for tensor_name in sorted(tensors):\n if re.match(regex, tensor_name):\n steps, data = get_data(trial, tensor_name)\n ax.plot(steps, data, label=tensor_name)\n\n ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n ax.set_xlabel('Iteration')", "_____no_output_____" ], [ "plot_collection(trial, \"metrics\")", "_____no_output_____" ] ], [ [ "### Feature importances\n\nYou can also visualize the feature priorities as determined by\n[xgboost.get_score()](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.get_score).\nIf you instructed Estimator to log the `feature_importance` 
collection, all five importance types supported by `xgboost.get_score()` will be available in the collection.", "_____no_output_____" ] ], [ [ "def plot_feature_importance(trial, importance_type=\"weight\"):\n SUPPORTED_IMPORTANCE_TYPES = [\"weight\", \"gain\", \"cover\", \"total_gain\", \"total_cover\"]\n if importance_type not in SUPPORTED_IMPORTANCE_TYPES:\n raise ValueError(f\"{importance_type} is not one of the supported importance types.\")\n plot_collection(\n trial,\n \"feature_importance\",\n regex=f\"feature_importance/{importance_type}/.*\")", "_____no_output_____" ], [ "plot_feature_importance(trial)", "_____no_output_____" ], [ "plot_feature_importance(trial, importance_type=\"cover\")", "_____no_output_____" ] ], [ [ "### SHAP\n\n[SHAP](https://github.com/slundberg/shap) (SHapley Additive exPlanations) is\nanother approach to explain the output of machine learning models.\nSHAP values represent a feature's contribution to a change in the model output.\nYou instructed Estimator to log the average SHAP values in this example so the SHAP values (as calculated by [xgboost.predict(pred_contribs=True)](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.predict)) will be available the `average_shap` collection.", "_____no_output_____" ] ], [ [ "plot_collection(trial,\"average_shap\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb79b81e34c99d3c2a4765b5f604909c3caed6d
106,894
ipynb
Jupyter Notebook
position-opening2vec/plot_of_flanders.ipynb
johan-andries/data-experiments
126f6e38fafd40d4507cf9b694469eaa25d78dfe
[ "Apache-2.0" ]
1
2021-06-24T02:58:21.000Z
2021-06-24T02:58:21.000Z
position-opening2vec/plot_of_flanders.ipynb
johan-andries/data-experiments
126f6e38fafd40d4507cf9b694469eaa25d78dfe
[ "Apache-2.0" ]
null
null
null
position-opening2vec/plot_of_flanders.ipynb
johan-andries/data-experiments
126f6e38fafd40d4507cf9b694469eaa25d78dfe
[ "Apache-2.0" ]
null
null
null
712.626667
103,156
0.937892
[ [ [ "te_tonen = [\"3000-LEUVEN\", \"3200-AARSCHOT\", \"3290-DIEST\", \"2800-MECHELEN\", \"1000-BRUSSEL\",\n \"9000-GENT\", \"3500-HASSELT\", \"3680-MAASEIK\", \"3600-GENK\", \"2850-BOOM\", \"2000-ANTWERPEN\",\n \"9200-DENDERMONDE\", \"9300-AALST\", \"8000-BRUGGE\", \"8500-KORTRIJK\", \"8400-OOSTENDE\", \"3300-TIENEN\",\n \"3800-SINT-TRUIDEN\", \"3370-BOUTERSEM\", \"2300-TURNHOUT\", \"2910-ESSEN\", \"8900-IEPER\", \"9100-SINT-NIKLAAS\",\n \"2500-LIER\", \"1840-LONDERZEEL\", \"9060-ZELZATE\", \"9800-DEINZE\", \"8820-TORHOUT\", \"8700-TIELT\",\n \"9990-MALDEGEM\", \"9230-WETTEREN\", \"3070-KORTENBERG\", \"1800-VILVOORDE\", \"2070-ZWIJNDRECHT\",\n \"9150-KRUIBEKE\", \"2660-HOBOKEN\", \"2630-AARTSELAAR\", \"9500-GERAARDSBERGEN\", \"9700-OUDENAARDE\",\n \"2220-HEIST-OP-DEN-BERG\", \"2440-GEEL\", \"8630-VEURNE\",\n \"Regio Gent\", \"Regio Leuven\"]", "_____no_output_____" ], [ "labels = []\nvectors = []\nfor input_line in open(\"work/location_vectors.csv\"):\n input_list = input_line.strip().split(\",\")\n labels.append(input_list[0])\n vectors.append(list(map(lambda i: float(i), input_list[1:])))", "_____no_output_____" ], [ "import numpy as np\nfrom sklearn.manifold import TSNE", "_____no_output_____" ], [ "X = np.array(vectors)", "_____no_output_____" ], [ "model = TSNE(n_components=2, init=\"pca\", method=\"exact\") # , init=\"pca\", \nnp.set_printoptions(suppress=True)\nprojected = model.fit_transform(X) ", "_____no_output_____" ], [ "%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (18,15)", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.scatter(projected.T[0], projected.T[1], s=0.1)\n\nfor i, txt in enumerate(labels):\n if txt in te_tonen:\n ax.annotate(txt, (projected.T[0][i],projected.T[1][i]))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
cbb79c57b24ef74cd9f04335894649fb6fca5bd9
731,765
ipynb
Jupyter Notebook
Model backlog/DenseNet169/124 - DenseNet169 - Classification.ipynb
ThinkBricks/APTOS2019BlindnessDetection
e524fd69f83a1252710076c78b6a5236849cd885
[ "MIT" ]
23
2019-09-08T17:19:16.000Z
2022-02-02T16:20:09.000Z
Model backlog/DenseNet169/124 - DenseNet169 - Classification.ipynb
ThinkBricks/APTOS2019BlindnessDetection
e524fd69f83a1252710076c78b6a5236849cd885
[ "MIT" ]
1
2020-03-10T18:42:12.000Z
2020-09-18T22:02:38.000Z
Model backlog/DenseNet169/124 - DenseNet169 - Classification.ipynb
ThinkBricks/APTOS2019BlindnessDetection
e524fd69f83a1252710076c78b6a5236849cd885
[ "MIT" ]
16
2019-09-21T12:29:59.000Z
2022-03-21T00:42:26.000Z
181.354399
156,588
0.832297
[ [ [ "## Dependencies", "_____no_output_____" ] ], [ [ "import os\nimport cv2\nimport shutil\nimport random\nimport warnings\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom tensorflow import set_random_seed\nfrom sklearn.utils import class_weight\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, cohen_kappa_score\nfrom keras import backend as K\nfrom keras.models import Model\nfrom keras.utils import to_categorical\nfrom keras import optimizers, applications\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler\nfrom keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input\n\n# Set seeds to make the experiment more reproducible.\ndef seed_everything(seed=0):\n random.seed(seed)\n os.environ['PYTHONHASHSEED'] = str(seed)\n np.random.seed(seed)\n set_random_seed(0)\nseed = 0\nseed_everything(seed)\n\n%matplotlib inline\nsns.set(style=\"whitegrid\")\nwarnings.filterwarnings(\"ignore\")", "Using TensorFlow backend.\n" ] ], [ [ "## Load data", "_____no_output_____" ] ], [ [ "hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv')\nX_train = hold_out_set[hold_out_set['set'] == 'train']\nX_val = hold_out_set[hold_out_set['set'] == 'validation']\ntest = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')\nprint('Number of train samples: ', X_train.shape[0])\nprint('Number of validation samples: ', X_val.shape[0])\nprint('Number of test samples: ', test.shape[0])\n\n# Preprocecss data\nX_train[\"id_code\"] = X_train[\"id_code\"].apply(lambda x: x + \".png\")\nX_val[\"id_code\"] = X_val[\"id_code\"].apply(lambda x: x + \".png\")\ntest[\"id_code\"] = test[\"id_code\"].apply(lambda x: x + \".png\")\nX_train['diagnosis'] = X_train['diagnosis'].astype('str')\nX_val['diagnosis'] = X_val['diagnosis'].astype('str')\ndisplay(X_train.head())", 
"Number of train samples: 2929\nNumber of validation samples: 733\nNumber of test samples: 1928\n" ] ], [ [ "# Model parameters", "_____no_output_____" ] ], [ [ "# Model parameters\nN_CLASSES = X_train['diagnosis'].nunique()\nBATCH_SIZE = 16\nEPOCHS = 40\nWARMUP_EPOCHS = 5\nLEARNING_RATE = 1e-4\nWARMUP_LEARNING_RATE = 1e-3\nHEIGHT = 320\nWIDTH = 320\nCHANNELS = 3\nES_PATIENCE = 5\nRLROP_PATIENCE = 3\nDECAY_DROP = 0.5", "_____no_output_____" ], [ "def kappa(y_true, y_pred, n_classes=5):\n y_trues = K.cast(K.argmax(y_true), K.floatx())\n y_preds = K.cast(K.argmax(y_pred), K.floatx())\n n_samples = K.cast(K.shape(y_true)[0], K.floatx())\n distance = K.sum(K.abs(y_trues - y_preds))\n max_distance = n_classes - 1\n \n kappa_score = 1 - ((distance**2) / (n_samples * (max_distance**2)))\n\n return kappa_score\n\ndef step_decay(epoch):\n lrate = 30e-5\n if epoch > 3:\n lrate = 15e-5\n if epoch > 7:\n lrate = 7.5e-5\n if epoch > 11:\n lrate = 3e-5\n if epoch > 15:\n lrate = 1e-5\n\n return lrate\n\ndef focal_loss(y_true, y_pred):\n gamma = 2.0\n epsilon = K.epsilon()\n \n pt = y_pred * y_true + (1-y_pred) * (1-y_true)\n pt = K.clip(pt, epsilon, 1-epsilon)\n CE = -K.log(pt)\n FL = K.pow(1-pt, gamma) * CE\n loss = K.sum(FL, axis=1)\n \n return loss", "_____no_output_____" ] ], [ [ "# Pre-procecess images", "_____no_output_____" ] ], [ [ "train_base_path = '../input/aptos2019-blindness-detection/train_images/'\ntest_base_path = '../input/aptos2019-blindness-detection/test_images/'\ntrain_dest_path = 'base_dir/train_images/'\nvalidation_dest_path = 'base_dir/validation_images/'\ntest_dest_path = 'base_dir/test_images/'\n\n# Making sure directories don't exist\nif os.path.exists(train_dest_path):\n shutil.rmtree(train_dest_path)\nif os.path.exists(validation_dest_path):\n shutil.rmtree(validation_dest_path)\nif os.path.exists(test_dest_path):\n shutil.rmtree(test_dest_path)\n \n# Creating train, validation and test 
directories\nos.makedirs(train_dest_path)\nos.makedirs(validation_dest_path)\nos.makedirs(test_dest_path)\n\ndef crop_image(img, tol=7):\n if img.ndim ==2:\n mask = img>tol\n return img[np.ix_(mask.any(1),mask.any(0))]\n elif img.ndim==3:\n gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n mask = gray_img>tol\n check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]\n if (check_shape == 0): # image is too dark so that we crop out everything,\n return img # return original image\n else:\n img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]\n img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]\n img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]\n img = np.stack([img1,img2,img3],axis=-1)\n \n return img\n\ndef circle_crop(img):\n img = crop_image(img)\n\n height, width, depth = img.shape\n largest_side = np.max((height, width))\n img = cv2.resize(img, (largest_side, largest_side))\n\n height, width, depth = img.shape\n\n x = width//2\n y = height//2\n r = np.amin((x, y))\n\n circle_img = np.zeros((height, width), np.uint8)\n cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)\n img = cv2.bitwise_and(img, img, mask=circle_img)\n img = crop_image(img)\n\n return img\n \ndef preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10):\n image = cv2.imread(base_path + image_id)\n image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n image = circle_crop(image)\n image = cv2.resize(image, (HEIGHT, WIDTH))\n image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)\n cv2.imwrite(save_path + image_id, image)\n \n# Pre-procecss train set\nfor i, image_id in enumerate(X_train['id_code']):\n preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH)\n \n# Pre-procecss validation set\nfor i, image_id in enumerate(X_val['id_code']):\n preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH)\n \n# Pre-procecss test set\nfor i, image_id in enumerate(test['id_code']):\n 
preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH)", "_____no_output_____" ] ], [ [ "# Data generator", "_____no_output_____" ] ], [ [ "datagen=ImageDataGenerator(rescale=1./255, \n rotation_range=360,\n horizontal_flip=True,\n vertical_flip=True)\n\ntrain_generator=datagen.flow_from_dataframe(\n dataframe=X_train,\n directory=train_dest_path,\n x_col=\"id_code\",\n y_col=\"diagnosis\",\n class_mode=\"categorical\",\n batch_size=BATCH_SIZE,\n target_size=(HEIGHT, WIDTH),\n seed=seed)\n\nvalid_generator=datagen.flow_from_dataframe(\n dataframe=X_val,\n directory=validation_dest_path,\n x_col=\"id_code\",\n y_col=\"diagnosis\",\n class_mode=\"categorical\",\n batch_size=BATCH_SIZE,\n target_size=(HEIGHT, WIDTH),\n seed=seed)\n\ntest_generator=datagen.flow_from_dataframe( \n dataframe=test,\n directory=test_dest_path,\n x_col=\"id_code\",\n batch_size=1,\n class_mode=None,\n shuffle=False,\n target_size=(HEIGHT, WIDTH),\n seed=seed)", "Found 2929 validated image filenames belonging to 5 classes.\nFound 733 validated image filenames belonging to 5 classes.\nFound 1928 validated image filenames.\n" ] ], [ [ "# Model", "_____no_output_____" ] ], [ [ "def create_model(input_shape, n_out):\n input_tensor = Input(shape=input_shape)\n base_model = applications.DenseNet169(weights=None, \n include_top=False,\n input_tensor=input_tensor)\n base_model.load_weights('../input/keras-notop/densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5')\n\n x = GlobalAveragePooling2D()(base_model.output)\n x = Dropout(0.5)(x)\n x = Dense(2048, activation='relu')(x)\n x = Dropout(0.5)(x)\n final_output = Dense(n_out, activation='softmax', name='final_output')(x)\n model = Model(input_tensor, final_output)\n \n return model", "_____no_output_____" ] ], [ [ "# Train top layers", "_____no_output_____" ] ], [ [ "model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS), n_out=N_CLASSES)\n\nfor layer in model.layers:\n layer.trainable = False\n\nfor i in range(-5, 0):\n 
model.layers[i].trainable = True\n \nclass_weights = class_weight.compute_class_weight('balanced', np.unique(X_train['diagnosis'].astype('int').values), X_train['diagnosis'].astype('int').values)\n\nmetric_list = [\"accuracy\", kappa]\noptimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)\nmodel.summary()", "__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 320, 320, 3) 0 \n__________________________________________________________________________________________________\nzero_padding2d_1 (ZeroPadding2D (None, 326, 326, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1/conv (Conv2D) (None, 160, 160, 64) 9408 zero_padding2d_1[0][0] \n__________________________________________________________________________________________________\nconv1/bn (BatchNormalization) (None, 160, 160, 64) 256 conv1/conv[0][0] \n__________________________________________________________________________________________________\nconv1/relu (Activation) (None, 160, 160, 64) 0 conv1/bn[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_2 (ZeroPadding2D (None, 162, 162, 64) 0 conv1/relu[0][0] \n__________________________________________________________________________________________________\npool1 (MaxPooling2D) (None, 80, 80, 64) 0 zero_padding2d_2[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, 80, 80, 64) 256 pool1[0][0] 
\n__________________________________________________________________________________________________\nconv2_block1_0_relu (Activation (None, 80, 80, 64) 0 conv2_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, 80, 80, 128) 8192 conv2_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, 80, 80, 128) 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_concat (Concatenat (None, 80, 80, 96) 0 pool1[0][0] \n conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_bn (BatchNormali (None, 80, 80, 96) 384 conv2_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_relu (Activation (None, 80, 80, 96) 0 conv2_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, 80, 80, 128) 12288 conv2_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, 80, 80, 128) 0 
conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_concat (Concatenat (None, 80, 80, 128) 0 conv2_block1_concat[0][0] \n conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_relu (Activation (None, 80, 80, 128) 0 conv2_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, 80, 80, 128) 16384 conv2_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, 80, 80, 128) 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_concat (Concatenat (None, 80, 80, 160) 0 conv2_block2_concat[0][0] \n conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block4_0_bn (BatchNormali (None, 80, 80, 160) 640 conv2_block3_concat[0][0] 
\n__________________________________________________________________________________________________\nconv2_block4_0_relu (Activation (None, 80, 80, 160) 0 conv2_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_conv (Conv2D) (None, 80, 80, 128) 20480 conv2_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block4_1_relu (Activation (None, 80, 80, 128) 0 conv2_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block4_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block4_concat (Concatenat (None, 80, 80, 192) 0 conv2_block3_concat[0][0] \n conv2_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block5_0_bn (BatchNormali (None, 80, 80, 192) 768 conv2_block4_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block5_0_relu (Activation (None, 80, 80, 192) 0 conv2_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_conv (Conv2D) (None, 80, 80, 128) 24576 conv2_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block5_1_relu (Activation (None, 80, 80, 
128) 0 conv2_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block5_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block5_concat (Concatenat (None, 80, 80, 224) 0 conv2_block4_concat[0][0] \n conv2_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block6_0_bn (BatchNormali (None, 80, 80, 224) 896 conv2_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block6_0_relu (Activation (None, 80, 80, 224) 0 conv2_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_conv (Conv2D) (None, 80, 80, 128) 28672 conv2_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block6_1_relu (Activation (None, 80, 80, 128) 0 conv2_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block6_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block6_concat (Concatenat (None, 80, 80, 256) 0 conv2_block5_concat[0][0] \n conv2_block6_2_conv[0][0] \n__________________________________________________________________________________________________\npool2_bn (BatchNormalization) (None, 80, 80, 256) 1024 conv2_block6_concat[0][0] 
\n__________________________________________________________________________________________________\npool2_relu (Activation) (None, 80, 80, 256) 0 pool2_bn[0][0] \n__________________________________________________________________________________________________\npool2_conv (Conv2D) (None, 80, 80, 128) 32768 pool2_relu[0][0] \n__________________________________________________________________________________________________\npool2_pool (AveragePooling2D) (None, 40, 40, 128) 0 pool2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_bn (BatchNormali (None, 40, 40, 128) 512 pool2_pool[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_relu (Activation (None, 40, 40, 128) 0 conv3_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_conv (Conv2D) (None, 40, 40, 128) 16384 conv3_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_relu (Activation (None, 40, 40, 128) 0 conv3_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_concat (Concatenat (None, 40, 40, 160) 0 pool2_pool[0][0] \n conv3_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_0_bn (BatchNormali (None, 40, 40, 160) 640 conv3_block1_concat[0][0] 
\n__________________________________________________________________________________________________\nconv3_block2_0_relu (Activation (None, 40, 40, 160) 0 conv3_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_conv (Conv2D) (None, 40, 40, 128) 20480 conv3_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_relu (Activation (None, 40, 40, 128) 0 conv3_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_concat (Concatenat (None, 40, 40, 192) 0 conv3_block1_concat[0][0] \n conv3_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_0_bn (BatchNormali (None, 40, 40, 192) 768 conv3_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block3_0_relu (Activation (None, 40, 40, 192) 0 conv3_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_conv (Conv2D) (None, 40, 40, 128) 24576 conv3_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_relu (Activation (None, 40, 40, 
128) 0 conv3_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_concat (Concatenat (None, 40, 40, 224) 0 conv3_block2_concat[0][0] \n conv3_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_0_bn (BatchNormali (None, 40, 40, 224) 896 conv3_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block4_0_relu (Activation (None, 40, 40, 224) 0 conv3_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_conv (Conv2D) (None, 40, 40, 128) 28672 conv3_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_relu (Activation (None, 40, 40, 128) 0 conv3_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_concat (Concatenat (None, 40, 40, 256) 0 conv3_block3_concat[0][0] \n conv3_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block5_0_bn (BatchNormali (None, 40, 40, 256) 1024 conv3_block4_concat[0][0] 
\n__________________________________________________________________________________________________\nconv3_block5_0_relu (Activation (None, 40, 40, 256) 0 conv3_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_conv (Conv2D) (None, 40, 40, 128) 32768 conv3_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block5_1_relu (Activation (None, 40, 40, 128) 0 conv3_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block5_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block5_concat (Concatenat (None, 40, 40, 288) 0 conv3_block4_concat[0][0] \n conv3_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block6_0_bn (BatchNormali (None, 40, 40, 288) 1152 conv3_block5_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block6_0_relu (Activation (None, 40, 40, 288) 0 conv3_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_conv (Conv2D) (None, 40, 40, 128) 36864 conv3_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block6_1_relu (Activation (None, 40, 
40, 128) 0 conv3_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block6_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block6_concat (Concatenat (None, 40, 40, 320) 0 conv3_block5_concat[0][0] \n conv3_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block7_0_bn (BatchNormali (None, 40, 40, 320) 1280 conv3_block6_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block7_0_relu (Activation (None, 40, 40, 320) 0 conv3_block7_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_conv (Conv2D) (None, 40, 40, 128) 40960 conv3_block7_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block7_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block7_1_relu (Activation (None, 40, 40, 128) 0 conv3_block7_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block7_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block7_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block7_concat (Concatenat (None, 40, 40, 352) 0 conv3_block6_concat[0][0] \n conv3_block7_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block8_0_bn (BatchNormali (None, 40, 40, 352) 1408 conv3_block7_concat[0][0] 
\n__________________________________________________________________________________________________\nconv3_block8_0_relu (Activation (None, 40, 40, 352) 0 conv3_block8_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_conv (Conv2D) (None, 40, 40, 128) 45056 conv3_block8_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block8_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block8_1_relu (Activation (None, 40, 40, 128) 0 conv3_block8_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block8_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block8_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block8_concat (Concatenat (None, 40, 40, 384) 0 conv3_block7_concat[0][0] \n conv3_block8_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block9_0_bn (BatchNormali (None, 40, 40, 384) 1536 conv3_block8_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block9_0_relu (Activation (None, 40, 40, 384) 0 conv3_block9_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block9_1_conv (Conv2D) (None, 40, 40, 128) 49152 conv3_block9_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block9_1_bn (BatchNormali (None, 40, 40, 128) 512 conv3_block9_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block9_1_relu (Activation (None, 40, 
40, 128) 0 conv3_block9_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block9_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block9_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block9_concat (Concatenat (None, 40, 40, 416) 0 conv3_block8_concat[0][0] \n conv3_block9_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block10_0_bn (BatchNormal (None, 40, 40, 416) 1664 conv3_block9_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block10_0_relu (Activatio (None, 40, 40, 416) 0 conv3_block10_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_conv (Conv2D) (None, 40, 40, 128) 53248 conv3_block10_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_bn (BatchNormal (None, 40, 40, 128) 512 conv3_block10_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block10_1_relu (Activatio (None, 40, 40, 128) 0 conv3_block10_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block10_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block10_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block10_concat (Concatena (None, 40, 40, 448) 0 conv3_block9_concat[0][0] \n conv3_block10_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block11_0_bn (BatchNormal (None, 40, 40, 448) 1792 conv3_block10_concat[0][0] 
\n__________________________________________________________________________________________________\nconv3_block11_0_relu (Activatio (None, 40, 40, 448) 0 conv3_block11_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_conv (Conv2D) (None, 40, 40, 128) 57344 conv3_block11_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_bn (BatchNormal (None, 40, 40, 128) 512 conv3_block11_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block11_1_relu (Activatio (None, 40, 40, 128) 0 conv3_block11_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block11_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block11_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block11_concat (Concatena (None, 40, 40, 480) 0 conv3_block10_concat[0][0] \n conv3_block11_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block12_0_bn (BatchNormal (None, 40, 40, 480) 1920 conv3_block11_concat[0][0] \n__________________________________________________________________________________________________\nconv3_block12_0_relu (Activatio (None, 40, 40, 480) 0 conv3_block12_0_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_conv (Conv2D) (None, 40, 40, 128) 61440 conv3_block12_0_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_bn (BatchNormal (None, 40, 40, 128) 512 conv3_block12_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block12_1_relu (Activatio 
(None, 40, 40, 128) 0 conv3_block12_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block12_2_conv (Conv2D) (None, 40, 40, 32) 36864 conv3_block12_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block12_concat (Concatena (None, 40, 40, 512) 0 conv3_block11_concat[0][0] \n conv3_block12_2_conv[0][0] \n__________________________________________________________________________________________________\npool3_bn (BatchNormalization) (None, 40, 40, 512) 2048 conv3_block12_concat[0][0] \n__________________________________________________________________________________________________\npool3_relu (Activation) (None, 40, 40, 512) 0 pool3_bn[0][0] \n__________________________________________________________________________________________________\npool3_conv (Conv2D) (None, 40, 40, 256) 131072 pool3_relu[0][0] \n__________________________________________________________________________________________________\npool3_pool (AveragePooling2D) (None, 20, 20, 256) 0 pool3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_bn (BatchNormali (None, 20, 20, 256) 1024 pool3_pool[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_relu (Activation (None, 20, 20, 256) 0 conv4_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_conv (Conv2D) (None, 20, 20, 128) 32768 conv4_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_relu (Activation (None, 20, 20, 
128) 0 conv4_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_concat (Concatenat (None, 20, 20, 288) 0 pool3_pool[0][0] \n conv4_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_0_bn (BatchNormali (None, 20, 20, 288) 1152 conv4_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block2_0_relu (Activation (None, 20, 20, 288) 0 conv4_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_conv (Conv2D) (None, 20, 20, 128) 36864 conv4_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_relu (Activation (None, 20, 20, 128) 0 conv4_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_concat (Concatenat (None, 20, 20, 320) 0 conv4_block1_concat[0][0] \n conv4_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_0_bn (BatchNormali (None, 20, 20, 320) 1280 conv4_block2_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block3_0_relu (Activation (None, 20, 20, 320) 0 conv4_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_conv (Conv2D) (None, 20, 20, 128) 40960 conv4_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_relu (Activation (None, 20, 20, 128) 0 conv4_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_concat (Concatenat (None, 20, 20, 352) 0 conv4_block2_concat[0][0] \n conv4_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_0_bn (BatchNormali (None, 20, 20, 352) 1408 conv4_block3_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block4_0_relu (Activation (None, 20, 20, 352) 0 conv4_block4_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_conv (Conv2D) (None, 20, 20, 128) 45056 conv4_block4_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_relu (Activation (None, 20, 
20, 128) 0 conv4_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_concat (Concatenat (None, 20, 20, 384) 0 conv4_block3_concat[0][0] \n conv4_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_0_bn (BatchNormali (None, 20, 20, 384) 1536 conv4_block4_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block5_0_relu (Activation (None, 20, 20, 384) 0 conv4_block5_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_conv (Conv2D) (None, 20, 20, 128) 49152 conv4_block5_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_relu (Activation (None, 20, 20, 128) 0 conv4_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_concat (Concatenat (None, 20, 20, 416) 0 conv4_block4_concat[0][0] \n conv4_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_0_bn (BatchNormali (None, 20, 20, 416) 1664 conv4_block5_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block6_0_relu (Activation (None, 20, 20, 416) 0 conv4_block6_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_conv (Conv2D) (None, 20, 20, 128) 53248 conv4_block6_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_relu (Activation (None, 20, 20, 128) 0 conv4_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_concat (Concatenat (None, 20, 20, 448) 0 conv4_block5_concat[0][0] \n conv4_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block7_0_bn (BatchNormali (None, 20, 20, 448) 1792 conv4_block6_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block7_0_relu (Activation (None, 20, 20, 448) 0 conv4_block7_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_conv (Conv2D) (None, 20, 20, 128) 57344 conv4_block7_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block7_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block7_1_relu (Activation (None, 20, 
20, 128) 0 conv4_block7_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block7_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block7_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block7_concat (Concatenat (None, 20, 20, 480) 0 conv4_block6_concat[0][0] \n conv4_block7_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block8_0_bn (BatchNormali (None, 20, 20, 480) 1920 conv4_block7_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block8_0_relu (Activation (None, 20, 20, 480) 0 conv4_block8_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_conv (Conv2D) (None, 20, 20, 128) 61440 conv4_block8_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block8_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block8_1_relu (Activation (None, 20, 20, 128) 0 conv4_block8_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block8_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block8_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block8_concat (Concatenat (None, 20, 20, 512) 0 conv4_block7_concat[0][0] \n conv4_block8_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block9_0_bn (BatchNormali (None, 20, 20, 512) 2048 conv4_block8_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block9_0_relu (Activation (None, 20, 20, 512) 0 conv4_block9_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_conv (Conv2D) (None, 20, 20, 128) 65536 conv4_block9_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_bn (BatchNormali (None, 20, 20, 128) 512 conv4_block9_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block9_1_relu (Activation (None, 20, 20, 128) 0 conv4_block9_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block9_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block9_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block9_concat (Concatenat (None, 20, 20, 544) 0 conv4_block8_concat[0][0] \n conv4_block9_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block10_0_bn (BatchNormal (None, 20, 20, 544) 2176 conv4_block9_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block10_0_relu (Activatio (None, 20, 20, 544) 0 conv4_block10_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block10_1_conv (Conv2D) (None, 20, 20, 128) 69632 conv4_block10_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block10_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block10_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block10_1_relu (Activatio (None, 
20, 20, 128) 0 conv4_block10_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block10_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block10_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block10_concat (Concatena (None, 20, 20, 576) 0 conv4_block9_concat[0][0] \n conv4_block10_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block11_0_bn (BatchNormal (None, 20, 20, 576) 2304 conv4_block10_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block11_0_relu (Activatio (None, 20, 20, 576) 0 conv4_block11_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_conv (Conv2D) (None, 20, 20, 128) 73728 conv4_block11_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block11_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block11_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block11_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block11_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block11_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block11_concat (Concatena (None, 20, 20, 608) 0 conv4_block10_concat[0][0] \n conv4_block11_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block12_0_bn (BatchNormal (None, 20, 20, 608) 2432 conv4_block11_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block12_0_relu (Activatio (None, 20, 20, 608) 0 conv4_block12_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_conv (Conv2D) (None, 20, 20, 128) 77824 conv4_block12_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block12_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block12_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block12_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block12_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block12_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block12_concat (Concatena (None, 20, 20, 640) 0 conv4_block11_concat[0][0] \n conv4_block12_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block13_0_bn (BatchNormal (None, 20, 20, 640) 2560 conv4_block12_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block13_0_relu (Activatio (None, 20, 20, 640) 0 conv4_block13_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block13_1_conv (Conv2D) (None, 20, 20, 128) 81920 conv4_block13_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block13_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block13_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block13_1_relu (Activatio 
(None, 20, 20, 128) 0 conv4_block13_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block13_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block13_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block13_concat (Concatena (None, 20, 20, 672) 0 conv4_block12_concat[0][0] \n conv4_block13_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_bn (BatchNormal (None, 20, 20, 672) 2688 conv4_block13_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_relu (Activatio (None, 20, 20, 672) 0 conv4_block14_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_conv (Conv2D) (None, 20, 20, 128) 86016 conv4_block14_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block14_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block14_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block14_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_concat (Concatena (None, 20, 20, 704) 0 conv4_block13_concat[0][0] \n conv4_block14_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_0_bn (BatchNormal (None, 20, 20, 704) 2816 conv4_block14_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block15_0_relu (Activatio (None, 20, 20, 704) 0 conv4_block15_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_conv (Conv2D) (None, 20, 20, 128) 90112 conv4_block15_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block15_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block15_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block15_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_concat (Concatena (None, 20, 20, 736) 0 conv4_block14_concat[0][0] \n conv4_block15_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block16_0_bn (BatchNormal (None, 20, 20, 736) 2944 conv4_block15_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block16_0_relu (Activatio (None, 20, 20, 736) 0 conv4_block16_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_conv (Conv2D) (None, 20, 20, 128) 94208 conv4_block16_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block16_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_relu (Activatio 
(None, 20, 20, 128) 0 conv4_block16_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block16_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_concat (Concatena (None, 20, 20, 768) 0 conv4_block15_concat[0][0] \n conv4_block16_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_bn (BatchNormal (None, 20, 20, 768) 3072 conv4_block16_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_relu (Activatio (None, 20, 20, 768) 0 conv4_block17_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_conv (Conv2D) (None, 20, 20, 128) 98304 conv4_block17_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block17_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block17_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block17_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_concat (Concatena (None, 20, 20, 800) 0 conv4_block16_concat[0][0] \n conv4_block17_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_0_bn (BatchNormal (None, 20, 20, 800) 3200 conv4_block17_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block18_0_relu (Activatio (None, 20, 20, 800) 0 conv4_block18_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_conv (Conv2D) (None, 20, 20, 128) 102400 conv4_block18_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block18_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block18_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block18_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block18_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_concat (Concatena (None, 20, 20, 832) 0 conv4_block17_concat[0][0] \n conv4_block18_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_bn (BatchNormal (None, 20, 20, 832) 3328 conv4_block18_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_relu (Activatio (None, 20, 20, 832) 0 conv4_block19_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_conv (Conv2D) (None, 20, 20, 128) 106496 conv4_block19_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block19_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_relu 
(Activatio (None, 20, 20, 128)  0           conv4_block19_1_bn[0][0]

conv4_block19 … conv4_block32 — repeated dense-block pattern at 20×20:
    BatchNormalization → Activation (ReLU) → Conv2D 1×1 (128 ch) →
    BatchNormalization → Activation (ReLU) → Conv2D 3×3 (32 ch) → Concatenate;
    concat channels grow 864 → 1280 in steps of 32.

pool4 (transition) — BatchNormalization → Activation → Conv2D 1×1 (640 ch, 819200 params) →
    AveragePooling2D: (None, 20, 20, 1280) → (None, 10, 10, 640).

conv5_block1 … conv5_block18 — same dense-block pattern at 10×10:
    concat channels grow 672 → 1184 in steps of 32.

conv5_block18_1_relu
(Activatio (None, 10, 10, 128) 0 conv5_block18_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block18_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block18_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block18_concat (Concatena (None, 10, 10, 1216) 0 conv5_block17_concat[0][0] \n conv5_block18_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block19_0_bn (BatchNormal (None, 10, 10, 1216) 4864 conv5_block18_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block19_0_relu (Activatio (None, 10, 10, 1216) 0 conv5_block19_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block19_1_conv (Conv2D) (None, 10, 10, 128) 155648 conv5_block19_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block19_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block19_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block19_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block19_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block19_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block19_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block19_concat (Concatena (None, 10, 10, 1248) 0 conv5_block18_concat[0][0] \n conv5_block19_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block20_0_bn (BatchNormal (None, 10, 10, 1248) 4992 conv5_block19_concat[0][0] 
\n__________________________________________________________________________________________________\nconv5_block20_0_relu (Activatio (None, 10, 10, 1248) 0 conv5_block20_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block20_1_conv (Conv2D) (None, 10, 10, 128) 159744 conv5_block20_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block20_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block20_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block20_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block20_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block20_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block20_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block20_concat (Concatena (None, 10, 10, 1280) 0 conv5_block19_concat[0][0] \n conv5_block20_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block21_0_bn (BatchNormal (None, 10, 10, 1280) 5120 conv5_block20_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block21_0_relu (Activatio (None, 10, 10, 1280) 0 conv5_block21_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block21_1_conv (Conv2D) (None, 10, 10, 128) 163840 conv5_block21_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block21_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block21_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block21_1_relu 
(Activatio (None, 10, 10, 128) 0 conv5_block21_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block21_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block21_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block21_concat (Concatena (None, 10, 10, 1312) 0 conv5_block20_concat[0][0] \n conv5_block21_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block22_0_bn (BatchNormal (None, 10, 10, 1312) 5248 conv5_block21_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block22_0_relu (Activatio (None, 10, 10, 1312) 0 conv5_block22_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block22_1_conv (Conv2D) (None, 10, 10, 128) 167936 conv5_block22_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block22_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block22_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block22_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block22_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block22_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block22_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block22_concat (Concatena (None, 10, 10, 1344) 0 conv5_block21_concat[0][0] \n conv5_block22_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block23_0_bn (BatchNormal (None, 10, 10, 1344) 5376 conv5_block22_concat[0][0] 
\n__________________________________________________________________________________________________\nconv5_block23_0_relu (Activatio (None, 10, 10, 1344) 0 conv5_block23_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block23_1_conv (Conv2D) (None, 10, 10, 128) 172032 conv5_block23_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block23_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block23_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block23_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block23_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block23_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block23_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block23_concat (Concatena (None, 10, 10, 1376) 0 conv5_block22_concat[0][0] \n conv5_block23_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block24_0_bn (BatchNormal (None, 10, 10, 1376) 5504 conv5_block23_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block24_0_relu (Activatio (None, 10, 10, 1376) 0 conv5_block24_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block24_1_conv (Conv2D) (None, 10, 10, 128) 176128 conv5_block24_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block24_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block24_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block24_1_relu 
(Activatio (None, 10, 10, 128) 0 conv5_block24_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block24_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block24_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block24_concat (Concatena (None, 10, 10, 1408) 0 conv5_block23_concat[0][0] \n conv5_block24_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block25_0_bn (BatchNormal (None, 10, 10, 1408) 5632 conv5_block24_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block25_0_relu (Activatio (None, 10, 10, 1408) 0 conv5_block25_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block25_1_conv (Conv2D) (None, 10, 10, 128) 180224 conv5_block25_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block25_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block25_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block25_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block25_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block25_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block25_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block25_concat (Concatena (None, 10, 10, 1440) 0 conv5_block24_concat[0][0] \n conv5_block25_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block26_0_bn (BatchNormal (None, 10, 10, 1440) 5760 conv5_block25_concat[0][0] 
\n__________________________________________________________________________________________________\nconv5_block26_0_relu (Activatio (None, 10, 10, 1440) 0 conv5_block26_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block26_1_conv (Conv2D) (None, 10, 10, 128) 184320 conv5_block26_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block26_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block26_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block26_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block26_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block26_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block26_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block26_concat (Concatena (None, 10, 10, 1472) 0 conv5_block25_concat[0][0] \n conv5_block26_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block27_0_bn (BatchNormal (None, 10, 10, 1472) 5888 conv5_block26_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block27_0_relu (Activatio (None, 10, 10, 1472) 0 conv5_block27_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block27_1_conv (Conv2D) (None, 10, 10, 128) 188416 conv5_block27_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block27_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block27_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block27_1_relu 
(Activatio (None, 10, 10, 128) 0 conv5_block27_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block27_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block27_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block27_concat (Concatena (None, 10, 10, 1504) 0 conv5_block26_concat[0][0] \n conv5_block27_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block28_0_bn (BatchNormal (None, 10, 10, 1504) 6016 conv5_block27_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block28_0_relu (Activatio (None, 10, 10, 1504) 0 conv5_block28_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block28_1_conv (Conv2D) (None, 10, 10, 128) 192512 conv5_block28_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block28_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block28_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block28_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block28_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block28_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block28_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block28_concat (Concatena (None, 10, 10, 1536) 0 conv5_block27_concat[0][0] \n conv5_block28_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block29_0_bn (BatchNormal (None, 10, 10, 1536) 6144 conv5_block28_concat[0][0] 
\n__________________________________________________________________________________________________\nconv5_block29_0_relu (Activatio (None, 10, 10, 1536) 0 conv5_block29_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block29_1_conv (Conv2D) (None, 10, 10, 128) 196608 conv5_block29_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block29_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block29_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block29_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block29_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block29_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block29_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block29_concat (Concatena (None, 10, 10, 1568) 0 conv5_block28_concat[0][0] \n conv5_block29_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block30_0_bn (BatchNormal (None, 10, 10, 1568) 6272 conv5_block29_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block30_0_relu (Activatio (None, 10, 10, 1568) 0 conv5_block30_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block30_1_conv (Conv2D) (None, 10, 10, 128) 200704 conv5_block30_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block30_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block30_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block30_1_relu 
(Activatio (None, 10, 10, 128) 0 conv5_block30_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block30_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block30_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block30_concat (Concatena (None, 10, 10, 1600) 0 conv5_block29_concat[0][0] \n conv5_block30_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block31_0_bn (BatchNormal (None, 10, 10, 1600) 6400 conv5_block30_concat[0][0] \n__________________________________________________________________________________________________\nconv5_block31_0_relu (Activatio (None, 10, 10, 1600) 0 conv5_block31_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block31_1_conv (Conv2D) (None, 10, 10, 128) 204800 conv5_block31_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block31_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block31_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block31_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block31_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block31_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block31_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block31_concat (Concatena (None, 10, 10, 1632) 0 conv5_block30_concat[0][0] \n conv5_block31_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block32_0_bn (BatchNormal (None, 10, 10, 1632) 6528 conv5_block31_concat[0][0] 
\n__________________________________________________________________________________________________\nconv5_block32_0_relu (Activatio (None, 10, 10, 1632) 0 conv5_block32_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block32_1_conv (Conv2D) (None, 10, 10, 128) 208896 conv5_block32_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block32_1_bn (BatchNormal (None, 10, 10, 128) 512 conv5_block32_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block32_1_relu (Activatio (None, 10, 10, 128) 0 conv5_block32_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block32_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block32_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block32_concat (Concatena (None, 10, 10, 1664) 0 conv5_block31_concat[0][0] \n conv5_block32_2_conv[0][0] \n__________________________________________________________________________________________________\nbn (BatchNormalization) (None, 10, 10, 1664) 6656 conv5_block32_concat[0][0] \n__________________________________________________________________________________________________\nrelu (Activation) (None, 10, 10, 1664) 0 bn[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_1 (Glo (None, 1664) 0 relu[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 1664) 0 global_average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 2048) 3409920 dropout_1[0][0] 
\n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 2048) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nfinal_output (Dense) (None, 5) 10245 dropout_2[0][0] \n==================================================================================================\nTotal params: 16,063,045\nTrainable params: 3,420,165\nNon-trainable params: 12,642,880\n__________________________________________________________________________________________________\n" ], [ "STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size\nSTEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size\n\nhistory_warmup = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=WARMUP_EPOCHS,\n class_weight=class_weights,\n verbose=1).history", "Epoch 1/5\n183/183 [==============================] - 89s 488ms/step - loss: 1.3083 - acc: 0.5765 - kappa: 0.3781 - val_loss: 1.0053 - val_acc: 0.5639 - val_kappa: 0.1469\nEpoch 2/5\n183/183 [==============================] - 78s 428ms/step - loss: 0.9893 - acc: 0.6346 - kappa: 0.5937 - val_loss: 0.9950 - val_acc: 0.6402 - val_kappa: 0.4549\nEpoch 3/5\n183/183 [==============================] - 77s 423ms/step - loss: 0.9536 - acc: 0.6575 - kappa: 0.6189 - val_loss: 1.2106 - val_acc: 0.5021 - val_kappa: -0.3053\nEpoch 4/5\n183/183 [==============================] - 78s 427ms/step - loss: 0.8854 - acc: 0.6745 - kappa: 0.6878 - val_loss: 0.9691 - val_acc: 0.6081 - val_kappa: 0.3214\nEpoch 5/5\n183/183 [==============================] - 77s 422ms/step - loss: 0.8658 - acc: 0.6578 - kappa: 0.6677 - val_loss: 0.9826 - val_acc: 0.7211 - val_kappa: 0.7715\n" ] ], [ [ "# Fine-tune the complete model (1st step)", "_____no_output_____" ] ], [ [ "for layer in model.layers:\n layer.trainable = True\n\n# lrstep = 
LearningRateScheduler(step_decay)\nes = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)\nrlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)\n\ncallback_list = [es, rlrop]\noptimizer = optimizers.Adam(lr=LEARNING_RATE)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)\nmodel.summary()", "__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 320, 320, 3) 0 \n__________________________________________________________________________________________________\nzero_padding2d_1 (ZeroPadding2D (None, 326, 326, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1/conv (Conv2D) (None, 160, 160, 64) 9408 zero_padding2d_1[0][0] \n__________________________________________________________________________________________________\nconv1/bn (BatchNormalization) (None, 160, 160, 64) 256 conv1/conv[0][0] \n__________________________________________________________________________________________________\nconv1/relu (Activation) (None, 160, 160, 64) 0 conv1/bn[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_2 (ZeroPadding2D (None, 162, 162, 64) 0 conv1/relu[0][0] \n__________________________________________________________________________________________________\npool1 (MaxPooling2D) (None, 80, 80, 64) 0 zero_padding2d_2[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, 80, 80, 64) 256 pool1[0][0] 
\n__________________________________________________________________________________________________\nconv2_block1_0_relu (Activation (None, 80, 80, 64) 0 conv2_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, 80, 80, 128) 8192 conv2_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, 80, 80, 128) 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_concat (Concatenat (None, 80, 80, 96) 0 pool1[0][0] \n conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_bn (BatchNormali (None, 80, 80, 96) 384 conv2_block1_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block2_0_relu (Activation (None, 80, 80, 96) 0 conv2_block2_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, 80, 80, 128) 12288 conv2_block2_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, 80, 80, 128) 0 
conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_concat (Concatenat (None, 80, 80, 128) 0 conv2_block1_concat[0][0] \n conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block2_concat[0][0] \n__________________________________________________________________________________________________\nconv2_block3_0_relu (Activation (None, 80, 80, 128) 0 conv2_block3_0_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, 80, 80, 128) 16384 conv2_block3_0_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, 80, 80, 128) 512 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, 80, 80, 128) 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) (None, 80, 80, 32) 36864 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_concat (Concatenat (None, 80, 80, 160) 0 conv2_block2_concat[0][0] \n conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block4_0_bn (BatchNormali (None, 80, 80, 160) 640 conv2_block3_concat[0][0] 
[... model summary continues; the repeating DenseNet dense-block rows are abridged here ...]
conv2_block4 - conv2_block6          (80, 80) feature maps, channels 160 -> 256
pool2  (BatchNorm -> ReLU -> 1x1 Conv2D(128) -> AveragePooling2D)  -> (40, 40, 128)
conv3_block1 - conv3_block12         (40, 40) feature maps, channels 128 -> 512
pool3  (BatchNorm -> ReLU -> 1x1 Conv2D(256) -> AveragePooling2D)  -> (20, 20, 256)
conv4_block1 - conv4_block13 ...     (20, 20) feature maps, channels 256 -> 640+ (output truncated)
Each dense block: BatchNormalization -> ReLU -> 1x1 Conv2D(128) -> BatchNormalization -> ReLU -> 3x3 Conv2D(32) -> Concatenate with the block input (+32 channels per block).
(None, 20, 20, 128) 0 conv4_block13_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block13_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block13_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block13_concat (Concatena (None, 20, 20, 672) 0 conv4_block12_concat[0][0] \n conv4_block13_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_bn (BatchNormal (None, 20, 20, 672) 2688 conv4_block13_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block14_0_relu (Activatio (None, 20, 20, 672) 0 conv4_block14_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_conv (Conv2D) (None, 20, 20, 128) 86016 conv4_block14_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block14_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block14_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block14_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block14_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block14_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block14_concat (Concatena (None, 20, 20, 704) 0 conv4_block13_concat[0][0] \n conv4_block14_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_0_bn (BatchNormal (None, 20, 20, 704) 2816 conv4_block14_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block15_0_relu (Activatio (None, 20, 20, 704) 0 conv4_block15_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_conv (Conv2D) (None, 20, 20, 128) 90112 conv4_block15_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block15_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block15_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block15_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block15_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block15_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block15_concat (Concatena (None, 20, 20, 736) 0 conv4_block14_concat[0][0] \n conv4_block15_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block16_0_bn (BatchNormal (None, 20, 20, 736) 2944 conv4_block15_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block16_0_relu (Activatio (None, 20, 20, 736) 0 conv4_block16_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_conv (Conv2D) (None, 20, 20, 128) 94208 conv4_block16_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block16_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block16_1_relu (Activatio 
(None, 20, 20, 128) 0 conv4_block16_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block16_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block16_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block16_concat (Concatena (None, 20, 20, 768) 0 conv4_block15_concat[0][0] \n conv4_block16_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_bn (BatchNormal (None, 20, 20, 768) 3072 conv4_block16_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block17_0_relu (Activatio (None, 20, 20, 768) 0 conv4_block17_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_conv (Conv2D) (None, 20, 20, 128) 98304 conv4_block17_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block17_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block17_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block17_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block17_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block17_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block17_concat (Concatena (None, 20, 20, 800) 0 conv4_block16_concat[0][0] \n conv4_block17_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_0_bn (BatchNormal (None, 20, 20, 800) 3200 conv4_block17_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block18_0_relu (Activatio (None, 20, 20, 800) 0 conv4_block18_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_conv (Conv2D) (None, 20, 20, 128) 102400 conv4_block18_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block18_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block18_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block18_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block18_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block18_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block18_concat (Concatena (None, 20, 20, 832) 0 conv4_block17_concat[0][0] \n conv4_block18_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_bn (BatchNormal (None, 20, 20, 832) 3328 conv4_block18_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block19_0_relu (Activatio (None, 20, 20, 832) 0 conv4_block19_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_conv (Conv2D) (None, 20, 20, 128) 106496 conv4_block19_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block19_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block19_1_relu 
(Activatio (None, 20, 20, 128) 0 conv4_block19_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block19_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block19_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block19_concat (Concatena (None, 20, 20, 864) 0 conv4_block18_concat[0][0] \n conv4_block19_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block20_0_bn (BatchNormal (None, 20, 20, 864) 3456 conv4_block19_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block20_0_relu (Activatio (None, 20, 20, 864) 0 conv4_block20_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_conv (Conv2D) (None, 20, 20, 128) 110592 conv4_block20_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block20_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block20_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block20_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block20_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block20_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block20_concat (Concatena (None, 20, 20, 896) 0 conv4_block19_concat[0][0] \n conv4_block20_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block21_0_bn (BatchNormal (None, 20, 20, 896) 3584 conv4_block20_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block21_0_relu (Activatio (None, 20, 20, 896) 0 conv4_block21_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block21_1_conv (Conv2D) (None, 20, 20, 128) 114688 conv4_block21_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block21_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block21_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block21_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block21_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block21_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block21_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block21_concat (Concatena (None, 20, 20, 928) 0 conv4_block20_concat[0][0] \n conv4_block21_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block22_0_bn (BatchNormal (None, 20, 20, 928) 3712 conv4_block21_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block22_0_relu (Activatio (None, 20, 20, 928) 0 conv4_block22_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_conv (Conv2D) (None, 20, 20, 128) 118784 conv4_block22_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block22_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block22_1_relu 
(Activatio (None, 20, 20, 128) 0 conv4_block22_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block22_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block22_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block22_concat (Concatena (None, 20, 20, 960) 0 conv4_block21_concat[0][0] \n conv4_block22_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block23_0_bn (BatchNormal (None, 20, 20, 960) 3840 conv4_block22_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block23_0_relu (Activatio (None, 20, 20, 960) 0 conv4_block23_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_conv (Conv2D) (None, 20, 20, 128) 122880 conv4_block23_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block23_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block23_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block23_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block23_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block23_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block23_concat (Concatena (None, 20, 20, 992) 0 conv4_block22_concat[0][0] \n conv4_block23_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block24_0_bn (BatchNormal (None, 20, 20, 992) 3968 conv4_block23_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block24_0_relu (Activatio (None, 20, 20, 992) 0 conv4_block24_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_conv (Conv2D) (None, 20, 20, 128) 126976 conv4_block24_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block24_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block24_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block24_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block24_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block24_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block24_concat (Concatena (None, 20, 20, 1024) 0 conv4_block23_concat[0][0] \n conv4_block24_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block25_0_bn (BatchNormal (None, 20, 20, 1024) 4096 conv4_block24_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block25_0_relu (Activatio (None, 20, 20, 1024) 0 conv4_block25_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block25_1_conv (Conv2D) (None, 20, 20, 128) 131072 conv4_block25_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block25_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block25_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block25_1_relu 
(Activatio (None, 20, 20, 128) 0 conv4_block25_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block25_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block25_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block25_concat (Concatena (None, 20, 20, 1056) 0 conv4_block24_concat[0][0] \n conv4_block25_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block26_0_bn (BatchNormal (None, 20, 20, 1056) 4224 conv4_block25_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block26_0_relu (Activatio (None, 20, 20, 1056) 0 conv4_block26_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block26_1_conv (Conv2D) (None, 20, 20, 128) 135168 conv4_block26_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block26_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block26_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block26_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block26_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block26_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block26_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block26_concat (Concatena (None, 20, 20, 1088) 0 conv4_block25_concat[0][0] \n conv4_block26_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block27_0_bn (BatchNormal (None, 20, 20, 1088) 4352 conv4_block26_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block27_0_relu (Activatio (None, 20, 20, 1088) 0 conv4_block27_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block27_1_conv (Conv2D) (None, 20, 20, 128) 139264 conv4_block27_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block27_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block27_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block27_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block27_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block27_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block27_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block27_concat (Concatena (None, 20, 20, 1120) 0 conv4_block26_concat[0][0] \n conv4_block27_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block28_0_bn (BatchNormal (None, 20, 20, 1120) 4480 conv4_block27_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block28_0_relu (Activatio (None, 20, 20, 1120) 0 conv4_block28_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block28_1_conv (Conv2D) (None, 20, 20, 128) 143360 conv4_block28_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block28_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block28_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block28_1_relu 
(Activatio (None, 20, 20, 128) 0 conv4_block28_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block28_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block28_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block28_concat (Concatena (None, 20, 20, 1152) 0 conv4_block27_concat[0][0] \n conv4_block28_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block29_0_bn (BatchNormal (None, 20, 20, 1152) 4608 conv4_block28_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block29_0_relu (Activatio (None, 20, 20, 1152) 0 conv4_block29_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block29_1_conv (Conv2D) (None, 20, 20, 128) 147456 conv4_block29_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block29_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block29_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block29_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block29_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block29_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block29_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block29_concat (Concatena (None, 20, 20, 1184) 0 conv4_block28_concat[0][0] \n conv4_block29_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block30_0_bn (BatchNormal (None, 20, 20, 1184) 4736 conv4_block29_concat[0][0] 
\n__________________________________________________________________________________________________\nconv4_block30_0_relu (Activatio (None, 20, 20, 1184) 0 conv4_block30_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block30_1_conv (Conv2D) (None, 20, 20, 128) 151552 conv4_block30_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block30_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block30_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block30_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block30_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block30_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block30_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block30_concat (Concatena (None, 20, 20, 1216) 0 conv4_block29_concat[0][0] \n conv4_block30_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block31_0_bn (BatchNormal (None, 20, 20, 1216) 4864 conv4_block30_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block31_0_relu (Activatio (None, 20, 20, 1216) 0 conv4_block31_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block31_1_conv (Conv2D) (None, 20, 20, 128) 155648 conv4_block31_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block31_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block31_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block31_1_relu 
(Activatio (None, 20, 20, 128) 0 conv4_block31_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block31_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block31_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block31_concat (Concatena (None, 20, 20, 1248) 0 conv4_block30_concat[0][0] \n conv4_block31_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block32_0_bn (BatchNormal (None, 20, 20, 1248) 4992 conv4_block31_concat[0][0] \n__________________________________________________________________________________________________\nconv4_block32_0_relu (Activatio (None, 20, 20, 1248) 0 conv4_block32_0_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block32_1_conv (Conv2D) (None, 20, 20, 128) 159744 conv4_block32_0_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block32_1_bn (BatchNormal (None, 20, 20, 128) 512 conv4_block32_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block32_1_relu (Activatio (None, 20, 20, 128) 0 conv4_block32_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block32_2_conv (Conv2D) (None, 20, 20, 32) 36864 conv4_block32_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block32_concat (Concatena (None, 20, 20, 1280) 0 conv4_block31_concat[0][0] \n conv4_block32_2_conv[0][0] \n__________________________________________________________________________________________________\npool4_bn (BatchNormalization) (None, 20, 20, 1280) 5120 conv4_block32_concat[0][0] 
\n__________________________________________________________________________________________________\npool4_relu (Activation) (None, 20, 20, 1280) 0 pool4_bn[0][0] \n__________________________________________________________________________________________________\npool4_conv (Conv2D) (None, 20, 20, 640) 819200 pool4_relu[0][0] \n__________________________________________________________________________________________________\npool4_pool (AveragePooling2D) (None, 10, 10, 640) 0 pool4_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_bn (BatchNormali (None, 10, 10, 640) 2560 pool4_pool[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_relu (Activation (None, 10, 10, 640) 0 conv5_block1_0_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_conv (Conv2D) (None, 10, 10, 128) 81920 conv5_block1_0_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_bn (BatchNormali (None, 10, 10, 128) 512 conv5_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_relu (Activation (None, 10, 10, 128) 0 conv5_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_conv (Conv2D) (None, 10, 10, 32) 36864 conv5_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_concat (Concatenat (None, 10, 10, 672) 0 pool4_pool[0][0] \n conv5_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_0_bn (BatchNormali (None, 10, 10, 672) 2688 conv5_block1_concat[0][0] 
conv5_block2_0_relu (Activation)         (None, 10, 10, 672)    0        conv5_block2_0_bn[0][0]
conv5_block2_1_conv (Conv2D)             (None, 10, 10, 128)    86016    conv5_block2_0_relu[0][0]
conv5_block2_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block2_1_conv[0][0]
conv5_block2_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block2_1_bn[0][0]
conv5_block2_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block2_1_relu[0][0]
conv5_block2_concat (Concatenate)        (None, 10, 10, 704)    0        conv5_block1_concat[0][0], conv5_block2_2_conv[0][0]
conv5_block3_0_bn (BatchNormalization)   (None, 10, 10, 704)    2816     conv5_block2_concat[0][0]
conv5_block3_0_relu (Activation)         (None, 10, 10, 704)    0        conv5_block3_0_bn[0][0]
conv5_block3_1_conv (Conv2D)             (None, 10, 10, 128)    90112    conv5_block3_0_relu[0][0]
conv5_block3_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block3_1_conv[0][0]
conv5_block3_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block3_1_bn[0][0]
conv5_block3_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block3_1_relu[0][0]
conv5_block3_concat (Concatenate)        (None, 10, 10, 736)    0        conv5_block2_concat[0][0], conv5_block3_2_conv[0][0]
conv5_block4_0_bn (BatchNormalization)   (None, 10, 10, 736)    2944     conv5_block3_concat[0][0]
conv5_block4_0_relu (Activation)         (None, 10, 10, 736)    0        conv5_block4_0_bn[0][0]
conv5_block4_1_conv (Conv2D)             (None, 10, 10, 128)    94208    conv5_block4_0_relu[0][0]
conv5_block4_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block4_1_conv[0][0]
conv5_block4_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block4_1_bn[0][0]
conv5_block4_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block4_1_relu[0][0]
conv5_block4_concat (Concatenate)        (None, 10, 10, 768)    0        conv5_block3_concat[0][0], conv5_block4_2_conv[0][0]
conv5_block5_0_bn (BatchNormalization)   (None, 10, 10, 768)    3072     conv5_block4_concat[0][0]
conv5_block5_0_relu (Activation)         (None, 10, 10, 768)    0        conv5_block5_0_bn[0][0]
conv5_block5_1_conv (Conv2D)             (None, 10, 10, 128)    98304    conv5_block5_0_relu[0][0]
conv5_block5_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block5_1_conv[0][0]
conv5_block5_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block5_1_bn[0][0]
conv5_block5_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block5_1_relu[0][0]
conv5_block5_concat (Concatenate)        (None, 10, 10, 800)    0        conv5_block4_concat[0][0], conv5_block5_2_conv[0][0]
conv5_block6_0_bn (BatchNormalization)   (None, 10, 10, 800)    3200     conv5_block5_concat[0][0]
conv5_block6_0_relu (Activation)         (None, 10, 10, 800)    0        conv5_block6_0_bn[0][0]
conv5_block6_1_conv (Conv2D)             (None, 10, 10, 128)    102400   conv5_block6_0_relu[0][0]
conv5_block6_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block6_1_conv[0][0]
conv5_block6_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block6_1_bn[0][0]
conv5_block6_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block6_1_relu[0][0]
conv5_block6_concat (Concatenate)        (None, 10, 10, 832)    0        conv5_block5_concat[0][0], conv5_block6_2_conv[0][0]
conv5_block7_0_bn (BatchNormalization)   (None, 10, 10, 832)    3328     conv5_block6_concat[0][0]
conv5_block7_0_relu (Activation)         (None, 10, 10, 832)    0        conv5_block7_0_bn[0][0]
conv5_block7_1_conv (Conv2D)             (None, 10, 10, 128)    106496   conv5_block7_0_relu[0][0]
conv5_block7_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block7_1_conv[0][0]
conv5_block7_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block7_1_bn[0][0]
conv5_block7_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block7_1_relu[0][0]
conv5_block7_concat (Concatenate)        (None, 10, 10, 864)    0        conv5_block6_concat[0][0], conv5_block7_2_conv[0][0]
conv5_block8_0_bn (BatchNormalization)   (None, 10, 10, 864)    3456     conv5_block7_concat[0][0]
conv5_block8_0_relu (Activation)         (None, 10, 10, 864)    0        conv5_block8_0_bn[0][0]
conv5_block8_1_conv (Conv2D)             (None, 10, 10, 128)    110592   conv5_block8_0_relu[0][0]
conv5_block8_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block8_1_conv[0][0]
conv5_block8_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block8_1_bn[0][0]
conv5_block8_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block8_1_relu[0][0]
conv5_block8_concat (Concatenate)        (None, 10, 10, 896)    0        conv5_block7_concat[0][0], conv5_block8_2_conv[0][0]
conv5_block9_0_bn (BatchNormalization)   (None, 10, 10, 896)    3584     conv5_block8_concat[0][0]
conv5_block9_0_relu (Activation)         (None, 10, 10, 896)    0        conv5_block9_0_bn[0][0]
conv5_block9_1_conv (Conv2D)             (None, 10, 10, 128)    114688   conv5_block9_0_relu[0][0]
conv5_block9_1_bn (BatchNormalization)   (None, 10, 10, 128)    512      conv5_block9_1_conv[0][0]
conv5_block9_1_relu (Activation)         (None, 10, 10, 128)    0        conv5_block9_1_bn[0][0]
conv5_block9_2_conv (Conv2D)             (None, 10, 10, 32)     36864    conv5_block9_1_relu[0][0]
conv5_block9_concat (Concatenate)        (None, 10, 10, 928)    0        conv5_block8_concat[0][0], conv5_block9_2_conv[0][0]
conv5_block10_0_bn (BatchNormalization)  (None, 10, 10, 928)    3712     conv5_block9_concat[0][0]
conv5_block10_0_relu (Activation)        (None, 10, 10, 928)    0        conv5_block10_0_bn[0][0]
conv5_block10_1_conv (Conv2D)            (None, 10, 10, 128)    118784   conv5_block10_0_relu[0][0]
conv5_block10_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block10_1_conv[0][0]
conv5_block10_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block10_1_bn[0][0]
conv5_block10_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block10_1_relu[0][0]
conv5_block10_concat (Concatenate)       (None, 10, 10, 960)    0        conv5_block9_concat[0][0], conv5_block10_2_conv[0][0]
conv5_block11_0_bn (BatchNormalization)  (None, 10, 10, 960)    3840     conv5_block10_concat[0][0]
conv5_block11_0_relu (Activation)        (None, 10, 10, 960)    0        conv5_block11_0_bn[0][0]
conv5_block11_1_conv (Conv2D)            (None, 10, 10, 128)    122880   conv5_block11_0_relu[0][0]
conv5_block11_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block11_1_conv[0][0]
conv5_block11_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block11_1_bn[0][0]
conv5_block11_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block11_1_relu[0][0]
conv5_block11_concat (Concatenate)       (None, 10, 10, 992)    0        conv5_block10_concat[0][0], conv5_block11_2_conv[0][0]
conv5_block12_0_bn (BatchNormalization)  (None, 10, 10, 992)    3968     conv5_block11_concat[0][0]
conv5_block12_0_relu (Activation)        (None, 10, 10, 992)    0        conv5_block12_0_bn[0][0]
conv5_block12_1_conv (Conv2D)            (None, 10, 10, 128)    126976   conv5_block12_0_relu[0][0]
conv5_block12_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block12_1_conv[0][0]
conv5_block12_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block12_1_bn[0][0]
conv5_block12_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block12_1_relu[0][0]
conv5_block12_concat (Concatenate)       (None, 10, 10, 1024)   0        conv5_block11_concat[0][0], conv5_block12_2_conv[0][0]
conv5_block13_0_bn (BatchNormalization)  (None, 10, 10, 1024)   4096     conv5_block12_concat[0][0]
conv5_block13_0_relu (Activation)        (None, 10, 10, 1024)   0        conv5_block13_0_bn[0][0]
conv5_block13_1_conv (Conv2D)            (None, 10, 10, 128)    131072   conv5_block13_0_relu[0][0]
conv5_block13_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block13_1_conv[0][0]
conv5_block13_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block13_1_bn[0][0]
conv5_block13_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block13_1_relu[0][0]
conv5_block13_concat (Concatenate)       (None, 10, 10, 1056)   0        conv5_block12_concat[0][0], conv5_block13_2_conv[0][0]
conv5_block14_0_bn (BatchNormalization)  (None, 10, 10, 1056)   4224     conv5_block13_concat[0][0]
conv5_block14_0_relu (Activation)        (None, 10, 10, 1056)   0        conv5_block14_0_bn[0][0]
conv5_block14_1_conv (Conv2D)            (None, 10, 10, 128)    135168   conv5_block14_0_relu[0][0]
conv5_block14_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block14_1_conv[0][0]
conv5_block14_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block14_1_bn[0][0]
conv5_block14_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block14_1_relu[0][0]
conv5_block14_concat (Concatenate)       (None, 10, 10, 1088)   0        conv5_block13_concat[0][0], conv5_block14_2_conv[0][0]
conv5_block15_0_bn (BatchNormalization)  (None, 10, 10, 1088)   4352     conv5_block14_concat[0][0]
conv5_block15_0_relu (Activation)        (None, 10, 10, 1088)   0        conv5_block15_0_bn[0][0]
conv5_block15_1_conv (Conv2D)            (None, 10, 10, 128)    139264   conv5_block15_0_relu[0][0]
conv5_block15_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block15_1_conv[0][0]
conv5_block15_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block15_1_bn[0][0]
conv5_block15_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block15_1_relu[0][0]
conv5_block15_concat (Concatenate)       (None, 10, 10, 1120)   0        conv5_block14_concat[0][0], conv5_block15_2_conv[0][0]
conv5_block16_0_bn (BatchNormalization)  (None, 10, 10, 1120)   4480     conv5_block15_concat[0][0]
conv5_block16_0_relu (Activation)        (None, 10, 10, 1120)   0        conv5_block16_0_bn[0][0]
conv5_block16_1_conv (Conv2D)            (None, 10, 10, 128)    143360   conv5_block16_0_relu[0][0]
conv5_block16_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block16_1_conv[0][0]
conv5_block16_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block16_1_bn[0][0]
conv5_block16_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block16_1_relu[0][0]
conv5_block16_concat (Concatenate)       (None, 10, 10, 1152)   0        conv5_block15_concat[0][0], conv5_block16_2_conv[0][0]
conv5_block17_0_bn (BatchNormalization)  (None, 10, 10, 1152)   4608     conv5_block16_concat[0][0]
conv5_block17_0_relu (Activation)        (None, 10, 10, 1152)   0        conv5_block17_0_bn[0][0]
conv5_block17_1_conv (Conv2D)            (None, 10, 10, 128)    147456   conv5_block17_0_relu[0][0]
conv5_block17_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block17_1_conv[0][0]
conv5_block17_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block17_1_bn[0][0]
conv5_block17_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block17_1_relu[0][0]
conv5_block17_concat (Concatenate)       (None, 10, 10, 1184)   0        conv5_block16_concat[0][0], conv5_block17_2_conv[0][0]
conv5_block18_0_bn (BatchNormalization)  (None, 10, 10, 1184)   4736     conv5_block17_concat[0][0]
conv5_block18_0_relu (Activation)        (None, 10, 10, 1184)   0        conv5_block18_0_bn[0][0]
conv5_block18_1_conv (Conv2D)            (None, 10, 10, 128)    151552   conv5_block18_0_relu[0][0]
conv5_block18_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block18_1_conv[0][0]
conv5_block18_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block18_1_bn[0][0]
conv5_block18_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block18_1_relu[0][0]
conv5_block18_concat (Concatenate)       (None, 10, 10, 1216)   0        conv5_block17_concat[0][0], conv5_block18_2_conv[0][0]
conv5_block19_0_bn (BatchNormalization)  (None, 10, 10, 1216)   4864     conv5_block18_concat[0][0]
conv5_block19_0_relu (Activation)        (None, 10, 10, 1216)   0        conv5_block19_0_bn[0][0]
conv5_block19_1_conv (Conv2D)            (None, 10, 10, 128)    155648   conv5_block19_0_relu[0][0]
conv5_block19_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block19_1_conv[0][0]
conv5_block19_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block19_1_bn[0][0]
conv5_block19_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block19_1_relu[0][0]
conv5_block19_concat (Concatenate)       (None, 10, 10, 1248)   0        conv5_block18_concat[0][0], conv5_block19_2_conv[0][0]
conv5_block20_0_bn (BatchNormalization)  (None, 10, 10, 1248)   4992     conv5_block19_concat[0][0]
conv5_block20_0_relu (Activation)        (None, 10, 10, 1248)   0        conv5_block20_0_bn[0][0]
conv5_block20_1_conv (Conv2D)            (None, 10, 10, 128)    159744   conv5_block20_0_relu[0][0]
conv5_block20_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block20_1_conv[0][0]
conv5_block20_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block20_1_bn[0][0]
conv5_block20_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block20_1_relu[0][0]
conv5_block20_concat (Concatenate)       (None, 10, 10, 1280)   0        conv5_block19_concat[0][0], conv5_block20_2_conv[0][0]
conv5_block21_0_bn (BatchNormalization)  (None, 10, 10, 1280)   5120     conv5_block20_concat[0][0]
conv5_block21_0_relu (Activation)        (None, 10, 10, 1280)   0        conv5_block21_0_bn[0][0]
conv5_block21_1_conv (Conv2D)            (None, 10, 10, 128)    163840   conv5_block21_0_relu[0][0]
conv5_block21_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block21_1_conv[0][0]
conv5_block21_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block21_1_bn[0][0]
conv5_block21_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block21_1_relu[0][0]
conv5_block21_concat (Concatenate)       (None, 10, 10, 1312)   0        conv5_block20_concat[0][0], conv5_block21_2_conv[0][0]
conv5_block22_0_bn (BatchNormalization)  (None, 10, 10, 1312)   5248     conv5_block21_concat[0][0]
conv5_block22_0_relu (Activation)        (None, 10, 10, 1312)   0        conv5_block22_0_bn[0][0]
conv5_block22_1_conv (Conv2D)            (None, 10, 10, 128)    167936   conv5_block22_0_relu[0][0]
conv5_block22_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block22_1_conv[0][0]
conv5_block22_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block22_1_bn[0][0]
conv5_block22_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block22_1_relu[0][0]
conv5_block22_concat (Concatenate)       (None, 10, 10, 1344)   0        conv5_block21_concat[0][0], conv5_block22_2_conv[0][0]
conv5_block23_0_bn (BatchNormalization)  (None, 10, 10, 1344)   5376     conv5_block22_concat[0][0]
conv5_block23_0_relu (Activation)        (None, 10, 10, 1344)   0        conv5_block23_0_bn[0][0]
conv5_block23_1_conv (Conv2D)            (None, 10, 10, 128)    172032   conv5_block23_0_relu[0][0]
conv5_block23_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block23_1_conv[0][0]
conv5_block23_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block23_1_bn[0][0]
conv5_block23_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block23_1_relu[0][0]
conv5_block23_concat (Concatenate)       (None, 10, 10, 1376)   0        conv5_block22_concat[0][0], conv5_block23_2_conv[0][0]
conv5_block24_0_bn (BatchNormalization)  (None, 10, 10, 1376)   5504     conv5_block23_concat[0][0]
conv5_block24_0_relu (Activation)        (None, 10, 10, 1376)   0        conv5_block24_0_bn[0][0]
conv5_block24_1_conv (Conv2D)            (None, 10, 10, 128)    176128   conv5_block24_0_relu[0][0]
conv5_block24_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block24_1_conv[0][0]
conv5_block24_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block24_1_bn[0][0]
conv5_block24_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block24_1_relu[0][0]
conv5_block24_concat (Concatenate)       (None, 10, 10, 1408)   0        conv5_block23_concat[0][0], conv5_block24_2_conv[0][0]
conv5_block25_0_bn (BatchNormalization)  (None, 10, 10, 1408)   5632     conv5_block24_concat[0][0]
conv5_block25_0_relu (Activation)        (None, 10, 10, 1408)   0        conv5_block25_0_bn[0][0]
conv5_block25_1_conv (Conv2D)            (None, 10, 10, 128)    180224   conv5_block25_0_relu[0][0]
conv5_block25_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block25_1_conv[0][0]
conv5_block25_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block25_1_bn[0][0]
conv5_block25_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block25_1_relu[0][0]
conv5_block25_concat (Concatenate)       (None, 10, 10, 1440)   0        conv5_block24_concat[0][0], conv5_block25_2_conv[0][0]
conv5_block26_0_bn (BatchNormalization)  (None, 10, 10, 1440)   5760     conv5_block25_concat[0][0]
conv5_block26_0_relu (Activation)        (None, 10, 10, 1440)   0        conv5_block26_0_bn[0][0]
conv5_block26_1_conv (Conv2D)            (None, 10, 10, 128)    184320   conv5_block26_0_relu[0][0]
conv5_block26_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block26_1_conv[0][0]
conv5_block26_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block26_1_bn[0][0]
conv5_block26_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block26_1_relu[0][0]
conv5_block26_concat (Concatenate)       (None, 10, 10, 1472)   0        conv5_block25_concat[0][0], conv5_block26_2_conv[0][0]
conv5_block27_0_bn (BatchNormalization)  (None, 10, 10, 1472)   5888     conv5_block26_concat[0][0]
conv5_block27_0_relu (Activation)        (None, 10, 10, 1472)   0        conv5_block27_0_bn[0][0]
conv5_block27_1_conv (Conv2D)            (None, 10, 10, 128)    188416   conv5_block27_0_relu[0][0]
conv5_block27_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block27_1_conv[0][0]
conv5_block27_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block27_1_bn[0][0]
conv5_block27_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block27_1_relu[0][0]
conv5_block27_concat (Concatenate)       (None, 10, 10, 1504)   0        conv5_block26_concat[0][0], conv5_block27_2_conv[0][0]
conv5_block28_0_bn (BatchNormalization)  (None, 10, 10, 1504)   6016     conv5_block27_concat[0][0]
conv5_block28_0_relu (Activation)        (None, 10, 10, 1504)   0        conv5_block28_0_bn[0][0]
conv5_block28_1_conv (Conv2D)            (None, 10, 10, 128)    192512   conv5_block28_0_relu[0][0]
conv5_block28_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block28_1_conv[0][0]
conv5_block28_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block28_1_bn[0][0]
conv5_block28_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block28_1_relu[0][0]
conv5_block28_concat (Concatenate)       (None, 10, 10, 1536)   0        conv5_block27_concat[0][0], conv5_block28_2_conv[0][0]
conv5_block29_0_bn (BatchNormalization)  (None, 10, 10, 1536)   6144     conv5_block28_concat[0][0]
conv5_block29_0_relu (Activation)        (None, 10, 10, 1536)   0        conv5_block29_0_bn[0][0]
conv5_block29_1_conv (Conv2D)            (None, 10, 10, 128)    196608   conv5_block29_0_relu[0][0]
conv5_block29_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block29_1_conv[0][0]
conv5_block29_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block29_1_bn[0][0]
conv5_block29_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block29_1_relu[0][0]
conv5_block29_concat (Concatenate)       (None, 10, 10, 1568)   0        conv5_block28_concat[0][0], conv5_block29_2_conv[0][0]
conv5_block30_0_bn (BatchNormalization)  (None, 10, 10, 1568)   6272     conv5_block29_concat[0][0]
conv5_block30_0_relu (Activation)        (None, 10, 10, 1568)   0        conv5_block30_0_bn[0][0]
conv5_block30_1_conv (Conv2D)            (None, 10, 10, 128)    200704   conv5_block30_0_relu[0][0]
conv5_block30_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block30_1_conv[0][0]
conv5_block30_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block30_1_bn[0][0]
conv5_block30_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block30_1_relu[0][0]
conv5_block30_concat (Concatenate)       (None, 10, 10, 1600)   0        conv5_block29_concat[0][0], conv5_block30_2_conv[0][0]
conv5_block31_0_bn (BatchNormalization)  (None, 10, 10, 1600)   6400     conv5_block30_concat[0][0]
conv5_block31_0_relu (Activation)        (None, 10, 10, 1600)   0        conv5_block31_0_bn[0][0]
conv5_block31_1_conv (Conv2D)            (None, 10, 10, 128)    204800   conv5_block31_0_relu[0][0]
conv5_block31_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block31_1_conv[0][0]
conv5_block31_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block31_1_bn[0][0]
conv5_block31_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block31_1_relu[0][0]
conv5_block31_concat (Concatenate)       (None, 10, 10, 1632)   0        conv5_block30_concat[0][0], conv5_block31_2_conv[0][0]
conv5_block32_0_bn (BatchNormalization)  (None, 10, 10, 1632)   6528     conv5_block31_concat[0][0]
conv5_block32_0_relu (Activation)        (None, 10, 10, 1632)   0        conv5_block32_0_bn[0][0]
conv5_block32_1_conv (Conv2D)            (None, 10, 10, 128)    208896   conv5_block32_0_relu[0][0]
conv5_block32_1_bn (BatchNormalization)  (None, 10, 10, 128)    512      conv5_block32_1_conv[0][0]
conv5_block32_1_relu (Activation)        (None, 10, 10, 128)    0        conv5_block32_1_bn[0][0]
conv5_block32_2_conv (Conv2D)            (None, 10, 10, 32)     36864    conv5_block32_1_relu[0][0]
conv5_block32_concat (Concatenate)       (None, 10, 10, 1664)   0        conv5_block31_concat[0][0], conv5_block32_2_conv[0][0]
bn (BatchNormalization)                  (None, 10, 10, 1664)   6656     conv5_block32_concat[0][0]
relu (Activation)                        (None, 10, 10, 1664)   0        bn[0][0]
global_average_pooling2d_1 (GlobalAveragePooling2D)  (None, 1664)  0    relu[0][0]
dropout_1 (Dropout)                      (None, 1664)           0        global_average_pooling2d_1[0][0]
dense_1 (Dense)                          (None, 2048)           3409920  dropout_1[0][0]
\n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 2048) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nfinal_output (Dense) (None, 5) 10245 dropout_2[0][0] \n==================================================================================================\nTotal params: 16,063,045\nTrainable params: 15,904,645\nNon-trainable params: 158,400\n__________________________________________________________________________________________________\n" ], [ "history_finetunning = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=int(EPOCHS*0.8),\n callbacks=callback_list,\n class_weight=class_weights,\n verbose=1).history", "Epoch 1/32\n183/183 [==============================] - 139s 758ms/step - loss: 0.6781 - acc: 0.7336 - kappa: 0.8292 - val_loss: 0.5805 - val_acc: 0.7685 - val_kappa: 0.8765\nEpoch 2/32\n183/183 [==============================] - 96s 526ms/step - loss: 0.5851 - acc: 0.7780 - kappa: 0.8807 - val_loss: 0.5794 - val_acc: 0.8089 - val_kappa: 0.8944\nEpoch 3/32\n183/183 [==============================] - 96s 527ms/step - loss: 0.5130 - acc: 0.8074 - kappa: 0.9146 - val_loss: 0.4713 - val_acc: 0.8382 - val_kappa: 0.9291\nEpoch 4/32\n183/183 [==============================] - 96s 526ms/step - loss: 0.4783 - acc: 0.8163 - kappa: 0.9202 - val_loss: 0.4851 - val_acc: 0.8159 - val_kappa: 0.9324\nEpoch 5/32\n183/183 [==============================] - 97s 530ms/step - loss: 0.5002 - acc: 0.8112 - kappa: 0.9198 - val_loss: 0.4360 - val_acc: 0.8298 - val_kappa: 0.9004\nEpoch 6/32\n183/183 [==============================] - 96s 526ms/step - loss: 0.4764 - acc: 0.8224 - kappa: 0.9273 - val_loss: 0.5844 - val_acc: 0.7573 - val_kappa: 0.8325\nEpoch 7/32\n183/183 [==============================] - 96s 527ms/step - 
loss: 0.4258 - acc: 0.8296 - kappa: 0.9323 - val_loss: 0.4955 - val_acc: 0.8257 - val_kappa: 0.9250\nEpoch 8/32\n183/183 [==============================] - 96s 526ms/step - loss: 0.4150 - acc: 0.8350 - kappa: 0.9381 - val_loss: 0.7650 - val_acc: 0.7713 - val_kappa: 0.8651\n\nEpoch 00008: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.\nEpoch 9/32\n183/183 [==============================] - 94s 516ms/step - loss: 0.3971 - acc: 0.8508 - kappa: 0.9460 - val_loss: 0.4680 - val_acc: 0.8257 - val_kappa: 0.9289\nEpoch 10/32\n183/183 [==============================] - 96s 522ms/step - loss: 0.3890 - acc: 0.8535 - kappa: 0.9521 - val_loss: 0.4772 - val_acc: 0.8187 - val_kappa: 0.8993\nRestoring model weights from the end of the best epoch\nEpoch 00010: early stopping\n" ] ], [ [ "# Fine-tune the complete model (2nd step)", "_____no_output_____" ] ], [ [ "optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)", "_____no_output_____" ], [ "history_finetunning_2 = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=int(EPOCHS*0.2),\n callbacks=callback_list,\n class_weight=class_weights,\n verbose=1).history", "Epoch 1/8\n183/183 [==============================] - 126s 687ms/step - loss: 0.4280 - acc: 0.8303 - kappa: 0.9380 - val_loss: 0.4776 - val_acc: 0.8243 - val_kappa: 0.9202\nEpoch 2/8\n183/183 [==============================] - 94s 515ms/step - loss: 0.4020 - acc: 0.8439 - kappa: 0.9394 - val_loss: 0.4517 - val_acc: 0.8340 - val_kappa: 0.9338\nEpoch 3/8\n183/183 [==============================] - 96s 526ms/step - loss: 0.4027 - acc: 0.8477 - kappa: 0.9433 - val_loss: 0.4192 - val_acc: 0.8326 - val_kappa: 0.9328\nEpoch 4/8\n183/183 [==============================] - 96s 526ms/step - loss: 0.3997 - acc: 0.8528 - kappa: 0.9467 - val_loss: 
0.3972 - val_acc: 0.8605 - val_kappa: 0.9468\nEpoch 5/8\n183/183 [==============================] - 96s 527ms/step - loss: 0.3979 - acc: 0.8480 - kappa: 0.9460 - val_loss: 0.4347 - val_acc: 0.8368 - val_kappa: 0.9297\nEpoch 6/8\n183/183 [==============================] - 95s 521ms/step - loss: 0.3919 - acc: 0.8484 - kappa: 0.9464 - val_loss: 0.4353 - val_acc: 0.8410 - val_kappa: 0.9390\nEpoch 7/8\n183/183 [==============================] - 94s 515ms/step - loss: 0.3922 - acc: 0.8552 - kappa: 0.9518 - val_loss: 0.4450 - val_acc: 0.8396 - val_kappa: 0.9334\n\nEpoch 00007: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.\nEpoch 8/8\n183/183 [==============================] - 92s 504ms/step - loss: 0.3902 - acc: 0.8583 - kappa: 0.9477 - val_loss: 0.3853 - val_acc: 0.8661 - val_kappa: 0.9537\n" ] ], [ [ "# Model loss graph ", "_____no_output_____" ] ], [ [ "history = {'loss': history_finetunning['loss'] + history_finetunning_2['loss'], \n 'val_loss': history_finetunning['val_loss'] + history_finetunning_2['val_loss'], \n 'acc': history_finetunning['acc'] + history_finetunning_2['acc'], \n 'val_acc': history_finetunning['val_acc'] + history_finetunning_2['val_acc'], \n 'kappa': history_finetunning['kappa'] + history_finetunning_2['kappa'], \n 'val_kappa': history_finetunning['val_kappa'] + history_finetunning_2['val_kappa']}\n\nsns.set_style(\"whitegrid\")\nfig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col', figsize=(20, 18))\n\nax1.plot(history['loss'], label='Train loss')\nax1.plot(history['val_loss'], label='Validation loss')\nax1.legend(loc='best')\nax1.set_title('Loss')\n\nax2.plot(history['acc'], label='Train accuracy')\nax2.plot(history['val_acc'], label='Validation accuracy')\nax2.legend(loc='best')\nax2.set_title('Accuracy')\n\nax3.plot(history['kappa'], label='Train kappa')\nax3.plot(history['val_kappa'], label='Validation kappa')\nax3.legend(loc='best')\nax3.set_title('Kappa')\n\nplt.xlabel('Epochs')\nsns.despine()\nplt.show()", 
"_____no_output_____" ], [ "# Create empty arays to keep the predictions and labels\nlastFullTrainPred = np.empty((0, N_CLASSES))\nlastFullTrainLabels = np.empty((0, N_CLASSES))\nlastFullValPred = np.empty((0, N_CLASSES))\nlastFullValLabels = np.empty((0, N_CLASSES))\n\n# Add train predictions and labels\nfor i in range(STEP_SIZE_TRAIN+1):\n im, lbl = next(train_generator)\n scores = model.predict(im, batch_size=train_generator.batch_size)\n lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)\n lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)\n\n# Add validation predictions and labels\nfor i in range(STEP_SIZE_VALID+1):\n im, lbl = next(valid_generator)\n scores = model.predict(im, batch_size=valid_generator.batch_size)\n lastFullValPred = np.append(lastFullValPred, scores, axis=0)\n lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)\n\nlastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))\nlastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))\n\ntrain_preds = [np.argmax(pred) for pred in lastFullTrainPred]\ntrain_labels = [np.argmax(label) for label in lastFullTrainLabels]\nvalidation_preds = [np.argmax(pred) for pred in lastFullValPred]\nvalidation_labels = [np.argmax(label) for label in lastFullValLabels]\ncomplete_labels = [np.argmax(label) for label in lastFullComLabels]", "_____no_output_____" ] ], [ [ "# Threshold optimization", "_____no_output_____" ] ], [ [ "def find_best_fixed_threshold(preds, targs, do_plot=True):\n best_thr_list = [0 for i in range(preds.shape[1])]\n for index in range(1, preds.shape[1]):\n score = []\n thrs = np.arange(0, 1, 0.01)\n for thr in thrs:\n preds_thr = [index if x[index] > thr else np.argmax(x) for x in preds]\n score.append(cohen_kappa_score(targs, preds_thr))\n score = np.array(score)\n pm = score.argmax()\n best_thr, best_score = thrs[pm], score[pm].item()\n best_thr_list[index] = best_thr\n print('Label %s: thr=%.3f, Kappa=%.3f' % (index, 
best_thr, best_score))\n if do_plot:\n plt.plot(thrs, score)\n plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())\n plt.text(best_thr+0.03, best_score-0.01, ('Kappa[%s]=%.3f' % (index, best_score)), fontsize=14);\n plt.show()\n \n return best_thr_list\n\nthreshold_list = find_best_fixed_threshold(lastFullComPred, complete_labels, do_plot=True)\nthreshold_list[0] = 0 # In last instance assign label 0", "Label 1: thr=0.440, Kappa=0.802\n" ], [ "# Apply optimized thresholds to the train predictions\ntrain_preds_opt = [0 for i in range(lastFullTrainPred.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(lastFullTrainPred):\n if pred[idx] > thr:\n train_preds_opt[idx2] = idx\n\n# Apply optimized thresholds to the validation predictions\nvalidation_preds_opt = [0 for i in range(lastFullValPred.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(lastFullValPred):\n if pred[idx] > thr:\n validation_preds_opt[idx2] = idx\n \nindex_order = [0, 2, 1, 4, 3]\n# Apply optimized thresholds to the train predictions by class distribution\ntrain_preds_opt2 = [0 for i in range(lastFullTrainPred.shape[0])]\nfor idx in index_order:\n thr = threshold_list[idx]\n for idx2, pred in enumerate(lastFullTrainPred):\n if pred[idx] > thr:\n train_preds_opt2[idx2] = idx\n\n# Apply optimized thresholds to the validation predictions by class distribution\nvalidation_preds_opt2 = [0 for i in range(lastFullValPred.shape[0])]\nfor idx in index_order:\n thr = threshold_list[idx]\n for idx2, pred in enumerate(lastFullValPred):\n if pred[idx] > thr:\n validation_preds_opt2[idx2] = idx", "_____no_output_____" ] ], [ [ "# Model Evaluation", "_____no_output_____" ], [ "## Confusion Matrix\n\n### Original thresholds", "_____no_output_____" ] ], [ [ "labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']\ndef plot_confusion_matrix(train, validation, labels=labels):\n train_labels, train_preds = 
train\n validation_labels, validation_preds = validation\n fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))\n train_cnf_matrix = confusion_matrix(train_labels, train_preds)\n validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)\n\n train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]\n validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]\n\n train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)\n validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)\n\n sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap=\"Blues\",ax=ax1).set_title('Train')\n sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')\n plt.show()\n\nplot_confusion_matrix((train_labels, train_preds), (validation_labels, validation_preds))", "_____no_output_____" ] ], [ [ "### Optimized thresholds", "_____no_output_____" ] ], [ [ "plot_confusion_matrix((train_labels, train_preds_opt), (validation_labels, validation_preds_opt))", "_____no_output_____" ] ], [ [ "### Optimized thresholds by class", "_____no_output_____" ] ], [ [ "plot_confusion_matrix((train_labels, train_preds_opt2), (validation_labels, validation_preds_opt2))", "_____no_output_____" ] ], [ [ "## Quadratic Weighted Kappa", "_____no_output_____" ] ], [ [ "def evaluate_model(train, validation):\n train_labels, train_preds = train\n validation_labels, validation_preds = validation\n print(\"Train Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))\n print(\"Validation Cohen Kappa score: %.3f\" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))\n print(\"Complete set Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds+validation_preds, train_labels+validation_labels, weights='quadratic'))\n 
\nprint(\" Original thresholds\")\nevaluate_model((train_preds, train_labels), (validation_preds, validation_labels))\nprint(\" Optimized thresholds\")\nevaluate_model((train_preds_opt, train_labels), (validation_preds_opt, validation_labels))\nprint(\" Optimized thresholds by class\")\nevaluate_model((train_preds_opt2, train_labels), (validation_preds_opt2, validation_labels))", " Original thresholds\nTrain Cohen Kappa score: 0.928\nValidation Cohen Kappa score: 0.892\nComplete set Cohen Kappa score: 0.921\n Optimized thresholds\nTrain Cohen Kappa score: 0.909\nValidation Cohen Kappa score: 0.873\nComplete set Cohen Kappa score: 0.902\n Optimized thresholds by class\nTrain Cohen Kappa score: 0.909\nValidation Cohen Kappa score: 0.869\nComplete set Cohen Kappa score: 0.901\n" ] ], [ [ "## Apply model to test set and output predictions", "_____no_output_____" ] ], [ [ "def apply_tta(model, generator, steps=10):\n step_size = generator.n//generator.batch_size\n preds_tta = []\n for i in range(steps):\n generator.reset()\n preds = model.predict_generator(generator, steps=step_size)\n preds_tta.append(preds)\n\n return np.mean(preds_tta, axis=0)\n\npreds = apply_tta(model, test_generator)\npredictions = np.argmax(preds, axis=1)\n\npredictions_opt = [0 for i in range(preds.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(preds):\n if pred[idx] > thr:\n predictions_opt[idx2] = idx\n\npredictions_opt2 = [0 for i in range(preds.shape[0])]\nfor idx in index_order:\n thr = threshold_list[idx]\n for idx2, pred in enumerate(preds):\n if pred[idx] > thr:\n predictions_opt2[idx2] = idx\n\nresults = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})\nresults['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])\n\nresults_opt = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions_opt})\nresults_opt['id_code'] = results_opt['id_code'].map(lambda x: str(x)[:-4])\n\nresults_opt2 = 
pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions_opt2})\nresults_opt2['id_code'] = results_opt2['id_code'].map(lambda x: str(x)[:-4])", "_____no_output_____" ], [ "# Cleaning created directories\nif os.path.exists(train_dest_path):\n shutil.rmtree(train_dest_path)\nif os.path.exists(validation_dest_path):\n shutil.rmtree(validation_dest_path)\nif os.path.exists(test_dest_path):\n shutil.rmtree(test_dest_path)", "_____no_output_____" ] ], [ [ "# Predictions class distribution", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(24, 8.7))\nsns.countplot(x=\"diagnosis\", data=results, palette=\"GnBu_d\", ax=ax1).set_title('Test')\nsns.countplot(x=\"diagnosis\", data=results_opt, palette=\"GnBu_d\", ax=ax2).set_title('Test optimized')\nsns.countplot(x=\"diagnosis\", data=results_opt2, palette=\"GnBu_d\", ax=ax3).set_title('Test optimized by class')\nsns.despine()\nplt.show()", "_____no_output_____" ], [ "val_kappa = cohen_kappa_score(validation_preds, validation_labels, weights='quadratic')\nval_opt_kappa = cohen_kappa_score(validation_preds_opt, validation_labels, weights='quadratic')\nval_opt_kappa2 = cohen_kappa_score(validation_preds_opt2, validation_labels, weights='quadratic')\nresults_name = 'submission.csv'\nresults_opt_name = 'submission_opt.csv'\nresults_opt2_name = 'submission_opt2.csv'\n\n# if (val_kappa > val_opt_kappa) and (val_kappa > val_opt_kappa2):\n# results_name = 'submission.csv'\n# results_opt_name = 'submission_opt.csv'\n# results_opt2_name = 'submission_opt2.csv'\n# elif (val_opt_kappa > val_kappa) and (val_opt_kappa > val_opt_kappa2):\n# results_name = 'submission_norm.csv'\n# results_opt_name = 'submission.csv'\n# results_opt2_name = 'submission_opt2.csv'\n# else:\n# results_name = 'submission_norm.csv'\n# results_opt_name = 'submission_opt.csv'\n# results_opt2_name = 'submission.csv'", "_____no_output_____" ], [ "results.to_csv(results_name, 
index=False)\ndisplay(results.head())\n\nresults_opt.to_csv(results_opt_name, index=False)\ndisplay(results_opt.head())\n\nresults_opt2.to_csv(results_opt2_name, index=False)\ndisplay(results_opt2.head())", "_____no_output_____" ] ] ]
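The per-class threshold logic in the notebook above starts every prediction at label 0 and then lets each class overwrite it when its probability clears that class's tuned threshold. A minimal, self-contained sketch of that scheme; the probability matrix and threshold values below are made up for illustration and are not the notebook's fitted values:

```python
import numpy as np

def apply_thresholds(probs, thresholds):
    """Start every prediction at class 0, then let each class overwrite it
    when its probability clears that class's threshold (later classes win)."""
    preds = np.zeros(len(probs), dtype=int)
    for cls, thr in enumerate(thresholds):
        preds[probs[:, cls] > thr] = cls
    return preds

# Illustrative probabilities for 3 samples over 5 classes (made-up numbers)
probs = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.10, 0.50, 0.30, 0.05, 0.05],
    [0.05, 0.05, 0.10, 0.20, 0.60],
])
thresholds = [0.0, 0.44, 0.50, 0.50, 0.50]  # slot 0 is zero: class 0 is the fallback

print(apply_thresholds(probs, thresholds))  # -> [0 1 4]
```

Setting the first slot to zero plays the same role as the notebook's `threshold_list[0] = 0`: class 0 fires for any nonzero probability, so it acts as the default label that later classes may override.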
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbb7a17145e0d89d96df603fe3b4365e3d649c74
3,747
ipynb
Jupyter Notebook
ejercicio1/Ejercicio1.ipynb
jinchuika/umg-ai
c32c410e2dc2c4c1bde04f9f0d8100022d0e57f9
[ "MIT" ]
3
2020-05-04T19:09:52.000Z
2021-10-05T23:13:26.000Z
ejercicio1/Ejercicio1.ipynb
jinchuika/umg-ai
c32c410e2dc2c4c1bde04f9f0d8100022d0e57f9
[ "MIT" ]
null
null
null
ejercicio1/Ejercicio1.ipynb
jinchuika/umg-ai
c32c410e2dc2c4c1bde04f9f0d8100022d0e57f9
[ "MIT" ]
null
null
null
30.966942
385
0.564452
[ [ [ "# Análisis estadístico del clima\n\nEl set de datos adjunto (`wheather.csv`) contiene información obtenida de 122 estaciones climatológicas ubicadas en la región sudeste de Brasil. Cada registro representa una hora de información sobre el clima. Los registros fueron generados entre 2010 y 2016, por lo que generan un volumen de datos bastante grande.\n\nSelecciones tres ciudades del listado adjunto, conforme a su número de grupo. Utilizando los datos de esas ciudades, responda los siguientes cuestionamientos utilizando la librería Pandas. Recuerde que cada respuesta debe ser acompañada de los datos que respalden su veracidad. El resultado debe ser entregado en un Jupyter Notebook con los datos de los integrantes del grupo.\n\n## Parte 1\n\nPara cada ciudad, debe calcular la media, el mínimo, el máximo y la desviación estándar de las siguientes dimensiones:\n\n- Temperatura (`temperature`)\n- Presión del aire (`air_pressure`)\n- Humedad (`humidity`)\n\nPara cada dimensión, responda a las siguientes preguntas:\n\n1. ¿En promedio, qué ciudad registra los valores más altos?\n2. ¿Qué ciudad registró el valor más alto de todos?\n3. ¿Qué ciudad registró el valor más bajo de todos?\n4. ¿En qué ciudad varía más el registro de los valores?\n\n## Parte 2\n\nResponda las siguientes preguntas:\n\n1. ¿Para cada ciudad, qué año registró el promedio más alto de temperatura?\n2. ¿Qué ciudad tiene la mayor cantidad de muestras?\n3. ¿Para cada ciudad, en qué año se registró la mayor cantidad de datos?\n4. ¿Qué ciudad registró el año con la humedad más variada y en cuál fue ese año?\n5. ¿Qué ciudad cuenta con la presión del aire menos variada?\n6. 
¿Para cada ciudad, cuál es el mes que registra las temperaturas más bajas?", "_____no_output_____" ] ], [ [ "ciudades = [\n 'Belo Horizonte',\n 'Campina Verde',\n 'Petrópolis',\n \n 'Vitória',\n 'Sorocaba',\n 'Guarujá',\n \n 'Pirapora',\n 'Manhuaçu',\n 'Maria da Fé',\n \n 'Votuporanga',\n 'Ourinhos',\n 'Passa Quatro',\n \n 'Mantena',\n 'João Pinheiro',\n 'Barueri',\n \n 'Ituverava',\n 'Campos dos Goytacazes',\n 'Pradópolis',\n \n 'Juiz de Fora',\n 'Piracicaba',\n 'Silva Jardim',\n \n 'Nova Venécia',\n 'Salinas',\n 'Ituiutaba',\n \n 'Capelinha',\n 'São Mateus',\n 'Pompéu',\n \n 'Alegre',\n 'Montalvânia',\n 'Alfredo Chaves'\n]", "_____no_output_____" ] ], [ [ "## Respuesta 1\n\nLa ciudad que jfklfdsjlkfdsjflkdsjlkf jdslkf sdlk flkds", "_____no_output_____" ] ] ]
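Part 1 of the assignment above reduces to a grouped aggregation in pandas. A hedged sketch on made-up data: the column names (`city`, `temperature`, `air_pressure`, `humidity`) are assumptions about `wheather.csv`, which is not available here, and the values are invented for illustration.

```python
import pandas as pd

# Toy stand-in for wheather.csv; columns and values are illustrative assumptions
df = pd.DataFrame({
    'city':         ['Vitória', 'Vitória', 'Sorocaba', 'Sorocaba', 'Guarujá', 'Guarujá'],
    'temperature':  [28.0, 30.0, 22.0, 24.0, 26.0, 27.0],
    'air_pressure': [1010, 1011, 1015, 1014, 1012, 1013],
    'humidity':     [70, 75, 60, 65, 80, 85],
})

# Part 1: mean / min / max / std per city for each dimension
stats = df.groupby('city')[['temperature', 'air_pressure', 'humidity']] \
          .agg(['mean', 'min', 'max', 'std'])
print(stats)

# e.g. which city has the highest average temperature
hottest = stats[('temperature', 'mean')].idxmax()
print(hottest)
```

The same `groupby` pattern extends to Part 2 by first deriving `year` and `month` columns from the timestamp and grouping on `['city', 'year']` or `['city', 'month']`.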
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbb7ac7556f8fd4338399d04482d343712e44692
406,225
ipynb
Jupyter Notebook
Module_2b_linear_regression_ols.ipynb
sanketvega/hackermath
a2143999a341cb57ed31d4a556024328776d49c3
[ "MIT" ]
null
null
null
Module_2b_linear_regression_ols.ipynb
sanketvega/hackermath
a2143999a341cb57ed31d4a556024328776d49c3
[ "MIT" ]
null
null
null
Module_2b_linear_regression_ols.ipynb
sanketvega/hackermath
a2143999a341cb57ed31d4a556024328776d49c3
[ "MIT" ]
1
2020-04-08T07:00:40.000Z
2020-04-08T07:00:40.000Z
248.759951
76,196
0.904277
[ [ [ "# Linear Regression (OLS)\n\n### Key Equation: $Ax =b ~~ \\text{for} ~~ n \\times p+1 $\n\n\nLinear regression - Ordinary Least Square (OLS) is the most basic form of supervised learning. In this we have a target variable (y) and we want to establish a linear relationship with a set of features (x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, ...)\n\nLets take a simple example to illustrate this problem:\n\nWe have price ('000 INR) and mileage (kmpl) for 7 hatchback cars as below\n\n```\nprice = [199 , 248 , 302 , 363 , 418 , 462 , 523 ]\nkmpl = [23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6]\n```\n\nWe want to predict the target variable `price`, given the input variable `kmpl`", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('fivethirtyeight')\nplt.rcParams['figure.figsize'] = (10, 6)", "_____no_output_____" ], [ "price = np.array([199, 248, 302, 363, 418, 462, 523])", "_____no_output_____" ], [ "kmpl = np.array([23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6])", "_____no_output_____" ], [ "plt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')", "_____no_output_____" ] ], [ [ "## Thinking Linear Algebra Way\n\nThe basic problem in linear regression is solving - `n` linear equation, with `p` unknowns, where `p < n`\n\nSo a linear relationship can be written as:\n\n$$ price = \\beta_{0} + \\beta_{1} kmpl $$\n\nWe have added an intercept to the equation, so that the line does not need to pass through zero\n\nSo we are trying to solve these n = 7 equations with, p = 2\n\n$$ 199 = \\beta_{0} + \\beta_{1} 23.9 ~~~~ \\text{(eq 1)} $$\n$$ 248 = \\beta_{0} + \\beta_{1} 22.7 ~~~~ \\text{(eq 2)} $$\n$$ 302 = \\beta_{0} + \\beta_{1} 21.1 ~~~~ \\text{(eq 3)} $$\n$$ 363 = \\beta_{0} + \\beta_{1} 20.5 ~~~~ \\text{(eq 4)} $$\n$$ 418 = \\beta_{0} + \\beta_{1} 19.8 ~~~~ \\text{(eq 5)} $$\n$$ 462 = \\beta_{0} + \\beta_{1} 20.4 ~~~~ \\text{(eq 6)} $$\n$$ 523 = 
\\beta_{0} + \\beta_{1} 18.6 ~~~~ \\text{(eq 7)} $$\n\nSo the key to remember here is that we are solving for $\\beta_{0}$ and $ \\beta_{1} $\n\nNow if we plot these lines, it is clear that there will not be a one point of intersection that we can get like we get if we had only 2 equations.", "_____no_output_____" ] ], [ [ "b0 = np.arange(-500,4000, 100)\n\nfor i in range(7):\n b1 = (price[i] - b0)/kmpl[i]\n plt.plot(b0, b1, linewidth = 1)\n plt.text(b0[-10], b1[-10], 'eq %s'% (i + 1), fontsize = 8 )\n\nplt.axhline(0, color='grey', linewidth=2)\nplt.axvline(0, color='grey', linewidth=2)\n\nplt.xlabel('beta0')\nplt.ylabel('beta1')\n\nplt.ylim(-150,50)", "_____no_output_____" ] ], [ [ "Now we don't have an exact solution. But can see the $\\beta_{0} $ is around 1600 and $ \\beta_{1} $ is around -60. So one possible line is \n\n$$ price = 1600 - 60 * kmpl $$\n\nBut we can clearly see that this is probably not the best possible line!!", "_____no_output_____" ] ], [ [ "beta_0 = 1600 \nbeta_1 = -60\nplt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')\ny = beta_0 + beta_1 * kmpl\nplt.plot(kmpl, y, '-')", "_____no_output_____" ] ], [ [ "## Adding Error Term\n\nThe linear relationship hence needs to be modeled through a error variable $\\epsilon_{i}$ — an unobserved random variable that adds noise to the linear relationship between the target variable and input variable.\n\nIf we have `p` input variables then,\n\n$$ y_{i} = \\beta_{0} + \\sum_{i=1}^p \\beta_{i} x_{i} + \\epsilon_{i} $$\n\nWe can add the $x_{0} = 1 $ in the equation:\n\n$$ y_{i} = \\sum_{i=0}^p \\beta_{i} x_{i} + \\epsilon_{i} $$\n\n$$ y_{i} = x_{i}^T \\beta_{i} + \\epsilon_{i} $$\n\n\n", "_____no_output_____" ] ], [ [ "plt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')\ny = 1600 - 60 * kmpl\nyerrL = y - price\nyerrB = y - y\nplt.errorbar(kmpl,y, fmt = 'o', yerr= [yerrL, yerrB], c= 'r')\nplt.plot(kmpl, y,linewidth = 2)", "_____no_output_____" ] ], [ [ "## 
Represent Matrix Way\n\nIf we write this in matrix form \n\n$$ y = X\\beta + \\epsilon $$\n\n$$ \\text{where} ~~~~ X = \\begin{bmatrix} - x_{1}^T- \\\\ - x_{2}^T- \\\\ ... \\\\ - x_{n}^T- \\end{bmatrix} ~~ \\text{,} ~~ y = \\begin{bmatrix} y_{1} \\\\ y_{2} \\\\ ... \\\\ y_{n} \\end{bmatrix} ~~ \\text{and} ~~ \\epsilon = \\begin{bmatrix} \\epsilon_{1} \\\\ \\epsilon_{2} \\\\ ... \\\\ \\epsilon_{n} \\end{bmatrix} $$\n\nFor our specific example, the matrix looks like:\n\n$$ \\begin{bmatrix}199 \\\\ 248 \\\\ 302 \\\\ 363 \\\\ 418 \\\\ 462 \\\\ 523 \\end{bmatrix} = \\begin{bmatrix} 1 & 23.9 \\\\ 1 & 22.7 \\\\ 1 & 21.1 \\\\ 1 & 20.5 \\\\ 1 & 19.8 \\\\ 1 & 20.4 \\\\ 1 & 18.6 \\end{bmatrix} \\begin{bmatrix}\n\\beta_{0} \\\\ \\beta_{1} \\end{bmatrix} + \\begin{bmatrix} \\epsilon_{1} \\\\ \\epsilon_{2} \\\\ \\epsilon_{3} \\\\ \\epsilon_{4} \\\\ \\epsilon_{5} \\\\ \\epsilon_{6} \\\\ \\epsilon_{7} \\end{bmatrix} $$\n\n", "_____no_output_____" ], [ "## Minimize Error - Ordinary Least Square\n\n\nThe error we will aim to minimize is the squared error:\n\n$$ E(\\beta)= \\frac {1}{n} \\sum _{i=1}^{n}(\\epsilon_{i})^2 $$\n\nThis is why this technique is called **Ordinary Least Square** (OLS) regression\n\n$$ E(\\beta)= \\frac {1}{n} \\sum _{i=1}^{n}(y_{i}-x_{i}^{T}\\beta)^{2} $$\n\nwhich in matrix way is equal to: \n\n$$ E(\\beta)= \\frac {1}{n} (y-X\\beta)^{T}(y-X\\beta) $$\n\n$$ E(\\beta)= \\frac {1}{n} {||y - X\\beta||}^2 $$\n\nTo get the minimum for this error function, we need to differentiate by $\\beta^T$\n\n$$ \\nabla E(\\beta) = 0 $$\n\n$$ \\nabla E(\\beta) ={\\frac {dE(\\beta)}{d\\beta^T}} = {\\frac {d}{d\\beta^T}}{\\bigg (}{ \\frac {1}{n} ||y - X\\beta||}^2{\\bigg )} = 0 $$\n\n$$ \\nabla E(\\beta)= \\frac {2}{n} X^T(X\\beta−y) = 0 $$\n\n$$ X^T X\\beta = X^T y $$\n\nSo the solution to OLS:\n\n$$ \\beta = X^†y ~~ \\text{where} ~~ X^† = (X^T X)^{−1} X^T $$\n\n$$X^† ~~ \\text{is the pseudo inverse of} ~~ X $$\n\n\n\n", "_____no_output_____" ], [ "## Calculate Pseudo 
Inverse\n\n$$ X^† = (X^T X)^{−1} X^T $$\n\n$X^† $ is the pseudo inverse of $ X $ has good properties\n\n$$ X^† = \\left( \\begin{matrix} ~ \\\\\n \\begin{bmatrix} ~ \\\\ p + 1 \\times n \\\\ ~ \\end{bmatrix} \n \\begin{bmatrix} ~ \\\\ n \\times p + 1 \\\\ ~ \\end{bmatrix} \n \\\\ ~ \n \\end{matrix}\n \\right)^{-1} \n \\begin{bmatrix} ~ \\\\ (p + 1 \\times n) \\\\ ~ \\end{bmatrix}$$\n\n$$ X^† = \\left( \\begin{matrix} ~ \\\\\n \\begin{bmatrix} ~ \\\\ p + 1 \\times p + 1 \\\\ ~ \\end{bmatrix} \n \\\\ ~ \n \\end{matrix}\n \\right)^{-1} \n \\begin{bmatrix} ~ \\\\ (p + 1 \\times n) \\\\ ~ \\end{bmatrix}$$\n\n\n$$ X^† = \\begin{bmatrix} ~ \\\\ (p + 1 \\times n) \\\\ ~ \\end{bmatrix}$$\n\n\n$$ X^†_{p + 1 \\times n} = {(X^T_{p + 1 \\times n} ~ X_{n \\times p+1})}^{-1} ~ X^T_{p + 1 \\times n}$$\n\n", "_____no_output_____" ] ], [ [ "n = 7", "_____no_output_____" ], [ "x0 = np.ones(n)\nx0", "_____no_output_____" ], [ "x1 = kmpl\nx1", "_____no_output_____" ], [ "# Create the X matrix\nX = np.c_[x0, x1]\nX = np.asmatrix(X)\nX", "_____no_output_____" ], [ "# Create the y matrix\ny = np.asmatrix(price.reshape(-1,1))\ny", "_____no_output_____" ], [ "y.shape", "_____no_output_____" ], [ "X_T = np.transpose(X)\nX_T", "_____no_output_____" ], [ "X_T * X", "_____no_output_____" ], [ "X_pseudo = np.linalg.inv(X_T * X) * X_T\nX_pseudo", "_____no_output_____" ], [ "beta = X_pseudo * y\nbeta", "_____no_output_____" ] ], [ [ "## OLS Solution\n\nHence we now know that the best-fit line is $\\beta_0 = 1662 $ and $\\beta_1 = -62$\n\n$$ price = 1662 - 62 * kmpl $$\n\n", "_____no_output_____" ] ], [ [ "beta_0 = 1662 \nbeta_1 = -62\nplt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')\ny = beta_0 + beta_1 * kmpl\nplt.plot(kmpl, y, '-')", "_____no_output_____" ] ], [ [ "## Exercise 1\n\nWe had price ('000 INR), mileage (kmpl) and now we have one more input variable - horsepower (bhp) for the 7 cars\n\n```\nprice = [199 , 248 , 302 , 363 , 418 , 462 , 523 ]\nkmpl = [23.9, 
22.7, 21.1, 20.5, 19.8, 20.4, 18.6]\nbhp = [38 , 47 , 55 , 67 , 68 , 83 , 82 ] \n```\nWe want to predict the value of `price`, given the variables `kmpl` and `bhp`", "_____no_output_____" ] ], [ [ "bhp = np.array([38, 47, 55, 67, 68, 83, 82])", "_____no_output_____" ], [ "from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.scatter(bhp, kmpl, price, c='r', marker='o', s = 200)\nax.view_init(azim=30)", "_____no_output_____" ] ], [ [ "So a linear relationship can be written as:\n\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp $$\n\nWe have added an intercept to the equation, so that the plane does not need to pass through zero.\n\nSo we are trying to solve these n = 7 equations with p = 3\n\n$$ 199 = \beta_{0} + \beta_{1} 23.9 + \beta_{2} 38 + \epsilon_{1} ~~~~ \text{(eq 1)} $$\n$$ 248 = \beta_{0} + \beta_{1} 22.7 + \beta_{2} 47 + \epsilon_{2} ~~~~ \text{(eq 2)} $$\n$$ 302 = \beta_{0} + \beta_{1} 21.1 + \beta_{2} 55 + \epsilon_{3} ~~~~ \text{(eq 3)} $$\n$$ 363 = \beta_{0} + \beta_{1} 20.5 + \beta_{2} 67 + \epsilon_{4} ~~~~ \text{(eq 4)} $$\n$$ 418 = \beta_{0} + \beta_{1} 19.8 + \beta_{2} 68 + \epsilon_{5} ~~~~ \text{(eq 5)} $$\n$$ 462 = \beta_{0} + \beta_{1} 20.4 + \beta_{2} 83 + \epsilon_{6} ~~~~ \text{(eq 6)} $$\n$$ 523 = \beta_{0} + \beta_{1} 18.6 + \beta_{2} 82 + \epsilon_{7} ~~~~ \text{(eq 7)} $$\n\nor in matrix form - we can write it as\n\n$$ \begin{bmatrix}199 \\ 248 \\ 302 \\ 363 \\ 418 \\ 462 \\ 523 \end{bmatrix} = \begin{bmatrix} 1 & 23.9 & 38 \\ 1 & 22.7 & 47 \\ 1 & 21.1 & 55 \\ 1 & 20.5 & 67 \\ 1 & 19.8 & 68 \\ 1 & 20.4 & 83 \\ 1 & 18.6 & 82 \end{bmatrix} \begin{bmatrix}\beta_{0} \\ \beta_{1} \\ \beta_{2}\end{bmatrix} + \begin{bmatrix} \epsilon_{1} \\ \epsilon_{2} \\ \epsilon_{3} \\ \epsilon_{4} \\ \epsilon_{5} \\ \epsilon_{6} \\ \epsilon_{7} \end{bmatrix}$$\n\n", "_____no_output_____" ], [ "Develop the $X$ 
matrix for this problem.", "_____no_output_____" ], [ "Develop the $y$ matrix for this problem.", "_____no_output_____" ], [ "Calculate the pseudo inverse of $X$.", "_____no_output_____" ], [ "Find the $\beta$ for the best-fit plane.", "_____no_output_____" ], [ "Plot the `price`, `kmpl` and `bhp` and the best-fit plane.", "_____no_output_____" ] ], [ [ "from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.scatter(bhp, kmpl, price, c='r', marker='o', s = 200)\n\nxrange = np.arange(min(bhp), max(bhp), 1)\nyrange = np.arange(min(kmpl), max(kmpl), 1)\nx, y = np.meshgrid(xrange, yrange)\nz = 524 - 22 * y + 4 * x\nax.plot_surface(x, y, z, color ='blue', alpha = 0.5)\nax.view_init(azim=60)", "_____no_output_____" ] ], [ [ "## Using a package: statsmodels\n\nRun the Ordinary Least Square regression using the statsmodels package", "_____no_output_____" ] ], [ [ "import statsmodels.api as sm", "_____no_output_____" ], [ "import pandas as pd\ndf = pd.read_csv(\"cars_sample.csv\")", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "y = df.price", "_____no_output_____" ], [ "X = df[['kmpl', 'bhp']]", "_____no_output_____" ], [ "X = sm.add_constant(X)\nX", "_____no_output_____" ], [ "model_sm = sm.OLS(y,X)", "_____no_output_____" ], [ "results = model_sm.fit()", "_____no_output_____" ], [ "results.params", "_____no_output_____" ] ], [ [ "## Using a package: sklearn\n\nRun the Ordinary Least Square regression using the sklearn package", "_____no_output_____" ] ], [ [ "from sklearn import linear_model", "_____no_output_____" ], [ "y = df.price", "_____no_output_____" ], [ "X = df[['kmpl', 'bhp']]", "_____no_output_____" ], [ "model_sklearn = linear_model.LinearRegression()", "_____no_output_____" ], [ "model_sklearn.fit(X, y)", "_____no_output_____" ], [ "model_sklearn.coef_", "_____no_output_____" ], [ "model_sklearn.intercept_", "_____no_output_____" ] ], [ [ "## Non Linear Transformation\n\nWhat happens when we do Non-Linear transforms to the 
features?\n\nWhat if we want to predict $price$ based on $kmpl$, $bhp$, $kmpl^2$ and $bhp / kmpl$\n\nThe thing to remember is that non-linear transforms of the features do not impact the Linear Regression, because the linear relationship is really about $\beta $ and not the features.\n\nWe can write this as:\n\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp + \beta_{3} kmpl^2 + \beta_{4} bhp/kmpl $$", "_____no_output_____" ] ], [ [ "df['kmpl2'] = np.power(df.kmpl,2)", "_____no_output_____" ], [ "plt.scatter(df.kmpl2, df.price, s = 150)\nplt.xlabel('kmpl2')\nplt.ylabel('price')", "_____no_output_____" ], [ "df['bhp_kmpl'] = np.divide(df.bhp, df.kmpl)", "_____no_output_____" ], [ "plt.scatter(df.bhp_kmpl, df.price, s = 150)\nplt.xlabel('bhp/kmpl')\nplt.ylabel('price')", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Exercise 2\n\nRun a linear regression:\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp + \beta_{3} kmpl^2 + \beta_{4} bhp/kmpl $$\n\nUsing Pseudo-Inverse Matrix:", "_____no_output_____" ], [ "Using sklearn package:", "_____no_output_____" ] ] ]
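The OLS solution derived in this notebook can be sanity-checked without any libraries. The sketch below is not part of the original notebook: it hard-codes the `price`/`kmpl` sample from the text and specialises $\beta = (X^TX)^{-1}X^Ty$ to the single-predictor closed form, then verifies the normal equations $X^T(y - X\beta) = 0$ on the result.

```python
# Plain-Python check of the OLS solution (illustrative, not from the notebook).
# For one predictor plus an intercept, beta = (X'X)^{-1} X'y reduces to
# beta_1 = S_xy / S_xx and beta_0 = y_bar - beta_1 * x_bar.
price = [199, 248, 302, 363, 418, 462, 523]
kmpl = [23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6]
n = len(price)

x_bar = sum(kmpl) / n
y_bar = sum(price) / n

s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(kmpl, price))
s_xx = sum((x - x_bar) ** 2 for x in kmpl)

beta_1 = s_xy / s_xx             # slope (negative: price falls as kmpl rises)
beta_0 = y_bar - beta_1 * x_bar  # intercept

# At the optimum the residuals are orthogonal to every column of X,
# which is exactly the normal equations X'(y - X beta) = 0.
residuals = [y - (beta_0 + beta_1 * x) for x, y in zip(kmpl, price)]
```

Both orthogonality conditions hold to floating-point precision, which confirms the fitted line is the least-squares minimiser.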
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
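To see concretely why non-linear feature transforms leave the model linear in $\beta$, here is a hedged plain-Python sketch (not part of the notebook; the quadratic toy data and the small `solve` helper are illustrative assumptions): it builds a design matrix with a squared feature and solves the normal equations $X^TX\beta = X^Ty$ with a tiny Gaussian elimination.

```python
# Illustrative sketch: OLS with a nonlinear feature t^2 is still linear in beta.
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small n x n system.
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2.0 - 3.0 * ti + 1.0 * ti ** 2 for ti in t]  # exact quadratic, no noise

X = [[1.0, ti, ti ** 2] for ti in t]              # nonlinear feature, linear in beta
XtX = [[sum(X[i][a] * X[i][b] for i in range(len(t))) for b in range(3)] for a in range(3)]
Xty = [sum(X[i][a] * y[i] for i in range(len(t))) for a in range(3)]
beta = solve(XtX, Xty)  # recovers (2, -3, 1) since the data fit the model exactly
```

Because the data lie exactly on the quadratic, the least-squares solution recovers the true coefficients, even though the feature map itself is nonlinear.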
cbb7c23beaee2086bc15345a630fc3ad93dfb707
126,493
ipynb
Jupyter Notebook
week04/week04_part1_sol.ipynb
MariaB-Edin/ISM
ff517d113a81e9280ed631581d78b13f0714004f
[ "CC0-1.0" ]
null
null
null
week04/week04_part1_sol.ipynb
MariaB-Edin/ISM
ff517d113a81e9280ed631581d78b13f0714004f
[ "CC0-1.0" ]
null
null
null
week04/week04_part1_sol.ipynb
MariaB-Edin/ISM
ff517d113a81e9280ed631581d78b13f0714004f
[ "CC0-1.0" ]
1
2020-10-01T17:18:05.000Z
2020-10-01T17:18:05.000Z
59.30286
32,052
0.659871
[ [ [ "# ISM Lecture 3 continued in week 04 Part 1 Solutions\n\nThis content is authored by Maria Boutchkova for use in the University of Edinburgh Business School Investment and Securities Markets course in Autumn 2020. \n\nMake sure to have watched the videos preceding this Notebook and have covered the slides. Detailed explanations in the assigned textbook chapters.\n\nThis lesson covers:\n\n* Value-weighted Portfolio risk and return of N assets\n\nThe first computational cell below (with In \[ \] in front) contains the solution. Go over the command lines, make sure they make sense to you, click inside the cell, it should become surrounded by a green rectangle, press Esc - the rectangle will become blue, now press Shift+Enter - this will execute the cell and produce the results beneath it.\n\nTo remove all output in the notebook and start again, go to the Kernel tab above, select Restart and Clear Output.\n\nIn this notebook we use the functionality of the pandas library. If you want to explore its full documentation, see [here](https://pandas.pydata.org/pandas-docs/stable/index.html).\n", "_____no_output_____" ], [ "## Input data\n\nNow we are going to import two csv files with data on the S&P500 index: one of monthly prices (same as last week) and another one of market capitalizations (when a company has only one class of common stock and no preferred stock, mkt cap = stock price * number of shares outstanding).\n\nWe shall form a value-weighted portfolio and compute its risk and return.\n\nIn this example we have monthly adjusted closing prices of the stocks in the S&P500 index from December 2017 until end of September 2020 (34 monthly observations per stock). 
The original data is arranged with stocks down the rows and dates along the columns.", "_____no_output_____" ] ], [ [ "# read the prices of the portfolio components into a pandas DataFrame\nimport pandas as pd\nprices_orig = pd.read_csv(\"SnP500_monthly.csv\", index_col=0)\n# transpose the data so that the stocks are in the columns and the dates are along the rows\nprices = prices_orig.transpose()\nprices.head(5)", "_____no_output_____" ] ], [ [ "## Solved Problem 1: Forming a value-weighted portfolio and computing its average return and st. dev.\n\nLet us form a value-weighted portfolio of the US stocks and compute its expected return, variance and st. dev.\n\nThe weights will be equal to the mkt cap of each stock over the total of the mkt cap of all stocks.", "_____no_output_____" ] ], [ [ "mkt_cap = pd.read_csv(\"SnP500_mkt_cap.csv\", index_col=0)\nweights_v = mkt_cap / mkt_cap.sum()\n# name the column containing the weights to be called weights\nweights_v.columns = ['weights']\nweights_v.head(5)", "_____no_output_____" ] ], [ [ "Now we need to compute the portfolio average return and variance. 
For this we need the average returns and variances of the individual securities.", "_____no_output_____" ] ], [ [ "returns = prices / prices.shift(1) - 1\n# drop NaN-s\nreturns = returns[1:]\nreturns.head(5)", "_____no_output_____" ], [ "means = returns.mean()\nmeans.head(5)", "_____no_output_____" ], [ "vars = returns.var()\nvars.head(5)", "_____no_output_____" ], [ "port_ave_ret = means.mul(weights_v.weights).sum()\nport_ave_ret", "_____no_output_____" ], [ "# same thing using the numpy library instead\nimport numpy as np\nport_exp_ret = np.sum(returns.mean()*weights_v.weights)\nport_exp_ret", "_____no_output_____" ] ], [ [ "For the portfolio variance we are going to use the numpy matrix multiplication function dot().\n\nRecall that matrix multiplication is element wise multiplication of each row and each column of two matrices with corresponding dimensions and then summing over these products to arrive at each element of the resulting matrix.\n\nAnd this is precisely what the formula for a portfolio variance is except all the products are also multiplied by their respective portfolio weights.", "_____no_output_____" ] ], [ [ "cov = returns.cov()\ncov.head(5)", "_____no_output_____" ], [ "port_var = np.dot(weights_v.weights, np.dot(cov, weights_v.weights))\nport_var", "_____no_output_____" ], [ "port_std = port_var**(1/2)\nport_std", "_____no_output_____" ] ], [ [ "## Practice Problem 1: Forming a value-weighted portfolio and computing its average return and st. dev.\n\nLet us form the value-weighted portfolio of the UK stocks and compute its expected return, variance and st. dev.\n\nThe weights will be equal to the mkt cap of each stock over the total of the mkt cap of all stocks. The weights vector of the UK stocks is formed for you: weights_v_uk.\n\nYour task is to 1) compute the average return of the value-weighted UK portfolio; 2) compute the variance of the value-weighted UK portfolio and 3) compute the standard deviation of the value-weighted UK portfolio. 
Name everything _uk.", "_____no_output_____" ] ], [ [ "prices_orig_uk = pd.read_csv(\"FTSE100_250_monthly.csv\", index_col=0)\n# transpose the data so that the stocks are in the columns and the dates are along the rows\nprices_uk = prices_orig_uk.transpose()\nprices_uk.head(5)", "_____no_output_____" ], [ "mkt_cap = pd.read_csv(\"FTSE100_250_mkt_cap.csv\", index_col=0)\nweights_v_uk = mkt_cap / mkt_cap.sum()\n# name the column containing the weights to be called weights\nweights_v_uk.columns = ['weights']\nweights_v_uk.head(5)", "_____no_output_____" ] ], [ [ "Now you practice computing returns, average returns, variances and the var-cov matrix for the component stocks and finally the portfolio average return, variance and st. dev.", "_____no_output_____" ] ], [ [ "returns_uk = prices_uk / prices_uk.shift(1) - 1\n# drop NaN-s\nreturns_uk = returns_uk[1:]\nreturns_uk.head(5)", "_____no_output_____" ], [ "means_uk = returns_uk.mean()\nmeans_uk.head(5)", "_____no_output_____" ], [ "vars_uk = returns_uk.var()\nvars_uk.head(5)", "_____no_output_____" ], [ "cov_uk = returns_uk.cov()\ncov_uk.head(5)", "_____no_output_____" ], [ "port_ave_ret_uk = means_uk.mul(weights_v_uk.weights).sum()\nport_ave_ret_uk", "_____no_output_____" ], [ "port_var_uk = np.dot(weights_v_uk.weights, np.dot(cov_uk, weights_v_uk.weights))\nport_var_uk", "_____no_output_____" ], [ "port_std_uk = port_var_uk**(1/2)\nport_std_uk", "_____no_output_____" ] ], [ [ "## Solved Problem 2: Plotting different portfolios on the expected return - st. dev. coordinate system.\nRecall that we use the average return as an estimate of the expected return.\n\nLet us graph the value-weighted US portfolio we computed just now and the equally-weighted one from last week. 
First we recompute quickly the equally-weighted one here and name the new outputs _eq.", "_____no_output_____" ] ], [ [ "# declare a list of 500 elements each equal to 1/500\nlst = [1/500]*500\n# make it into a dataframe\nweights_eq = pd.DataFrame(lst)\n# name the column containing the weights to be called weights\nweights_eq.columns = ['weights']\n# do the transposing gymnastics\naux = weights_eq.T\n# take the stock id-s from the returns dataframe and put them in a list called lst\nlst = list(returns.columns.values)\n# assign the id-s in lst to be the header of the weights\naux.columns = lst\nweights_eq = aux.T\n# calculate the equally-weighted portfolio average return, var and st. dev.\nport_ave_ret_eq = means.mul(weights_eq.weights).sum()\nport_var_eq = np.dot(weights_eq.weights, np.dot(cov, weights_eq.weights))\nport_std_eq = port_var_eq**(1/2)\nport_std_eq", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.scatter(port_std_eq, port_ave_ret_eq, c='b', marker='x', label='Equally-weighted')\nplt.scatter(port_std, port_ave_ret, c='r', marker='+', label='Value-weighted')\nplt.legend(loc='upper right')\n\n# add axes labels\nplt.xlabel('Sigma')\nplt.ylabel('E(R)')\n\n# add title\nplt.title('S&P500 Portfolios');\n\n# control axes ranges\nplt.xlim(0, .1)\nplt.ylim(0, .1)\n\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Practice Problem 2: Plotting different portfolios on the expected return - st. dev. coordinate system.\n\nGraph the value-weighted UK portfolio you computed just now and the equally-weighted one from last week. 
First we recompute quickly the equally-weighted one here and name the new outputs _uk_eq.", "_____no_output_____" ] ], [ [ "# declare a list of 350 elements each equal to 1/350\nlst = [1/350]*350\n# make it into a dataframe\nweights_eq_uk = pd.DataFrame(lst)\n# name the column containing the weights to be called weights\nweights_eq_uk.columns = ['weights']\n# do the transposing gymnastics\naux = weights_eq_uk.T\nlst = list(returns_uk.columns.values)\naux.columns = lst\nweights_eq_uk = aux.T\n# calculate the equally-weighted portfolio average return, var and st. dev.\nport_ave_ret_uk_eq = means_uk.mul(weights_eq_uk.weights).sum()\nport_var_uk_eq = np.dot(weights_eq_uk.weights, np.dot(cov_uk, weights_eq_uk.weights))\nport_std_uk_eq = port_var_uk_eq**(1/2)\nport_std_uk_eq", "_____no_output_____" ], [ "plt.scatter(port_std_uk_eq, port_ave_ret_uk_eq, c='b', marker='x', label='Equally-weighted')\nplt.scatter(port_std_uk, port_ave_ret_uk, c='r', marker='+', label='Value-weighted')\nplt.legend(loc='upper right')\n\n# add axes labels\nplt.xlabel('Sigma')\nplt.ylabel('E(R)')\n\n# add title\nplt.title('FTSE100&250 Portfolios');\n\n# control axes ranges\nplt.xlim(0, .1)\nplt.ylim(0, .1)\n\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Extra\n\nThis is how we can graph both the US and UK index constituents.", "_____no_output_____" ]
[ "markdown", "code", "markdown", "code", "raw", "code", "raw", "code", "markdown", "code", "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "raw" ], [ "code", "code", "code", "code", "code" ], [ "raw" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
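The quadratic form $w^\top \Sigma w$ computed with `np.dot` in the notebook above can be unpacked by hand. The following sketch is not part of the notebook (the two-asset weights and covariance matrix are made-up toy numbers): it shows that the matrix expression agrees with the textbook two-asset formula $w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2 w_1 w_2 \sigma_{12}$.

```python
# Toy two-asset portfolio variance: matrix form vs. expanded formula.
w = [0.6, 0.4]          # hypothetical portfolio weights, summing to 1
sigma = [[0.04, 0.01],  # toy covariance matrix: variances on the diagonal,
         [0.01, 0.09]]  # the covariance on the off-diagonal

# quadratic form w' Sigma w, written out with plain loops
port_var = sum(w[i] * sigma[i][j] * w[j] for i in range(2) for j in range(2))

# expanded two-asset textbook formula
expanded = (w[0] ** 2 * sigma[0][0] + w[1] ** 2 * sigma[1][1]
            + 2 * w[0] * w[1] * sigma[0][1])

port_std = port_var ** 0.5  # portfolio st. dev. is the square root of the variance
```

The two expressions coincide; with N assets the expansion has N variances and N(N-1)/2 distinct covariance terms, which is why the matrix form is the practical one.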
cbb7c97e4b1ef39a9ca386d0cecda84c9f346c48
124,466
ipynb
Jupyter Notebook
mpfi/probability blog post.ipynb
mathnathan/notebooks
63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28
[ "MIT" ]
1
2019-12-04T11:04:45.000Z
2019-12-04T11:04:45.000Z
mpfi/probability blog post.ipynb
mathnathan/notebooks
63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28
[ "MIT" ]
null
null
null
mpfi/probability blog post.ipynb
mathnathan/notebooks
63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28
[ "MIT" ]
null
null
null
339.144414
61,314
0.91302
[ [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Introduction\n\nMachine learning literature makes heavy use of probabilistic graphical models\nand Bayesian statistics. In fact, state of the art (SOTA) architectures, such as\n[variational autoencoders][vae-blog] (VAE) or [generative adversarial\nnetworks][gan-blog] (GAN), are intrinsically stochastic by nature. To\nfully understand research in this field not only do we need a broad\nknowledge of mathematics, probability, and optimization but we somehow need\nintuition about how these concepts are applied to real world problems. For\nexample, one of the most common applications of deep learning techniques is\nvision. We may want to classify images or generate new ones. Most SOTA\ntechniques pose these problems in a probabilistic framework. We frequently see\nthings like $p(\mathbf{x}|\mathbf{z})$ where $\mathbf{x}$ is an image and\n$\mathbf{z}$ is a latent variable. What do we mean by the probability of an\nimage? What is a latent variable, and why is it necessary[^Bishop2006] to pose\nthe problems this way?\n \nShort answer, it is necessary due to the inherent uncertainty of our universe.\nIn this case, uncertainty in image acquisition can be introduced via many\nsources, such as the recording apparatus, the finite precision of our\nmeasurements, as well as the intrinsic stochasticity of the process being\nmeasured. Perhaps the most important source of uncertainty we will consider is\ndue to there being sources of variability that are themselves unobserved.\nProbability theory provides us with a framework to reason in the presence of\nuncertainty and information theory allows us to quantify uncertainty. As we \nalluded to earlier, the field of machine learning makes heavy use of both, and\nthis is no coincidence.\n\n\n## Representations\n\nHow do we describe a face? 
The word \"face\" is a symbol and this symbol means\ndifferent things to different people. Yet, there is enough commonality between\nour interpretations that we are able to effectively communicate with one\nanother using the word. How is that? What are the underlying features of faces\nthat we all hold in common? Why is a simple smiley face clip art so obviously\nperceived as a face? To make it more concrete, why are two simple ellipses\ndecorated underneath by a short curve so clearly a face, while an eye lid,\nlower lip, one ear and a nostril, not? \n\n\n**Insert Image of Faces**\n*Left: Most would likely agree, this is clearly a face. Middle:\nWith nearly all of the details removed, a mere two circles and\ncurve are enough to create what the author still recognizes\nas a face. Right: Does this look like a face to you? An ear, \nnostril, eyelid, and lip do not seem to convey a face as clearly\nas the eyes and the mouth do. We will quantify this demonstration\nshortly.*\n\nFeatures, or representations, are built on the idea that characteristics of the\nsymbol \"face\" are not a property of any one face. Rather, they only arise from\nthe myriad of things we use the symbol to represent. In other words, a\nparticular face is not ascribed meaning by the word \"face\" - the word \"face\"\nderives meaning from the many faces it represents. This suggests that facial\ncharacteristics can be described through the statistical properties of all\nfaces. Loosely speaking, these underlying statistical characteristics are what\nthe machine learning field often calls latent variables.\n\n\n## Probability of an Image\n\nMost images are contaminated with noise that must be addressed. At the\nhighest level, we have noise being added to the data by the imaging device. The\nnext level of uncertainty comes as a consequence of discretization.\nImages in reality are continuous but in the process of imaging we only measure\ncertain points along the face. 
Consider for example a military satellite\ntracking a vehicle. If one wishes to predict the future location of the van,\nthe prediction is limited to be within one of the discrete cells that make up\nits measurements. However, the true location of the van could be anywhere\nwithin that grid cell. There is also intrinsic stochasticity at the atomic\nlevel that we ignore. The fluctuations taking place at that scale are assumed\nto be averaged out in our observations.\n\nThe unobserved sources of variability will be our primary focus. Before we\naddress that, let us lay down some preliminary concepts. We are going to assume\nthat there exists some true unknown process that determines what faces look\nlike. A dataset of faces can then be considered as a sample of this process at \nvarious points throughout its life. This suggests that these snapshots are\noutputs of the underlying data generating process. Considering the many\nsources of uncertainty outlined above, it is natural to describe this process\nas a probability distribution. There will be many ways to interpret the data as\na probability, but we will begin by considering any one image to be the result\nof a data generating distribution, $P_{data}(\mathbf{x})$. Here $\mathbf{x}$ is considered to be\nan image of a face with $n$ pixels. So $P_{data}$ is a joint distribution over\neach pixel of the frame with a probability density function (pdf),\n$p_{data}(x_1,x_2,\dots,x_n)$.\n\nTo build intuition about what $p_{data}(\mathbf{x})$ is and how it relates to\nthe assumed data generating process, we will explore a simple example. Take an\nimage with only 2 pixels... [$x_1$,$x_2$] where both $x_1$ and $x_2$ are in\n[0,1]. Each image can be considered as a two dimensional point, in\n$\mathbb{R}^2$. All possible images would occupy a square in the 2 dimensional\nplane. An example of what this might look like can be seen in Figure\n\ref{fig:images_in_2dspace} on page \pageref{fig:images_in_2dspace}. 
", "_____no_output_____" ] ], [ [ "x1 = np.random.uniform(size=500)\nx2 = np.random.uniform(size=500)\nfig = plt.figure();\nax = fig.add_subplot(1,1,1);\nax.scatter(x1,x2, edgecolor='black', s=80);\nax.grid();\nax.set_axisbelow(True);\nax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25)\nax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('images_in_2dspace.pdf')", "_____no_output_____" ] ], [ [ "Any one point inside the unit square would represent an image. For example the image associated with the point $(0.25,0.85)$ is shown below.", "_____no_output_____" ] ], [ [ "im = [(0.25, 0.85)]\nplt.imshow(im, cmap='gray',vmin=0,vmax=1)\nplt.tick_params(\n    axis='both',       # changes apply to both axes\n    which='both',      # both major and minor ticks are affected\n    bottom='off',      # ticks along the bottom edge are off\n    top='off',         # ticks along the top edge are off\n    left='off',\n    right='off'\n)\nplt.xticks([])\nplt.yticks([])\nplt.xlabel('Pixel 1 = 0.25    Pixel 2 = 0.85')\nplt.savefig('sample_2dspace_image.pdf')", "_____no_output_____" ] ], [ [ "Now consider the case where there is some \nprocess correlating the two variables. This \nwould be similar to there being some rules behind\nthe structure of faces. We know that this must be\nthe case because if it weren't then faces would\nbe created randomly and we would not see the \npatterns that we do. In \nthis case, the pixels would be correlated in \nsome manner due to the mechanism driving the\nconstruction of faces. In this simple case, \nlet's consider a direct correlation of the \nform $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$ \nwhere $\epsilon$ is a noise term coming from\na low variability normal distribution \n$\epsilon \sim N(0,\frac{1}{10})$. 
We see \nin Figure \\ref{fig:structured_images_in_2dspace}\non page \\pageref{fig:structured_images_in_2dspace}\nthat in this case, the images plotted\nin two dimensions resulting from this \nrelationship form a distinct pattern.", "_____no_output_____" ] ], [ [ "x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5\nx2 = np.linspace(0,1,200)\neps = np.random.normal(scale=0.1, size=200)\nfig = plt.figure();\nax = fig.add_subplot(1,1,1);\nax.scatter(x2,x1(x2)+eps, edgecolor='black', s=80);\nax.grid();\nax.set_axisbelow(True);\nax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25); plt.axes().set_aspect('equal')\nax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('structured_images_in_2dspace.pdf')", "_____no_output_____" ] ], [ [ "We will refer to the structure suggested by \nthe two dimensional points as the 'manifold'.\nThis is a common practice when analyzing images.\nA 28 by 28 dimensional image will be a point in\n784 dimensional space. If we are examining \nimages with structure, various images of the\nnumber 2 for example, then it turns out that \nthese images will form a manifold in 784 \ndimensional space. In most cases, as is the \ncase in our contrived example, this manifold \nexists in a lower dimensional space than that\nof the images themselves. The goal is to 'learn'\nthis manifold. In our simple case we can describe\nthe manifold as a function of only 1 variable \n$$f(t) = <t,\\frac{1}{2} \\cos(2\\pi t)+\\frac{1}{2}>$$ \nThis is what we would call the underlying data \ngenerating process. In practice we usually \ndescribe the manifold in terms of a probability\ndistribution. We will refer to the data \ngenerating distribution in our example as \n$p_{test}(x_1, x_2)$. Why did we choose a \nprobability to describe the manifold created \nby the data generating process? How might this\nprobability be interpreted?\n\nLearning the actual distribution turns out to \nbe a difficult task. 
Here we will use a\ncommon non parametric technique for describing\ndistributions, the histogram. Looking at a \nhistogram of the images, or two dimensional points,\nwill give us insight into the structure of the \ndistribution from which they came. Notice here \nthough that the histogram merely describes the \ndistribution, we do not know what it is.\n", "_____no_output_____" ] ], [ [ "from matplotlib.colors import LogNorm\nx2 = np.random.uniform(size=100000)\neps = np.random.normal(scale=0.1, size=100000)\nhist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())\nplt.xlim(0.0,1.0); plt.ylim(-0.3,1.3); plt.axes().set_aspect('equal')\nplt.xlabel('Pixel 2'); plt.ylabel('Pixel 1')\nplt.colorbar();\nplt.savefig('histogram_of_structured_images.pdf')", "_____no_output_____" ] ], [ [ "As our intuition might have suggested, the data\ngenerating distribution looks very similar to \nthe structure suggested by the two dimensional\nimages plotted above. There is high probability\nvery near the actual curve \n$x_1 = \\frac{1}{2} \\cos(2\\pi x_2)+\\frac{1}{2}$ \nand low probability as we move away. We imposed\nthe uncertainty via the Gaussian noise term \n$\\epsilon$. However, in real data the \nuncertainty can be due to the myriad sources\noutlined above. In these cases a complex \nprobability distribution isn't an arbitrary \nchoice for representing the data, it becomes \nnecessary. \n\nHopefully we're now beginning to understand how\nto interpret $p_{test}(x_1, x_2)$. One might say\n$p_{test}$ measures how likely a certain \nconfiguration of $x_1$ and $x_2$ is to have \narisen from the data generating process $f(t)$.\nTherefore if one can learn the data generating\ndistribution, then they have a descriptive\nmeasure of the true underlying data generating\nprocess. This intuition extends to the \n$p_{data}(x)$ for faces that was presented \nabove. A sample from the LFW dataset is shown in \nFigure \\ref{fig:Agnelo_Queiroz_0001} on page\n\\pageref{fig:Agnelo_Queiroz_0001}. 
\n\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
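The data generating process $x_1 = \frac{1}{2}\cos(2\pi x_2) + \frac{1}{2} + \epsilon$ discussed above can be sampled with the standard library alone. The following sketch is not from the post (assumptions: `random`/`math` stand in for the notebook's numpy calls, and the sample size and seed are arbitrary); it confirms that draws concentrate within roughly $3\sigma$ of the noise-free curve, which is exactly the structure the histogram reveals.

```python
# Sampling the assumed data generating process and measuring how tightly
# the points hug the underlying manifold (the cosine curve).
import math
import random

random.seed(0)
samples = []
for _ in range(10_000):
    x2 = random.random()                 # x2 ~ Uniform(0, 1)
    eps = random.gauss(0.0, 0.1)         # eps ~ N(0, 0.1)
    x1 = 0.5 * math.cos(2 * math.pi * x2) + 0.5 + eps
    samples.append((x1, x2))

# distance of each sampled point from the noise-free curve
dist = [abs(x1 - (0.5 * math.cos(2 * math.pi * x2) + 0.5))
        for x1, x2 in samples]
within_3sigma = sum(d < 0.3 for d in dist) / len(dist)
```

Nearly all points fall within three noise standard deviations of the curve, so high probability density sits near the manifold and decays away from it, matching the histogram in the post.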
cbb7d19439a13d5f7b463770cd274e70273a2c15
69,536
ipynb
Jupyter Notebook
cats and dogs sound classification/Untitled1.ipynb
adibyte95/ML-Competitions
107171c6c1e395e7d82d6bddfee47f94c2c822a1
[ "MIT" ]
null
null
null
cats and dogs sound classification/Untitled1.ipynb
adibyte95/ML-Competitions
107171c6c1e395e7d82d6bddfee47f94c2c822a1
[ "MIT" ]
null
null
null
cats and dogs sound classification/Untitled1.ipynb
adibyte95/ML-Competitions
107171c6c1e395e7d82d6bddfee47f94c2c822a1
[ "MIT" ]
null
null
null
88.243655
21,248
0.820942
[ [ [ "import numpy as np\nimport utils\nimport librosa\nimport keras\nfrom keras.utils import to_categorical\nfrom keras.models import load_model\nfrom keras.models import Sequential\nfrom keras.layers import MaxPooling2D,Conv2D,Flatten,Activation,Dense,Dropout,BatchNormalization\nfrom keras.optimizers import Adamax\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import preprocessing\nimport os # Manipulate files\nfrom matplotlib import pyplot as plt\nfrom IPython.display import clear_output", "_____no_output_____" ], [ "# List the wav files\nROOT_DIR = './input/cats_dogs/'\nX_path = os.listdir(ROOT_DIR)\n# changing the values into 1 and 0\ny = [0 if 'cat' in f else 1 for f in X_path] # change y to int values\n\n# Split train and test\ntrain_input, test_input, train_target, test_target = train_test_split(X_path, y, test_size=0.10)\n", "_____no_output_____" ], [ "#examples\nprint(train_input[0])\nprint(test_input[0])\n\nprint(train_target[0])\nprint(test_target[0])", "cat_48.wav\ncat_110.wav\n0\n0\n" ], [ "def extract_feature(file_name):\n X, sample_rate = librosa.load(file_name,duration=5)\n stft = np.abs(librosa.stft(X))\n mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T,axis=0)\n chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T,axis=0)\n mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T,axis=0)\n contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T,axis=0)\n tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X),\n sr=sample_rate).T,axis=0)\n return mfccs,chroma,mel,contrast,tonnetz", "_____no_output_____" ], [ "i = 0\nX_train = []\nwhile i<len(train_input):\n print('processing file: ',i)\n filename = 'input/cats_dogs/' + train_input[i]\n mfccs, chroma, mel, contrast,tonnetz = extract_feature(filename)\n features = []\n features.append(np.mean(mfccs))\n features.append(np.mean(chroma))\n features.append(np.mean(mel))\n 
features.append(np.mean(contrast))\n features.append(np.mean(tonnetz))\n X_train.append(features)\n i = i +1", "processing file: 0\nprocessing file: 1\nprocessing file: 2\nprocessing file: 3\nprocessing file: 4\nprocessing file: 5\nprocessing file: 6\nprocessing file: 7\nprocessing file: 8\nprocessing file: 9\nprocessing file: 10\nprocessing file: 11\nprocessing file: 12\nprocessing file: 13\nprocessing file: 14\nprocessing file: 15\nprocessing file: 16\nprocessing file: 17\nprocessing file: 18\nprocessing file: 19\nprocessing file: 20\nprocessing file: 21\nprocessing file: 22\nprocessing file: 23\nprocessing file: 24\nprocessing file: 25\nprocessing file: 26\nprocessing file: 27\nprocessing file: 28\nprocessing file: 29\nprocessing file: 30\nprocessing file: 31\nprocessing file: 32\nprocessing file: 33\nprocessing file: 34\nprocessing file: 35\nprocessing file: 36\nprocessing file: 37\nprocessing file: 38\nprocessing file: 39\nprocessing file: 40\nprocessing file: 41\nprocessing file: 42\nprocessing file: 43\nprocessing file: 44\nprocessing file: 45\nprocessing file: 46\nprocessing file: 47\nprocessing file: 48\nprocessing file: 49\nprocessing file: 50\nprocessing file: 51\nprocessing file: 52\nprocessing file: 53\nprocessing file: 54\nprocessing file: 55\nprocessing file: 56\nprocessing file: 57\nprocessing file: 58\nprocessing file: 59\nprocessing file: 60\nprocessing file: 61\nprocessing file: 62\nprocessing file: 63\nprocessing file: 64\nprocessing file: 65\nprocessing file: 66\nprocessing file: 67\nprocessing file: 68\nprocessing file: 69\nprocessing file: 70\nprocessing file: 71\nprocessing file: 72\nprocessing file: 73\nprocessing file: 74\nprocessing file: 75\nprocessing file: 76\nprocessing file: 77\nprocessing file: 78\nprocessing file: 79\nprocessing file: 80\nprocessing file: 81\nprocessing file: 82\nprocessing file: 83\nprocessing file: 84\nprocessing file: 85\nprocessing file: 86\nprocessing file: 87\nprocessing file: 88\nprocessing file: 
89\nprocessing file: 90\nprocessing file: 91\nprocessing file: 92\nprocessing file: 93\nprocessing file: 94\nprocessing file: 95\nprocessing file: 96\nprocessing file: 97\nprocessing file: 98\nprocessing file: 99\nprocessing file: 100\nprocessing file: 101\nprocessing file: 102\nprocessing file: 103\nprocessing file: 104\n" ], [ "# converting into an numpy array\nX_train = np.asarray(X_train)\ny_train = np.asarray(train_target)", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "print(y_train.shape)", "(249,)\n" ], [ "model = Sequential()\nmodel.add(Dense(500,input_shape = (5,)))\nmodel.add(Activation('relu'))\nmodel.add(Dense(200))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(100))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\nmodel.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_21 (Dense) (None, 500) 3000 \n_________________________________________________________________\nactivation_21 (Activation) (None, 500) 0 \n_________________________________________________________________\ndense_22 (Dense) (None, 200) 100200 \n_________________________________________________________________\nactivation_22 (Activation) (None, 200) 0 \n_________________________________________________________________\ndense_23 (Dense) (None, 100) 20100 \n_________________________________________________________________\nactivation_23 (Activation) (None, 100) 0 \n_________________________________________________________________\ndense_24 (Dense) (None, 1) 101 \n_________________________________________________________________\nactivation_24 (Activation) (None, 1) 0 \n=================================================================\nTotal params: 123,401\nTrainable params: 123,401\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ 
"model.compile(optimizer = 'adam', metrics=['accuracy'], loss = 'binary_crossentropy')", "_____no_output_____" ], [ "#plot\nclass PlotLosses(keras.callbacks.Callback):\n def on_train_begin(self, logs={}):\n self.i = 0\n self.x = []\n self.losses = []\n self.val_losses = []\n \n self.fig = plt.figure()\n \n self.logs = []\n\n def on_epoch_end(self, epoch, logs={}):\n \n self.logs.append(logs)\n self.x.append(self.i)\n self.losses.append(logs.get('loss'))\n self.val_losses.append(logs.get('val_loss'))\n self.i += 1\n \n clear_output(wait=True)\n plt.plot(self.x, self.losses, label=\"loss\")\n plt.plot(self.x, self.val_losses, label=\"val_loss\")\n plt.legend()\n plt.show();\n \nplot_losses = PlotLosses()", "_____no_output_____" ], [ "model.fit(X_train,y_train,\n epochs =500,\n callbacks = [plot_losses],\n verbose= 2)", "_____no_output_____" ], [ "i = 0\nX_test = []\nwhile i<len(test_input):\n print('processing file: ',i)\n filename = 'input/cats_dogs/' + test_input[i]\n mfccs, chroma, mel, contrast,tonnetz = extract_feature(filename)\n features = []\n features.append(np.mean(mfccs))\n features.append(np.mean(chroma))\n features.append(np.mean(mel))\n features.append(np.mean(contrast))\n features.append(np.mean(tonnetz))\n X_test.append(features)\n i = i +1\nX_test = np.asarray(X_test)", "processing file: 0\nprocessing file: 1\nprocessing file: 2\nprocessing file: 3\nprocessing file: 4\nprocessing file: 5\nprocessing file: 6\nprocessing file: 7\nprocessing file: 8\nprocessing file: 9\nprocessing file: 10\nprocessing file: 11\nprocessing file: 12\nprocessing file: 13\nprocessing file: 14\nprocessing file: 15\nprocessing file: 16\nprocessing file: 17\nprocessing file: 18\nprocessing file: 19\nprocessing file: 20\nprocessing file: 21\nprocessing file: 22\nprocessing file: 23\nprocessing file: 24\nprocessing file: 25\nprocessing file: 26\nprocessing file: 27\n" ], [ "predicted = model.predict(X_test)", "_____no_output_____" ], [ "i = 0\nwhile i<len(predicted):\n if 
predicted[i] >=.5:\n predicted[i] = 1\n else:\n predicted[i] = 0\n i = i +1\ny_test = np.asarray(test_target) # compare predictions against the true test labels", "_____no_output_____" ], [ "predicted = predicted.reshape([-1])\nprint(predicted.shape)", "(28,)\n" ], [ "#plotting the confusion matrix\nimport itertools\nfrom sklearn.metrics import confusion_matrix\n\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n\n# Compute confusion matrix\ncnf_matrix = confusion_matrix(y_test, predicted)\nnp.set_printoptions(precision=2)\n\n# Plot non-normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=['cat','dog'],\n title='Confusion matrix, without normalization')\n\n# Plot normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=['cat','dog'], normalize=True,\n title='Normalized confusion matrix')\n\nplt.show()", "Confusion matrix, without normalization\n[[18 0]\n [ 0 10]]\nNormalized confusion matrix\n[[ 1. 0.]\n [ 0. 1.]]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb7d3ff59a6108d11a108d6be915a72f91c32aa
10,581
ipynb
Jupyter Notebook
Lesson-06_2_Fancy_Softmax_Classification.ipynb
Tony-Khor/PyTorch-From-Zero-to-All
d8f9b6d81fe390dee93a887f342dc818553e61b3
[ "MIT" ]
null
null
null
Lesson-06_2_Fancy_Softmax_Classification.ipynb
Tony-Khor/PyTorch-From-Zero-to-All
d8f9b6d81fe390dee93a887f342dc818553e61b3
[ "MIT" ]
null
null
null
Lesson-06_2_Fancy_Softmax_Classification.ipynb
Tony-Khor/PyTorch-From-Zero-to-All
d8f9b6d81fe390dee93a887f342dc818553e61b3
[ "MIT" ]
null
null
null
22.512766
92
0.470371
[ [ [ "# Lab 6-2: Fancy Softmax Classification", "_____no_output_____" ], [ "Author: Seungjae Lee (이승재)", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim", "_____no_output_____" ], [ "# For reproducibility\ntorch.manual_seed(1)", "_____no_output_____" ] ], [ [ "## Cross-entropy Loss with `torch.nn.functional`", "_____no_output_____" ], [ "PyTorch has the `F.log_softmax()` function.", "_____no_output_____" ] ], [ [ "z = torch.rand(3, 5, requires_grad=True)\nhypothesis = F.softmax(z, dim=1)\ny = torch.randint(5, (3,)).long()\ny_one_hot = torch.zeros_like(hypothesis)\ny_one_hot.scatter_(1, y.unsqueeze(1), 1)", "_____no_output_____" ], [ "# Low level\ntorch.log(F.softmax(z, dim=1))", "_____no_output_____" ], [ "# High level\nF.log_softmax(z, dim=1)", "_____no_output_____" ] ], [ [ "PyTorch also has the `F.nll_loss()` function that computes the negative log likelihood.", "_____no_output_____" ] ], [ [ "# Low level\n(y_one_hot * -torch.log(F.softmax(z, dim=1))).sum(dim=1).mean()", "_____no_output_____" ], [ "# High level\nF.nll_loss(F.log_softmax(z, dim=1), y.long())", "_____no_output_____" ] ], [ [ "PyTorch also has `F.cross_entropy` that combines `F.log_softmax()` and `F.nll_loss()`.", "_____no_output_____" ] ], [ [ "F.cross_entropy(z, y)", "_____no_output_____" ] ], [ [ "## Data", "_____no_output_____" ] ], [ [ "xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32)", "_____no_output_____" ], [ "x_train = torch.FloatTensor(xy[:, 0:-1])\ny_train = torch.LongTensor(xy[:, [-1]]).squeeze()", "_____no_output_____" ], [ "print(x_train.shape) # x_train shape\nprint(len(x_train)) # length of x_train\nprint(x_train[:5]) # first five rows", "torch.Size([101, 16])\n101\ntensor([[1., 0., 0., 1., 0., 0., 1., 1., 1., 1., 0., 0., 4., 0., 0., 1.],\n [1., 0., 0., 1., 0., 0., 0., 1., 1., 1., 0., 0., 4., 1., 0., 1.],\n [0., 0., 1., 0., 0., 1., 1., 1., 1., 
0., 0., 1., 0., 1., 0., 0.],\n [1., 0., 0., 1., 0., 0., 1., 1., 1., 1., 0., 0., 4., 0., 0., 1.],\n [1., 0., 0., 1., 0., 0., 1., 1., 1., 1., 0., 0., 4., 1., 0., 1.]])\n" ], [ "print(y_train.shape) # y_train shape\nprint(len(y_train)) # length of y_train\nprint(y_train[:5]) # first five entries", "torch.Size([101])\n101\ntensor([0, 0, 3, 0, 0])\n" ], [ "nb_classes = 7\ny_one_hot = torch.zeros((len(y_train), nb_classes))\ny_one_hot = y_one_hot.scatter(1, y_train.unsqueeze(1), 1)", "_____no_output_____" ] ], [ [ "## Training with `F.cross_entropy`", "_____no_output_____" ] ], [ [ "# Initialize the model\nW = torch.zeros((16, 7), requires_grad=True)\nb = torch.zeros(1, requires_grad=True)\n# Set up the optimizer\noptimizer = optim.SGD([W, b], lr=0.1)\n\nnb_epochs = 1000\nfor epoch in range(nb_epochs + 1):\n\n # Compute the cost (2)\n z = x_train.matmul(W) + b # or .mm or @\n cost = F.cross_entropy(z, y_train)\n\n # Improve H(x) using the cost\n optimizer.zero_grad()\n cost.backward()\n optimizer.step()\n\n # Print a log every 100 epochs\n if epoch % 100 == 0:\n print('Epoch {:4d}/{} Cost: {:.6f}'.format(\n epoch, nb_epochs, cost.item()\n ))", "Epoch 0/1000 Cost: 1.945909\nEpoch 100/1000 Cost: 0.471836\nEpoch 200/1000 Cost: 0.326327\nEpoch 300/1000 Cost: 0.257839\nEpoch 400/1000 Cost: 0.215762\nEpoch 500/1000 Cost: 0.186603\nEpoch 600/1000 Cost: 0.164898\nEpoch 700/1000 Cost: 0.147955\nEpoch 800/1000 Cost: 0.134279\nEpoch 900/1000 Cost: 0.122962\nEpoch 1000/1000 Cost: 0.113422\n" ] ], [ [ "## High-level Implementation with `nn.Module`", "_____no_output_____" ] ], [ [ "class SoftmaxClassifierModel(nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = nn.Linear(16, 7)\n def forward(self, x):\n return self.linear(x)", "_____no_output_____" ], [ "model = SoftmaxClassifierModel()", "_____no_output_____" ], [ "# Set up the optimizer\noptimizer = optim.SGD(model.parameters(), lr=0.1)\n\nnb_epochs = 1000\nfor epoch in range(nb_epochs + 1):\n\n # Compute H(x)\n prediction = model(x_train)\n\n # Compute the cost\n cost = F.cross_entropy(prediction, y_train)\n\n # 
Improve H(x) using the cost\n optimizer.zero_grad()\n cost.backward()\n optimizer.step()\n \n # Print a log every 100 epochs\n if epoch % 100 == 0:\n print('Epoch {:4d}/{} Cost: {:.6f}'.format(\n epoch, nb_epochs, cost.item()\n ))", "Epoch 0/1000 Cost: 1.919160\nEpoch 100/1000 Cost: 0.468405\nEpoch 200/1000 Cost: 0.320585\nEpoch 300/1000 Cost: 0.248953\nEpoch 400/1000 Cost: 0.204819\nEpoch 500/1000 Cost: 0.174506\nEpoch 600/1000 Cost: 0.152248\nEpoch 700/1000 Cost: 0.135139\nEpoch 800/1000 Cost: 0.121543\nEpoch 900/1000 Cost: 0.110461\nEpoch 1000/1000 Cost: 0.101245\n" ] ], [ [ "<div class=\"alert alert-warning\">\n Should I display how many it got correct in the training set?\n</div>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
cbb7da4c55df2c5503da6fe4ca849e3e7db5f05d
7,250
ipynb
Jupyter Notebook
notebooks/rapid_testing_pipeline.ipynb
catalystneuro/feldman-lab-to-nwb
1058dc10f1f308db78db9d79716c13f1e3a3ab3e
[ "MIT" ]
null
null
null
notebooks/rapid_testing_pipeline.ipynb
catalystneuro/feldman-lab-to-nwb
1058dc10f1f308db78db9d79716c13f1e3a3ab3e
[ "MIT" ]
13
2021-04-07T17:21:31.000Z
2021-10-11T19:26:13.000Z
notebooks/rapid_testing_pipeline.ipynb
catalystneuro/feldman-lab-to-nwb
1058dc10f1f308db78db9d79716c13f1e3a3ab3e
[ "MIT" ]
null
null
null
25.34965
122
0.584414
[ [ [ "# Online pipeline for Feldman lab", "_____no_output_____" ] ], [ [ "from pathlib import Path\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pprint import pprint\nimport time\n\nfrom spikeextractors import SpikeGLXRecordingExtractor, NwbSortingExtractor\nfrom spiketoolkit.sortingcomponents import detect_spikes\nfrom spiketoolkit.curation import threshold_firing_rates\nfrom spikewidgets import plot_rasters\nfrom pynwb import NWBHDF5IO\n\nimport feldman_lab_to_nwb", "_____no_output_____" ], [ "# Set parameters for parallelization\nn_jobs = 8 # Number of concurrent jobs\nchunk_mb = 2000 # Maximum amount of RAM in Mb", "_____no_output_____" ] ], [ [ "## 1) Load short AP recording", "_____no_output_____" ] ], [ [ "base_path = Path(\"E:/Feldman/Neuropixels_Feldman/210209/SpikeGLX\")\nsession_name = \"LR_210209_g0\"\n\nap_bin_path = base_path / session_name / f\"{session_name}_imec0\" / f\"{session_name}_t0.imec0.ap.bin\"\nnidq_file_path = base_path / session_name / f\"{session_name}_t0.nidq.bin\"\n\ntrial_ongoing_channel = 3\nevent_channel = 4\n\nnwbfile_path = f\"E:/Feldman/rapid_testing_{session_name}_full_test.nwb\"", "_____no_output_____" ], [ "recording_ap = SpikeGLXRecordingExtractor(ap_bin_path)\ntrial_numbers, _, trial_times = feldman_lab_to_nwb.utils.get_trials_info(\n recording_nidq=SpikeGLXRecordingExtractor(nidq_file_path),\n trial_ongoing_channel=trial_ongoing_channel,\n event_channel=event_channel\n)\nif trial_numbers[0] != 0:\n recording_ap = clip_recording(trial_numbers=trial_numbers, trial_times=trial_times, recording=recording_ap)", "_____no_output_____" ], [ "duration = recording_ap.get_num_frames() / recording_ap.get_sampling_frequency()\nfs = recording_ap.get_sampling_frequency()\nprint(f\"Duration: {np.round(duration, 1)} s\")", "_____no_output_____" ] ], [ [ "# 2) Quick spike detection by channel", "_____no_output_____" ] ], [ [ "detect_spikes?", "_____no_output_____" ], [ "t_start = time.time()\nsorting_ch = 
detect_spikes(recording=recording_ap, n_jobs=n_jobs, chunk_mb=chunk_mb, verbose=True)\nt_stop = time.time()\nprint(f\"Elapsed time for detection: {t_stop - t_start}\")", "_____no_output_____" ], [ "print(f\"Detected spikes on {len(sorting_ch.get_unit_ids())} channels\")", "_____no_output_____" ], [ "wr = plot_rasters(sorting_ch)", "_____no_output_____" ] ], [ [ "### (optional) Remove channels below a certain firing rate", "_____no_output_____" ] ], [ [ "firing_rate_threshold = 0.1 # Adjusts sensitivity.\n\nsorting_high_fr = threshold_firing_rates(\n sorting_ch,\n duration_in_frames=recording_ap.get_num_frames(),\n threshold=firing_rate_threshold, \n threshold_sign=\"less\"\n)", "_____no_output_____" ], [ "print(f\"Detected spikes on {len(sorting_high_fr.get_unit_ids())} channels with fr > {firing_rate_threshold}\")", "_____no_output_____" ] ], [ [ "# 3) Save spike and behavior info to NWB", "_____no_output_____" ] ], [ [ "# Choose a sorting extractor by uncommenting one of these lines (either the basic detection or rate thresholded).\nchosen_sorting = sorting_ch\n#chosen_sorting = sorting_high_fr\n\n# Run conversion to NWB.\nsource_data = dict(RapidTesting=dict(file_path=str(nidq_file_path)))\nconverter = feldman_lab_to_nwb.RapidTestingNWBConverter(source_data=source_data)\nmetadata = converter.get_metadata()\nmetadata[\"NWBFile\"].update(session_description=\"Rapid testing file for electrode placement.\")\nmetadata[\"Ecephys\"][\"Electrodes\"] = []\nconversion_options = dict(\n RapidTesting=dict(\n trial_ongoing_channel=trial_ongoing_channel,\n event_channel=event_channel\n )\n)\nconverter.run_conversion(\n nwbfile_path=nwbfile_path,\n metadata=metadata,\n conversion_options=conversion_options,\n overwrite=True, # This always creates a new file.\n)\n\npprint(\"Appending spike detection...\")\nNwbSortingExtractor.write_sorting(\n sorting=chosen_sorting,\n save_path=nwbfile_path,\n overwrite=False # This appends the file. 
True would write a new file.\n)\npprint(\"Spike detection appended!\")", "_____no_output_____" ] ], [ [ "# 4) View output vs. behavior in NWBWidgets ", "_____no_output_____" ] ], [ [ "io = NWBHDF5IO(nwbfile_path, mode=\"r\")\nnwb = io.read()\n\nfeldman_lab_to_nwb.rapid_testing_nwb2widget(nwb)", "_____no_output_____" ], [ "io.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb7ff83ed14655f96482d8a1ac5317ff4dedd3e
2,758
ipynb
Jupyter Notebook
analysis/simulations/spatial_process_simulation/learning/save_splits.ipynb
krisrs1128/measuring_stability
ae04f91486f49f342a45c4368c00fc555f3d91a4
[ "CC-BY-4.0" ]
null
null
null
analysis/simulations/spatial_process_simulation/learning/save_splits.ipynb
krisrs1128/measuring_stability
ae04f91486f49f342a45c4368c00fc555f3d91a4
[ "CC-BY-4.0" ]
null
null
null
analysis/simulations/spatial_process_simulation/learning/save_splits.ipynb
krisrs1128/measuring_stability
ae04f91486f49f342a45c4368c00fc555f3d91a4
[ "CC-BY-4.0" ]
null
null
null
25.537037
247
0.544598
[ [ [ "import sys\nsys.path.append(\"../../../../inst/python\")\nimport data as sd\nimport bootstrap as sb\nfrom pathlib import Path\nimport pandas as pd\nimport os\nfrom addict import Dict\nimport yaml", "_____no_output_____" ] ], [ [ "This script creates a CSV specifying which split each of the generated simulation / data analysis numpy arrays belongs to. It is the source of the `bootstrap_*.csv` files visible in `stability_data_sim.tar.gz` and `stability_data_tnbc.tar.gz`.", "_____no_output_____" ] ], [ [ "def save_splits(paths, props):\n splits = sd.random_split(paths, props)\n p_str = str(round(sum(props[:2]), 3))\n splits.to_csv(data_dir / f\"splits_{p_str}-train.csv\", index=False)\n sb.bootstrap_indices(len(splits.loc[splits[\"split\"] == \"train\", \"path\"]), opts.bootstrap.B, data_dir / f\"bootstraps_{p_str}-train.csv\")", "_____no_output_____" ], [ "data_dir = Path(\"../../../data/raw_data/stability_data/\")\nsd.save_pngs(data_dir / \"tiles\", data_dir / \"pngs\")\npaths = list((data_dir / \"tiles\").glob(\"*.npy\"))\npaths = [p.relative_to(data_dir) for p in paths]", "_____no_output_____" ] ], [ [ "Next, we save the paths to the train / splits, as well as a master file of all the resampling plans associated with each bootstrap.", "_____no_output_____" ] ], [ [ "opts = Dict(yaml.safe_load(open(\"conf/cnn-k64-50.yaml\", \"r\")))\n\nsplit_choices = [\n [0.8, 0.1, 0.1],\n [0.1, 0.05, 0.85],\n [0.4, 0.1, 0.5]\n]\n\nfor p in split_choices:\n save_splits(paths, p)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb802ba19341e025d104053b3e687a91cf6d413
99,028
ipynb
Jupyter Notebook
Lectures/StatsBackground.ipynb
canner6/NSMA
abeb24db99e4adfc61e4a5a9f723d52e60112a53
[ "BSD-3-Clause" ]
4
2021-08-24T13:39:44.000Z
2021-09-05T21:09:00.000Z
Lectures/StatsBackground.ipynb
canner6/NSMA
abeb24db99e4adfc61e4a5a9f723d52e60112a53
[ "BSD-3-Clause" ]
null
null
null
Lectures/StatsBackground.ipynb
canner6/NSMA
abeb24db99e4adfc61e4a5a9f723d52e60112a53
[ "BSD-3-Clause" ]
7
2021-08-31T13:03:12.000Z
2022-01-21T10:16:38.000Z
149.815431
48,300
0.865402
[ [ [ "# Astronomy 8824 - Numerical and Statistical Methods in Astrophysics\n\n## Statistical Methods Topic I. High Level Background\n\nThese notes are for the course Astronomy 8824: Numerical and Statistical Methods in Astrophysics. They are based on notes from David Weinberg with modifications and additions by Paul Martini.\nDavid's original notes are available from his website: http://www.astronomy.ohio-state.edu/~dhw/A8824/index.html\n\n#### Background reading: \n- Statistics, Data Mining, and Machine Learning in Astronomy, Chapter 3 (see David's [Reader's Guide](http://www.astronomy.ohio-state.edu/~dhw/A8824/ivezic_guide.pdf))", "_____no_output_____" ] ], [ [ "import math\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy import optimize\n\n# matplotlib settings \nSMALL_SIZE = 14\nMEDIUM_SIZE = 16\nBIGGER_SIZE = 18\n\nplt.rc('font', size=SMALL_SIZE) # controls default text sizes\nplt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title\nplt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels\nplt.rc('lines', linewidth=2)\nplt.rc('axes', linewidth=2)\nplt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels\nplt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels\nplt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize\nplt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title", "_____no_output_____" ] ], [ [ "LaTeX macros hidden here -- \n$\newcommand{\expect}[1]{{\left\langle #1 \right\rangle}}$\n$\newcommand{\intinf}{\int_{-\infty}^{\infty}}$\n$\newcommand{\xbar}{\overline{x}}$", "_____no_output_____" ], [ "### Statistical Tasks in Astrophysics\n\nFour common statistical tasks:\n\n1. Parameter estimation\n\n2. Comparison of hypotheses\n\n3. Absolute evaluation of a hypothesis \n\n4. 
Forecasting of errors\n\nAnother task, slightly less common: Prediction of values from a model fit to some set of data, when the parameters of the model are uncertain.", "_____no_output_____" ], [ "### Simple Example: Data points with error bars\n\n**Parameter estimation:** What are slope and amplitude of a power-law fit?\nWhat are the uncertainties in the parameters?\n\nWhen you fit a power-law model to data, you _assume_ that power-law description is valid.\n\n\n**Hypothesis comparison:** Is a double power-law better than a single power-law?\n\nHypothesis comparisons are trickier when the number of parameters is different, since one must decide whether the fit to the data is _sufficiently_ better given the extra freedom in the more complex model.\n\nA simpler comparison would be single power-law vs. two constant plateaus with a break at a specified location, both with two parameters.\n\n\n**Absolute evaluation:** Are the data consistent with a power-law?\n\nAbsolute assessments of this sort are generally much more problematic than hypothesis comparisons.\n\n\n**Forecasting of errors:** How many more measurements, or what reduction of uncertainties in the measurements, would allow single and double power-law models to be clearly distinguished?\n\nNeed to specify goals, and assumptions about the data. This is a common need for observing proposals, grant proposals, satellite proposals etc. ", "_____no_output_____" ], [ "### Complicated example: CMB power spectrum with errors.\n\n\n**Parameter estimation:** In a \"vanilla\" $\\Lambda$CDM model, what are the best values of $\\Omega_m$, $\\Omega_b$, $h$, $n$, and $\\tau$?\n\nOne often wants to combine CMB with other data to break degeneracies and get better constraints.\n\n\n**Hypothesis comparisons:** Are data consistent with $\\Omega_m=1$? 
Do they favor inclusion of space curvature, or gravity waves?\n\nThis typically involves comparison of models with different numbers of parameters.\n\n\n**Absolute assessment:** Can the restricted, \"vanilla\" $\\Lambda$CDM model be rejected?\n\n\n**Forecasting:** What constraints or tests could be achieved with a new experiment?\n\nThis kind of analysis played a key role in the design and approval of WMAP, Planck, DESI, and other major cosmological surveys. \n\nThere is presently a lot of work along these lines for future cosmological surveys and CMB experiments.", "_____no_output_____" ], [ "### PDF, Mean, and Variance\n\nIf $p(x)$ is the **probability distribution function** (pdf) of a **random variable** $x$, then $p(x) dx$ is the probability that $x$ lies in a small interval $dx$.\n\nThe **expectation value** of a random variable $x$ is $\\expect{x} = \\intinf xp(x)dx = \\mu$. The expectation value of $x$ is equal to the (arithmetic) mean. It is sometimes also written $\\mu = E(x)$. \n\nThe expectation value of a function $y(x)$ is $\\expect{y(x)} = \\intinf y(x) p(x) dx.$ \n\n\nThe variance is $V(x)=\\expect{(x-\\mu)^2} \\equiv \\sigma^2$.\n\nThe standard deviation is $\\sigma = \\sqrt{\\sigma^2}$. This is also called the dispersion.\n\n#### Useful variance relation\n\n$$\nV(x)=\\expect{(x-\\mu)^2} = \\int (x - \\mu)^2 p(x) dx\n$$\n\n$$\n= \\int (x^2 - 2\\mu x + \\mu^2) p(x) dx = \\int x^2 p(x) dx - 2 \\mu \\int x p(x) dx + \\mu^2 \\int p(x) dx\n$$\n\n$$\n= \\expect{x^2} - 2 \\expect{x}^2 + \\expect{x}^2 \n$$\n\nThis reduces to the useful result that $V(x) = \\expect{x^2} - \\expect{x}^2$.\n\n\n#### Sum of the variances\n\nFor _independent_ random variables $y_1$, $y_2$, ... 
$y_N$ (drawn from the same distribution or different distributions), the variance of the sum is the sum of the variances:\n$$\nV(y_1+y_2+...y_N) = \sum_{i=1,N} V(y_i).\n$$\nThis can be proved by induction.\n\nIf random variables $x$ and $y$ are independent, then $p(x,y) = p(x)p(y)$ and\n$$\n{\rm Cov}(x,y) \equiv \expect{(x-\mu_x)(y-\mu_y)}=0.\n$$\nThe second statement can be proved from the first.\n\n#### Demonstration\n\n$$\nVar(y_1 + y_2) = \expect{(y_1 + y_2)^2} - \expect{y_1+y_2}^2\n$$\n$$\n= \expect{y_1^2 + 2 y_1 y_2 + y_2^2} - \expect{y_1+y_2}^2\n$$\nThen looking at just the first term:\n\n$$\n\expect{y_1^2 + 2 y_1 y_2 + y_2^2} = \int y_1^2 p(y_1) p(y_2) dy_1 dy_2 + 2 \int y_1 y_2 p(y_1) p(y_2) dy_1 dy_2 + \int y_2^2 p(y_1) p(y_2) dy_1 dy_2 \n$$\nNote that $\int p(y_1) dy_1 = 1$ by definition, so we can simplify the above to:\n\n$$\n= \expect{y_1^2} + 2 \expect{y_1 y_2} + \expect{y_2^2}\n$$\n\nNow looking at the second term:\n$$\n\expect{y_1+y_2}^2 = \left[ \int (y_1 + y_2) p(y_1) p(y_2) dy_1 dy_2 \right]^2\n$$\n$$\n= \expect{y_1}^2 + 2 \expect{y_1} \expect{y_2} + \expect{y_2}^2\n$$\n\nNow combining these two:\n$$\nVar(y_1 + y_2) = \expect{y_1^2} + 2 \expect{y_1 y_2} + \expect{y_2^2} - \expect{y_1}^2 - 2 \expect{y_1} \expect{y_2} - \expect{y_2}^2\n$$\nSince $y_1$ and $y_2$ are independent, $\expect{y_1 y_2} = \expect{y_1} \expect{y_2}$, so the cross terms cancel:\n$$\n= \expect{y_1^2} + \expect{y_2^2} - \expect{y_1}^2 - \expect{y_2}^2 \n$$\nThis is equivalent to:\n$$\nVar(y_1 + y_2) = Var(y_1) + Var(y_2)\n$$\n\n#### Linearity of Expectation\n\nThis is often invoked more generally as a statement about the _Linearity of Expectation_. \n\n$$\n\expect{x + y} = \int (x + y) p(x) p(y) dx dy = \int x p(x) p(y) dx dy + \int y p(x) p(y) dx dy = \expect{x} + \expect{y} \n$$", "_____no_output_____" ], [ "### Covariance\n\nCovariance is a measure of the _joint probability_ of 2 random variables. It describes how they change together. 
\n\nIt is commonly written as:\n\n$$\nCov(y_1, y_2) = \\expect{ (y_1 - \\expect{y_1} ) (y_2 - \\expect{y_2}) } = \\expect{ (y_1 - \\mu_1) (y_2 - \\mu_2) }\n$$\n\n\nThis can also be written as:\n$$\nCov(y_1, y_2) = \\expect{y_1 y_2 - \\expect{y_1} y_2 - y_1 \\expect{y_2} + \\expect{y_1} \\expect{y_2} }\n$$\nusing the linearity of expectation\n$$\n= \\expect{y_1 y_2} - \\expect{y_1}\\expect{y_2} - \\expect{y_1}\\expect{y_2} + \\expect{y_1} \\expect{y_2}\n$$\nor \n$$\nCov(y_1, y_2) = \\expect{y_1 y_2} - \\expect{y_1} \\expect{y_2}\n$$\nNote that if $y_1$ and $y_2$ are independent variables, \n$$\n\\expect{y_1 y_2} = \\int y_1 y_2 p(y_1) p(y_2) dy_1 dy_2 = \\int y_1 p(y_1) dy_1 \\int y_2 p(y_2) dy_2 = \\expect{y_1} \\expect{y_2}\n$$\nand therefore $Cov(y_1, y_2) = 0$. ", "_____no_output_____" ] ], [ [ "### Covariance Example\n\nnp.random.seed(1216)\n\nsig_x = 2\nsig_y = 1\nsig_xy = 0\nmean = np.array([0, 0], dtype=float)\ncov = np.array( [[sig_x, sig_xy], [sig_xy, sig_y]], dtype=float)\nx = np.random.multivariate_normal(mean, cov, size=1000)\n\nfig, axarr = plt.subplots(1, 2, figsize=(14,7))\n\naxarr[0].plot(x.T[0], x.T[1], 'k.')\naxarr[0].set_xlabel(r\"$x_1$\")\naxarr[0].set_ylabel(r\"$x_2$\")\naxarr[0].set_xlim(-5, 5)\naxarr[0].set_ylim(-5, 5)\naxarr[0].text(-4, 4, r\"$\\sigma_{xy} = 0.0$\")\n\nsig_x = 2\nsig_y = 1\nsig_xy = 0.5\nmean = np.array([0, 0], dtype=float)\ncov = np.array( [[sig_x, sig_xy], [sig_xy, sig_y]], dtype=float)\nx = np.random.multivariate_normal(mean, cov, size=1000)\n\naxarr[1].plot(x.T[0], x.T[1], 'k.')\naxarr[1].set_xlim(-5, 5)\naxarr[1].set_ylim(-5, 5)\naxarr[1].plot( [x[0], x[-1]], [0, 0], 'k:')\naxarr[1].set_xlabel(\"$x_1$\")\naxarr[1].text(-4, 4, r\"$\\sigma_{xy} = 0.5$\")", "_____no_output_____" ] ], [ [ "### Estimators\n\nAn estimator is a mathematical function of data that estimates a quantity of interest. 
An important distinction to keep in mind for data is the distinction between \"population statistics\" (the underlying distribution) and \"sample statistics\" (the measurements of the population). \n\nIdeally one wants an estimator to be\n\n- _unbiased_ -- even with a small amount of data, the expectation value of estimator is equal to the quantity being estimated\n\n- _efficient_ -- makes good use of the data, giving a low variance about the true value of the quantity\n\n- _robust_ -- isn't easily thrown off by data that violate your assumptions about the pdf, e.g., by non-Gaussian tails of the error distribution\n\n- _consistent_ -- in the limit of lots of data, it converges to the true value\n\nThese four desiderata sometimes pull in different directions.\n\nSuppose we have $N$ independent data points (the sample) drawn from an unknown distribution $p(x)$ (the population).\n\n#### The mean estimator\n\nThe obvious estimator for the mean of the distribution is the sample mean, $\\xbar={1\\over N}\\sum x_i$. 
The expectation value for the sample mean is: \n\n$$\n\expect{\xbar} = \expect{\frac{1}{N} \sum x_i} = \n\frac{1}{N} \sum \expect{x_i} = \mu.\n$$\nThus, the sample mean is an _unbiased_ estimator of $\mu$.\n\n#### Variance of the mean estimator\n\nThe variance of this estimator is \n$$\n\expect{(\xbar-\mu)^2} = V\left(\frac{1}{N} \sum x_i\right) = \n{1 \over N^2} V\left(\sum x_i\right) = \n{1 \over N^2} \sum V(x_i) =\n{1 \over N^2} \times N\sigma^2 = {\sigma^2 \over N},\n$$\nwhere $\sigma^2$ is the variance of the underlying distribution.\n\nWe have used the fact that $\expect{\xbar}=\mu$, and we have used the assumed independence of the $x_i$ to go from the variance of a sum to a sum of variances.\n\n#### Other mean estimators \n\nAn alternative estimator for the mean is the value of the third sample member, $x_3$.\n\nSince $\expect{x_3} = \mu$, this estimator is unbiased, but $V(x_3) = \sigma^2$, so this estimate is noisier than the sample mean by $\sqrt{N}$.\n\nA more reasonable estimator is the sample _median_, though this is a biased estimator if $p(x)$ is asymmetric about the mean.\n\nIf $p(x)$ is Gaussian, then the variance of the sample median is ${\pi \over 2}{\sigma^2 \over N}$, so it is a less _efficient_ estimator than the sample mean.\n\nHowever, if $p(x)$ has long non-Gaussian tails, then the median may be a much _more_ efficient estimator of the true mean (i.e., giving a more accurate answer for a fixed number of data points), since it is not sensitive to rare large or small values.\n\nEstimators that are insensitive to the extremes of a distribution are often called _robust_ estimators.\n\n#### Variance estimator\n\nThe obvious estimator for the variance of the distribution is the sample variance\n$$\ns^2 = \frac{1}{N} \sum (x_i-\xbar)^2 = \frac{1}{N} \sum x_i^2 - \xbar^2.\n$$\nHowever, a short derivation shows that the sample variance is biased low: \n$$\n\expect{s^2} = {N-1 \over 
N}\\sigma^2,\n$$\nThis is because we had to use the sample mean rather than the true mean, which on average drives down the variance.\n\nAn unbiased estimator is therefore\n$$\n\\hat{\\sigma}^2 = {1\\over N-1} \\sum (x_i-\\xbar)^2.\n$$\n\nIf you compute the mean of a sample, or of data values in a bin, the estimated _standard deviation of the mean_ is\n$$\n\\hat{\\sigma}_\\mu = \\left[{1 \\over N(N-1)}\\sum (x_i-\\xbar)^2\\right]^{1/2}.\n$$\nNote that this is smaller by $N^{-1/2}$ than the estimate of the dispersion within the bin. You should always be clear which quantity (dispersion or standard deviation of the mean) you are plotting.\n\nIf $p(x)$ is Gaussian, then the distribution of $\\xbar/\\sigma$ is a Gaussian of width $N^{-1/2}$. However, the distribution of $\\xbar/\\hat{\\sigma}$ is broader (a Student's $t$ distribution).", "_____no_output_____" ], [ "### Snap-judging Error Bars\n\nWhat is wrong with this plot?", "_____no_output_____" ] ], [ [ "Npts = 20\nx = np.linspace(0, 5, Npts)\nm = 2\nb = 3\ny = m*x + b\nsig_y = np.random.normal(0, 1, Npts)\nfx = y + sig_y\n\nerr_y = 3*np.ones(len(x)) # + 2.*np.ones(len(x))\nplt.figure(figsize=(10,5))\nplt.errorbar(x, fx, yerr=err_y, fmt='bo', capsize=4, label=\"Data\")\nplt.plot(x, y, 'k:', label=\"Relation\")\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.legend(loc='upper left')", "_____no_output_____" ] ], [ [ "### Bayesian vs. Frequentist Statistics\n\nSuppose we have measured the mean mass of a sample of G stars, by some method, and say: at the 68\\% confidence level the mean mass of G stars is $a \\pm b$. 
What does this statement mean?\n\n\nBayesian answer: There is some true mean mass $\\alpha$ of G stars, and there is a 68\\% probability that $a-b \\leq \\alpha \\leq a+b$.\n\nMore pedantically: The hypothesis that the true mean mass $\\alpha$ of G stars lies in the range $a-b$ to $a+b$ has a 68\\% probability of being true.\n\nThe **probability of the hypothesis is a real-numbered expression of the degree of belief we should have in the hypothesis**, and it obeys the axioms of probability theory.\n\n\nIn \"classical\" or \"frequentist\" statistics, a probability is a statement about the frequency of outcomes in many repeated trials. With this restricted definition, **one can't refer to the probability\nof a hypothesis -- it is either true or false**. One can refer to the probability of data if a hypothesis is true, where probability means the fraction of time the data would have come out the way it did in many repeated trials. \n\nFrequentist answer: The statement means something like: if $\\alpha = a$, we would have expected to obtain a sample mean in the range $a\\pm b$ 68\\% of the time.\n\n##### This is the fundamental conceptual difference between Bayesian and frequentist statistics.\n\n\n**Bayesian:** Evaluate the probability of a hypothesis in light of data (and prior information). Parameter values or probability of truth of a hypothesis are random variables, _data are not_ (though they are drawn from a pdf).\n\n**Frequentist:** Evaluate the probability of obtaining the data --- more precisely, the fraction of times a given _statistic_ (such as the sample mean) applied to the data would come out the way it did in many repeated trials --- given the hypothesis, or parameter values. A probability is a statement about the frequency of outcomes in many repeated trials. 
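The frequency interpretation can be illustrated with a small simulation (an illustrative sketch, not part of the original notes; the numbers are hypothetical): over many repeated trials, the sample mean falls within one standard error of the true mean about 68\% of the time.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, N, trials = 10.0, 3.0, 25, 50000

# Many repeated "experiments", each yielding one sample mean
xbar = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)

half_width = sigma / np.sqrt(N)  # the 1-sigma error bar on the mean
coverage = np.mean(np.abs(xbar - mu) < half_width)
print(coverage)  # ~0.68
```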
Data are random variables; parameter values or truth of hypotheses are not.\n\n#### Summary of the differences\n\n| Bayesian | Frequentist | \n| :-: | :-: | \n| Evaluate the probability of a hypothesis, given the data | Evaluate the probability of obtaining the data | \n| Parameters and probability of truth are random variables | Data are random variables | \n| Data are not random variables | Parameters and probability of truth are not random variables | \n| Need to specify alternatives to evaluate hypotheses | Statistical tests implicitly account for alternatives | \n\nDavid's opinion: The Bayesian formulation corresponds better to the way scientists actually think about probability, hypotheses, and data. It provides a better conceptual basis for figuring out what to do in a case where a\nstandard recipe does not neatly apply. But frequentist methods sometimes seem easier to apply, and they clearly capture _some_ of our intuition about probability.\n\nBottom line: One should be a Bayesian in principle, but maybe not always\nin practice.\n", "_____no_output_____" ], [ "### Probability Axioms and Bayes' Theorem\n\nProbabilities are real numbers $0 \leq p \leq 1$ obeying the axioms\n$$\np(A|C) + p(\overline{A}|C) = 1.\n$$\n$$\np(AB|C) = p(A|BC)p(B|C).\n$$\n$\overline{A}$ means \"not $A$\" \n\n$AB$ means \"$A$ and $B$\" and is thus equivalent to $BA$. 
\n\nFrom this equivalence we see that\n$$\np(AB|C) = p(A|BC)p(B|C)=p(BA|C)=p(B|AC)p(A|C).\n$$\n\nFrom the 2nd and 4th entries above, we arrive at **Bayes' Theorem**\n$$\np(A|BC) = p(A|C) {p(B|AC) \over p(B|C)}.\n$$\n", "_____no_output_____" ], [ "### Bayesian Inference\n\n\nIn application to scientific inference, this theorem is usually written\n$$ \np(H|DI) = p(H|I) {p(D|HI) \over p(D|I)},\n$$\nwhere\n\n$H$ = hypothesis, which might be a statement about a parameter value, e.g., the population mean lies in the range $x \rightarrow x+dx$.\n\n$D$ = data\n\n$I$ = background information, which may be minimally informative or highly\ninformative. \n\n$p(H|I)$ = **prior probability**, i.e., before data are considered\n\n$p(D|HI)$ = **likelihood** of data given $H$ and $I$\n\n$p(D|I)$ = **global likelihood**\n\n$p(H|DI)$ = **posterior probability**, the probability of the hypothesis\nafter consideration of the data\n\n\nBayes' Theorem tells us how to update our estimate of the probability of a hypothesis in light of new data.\n\nIt can be applied sequentially, with the posterior probability from one experiment becoming the prior for the next, as more data become available.\n\nCalculation of the likelihood $p(D|HI)$ is sometimes straightforward, sometimes difficult. The background information\n$I$ may specify assumptions like a Gaussian error distribution or independence of the data points.\n\nImportant aspect of the Bayesian approach: only the actual data enter, not hypothetical data that could have been taken.\n\n_All the evidence of the data is contained in the likelihood._\n", "_____no_output_____" ], [ "### Global Likelihood and Absolute Assessment\n\nThe global likelihood of the data, $p(D|I)$, is the sum (or integral) over \"all\" hypotheses. 
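As a concrete sketch (added for illustration; the coin-flip data here are hypothetical), the global likelihood can be computed by summing the prior-weighted likelihood over a grid of hypotheses -- which is exactly the normalization of the posterior:

```python
import numpy as np

# Hypothesis grid: possible values q of a coin's heads probability
q = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(q) / q.size          # flat prior p(H|I)

heads, tails = 7, 3                       # the observed data D
likelihood = q**heads * (1.0 - q)**tails  # p(D|HI)

p_D = np.sum(prior * likelihood)          # global likelihood p(D|I)
posterior = prior * likelihood / p_D      # p(H|DI), sums to one

print(q[np.argmax(posterior)])  # posterior mode at q = 0.7
```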
This can be a slippery concept.\n\nOften $p(D|I)$ doesn't matter: in comparing hypotheses or parameter values, it cancels out.\n\nWhen needed, it can often be found by requiring that $p(H|DI)$ integrate (or sum) to one, as it must if it is a true probability.\n\nThe Bayesian approach forces specification of alternatives to evaluate hypotheses.\n\nFrequentist assessment tends to do this implicitly via the choice of statistical test.", "_____no_output_____" ], [ "### Criticism of Bayesian approach\n\nThe incorporation of priors makes Bayesian methods seem subjective, and it is the main source of criticism of the Bayesian approach.\n\nIf the data are compelling and the prior is broad, then the prior doesn't have much effect on the posterior. But if the data are weak, or the prior is narrow, then it can have a big effect.\n\nSometimes there are well-defined ways of assigning an \"uninformative\" prior, but sometimes there is genuine ambiguity.\n\nBayesian methods sometimes seem like a lot of work to get to a straightforward answer.\n\nIn particular, we sometimes want to carry out an \"absolute\" hypothesis test without having to enumerate all alternative hypotheses.\n", "_____no_output_____" ], [ "### Criticism of frequentist approach\n\nThe frequentist approach doesn't correspond as well to scientific intuition. We want to talk about the probability of hypotheses or parameter values.\n\nThe choice of which statistical test to apply is often arbitrary. There is not a clear way to go from the result of a test to an actual scientific inference about parameter values or validity of a hypothesis.\n\nBayesians argue (and I agree) that frequentist methods obtain the appearance of objectivity only by sweeping priors under the rug, making assumptions implicit rather than explicit.\n\nThe frequentist approach relies on hypothetical data as well as the actual data obtained. 
The choice of hypothetical data sets is often ambiguous, e.g., in the \"stopping\" problem.\n\nSometimes we _do_ have good prior information. It is straightforward to incorporate this in a Bayesian approach, while it is not in the frequentist approach.\n\nFrequentist methods are poorly equipped to handle \"nuisance parameters,\" which in the Bayesian approach are easily handled by marginalization.\n\nFor example, the marginal distribution of a parameter $x$ \n$$\np(x) = \int p(x,a,b,c)\, da\,db\,dc\n$$\ncan only exist if $x$ is a random variable.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbb8045f6ec336f77e6199cf8c4d9d9b3fc98879
14,084
ipynb
Jupyter Notebook
INFO7390_FinalPRoject.ipynb
ManaliSharma/Self_Driving_Cars_Supervised_And_Reinforcement_Learning
a2c9fe632fa999e6c01c6bbb42140667ad91c40a
[ "MIT" ]
2
2021-01-26T03:14:55.000Z
2021-07-28T02:33:06.000Z
INFO7390_FinalPRoject.ipynb
ManaliSharma/Self_Driving_Cars_Supervised_And_Reinforcement_Learning
a2c9fe632fa999e6c01c6bbb42140667ad91c40a
[ "MIT" ]
null
null
null
INFO7390_FinalPRoject.ipynb
ManaliSharma/Self_Driving_Cars_Supervised_And_Reinforcement_Learning
a2c9fe632fa999e6c01c6bbb42140667ad91c40a
[ "MIT" ]
null
null
null
60.706897
749
0.686311
[ [ [ "# <p style=\"text-align: center;\"> Self Driving Car in OpenAI Gym using Imitation Learning and Reinforcement Learning</p>\n![title](https://miro.medium.com/max/1575/1*IQfXahuDuh0pgVE5fMpiFQ.gif )\n\n# <p style=\"text-align: center;\"> 1.0 Abstract </p> <a id='abstract'></a>\n\nWe all know self-driving cars are one of the hottest areas of research and business for the tech giants. What seemed like science fiction a few years ago now seems like something that is soon to become a part and parcel of life. The reason I say “soon to be” is that even though companies like Tesla, Nissan, and Cadillac do have self-driving assistance software, they still require a human to keep an eye on the road and take control when needed. However, it is fascinating to see how far we have come in terms of innovation and how fast technology is advancing. So much so that now, with the help of basic deep learning and neural networks, we can build our own pipeline for autonomous driving.\n\nOur idea to try and build our very own self-driving car emerged from here. In order to understand the basics of the process, we did this project in two parts. \n\n- Self Driving Car using Supervised Learning\n- Self Driving Car using Reinforcement Learning\n\n**PS:** To clarify the structure, we have split this project across separate notebooks; each notebook contains the whole code and documentation for its part.\n\n### Readme Structure \n\n\n### 1. 
Basics of CNN :- \n\nThe main agenda of this notebook is as follows:-\n> - To understand the convolution operation\n> - To understand the pooling operation\n> - Remembering the vocabulary used in convolutional neural networks (padding, stride, filter, etc.)\n> - Building a convolutional neural network for multi-class classification in images\n> - Basics of Imitation Learning\n\nThis notebook includes the basics of convolutional operations and the whole network in general. This was a very integral part of our project and will serve as a guide for any beginner trying to understand CNNs.\n\n### 2. Self Driving Car using Supervised Learning :- \nIn this notebook, we applied a supervised learning algorithm (convolution networks) to control the direction of a car in a 2D simulation. The notebook captures the following:-\n\n> - How does a convolution network work?\n> - How to create the dataset and use it for training our network\n> - How to use gym to retrieve the output of our neural network in order to control the simulation.\n\nThe general idea that we used is that of the supervised classifier. We are going to train a convolutional neural network to classify images in the game, according to three labels: left, right and straight ahead. We will then convert these commands into instructions for the simulator, which will execute them.\n\n\n### 3. Basics of Deep Q-Learning:- \nThe main agenda of this notebook is as follows:-\n\n> - Q-Learning\n> - Why ‘Deep’ Q-Learning?\n> - Introduction to Deep Q-Learning\n> - Challenges of Deep Reinforcement Learning as compared to Deep Learning\n> - Experience Replay\n> - Target Network\n\nThis notebook includes the basics of deep Q-learning. This was a very integral part of our project and will serve as a guide for any beginner trying to understand Q-Learning.\n\n\n### 4. 
Self Driving Car using Reinforcement Learning :-\n\nIn this notebook, a Python-based car-racing environment is trained using a deep reinforcement learning algorithm to perform efficient self-driving on a racing track. The notebook captures the following.\n\n> - Development of a deep Q-learning algorithm which is then used to train an autonomous driver agent. \n> - Different configurations of the deep Q-learning algorithm parameters and of the neural network architecture are then tested and compared in order to obtain the best racing-car average score over a period of 100 races. This score is given by the gym environment and can be seen in the bottom left corner.\n\nAccording to OpenAI Gym, this environment is considered solved when the agent successfully reaches an average score of 900 on the last 100 runs. In this project, this goal was surpassed, having obtained an average score of 905 over the last 100 runs. Therefore, we successfully solved the environment.\n\n\n# <p style=\"text-align: center;\"> Index </p>\n- # 1 [Abstract](#abstract)\n- # 2 [Basics of CNN](./Umbrella_Academy_INFO7390_Project/INFO7390_Notebooks/Basics_of_Convolutional_Neural_Network.ipynb)\n- # 3 [Self Driving Car using Supervised Learning](./Umbrella_Academy_INFO7390_Project/INFO7390_Notebooks/Self_Driving_Car_Imitation_Learning.ipynb)\n- # 4 [Basics of Deep Q learning](./Umbrella_Academy_INFO7390_Project/INFO7390_Notebooks/Basics_of_Deep_Q_Learning.ipynb)\n- # 5 [Self Driving Car using Reinforcement Learning](./Umbrella_Academy_INFO7390_Project/INFO7390_Notebooks/RL_Self_Driving_Car.ipynb)\n- # 6 [Conclusion](#Conclusion)", "_____no_output_____" ], [ "![](https://i.ytimg.com/vi/oCVkiLBZb24/maxresdefault.jpg)", "_____no_output_____" ], [ "# Setting up the Environment <a id='Environment'></a>\n\nBefore we start with the setup of our environment, we need to install a few packages which will make our game and neural network work.\n\n### 1) Gym facility\nInstall OpenAI Gym on the machine.\n\nFollow 
the instructions at https://github.com/openai/gym#installation for an extensive guide.\n\n**Summary of instructions:**\n- Install Python 3.5+\n- Clone the gym repo: git clone https://github.com/openai/gym.git\n- cd gym\n- Gym installation, with the box2d environments: pip install -e '.[box2d]'\n\nFollow these steps to play the Car Racing game:\n- cd gym/envs/box2d\n- python car_racing.py\n\n### 2) PyTorch\nPyTorch is the deep learning framework that we will be using. It makes it possible to build neural networks very simply.\n\nFollow the instructions on http://pytorch.org/ for a detailed guide.\n\n**Summary of instructions:**\n- Install Python 3.5+\n- It is recommended to manage PyTorch with Anaconda. Please install Anaconda\n- Install PyTorch following the instructions at https://pytorch.org/get-started/locally/\n![title](images_main_notebook\\Pytorch_Installation.png)\n\nFor example, this is the setup for my computer:\n> pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n## The Environment\n\nFor this tutorial, we will use the gym library developed by OpenAI. It provides environments (simple games) to develop reinforcement learning algorithms.\n\nThe environment we will be using is CarRacing-v0 ( https://gym.openai.com/envs/CarRacing-v0/ ). It is about driving a car on a circuit, the objective being to move forward while staying on the track, which contains many turns. The input to the algorithm (the state provided by the environment) is only the image displayed by the environment: we see the car, and the terrain around it.\n![title](images_main_notebook\\car-racing.png)\n\nThe idea is to drive the car by analyzing this image.\n\nWe are going to use this library in a roundabout way: it is designed for reinforcement learning, where the objective is in principle to use the rewards provided by the environment to learn the optimal strategy without user action. 
Here we will not be using these rewards.\n\nIn addition, we will be doing end-to-end learning, which means that the neural network will directly give us the commands to navigate the car. This is not a road-detection module whose output is then analyzed by another program (most real autonomous driving systems are made this way). Here, the neural network takes the field matrix as input, and issues a command to be executed (turn left, turn right, continue straight ahead), without any intermediate program.\n\nTo use the environment, you need to import it like this:\n\n>import gym\n\n>env = gym.make('CarRacing-v0').env\n\nYou can then access several useful functions:\n\n- **env.reset():** Allows you to restart the environment.\n- **env.step(action):** Allows you to perform the action `action`. This function returns a tuple `state`, `reward`, `done`, `info`: `state` is the state of the game after the action, `reward` is the reward obtained, `done` indicates whether the game is finished, and `info` contains debug data.\n- **env.render():** Displays the game window.\n\nHere, the state `state` that will be returned by `env.step(action)` is the image displayed on the screen (the pixel matrix). It is this data that we will use to steer our car.", "_____no_output_____" ], [ "# <p style=\"text-align: center;\"> Conclusion</p><a id='Conclusion'></a>\n\n#### 1. Video Simulation of self driving car by supervised learning (Imitation Learning) :- \n<video controls src=\"main_videos/IL_Result.mp4\" width=\"500\" height=\"340\"/>", "_____no_output_____" ], [ "#### 2. Video Simulation of self driving car by Reinforcement learning (Deep Q Learning) :- \n\n<video controls src=\"main_videos/RL_SelfDriving.mp4\" height=\"340\"/>", "_____no_output_____" ], [ "3. Our network recognizes the shapes to keep the car on the desired path. It's a sort of classifier that just indicates whether the car is in the right position, too far to the right or too far to the left. We then send this command to the simulator. 
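The label-to-command conversion can be sketched as below (a minimal sketch: in CarRacing-v0 an action is a `[steering, gas, brake]` triple with steering in [-1, 1], but the particular magnitudes used here are illustrative assumptions, not the values from the project):

```python
import numpy as np

# Map the network's three output labels onto CarRacing-v0 actions.
# An action is [steering, gas, brake]; the 0.8 steering and 0.1 gas
# magnitudes are illustrative choices, not the project's tuned values.
ACTIONS = {
    "left":     np.array([-0.8, 0.0, 0.0]),
    "right":    np.array([ 0.8, 0.0, 0.0]),
    "straight": np.array([ 0.0, 0.1, 0.0]),
}

def label_to_action(label):
    """Translate a predicted label into a simulator command."""
    return ACTIONS[label]

# In the game loop this would be fed to the environment, e.g.:
# state, reward, done, info = env.step(label_to_action(predicted_label))
print(label_to_action("straight"))
```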
All of this is done in real time.\n\n> Behavioural cloning, though, has a few disadvantages, and we can see them in this notebook.\n- We need to manually accelerate and decelerate, and we can only accelerate up to a certain limit, because beyond that the car will spin out of control and end up in the patch of grass. \n- Since we never leave the track while training, the car has no way of coming back to the road after it has left the track and is in the grass.\n- Here we only have a training set of 3000 and a validation set of 600. We tried increasing the sizes of these by an order of magnitude (30,000 and 6,000), but because of the substantial increase in the size of the dataset, the error while generating the dataset also shot up, which made for a very bad dataset for our neural net. \n- Also, because we stayed well within the tracks, the car has no data on cases in which it goes out by accident.\n- A possible remedy for this is preprocessing the data in such a way that the dataset has images of the car coming back in, but not going out.\n \nTo see how this works, refer to: [Self Driving Car using Supervised Learning](./Umbrella_Academy_INFO7390_Project/INFO7390_Notebooks/Self_Driving_Car_Imitation_Learning.ipynb)", "_____no_output_____" ], [ "# <p style=\"text-align: center;\"> Contribution</p><a id='Contribution'></a>\n\n \n- Code by self: 75%\n- Code from external sources: 25%", "_____no_output_____" ], [ "# <p style=\"text-align: center;\"> License</p><a id='License'></a>\nCopyright (c) 2020 Rushabh Nisher, Manali Sharma\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following 
conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbb8057c7f286dea9fb68610e9f6cf3c2b863892
173,199
ipynb
Jupyter Notebook
Open Data Cologne.ipynb
autonubil/public-notbooks
7fbbf9093f56dda26f823d64d715eec01762b532
[ "MIT" ]
2
2018-04-18T11:41:35.000Z
2018-04-18T11:41:56.000Z
Open Data Cologne.ipynb
autonubil/public-notbooks
7fbbf9093f56dda26f823d64d715eec01762b532
[ "MIT" ]
null
null
null
Open Data Cologne.ipynb
autonubil/public-notbooks
7fbbf9093f56dda26f823d64d715eec01762b532
[ "MIT" ]
null
null
null
61.07158
30,204
0.576793
[ [ [ "## Offene Daten Köln (Open Data Cologne)\nSource: https://offenedaten-koeln.de/\nTopics: noise levels, traffic data and climate", "_____no_output_____" ] ], [ [ "import json\nimport requests\nimport pandas as pd\nimport datetime\nimport numpy as np\nurl = \"https://offenedaten-koeln.de/api/3/action/package_show?id=d367488b-fe8c-42ac-822d-113951831ead\"\nr = requests.get(url)\ndata = json.loads(r.content)\n\nd0 = pd.DataFrame(data['result'][0]['resources'])\ndf_csv = d0[d0['format'] =='csv']\ndf_csv = df_csv[df_csv.name.str.contains(\"Geschwindigkeitsueberwachung \")]\ndef get_datetime(row):\n    return parse_date(row[\"vorfallsdatum\"]) + parse_time(row[\"vorfallsuhrzeit\"])\n# Incident date (vorfallsdatum) format: D/MM/YY\ndef parse_date(date):\n    date = str(date)\n    year = int(\"20\" + date[-2:])\n    month = int(date[-4:-2])\n    day = int(date[:-4])\n    datetime_parsed = datetime.datetime(year,month,day)\n    return datetime_parsed\n\n# Incident time (vorfallsuhrzeit): \"5\" = 00:05; \" 335\" = 03:35; \"1211\" = 12:11\n\ndef parse_time(time):\n    time = str(time)\n    try:\n        minutes = int(time[-2:])\n    except:\n        return np.nan\n    try:\n        hours = int(time[:-2])\n    except:\n        hours = 0\n    time_parsed = datetime.timedelta(hours = hours,minutes= minutes)\n    return time_parsed\n\ndef parse_hours(time):\n    time = str(time)\n    try:\n        minutes = int(time[-2:])\n    except:\n        return np.nan\n    try:\n        hours = int(time[:-2])\n    except:\n        hours = 0\n    return hours%24", "_____no_output_____" ], [ "df = pd.DataFrame()\nfor row in df_csv.url:\n    monthly_data = pd.read_csv(row,delimiter=\";\")\n    monthly_data['datetime'] = monthly_data.apply(get_datetime,axis=1)\n    df = df.append(monthly_data)\ndf['hours'] = df['vorfallsuhrzeit'].apply(parse_hours)\ndf['ueberschreitung'] = pd.to_numeric(df.ueberschreitung,errors='coerce')\n\n%matplotlib inline\ndf.plot.scatter(x='hours', y= 'ueberschreitung')", "_____no_output_____" ], [ "df[df.ueberschreitung>0].groupby('hours')['ueberschreitung'].describe()", "_____no_output_____" ], [ 
"df[df.ueberschreitung>0].head(1000).groupby('standort')['ueberschreitung'].describe().sort_values(\"count\",ascending= False)", "_____no_output_____" ], [ "df[df['standort'] == 99]['ueberschreitung'].plot()", "_____no_output_____" ], [ "df.head(1000).plot(x='datetime', y= 'ueberschreitung')", "_____no_output_____" ], [ "list(d0.tail(3).url)", "_____no_output_____" ], [ "pd.read_csv('https://offenedaten-koeln.de/sites/default/files/Kombistandorte_Geschwindigkeit-und_oder_Rotlichtfaelle.csv',delimiter=\";\")", "_____no_output_____" ], [ "pd.read_csv('https://offenedaten-koeln.de/sites/default/files/Mobilstandorte.csv',delimiter=\";\")", "_____no_output_____" ], [ "pd.read_csv('https://offenedaten-koeln.de/sites/default/files/Stationaerstandorte.csv',delimiter=\";\").head(100)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb80c318e0a9f187d63268ef00f2817c8ea75b8
256,621
ipynb
Jupyter Notebook
Amin/DC2019-Amin Aria-Successful Start.ipynb
ariashahverdi/DC2019
6078c2a337b5c30d67cf63cf16a5434b51ee8d37
[ "MIT" ]
null
null
null
Amin/DC2019-Amin Aria-Successful Start.ipynb
ariashahverdi/DC2019
6078c2a337b5c30d67cf63cf16a5434b51ee8d37
[ "MIT" ]
null
null
null
Amin/DC2019-Amin Aria-Successful Start.ipynb
ariashahverdi/DC2019
6078c2a337b5c30d67cf63cf16a5434b51ee8d37
[ "MIT" ]
null
null
null
172.576328
163,018
0.847557
[ [ [ "# DC 2019 - Successful Business Start", "_____no_output_____" ], [ "**Your Name:** Amin Aria\n\n", "_____no_output_____" ], [ "# Small businesses success\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport sklearn\nDC19 = pd.read_csv('..\\sbdc_data_merged.csv')", "_____no_output_____" ], [ "# Show the first 5 rows of data, just to demonstrate it has loaded\nDC19.head()", "_____no_output_____" ], [ "DC19['Impact: Started Business'].value_counts(normalize = True)\nDC19[['Attended Group Training?','Impact: Started Business']][DC19['Attended Group Training?']=='Yes'].groupby(['Impact: Started Business']).size()\n#DC19['Total Counseling Time, hrs'][(DC19['Impact: Started Business'] == 'Yes')]", "_____no_output_____" ], [ "DC19[DC19['Total Counseling Time, hrs'] < 40].boxplot(column = 'Total Counseling Time, hrs', by ='Impact: Started Business',grid = False )", "_____no_output_____" ], [ "DC19[['Impact: Capital Investments','Impact: Started Business']].groupby(['Impact: Started Business'])\\\n.median()\n#.agg(np.std,ddof=0)", "_____no_output_____" ], [ "#####------GroupBy Statistics-----#####\n#fig, ax = plt.subplots(figsize=(15,7))\n # (DC19['Impact: Capital Investments']<400000) ].\\\n    \nA= DC19[(DC19['Impact: Started Business'] != 'Not applicable: Already in Business')] \\\n.groupby(['Industry Title','Impact: Started Business'], sort = True).size() \\\n.transform(lambda x: x/sum(x)) .unstack()\n#A['Industry Title']=A.index\n#plt.bar(A.index,A['Yes'],width = 0.3 )\nA\n#.unstack().plot(ax=ax,ylim=((0,0.02)))", "_____no_output_____" ], [ "####------Started Business Analysis-----#####\nDC19start= DC19[DC19['Impact: Started Business']== 'Yes']\nDC19start.shape", "_____no_output_____" ], [ "#####-----Correlation Box-----######\nimport seaborn as sns\n\n#DC19start.corr()\n#sns.pairplot(DC19start)\n#sns.heatmap(DC19['County'],DC19['Service 
Center'])", "_____no_output_____" ], [ "dc19r=DC19[['County', 'Initial Services Sought at First Visit',\n 'Attended Group Training?', 'Total Counseling Time, hrs',\n 'Business Status', 'Impact: Capital Investments',\n 'Impact: Created New Jobs', 'Impact: Revenue Increase','Company\\'s Gross Revenue, $',\n 'Industry Title', 'Ownership Gender', 'Owner\\'s Race','Owner\\'s Hispanic Origin']].copy()\n\n", "_____no_output_____" ], [ "####-----Handling categorical variables-----####\ndc19rc=pd.get_dummies(dc19r, prefix=None, prefix_sep='_')\ndc19rc.shape", "_____no_output_____" ] ], [ [ "# Classification Methods", "_____no_output_____" ] ], [ [ "####---- Data Pre-processing ----#####\n\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nle=LabelEncoder()\nle.fit(DC19['Impact: Started Business'])\ny=le.transform(DC19['Impact: Started Business']\\\n [DC19['Impact: Started Business'] != 'Not applicable: Already in Business'])\nX= dc19rc[DC19['Impact: Started Business'] != 'Not applicable: Already in Business']\n\n#size of training and cross-validation set\ntset=int(0.8*X.shape[0])\n\n##NEW Definition of Success including a positive Revenue Increase\n\nynew= (X['Impact: Revenue Increase']>0) & (y==2)", "_____no_output_____" ], [ "###----Input data for Impact excluded Analysis----###\n\ndc19rno=DC19[['County', 'Initial Services Sought at First Visit',\n 'Attended Group Training?', 'Total Counseling Time, hrs',\n 'Business Status','Company\\'s Gross Revenue, $',\n 'Industry Title', 'Ownership Gender', 'Owner\\'s Race','Owner\\'s Hispanic Origin']].copy()\ndc19rcno=pd.get_dummies(dc19rno, prefix=None, prefix_sep='_')\nXnoim= dc19rcno[DC19['Impact: Started Business'] != 'Not applicable: Already in Business']\nXnoim.head()", "_____no_output_____" ] ], [ [ "# Multinomial Logistic Regression", "_____no_output_____" ] ], [ [ "####------Multinomial logistic Regression------#####\n\n\nfrom sklearn.linear_model import LogisticRegressionCV\n\n\nclf = 
LogisticRegressionCV(cv=5, random_state=0,multi_class='multinomial').fit(Xnoim.head(tset), y[0:tset])\n#for the new definition of success\n#.fit(X,ynew)\n\ntotal=clf.predict(Xnoim)\n\nfrom sklearn.metrics import confusion_matrix\nsns.heatmap(confusion_matrix(y, total))", "_____no_output_____" ], [ "###----Out of Sample Performance----####\nlogstart=clf.predict(Xnoim.tail(X.shape[0]-tset))\nconfusion_matrix(y[tset:X.shape[0]], logstart)\n#fig, ax = plt.subplots(figsize=(15,7))\n#plt.scatter(y[tset:X.shape[0]], logstart)", "_____no_output_____" ], [ "####----Probabilistic Prediction-----#######\nlogp=clf.predict_proba(Xnoim.tail(100))\n\nppre = pd.DataFrame(logp[:,1])\nppre=pd.concat([pd.DataFrame(y[(X.shape[0]-100):X.shape[0]]),ppre],axis=1)\nppre.columns=['Real','Probability']\nppre['Real']= ppre['Real']/2\nplt.hist(ppre['Probability'][ppre['Real']==1.0],20,alpha=0.5, facecolor = 'g',label='Predicted Probability of Success',normed=True)\n#ppre", "_____no_output_____" ] ], [ [ "# Random Forest", "_____no_output_____" ] ], [ [ "###1-----Random Forest ----1#####\n\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Train a shallow random forest on the training split\nclf3 = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)\nclf3.fit(X.head(tset), y[0:tset])\ntotal3=clf3.predict(X)\nconfusion_matrix(y, total3)\n", "_____no_output_____" ], [ "###----Out of Sample Performance----####\nlogstart3=clf3.predict(X.tail(X.shape[0]-tset))\nconfusion_matrix(y[tset:X.shape[0]], logstart3)", "_____no_output_____" ] ], [ [ "# SVM", "_____no_output_____" ] ], [ [ "###1-----Support Vector Machine ----1#####\n\nfrom sklearn.svm import SVC\n\nclf2 = 
SVC(gamma='auto')\nclf2.fit(X.head(tset), y[0:tset]) \nSVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',\n max_iter=-1, probability=False, random_state=None, shrinking=True,\n tol=0.001, verbose=False)\ntotal2=clf2.predict(X)\nconfusion_matrix(y, total2)\n", "_____no_output_____" ], [ "###----Out of Sample Performance----####\nlogstart2=clf2.predict(X.tail(X.shape[0]-tset))\nconfusion_matrix(y[tset:X.shape[0]], logstart2)", "_____no_output_____" ] ], [ [ "# Feature Selection (Most Important Ones)", "_____no_output_____" ] ], [ [ "#####----- Univariate Selection-------####\n\n# Feature Extraction with Univariate Statistical Tests (Chi-squared for classification)\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\n# load data\n# feature extraction\ntest = SelectKBest(score_func=chi2, k=4)\n\n###=------Scaling Data\nfrom sklearn.preprocessing import MinMaxScaler\nscaler = MinMaxScaler()\nscaler.fit(Xnoim)\nXtrans=scaler.transform(Xnoim)\n\nfit = test.fit(Xtrans, y)\n# summarize scores\nnp.set_printoptions(precision=3)\nBest_Features = pd.DataFrame(fit.scores_)\nBest_Features=pd.concat([pd.DataFrame(Xnoim.columns),Best_Features],axis=1)\nBest_Features.columns=['feature','score']\nBest_Features=Best_Features.sort_values(by=['score'],ascending= False)\n#print(fit.scores_)\n#features = fit.transform(X)\n# summarize selected features\nprint(Best_Features[0:10])", " feature score\n34 Business Status_Started with SBDC 5992.907019\n32 Business Status_Pre-venture/Nascent 1667.000000\n33 Business Status_Start-up (in bus. 
< 1 year) 490.404319\n31 Business Status_In Business (> 1 year) 444.688662\n55 Ownership Gender_Choose not to respond 184.694786\n60 Ownership Gender_Woman-Owned 148.557081\n30 Attended Group Training?_Yes 70.236316\n3 County_Anne Arundel 67.643437\n0 Total Counseling Time, hrs 66.082508\n61 Ownership Gender_Woman-Owned (WOSB) Certified 54.027594\n" ] ], [ [ "# Regression for Prediction of Increased Revenue", "_____no_output_____" ], [ "It is likely that regression analysis provide more accuracy on industry-wise data", "_____no_output_____" ] ], [ [ "DC19PST=DC19[DC19['Industry Title']=='Professional, Scientific, and Technical Services']\nDC19PST=pd.DataFrame(DC19PST)\nsns.pairplot(DC19PST._get_numeric_data())\n\n#DC19['Industry Title'].groupby", "_____no_output_____" ], [ "from sklearn import linear_model\n\n#lm = linear_model.LinearRegression()\n#model = lm.fit(X_train, y_train)\n#'Not applicable: Already in Business'\n\nXri = DC19PST[(DC19PST['Impact: Started Business'] =='Not applicable: Already in Business')\\\n & (DC19PST['Impact: Revenue Increase'] < 250000 ) ]\n\n#Xri = Xri[['County', 'Initial Services Sought at First Visit',\n# 'Attended Group Training?', 'Impact: Created New Jobs','Total Counseling Time, hrs',\n# 'Business Status', 'Company\\'s Total employees',\n# 'Company\\'s Gross Revenue, $', 'Ownership Gender', 'Owner\\'s Race', 'Industry Title']]\n\nXri=pd.get_dummies(Xri, prefix=None, prefix_sep='_')\nyri = DC19PST['Impact: Revenue Increase']\\\n [(DC19PST['Impact: Started Business'] == 'Not applicable: Already in Business')\\\n & (DC19PST['Impact: Revenue Increase'] < 250000 )] \n \nyci = DC19PST['Impact: Capital Investments']\\\n[DC19PST['Impact: Started Business'] == 'Not applicable: Already in Business']\n \n\nscaler.fit(Xri)\nXritr=scaler.transform(Xri) \nXritr=pd.DataFrame(Xritr)\nXritr.columns=Xri.columns\n\n#training and testing set\ntset2=int(0.7*Xri.shape[0])\n\n#Xritr = SelectKBest(chi2, k=2).fit_transform(Xritr, yri)\nlm = 
linear_model.RidgeCV(alphas=(0.1,1,10))\nmodel = lm.fit(Xritr.head(tset), yri[0:tset])\n\n#y_pred = lm.predict(X_test)\n#print(y_pred)\nprint(lm.score(Xritr.head(tset2), yri[0:tset2]))\nprint(lm.score(Xritr.tail(Xri.shape[0]-tset2),yri[tset2:Xritr.shape[0]]))\nprint(\"\\n\")", "0.999977030564\n0.999972368485\n\n\n" ], [ "y_pred = lm.predict(Xritr.tail(Xri.shape[0]-tset2))\nplt.scatter(y_pred,yri[tset2:Xritr.shape[0]],s=50)\nplt.xlabel=\"Predicted increase in Revenue\"\nplt.ylabel ='Real increase in Revenue'\nplt.show()", "_____no_output_____" ], [ "DC19PST.columns", "_____no_output_____" ] ], [ [ "Xri['County','Initial Services Sought at First Visit',\n 'Attended Group Training?', 'Total Counseling Time, hrs',\n 'Business Status', 'Company's Total employees',\n 'Company's Gross Revenue, $', 'Ownership Gender',\n 'Owner's Race', 'Industry Title',]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "raw" ] ]
cbb810dfae5dcf4e6eb33c935975cc6065b69ab7
459,034
ipynb
Jupyter Notebook
examples/earthquakes.ipynb
tingiskhan/pyfilter
2d7ef0638c0c0fe70d8e01d6bcf8131d901144fd
[ "MIT" ]
61
2018-07-12T17:17:37.000Z
2022-03-19T17:37:59.000Z
examples/earthquakes.ipynb
tingiskhan/pyfilter
2d7ef0638c0c0fe70d8e01d6bcf8131d901144fd
[ "MIT" ]
12
2018-08-26T20:01:03.000Z
2021-12-04T16:42:48.000Z
examples/earthquakes.ipynb
tingiskhan/pyfilter
2d7ef0638c0c0fe70d8e01d6bcf8131d901144fd
[ "MIT" ]
9
2019-01-20T16:31:03.000Z
2021-12-24T07:10:04.000Z
1,765.515385
283,096
0.95997
[ [ [ "# Earthquakes", "_____no_output_____" ], [ "In this notebook we'll try and model the intensity of earthquakes, basically replicating one of the examples in [this](http://user.it.uu.se/~thosc112/dahlin2014-lic.pdf) paper. To that end, let's first grab the data we need from USGS. We then filter the data to only include earthquakes of a magnitude 7.0, on the Richter scale, or higher.", "_____no_output_____" ] ], [ [ "from requests import get\nfrom datetime import datetime\nfrom json import loads\nimport pandas as pd\n\nurl = url = \"https://earthquake.usgs.gov/fdsnws/event/1/query.geojson?minsig=600\"\n\nresp = get(url, params={\"starttime\": datetime(1900, 1, 1), \"endtime\": datetime(2021, 1, 1)})\njson = resp.json()\n\ndata = pd.DataFrame.from_dict((i[\"properties\"] for i in json[\"features\"]), orient=\"columns\")\ndata.set_index(\"time\", inplace=True)\n\ndata.index = pd.to_datetime(data.index, unit=\"ms\")\ndata = data.where(data[\"mag\"] >= 7.0).sort_index()\n\nby_year = data.groupby(data.index.year)[\"mag\"].count()\nby_year.plot(figsize=(16, 9), color=\"gray\")", "_____no_output_____" ] ], [ [ "Next, we'll setup the model for the data. 
We'll use the same one as Dahlin uses, i.e.\n\\begin{cases}\nd \\log {\\lambda_t} = \\kappa (\\mu - \\log{\\lambda_t})dt + \\sigma dW_t, \\\\\nY_t \\sim \\mathcal{P} \\left ( \\lambda_t \\right),\n\\end{cases}\nwhere $\\mathcal{P(x)}$ denotes a Poisson distribution with rate $x$.", "_____no_output_____" ] ], [ [ "from pyfilter.timeseries import models as m, GeneralObservable, StateSpaceModel\nfrom pyfilter.distributions import Prior\nfrom torch.distributions import Poisson, Normal, Exponential, LogNormal\nimport torch\n\n\nclass EarthquakeObservable(GeneralObservable):\n def build_density(self, x):\n return Poisson(rate=x.values.exp(), validate_args=False)\n \npriors = Prior(Exponential, rate=5.0), Prior(Normal, loc=0.0, scale=1.0), Prior(LogNormal, loc=0.0, scale=1.0)\ninitial_state_mean = Prior(Normal, loc=0.0, scale=1.0)\nlatent = m.OrnsteinUhlenbeck(*priors, initial_state_mean=initial_state_mean, dt=1.0, ndim=1)\n\nobs = EarthquakeObservable(torch.Size([]), ())\n\nssm = StateSpaceModel(latent, obs)", "_____no_output_____" ] ], [ [ "Next, we'll perform the inference. 
For this model we'll use PMMH together with a gradient based proposal, corresponding to PMH1 of the dissertation referenced earlier.", "_____no_output_____" ] ], [ [ "from pyfilter.inference.batch.mcmc import PMMH, proposals as p\nfrom pyfilter.filters.particle import APF\n\nas_tensor = torch.from_numpy(by_year.values).int()\n\nfilt = APF(ssm, 500, record_states=True)\nalg = PMMH(filt, 3000, num_chains=6, proposal=p.GradientBasedProposal(scale=5e-2)).cuda()\n\nstate = alg.fit(as_tensor.cuda())", "PMMH: 100%|████████████████████████████████████████████████████████████████████████| 3000/3000 [33:05<00:00, 1.51it/s]\n" ] ], [ [ "Plot one smoothed realization.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(16, 9))\n\nsmoothed = filt.smooth(state.filter_state.states).mean((1, 2)).exp().cpu().numpy()[1:]\nax.plot(by_year.index, smoothed, color=\"gray\", label=\"Rate\")\n\nax2 = ax.twinx()\nby_year.plot(ax=ax2, color=\"salmon\", alpha=0.75, label=\"Earthquakes\")\n\nfig.legend()", "_____no_output_____" ] ], [ [ "And finally plot the posterior distributions of the parameters.", "_____no_output_____" ] ], [ [ "from pyfilter.inference.utils import params_to_tensor\nfrom arviz import plot_trace\n\nparameters = state.samples.values().transpose(1, 0).cpu().numpy()\n\n# fig, ax = plt.subplots(parameters.shape[-1], figsize=(16, 9))\n\nplot_trace(parameters)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb8149cbee79d4f25faf9dc5b14f26e566cc3fc
43,581
ipynb
Jupyter Notebook
notebooks/copied/map_bare_list.ipynb
turbomam/scoped-mapping
09117cc0964d3fc938434102b3c36c2a45d30dce
[ "MIT" ]
null
null
null
notebooks/copied/map_bare_list.ipynb
turbomam/scoped-mapping
09117cc0964d3fc938434102b3c36c2a45d30dce
[ "MIT" ]
4
2021-07-07T19:52:55.000Z
2021-07-07T19:57:47.000Z
notebooks/copied/map_bare_list.ipynb
turbomam/scoped-mapping
09117cc0964d3fc938434102b3c36c2a45d30dce
[ "MIT" ]
null
null
null
44.882595
102
0.403937
[ [ [ "import scoped_mapping\ninputs = ['Homo-sapiens', 'mus musculus', 'rattus norvegicus']\nsearchres_annotations = scoped_mapping.search_get_annotations_wrapper(inputs,\n bad_chars='._-',\n cat_name='test', \n ontoprefix='ncbitaxon',\n query_fields='label',\n rr=3)\n\n\nsearchres_annotations", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cbb827c5c14f392a78bf5e988f869ffd8718eeab
732,949
ipynb
Jupyter Notebook
vignettes/gibbons_et_al.ipynb
CSB5/BEEM
013fd0540f2e811a689040d4ab3d66081896f2f6
[ "MIT" ]
7
2019-02-04T19:44:08.000Z
2020-10-27T16:09:01.000Z
vignettes/gibbons_et_al.ipynb
CSB5/BEEM
013fd0540f2e811a689040d4ab3d66081896f2f6
[ "MIT" ]
2
2021-06-03T01:06:53.000Z
2021-06-08T16:45:15.000Z
vignettes/gibbons_et_al.ipynb
lch14forever/BEEM
87c9cdb785e8df684f4f6467573ed73ff9bab68c
[ "MIT" ]
3
2018-03-18T07:27:28.000Z
2021-08-24T16:21:53.000Z
1,462.972056
183,714
0.948174
[ [ [ "# Analysis of one-year trace of gut microbiome\nThis notebook records the code used for analyzing data from [Gibbons _et. al._ (2017)](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005364). ", "_____no_output_____" ], [ "## Load required packages", "_____no_output_____" ] ], [ [ "library(beem)\nlibrary(grid)\nlibrary(ggplot2)\nlibrary(ggsci)\nlibrary(igraph)\nlibrary(reshape2)", "Loading required package: foreach\nLoading required package: doMC\nLoading required package: iterators\nLoading required package: parallel\n\nAttaching package: ‘igraph’\n\nThe following objects are masked from ‘package:stats’:\n\n decompose, spectrum\n\nThe following object is masked from ‘package:base’:\n\n union\n\n" ] ], [ [ "## Load functions and data", "_____no_output_____" ] ], [ [ "input.da <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/DA.counts.txt', head=F, row.names=1)\nmetadata.da <- read.table('~/BEEM/vignettes//gibbons_et_al_analysis/DA.metadata.txt', head=T)\n\n## For DB, point #74 has extremely high of one species and #180 is sampled too far from the previous time point\ninput.db <- read.table('~/BEEM/vignettes//gibbons_et_al_analysis/DB.counts.txt', head=F, row.names=1)[,-c(74,180)]\nmetadata.db <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/DB.metadata.txt', head=T)[-c(74,180),]\n\n## For M3, data from 330:332 are too far from previous time point\ninput.m3 <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/M3.counts.txt', head=F, row.names=1)[,1:329] \nmetadata.m3 <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/M3.metadata.txt', head=T)[1:329,]\n\ninput.f4 <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/F4.counts.txt', head=F, row.names=1) \nmetadata.f4 <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/F4.metadata.txt', head=T)", "_____no_output_____" ] ], [ [ "## Run BEEM", "_____no_output_____" ], [ "### Individual DA", "_____no_output_____" ] ], [ [ "counts.da <- input.da[-1,] + 0.0001 ## added 
pseudo value for R>3.5\ncolnames(counts.da) <- as.character(input.da[1,])\nres.da <- EM(dat=counts.da,meta=metadata.da, dev=10, verbose=FALSE,\n min_iter=50, max_iter=100, converge_thre = 1e-4, \n scaling = 10000, ncpu=4, seed=0)", "[!]: Small number (<10) of biological replicates detected. Note that BEEM works best with >10 biological replicates or the time series contains intrinsic infrequent perturbations.\nThe following species are not recommended due to 0 values:\n\nThe following species are not recommended due to their low/high abudances:\n552961, 682726, 529021, 188634, 181765, 593352, 302439, 574965, 1809696, 4407174, 4459708, 310391, 177224, 1073276, 586030, 1106789, 851715, 328438, 316489, 198583, 365868\nThe following species is recommended as the reference:\n1111783\nBEEM selecting reference species as default...\nReference species: 1111783\n" ] ], [ [ "### Individual M3", "_____no_output_____" ] ], [ [ "counts.m3 <- input.m3[-1,]\ncolnames(counts.m3) <- as.character(input.m3[1,])\nres.m3 <- EM(dat=counts.m3, meta=metadata.m3, dev=10, verbose=FALSE,\n min_iter=50, max_iter=100, converge_thre = 1e-4, \n scaling = 10000, ncpu=4, seed=0)", "[!]: Small number (<10) of biological replicates detected. 
Note that BEEM works best with >10 biological replicates or the time series contains intrinsic infrequent perturbations.\nThe following species are not recommended due to 0 values:\n577562\nThe following species are not recommended due to their low/high abudances:\n367104, 4479888, 4444095, 1136492, 367517, 182497, 577562, 180045, 4348111, 3257594, 340642, 536212, 581021, 938834, 643390, 1110862\nThe following species is recommended as the reference:\n3700151\nBEEM selecting reference species as default...\nReference species: 3700151\n" ] ], [ [ "### Individual DB", "_____no_output_____" ] ], [ [ "counts.db <- input.db[-1,]\ncolnames(counts.db) <- as.character(input.db[1,])\nres.db <- EM(dat=counts.db,meta=metadata.db, dev=10, verbose=FALSE,\n min_iter=50, max_iter=100, converge_thre=1e-4, \n scaling = 10000, ncpu=4, seed=0)", "[!]: Small number (<10) of biological replicates detected. Note that BEEM works best with >10 biological replicates or the time series contains intrinsic infrequent perturbations.\nThe following species are not recommended due to 0 values:\n4297420, 294363, 191633, 1110862, 4414388, 198910\nThe following species are not recommended due to their low/high abudances:\n585220, 1106927, 1111783, 529021, 4361727, 147040, 4297420, 194201, 528935, 520734, 183809, 294363, 191633, 4385756, 152842, 346253, 4407703, 2079328, 1110862, 328438, 316489, 4371419\nThe following species is recommended as the reference:\n682726\nBEEM selecting reference species as default...\n[!]: The reference species has zero abundance in some samples. 
This will treated as non-zeros by adding a pseudo count.\nReference species: 682726\n" ] ], [ [ "### Individual F4", "_____no_output_____" ] ], [ [ "counts.f4 <- input.f4[-1,]\ncolnames(counts.f4) <- as.character(input.f4[1,])\nres.f4 <- EM(dat=counts.f4,meta=metadata.f4, dev=10, verbose=FALSE,\n min_iter=50, max_iter=100, converge_thre=1e-4, \n scaling = 10000, ncpu=4, seed=0)", "[!]: Small number (<10) of biological replicates detected. Note that BEEM works best with >10 biological replicates or the time series contains intrinsic infrequent perturbations.\nThe following species are not recommended due to 0 values:\n\nThe following species are not recommended due to their low/high abudances:\n367104, 4479888, 851668, 368324, 358016, 364109, 1105014, 528634, 271563, 1976824, 560336, 531444, 181985, 4435546, 3718952, 3335289, 2430157, 516298, 367394\nThe following species is recommended as the reference:\n193041\nBEEM selecting reference species as default...\nReference species: 193041\n" ] ], [ [ "## Infer parameters", "_____no_output_____" ] ], [ [ "params.da <- paramFromEM(res.da, counts.da, metadata.da, ncpu=4)\nparams.m3 <- paramFromEM(res.m3, counts.m3, metadata.m3, ncpu=4)\nparams.db <- paramFromEM(res.db, counts.db, metadata.db, ncpu=4)\nparams.f4 <- paramFromEM(res.f4, counts.f4, metadata.f4, ncpu=4)", "_____no_output_____" ] ], [ [ "## Functions for analysis", "_____no_output_____" ] ], [ [ "int.net <- function(counts, parms, sig=1, title){\n ## plot interaction network\n minmax <- function(x) (x-min(x))/(max(x)-min(x))\n annote <- read.table('~/BEEM/vignettes/gibbons_et_al_analysis/all_otu_mapping.txt',head=F, row.names=1)\n counts.mean <- rowMeans(counts)\n int <- parms[parms$parameter_type=='interaction' & parms$source_taxon!=parms$target_taxon,]\n int.f <- int[int$significance>sig,2:4]\n g <- graph.data.frame(int.f[,1:2])\n V(g)$color <- annote[V(g)$name,]$V4\n V(g)$size <- log(counts.mean[V(g)$name]) +4\n E(g)$color <- 
ifelse(int.f$value>0,fill_cols[12],fill_cols[13])\n E(g)$lty <- ifelse(int.f$value>0,1,2)\n E(g)$width <- minmax(abs(int.f$value) ) * 2 + 0.5\n plot(g, main=title,asp=0,edge.arrow.size=0.5,edge.curved=.15)\n return(g)\n}", "_____no_output_____" ] ], [ [ "## Biomass trajectory of individual DA\nNote the periodic behaviour of the biomass -- the period is around 90 days (i.e. 3 months).", "_____no_output_____" ] ], [ [ "par(mfrow = c(4,1))\nplot(x=metadata.da$measurementID,y=biomassFromEM(res.da), xlim=c(0, 450), type='b', pch=19, xlab='Date', ylab='Estimated biomass', log='y')\nplot(x=metadata.m3$measurementID,y=biomassFromEM(res.m3), xlim=c(0, 450), type='b', pch=19, xlab='Date', ylab='Estimated biomass', log='y')\nplot(x=metadata.db$measurementID,y=biomassFromEM(res.db), xlim=c(0, 450), type='b', pch=19, xlab='Date', ylab='Estimated biomass', log='y')\nplot(x=metadata.f4$measurementID,y=biomassFromEM(res.f4), xlim=c(0, 450), type='b', pch=19, xlab='Date', ylab='Estimated biomass', log='y')", "_____no_output_____" ] ], [ [ "## Plot interaction network", "_____no_output_____" ] ], [ [ "fill_cols <- pal_simpsons(c(\"springfield\"))(16)\nga <- int.net(counts.da, params.da, 1.5, 'DA')", "_____no_output_____" ], [ "gm <- int.net(counts.m3, params.m3, 1.5, 'M3')", "_____no_output_____" ], [ "gb <- int.net(counts.db, params.db, 1.5, 'DB')", "_____no_output_____" ], [ "f4 <- int.net(counts.f4, params.f4, 1.5, 'F4')", "_____no_output_____" ], [ "res.da$counts <- counts.da\nres.da$metadata <- metadata.da\nsaveRDS(res.da, '~/BEEM/vignettes/gibbons_et_al_analysis/DA.EM.rds')\nwrite.table(params.da, '~/BEEM/vignettes/gibbons_et_al_analysis/DA.params.txt', col.names=TRUE, row.names=FALSE, sep='\\t', quote=FALSE)\n\nres.m3$counts <- counts.m3\nres.m3$metadata <- metadata.m3\nsaveRDS(res.m3, '~/BEEM/vignettes/gibbons_et_al_analysis/M3.EM.rds')\nwrite.table(params.m3, '~/BEEM/vignettes/gibbons_et_al_analysis/M3.params.txt', col.names=TRUE, row.names=FALSE, sep='\\t', 
quote=FALSE)\n\nres.db$counts <- counts.db\nres.db$metadata <- metadata.db\nsaveRDS(res.db, '~/BEEM/vignettes/gibbons_et_al_analysis/DB.EM.rds')\nwrite.table(params.db, '~/BEEM/vignettes/gibbons_et_al_analysis/DB.params.txt', col.names=TRUE, row.names=FALSE, sep='\\t', quote=FALSE)\n\nres.f4$counts <- counts.f4\nres.f4$metadata <- metadata.f4\nsaveRDS(res.f4, '~/BEEM/vignettes/gibbons_et_al_analysis/F4.EM.rds')\nwrite.table(params.f4, '~/BEEM/vignettes/gibbons_et_al_analysis/F4.params.txt', col.names=TRUE, row.names=FALSE, sep='\\t', quote=FALSE)", "_____no_output_____" ], [ "sessionInfo()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cbb83c057945f23800bc1a5a1d9cb164c88dd486
170,234
ipynb
Jupyter Notebook
code/hiv_model.ipynb
colinmsnow/ModSimPy
4cf8e8a64a398c72498ea08ddffa605a61394527
[ "MIT" ]
null
null
null
code/hiv_model.ipynb
colinmsnow/ModSimPy
4cf8e8a64a398c72498ea08ddffa605a61394527
[ "MIT" ]
null
null
null
code/hiv_model.ipynb
colinmsnow/ModSimPy
4cf8e8a64a398c72498ea08ddffa605a61394527
[ "MIT" ]
null
null
null
164.636364
15,472
0.863453
[ [ [ "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "_____no_output_____" ], [ "state=State(R=200,L=0,E=0,V=0.0000004)\nsystem=System(Γ=1.36, μ=.00136, τ=.2, β=.00027, ρ=.1, α=.036, σ=2, δ=.33, π=100)\ninit = State(R=200,L=0,E=0, V=0.0000004)\nt0 = 1\ndt=.01\nt_end = 120/dt\nframe = TimeFrame(columns=init.index)\n", "_____no_output_____" ], [ "def update_func(system,t, state):\n unpack(system)\n unpack(state)\n r,l,e,v=state\n \n r+=(Γ*τ-μ*R-β*R*V)*dt\n l+=(ρ*β*R*V-μ*L-α*L)*dt\n e+=((1-ρ)*β*R*V+α*L-δ*E)*dt\n v+=(π*E-σ*V)*dt\n return State(R=r, L=l, E=e,V=v)", "_____no_output_____" ], [ "def run_simulation():\n \n frame = TimeFrame(columns=init.index)\n frame.row[t0] = init\n \n for t in linrange(t0, t_end):\n frame.row[t+1] = update_func(system,t,frame.row[t])\n return frame", "_____no_output_____" ], [ "results=run_simulation()", "_____no_output_____" ], [ "results.plot()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nplot(results.index, results.R)\n", "_____no_output_____" ], [ "plt.semilogy(results.index, results.L)\ndecorate( ylim=[.1,100])", "_____no_output_____" ], [ "plt.semilogy(results.index, results.E)\ndecorate( ylim=[.1,100])", "_____no_output_____" ], [ "plt.semilogy(results.index, results.V)\ndecorate( ylim=[.1,10000])", "_____no_output_____" ], [ "def slope_func(state,t, system):\n unpack(system)\n R,L,E,V=state\n \n drdt=(Γ*τ-μ*R-β*R*V)\n dldt=(ρ*β*R*V-μ*L-α*L)\n dedt=((1-ρ)*β*R*V+α*L-δ*E)\n dvdt=(π*E-σ*V)\n return drdt,dldt,dedt,dvdt", "_____no_output_____" ], [ "system=System(init=init, t0=1,t_end=120, Γ=1.36, μ=.00136, τ=.2, β=.00027, ρ=.1, α=.036, σ=2, δ=.33, π=100)\n\nresults,details=run_ode_solver(system, slope_func, max_step=.01)\ndetails", "_____no_output_____" ], [ 
"results.plot()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nplt.plot(results.index, results.R)\ndecorate(xlabel='days',\n ylabel='R')\n", "_____no_output_____" ], [ "plt.semilogy(results.index, results.L)\ndecorate( ylim=[.1,100],xlabel='days',\n ylabel='L')\n", "_____no_output_____" ], [ "plt.semilogy(results.index, results.E)\ndecorate( ylim=[.1,100],xlabel='days',\n ylabel='E')", "_____no_output_____" ], [ "plt.semilogy(results.index, results.V)\ndecorate( ylim=[.1,10000],xlabel='days',\n ylabel='V')\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb847b241473bf4b4c0f057aa6616b9ce2ff05b
14,863
ipynb
Jupyter Notebook
nbs/2.1_repr.codeberta.ipynb
WM-CSCI-435-F19/data-science-4-software-engineering
3692163df710550d4ee5b399a2a184968a0f18c6
[ "Apache-2.0" ]
5
2020-12-08T00:38:24.000Z
2021-11-16T20:00:59.000Z
nbs/2.1_repr.codeberta.ipynb
WM-CSCI-435-F19/data-science-4-software-engineering
3692163df710550d4ee5b399a2a184968a0f18c6
[ "Apache-2.0" ]
110
2020-09-26T18:36:35.000Z
2022-03-12T00:54:35.000Z
nbs/2.1_repr.codeberta.ipynb
WM-CSCI-435-F19/data-science-4-software-engineering
3692163df710550d4ee5b399a2a184968a0f18c6
[ "Apache-2.0" ]
3
2020-12-09T19:23:10.000Z
2021-02-16T12:54:16.000Z
35.137116
2,150
0.514365
[ [ [ "# default_exp repr.codeberta", "_____no_output_____" ] ], [ [ "# Training a Code Berta Transformer\n\n> This module comprises a code berta (roberta for source code) to use it for future vectorization projects", "_____no_output_____" ] ], [ [ "# export\n# Imports\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom annoy import AnnoyIndex\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn import decomposition\nfrom pathlib import Path\nfrom transformers import pipeline\nfrom tqdm.notebook import tqdm", "_____no_output_____" ], [ "#hide\nfrom nbdev.showdoc import *", "_____no_output_____" ], [ "#! pip -q install transformers annoy\n#! wget -q https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/java.zip\n#! wget -q https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip\n! unzip -qq java.zip\n#! unzip -qq python.zip", "_____no_output_____" ], [ "def jsonl_list_to_dataframe(file_list, columns=['language', 'docstring', 'code']):\n \"\"\"Load a list of jsonl.gz files into a pandas DataFrame.\"\"\"\n return pd.concat([pd.read_json(f,\n orient='records', \n compression='gzip',\n lines=True)[columns] \n for f in file_list], sort=False)", "_____no_output_____" ], [ "def get_dfs(path, splits = [\"train\", \"valid\", \"test\"]):\n \"\"\"Grabs the different data splits and converts them into dataframes\"\"\"\n dfs = []\n for split in [\"train\", \"valid\", \"test\"]:\n files = sorted((path/split).glob(\"**/*.gz\"))\n df = jsonl_list_to_dataframe(files)\n dfs.append(df)\n \n return dfs", "_____no_output_____" ], [ "path = Path('.')", "_____no_output_____" ], [ "java_df = get_dfs(path/\"codesearch/java/final/jsonl\", [\"valid\"])[0]\npython_df = get_dfs(path/\"codesearch/python/final/jsonl\", [\"valid\"])[0]", "_____no_output_____" ], [ "python_df.head()", "_____no_output_____" ], [ "python_df.shape", "_____no_output_____" ], [ "java_df.head()", "_____no_output_____" ], [ "java_df.shape", "_____no_output_____" ], [ "# 
hide\n# This script needs to be converted to a jupyter notebook.\n! python /tf/data/scripts/run_language_modeling.py \\\n --output_dir=/tf/data/models/JavaBert-v1 \\\n --model_type=roberta \\\n --model_name_or_path=roberta-base \\\n --do_train \\\n --train_data_file=/tf/main/nbs/test_data/text.txt \\\n --do_eval \\\n --eval_data_file=/tf/main/nbs/test_data/text.txt \\\n --mlm", "_____no_output_____" ], [ "fill_mask = pipeline(\n \"fill-mask\",\n model=\"/tf/data/models/JavaBert-v1\",\n tokenizer=\"/tf/data/models/JavaBert-v1\"\n)\n\nresult = np.array(fill_mask(\"public static void <mask>(String[] args)\")); result", "Model name '/tf/data/models/JavaBert-v1' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased, openai-gpt, transfo-xl-wt103, gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2, ctrl, xlnet-base-cased, xlnet-large-cased, xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280, roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector, distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased, distilbert-base-uncased-finetuned-sst-2-english, 
albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2, camembert-base, umberto-commoncrawl-cased-v1, umberto-wikipedia-uncased-v1, t5-small, t5-base, t5-large, t5-3b, t5-11b, xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german, flaubert-small-cased, flaubert-base-uncased, flaubert-base-cased, flaubert-large-cased). We assumed '/tf/data/models/JavaBert-v1/modelcard.json' was a path or url to a model card file named modelcard.json or a directory containing such a file but couldn't find any such file at this path or url.\nCreating an empty model card.\n" ], [ "from nbdev.export import notebook2script\nnotebook2script()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb85ea8649e96347e01cffe7f45d5eafa21c827
17,749
ipynb
Jupyter Notebook
Note_books/NHL_data_shape_notebooks/.ipynb_checkpoints/Fix_df_game-checkpoint.ipynb
joeyamosjohns/final_project_nhl_prediction_first_draft
8bffe1c82c76ec4aa8482d38d9eb5efad1644496
[ "MIT" ]
null
null
null
Note_books/NHL_data_shape_notebooks/.ipynb_checkpoints/Fix_df_game-checkpoint.ipynb
joeyamosjohns/final_project_nhl_prediction_first_draft
8bffe1c82c76ec4aa8482d38d9eb5efad1644496
[ "MIT" ]
null
null
null
Note_books/NHL_data_shape_notebooks/.ipynb_checkpoints/Fix_df_game-checkpoint.ipynb
joeyamosjohns/final_project_nhl_prediction_first_draft
8bffe1c82c76ec4aa8482d38d9eb5efad1644496
[ "MIT" ]
null
null
null
87.866337
1,478
0.677559
[ [ [ "##from the vscode file... data_fix_season_cut_down ... \n\nimport pandas as pd\nimport numpy as np\n\n\n##import all the files \n\n##file paths\n\nKaggle_path = \"/Users/joejohns/data_bootcamp/Final_Project_NHL_prediction/Data/Kaggle_Data_Ellis/\"\nmp_path = \"/Users/joejohns/data_bootcamp/Final_Project_NHL_prediction/Data/Money_Puck_Data/\"\nbetting_path = \"/Users/joejohns/data_bootcamp/Final_Project_NHL_prediction/Data/Betting_Data/\"\n\n##Kaggle files\n\ndf_game = pd.read_csv(Kaggle_path+'game.csv')\ndf_game_team_stats = pd.read_csv(Kaggle_path+'game_teams_stats.csv')\ndf_game_skater_stats = pd.read_csv(Kaggle_path+'game_skater_stats.csv')\ndf_game_goalie_stats = pd.read_csv(Kaggle_path+'game_goalie_stats.csv')\n\n##more subtle Kaggle features:\ndf_game_scratches = pd.read_csv(Kaggle_path+'game_scratches.csv')\ndf_game_officials = pd.read_csv(Kaggle_path+'game_officials.csv')\ndf_team_info = pd.read_csv(Kaggle_path+'team_info.csv')\ndf_fran_info = pd.read_csv(Kaggle_path+'franchise_info.csv')\n\n## grab all the moneypuck data \n\ndf_mp_teams = pd.read_csv(mp_path+'all_teams.csv')\n\n\n## grab all betting data \ndf1 = pd.read_excel(io = betting_path+'nhl odds 2007-08.xlsx')\ndf2 = pd.read_excel(io = betting_path+'nhl odds 2008-09.xlsx')\ndf3 = pd.read_excel(io = betting_path+'nhl odds 2009-10.xlsx')\ndf4 = pd.read_excel(io = betting_path+'nhl odds 2010-11.xlsx')\ndf5 = pd.read_excel(io = betting_path+'nhl odds 2011-12.xlsx')\ndf6 = pd.read_excel(io = betting_path+'nhl odds 2012-13.xlsx')\ndf7 = pd.read_excel(io = betting_path+'nhl odds 2013-14.xlsx')\ndf8 = pd.read_excel(io = betting_path+'nhl odds 2014-15.xlsx')\ndf9 = pd.read_excel(io = betting_path+'nhl odds 2015-16.xlsx')\ndf10 = pd.read_excel(io = betting_path+'nhl odds 2016-17.xlsx')\ndf11 = pd.read_excel(io = betting_path+'nhl odds 2017-18.xlsx')\ndf12 = pd.read_excel(io = betting_path+'nhl odds 2018-19.xlsx')\ndf13 = pd.read_excel(io = betting_path+'nhl odds 2019-20.xlsx')\n\n\n\n\ndf1['season'] = 
20072008\ndf2['season'] = 20082009\ndf3['season'] = 20092010\ndf4['season'] = 20102011\ndf5['season'] = 20112012\ndf6['season'] = 20122013\ndf7['season'] = 20132014\ndf8['season'] = 20142015\ndf9['season'] = 20152016\ndf10['season'] = 20162017\ndf11['season'] = 20172018\ndf12['season'] = 20182019\ndf13['season'] = 20192020\n\ndf_betting = pd.concat([df1, df2, df3, df4, df5, df6, df7, df8, df9, df10, df11, df12, df13])\n\n##### restrict data sets \n## restrict data sets \ndf_betting = df_betting.loc[:, ['Date', 'season','VH', 'Team', 'Open']].copy()\ndf_mp_teams.rename(columns={\"teamId\": \"team_id\"}, inplace = True)\n\ndf_mp_teams_all = df_mp_teams.loc[df_mp_teams['situation'] == 'all', :].copy()\n\n##### restrict data sets \n\n\ndf_betting = df_betting.loc[:, ['Date', 'season','VH', 'Team', 'Open']].copy()\ndf_mp_teams_all = df_mp_teams.loc[df_mp_teams['situation'] == 'all', :].copy()\n\n##drop duplicates and one column had some NaN;\n##note there are more nan values in df_game_skaters/team/goalies but I think df_mp gets those.\n\ndf_game.drop_duplicates(inplace = True)\ndf_game.drop(columns = ['home_rink_side_start'], inplace = True)\n\n## fix seasons in df_mp (other 2 already have 20082009 format)\n\n\ndef fix_mp_season(n):\n return int(str(n)+str(n+1))\n\n#test \n#fix_mp_season(2010)\ndf_mp_teams['season'] = df_mp_teams['season'].map(fix_mp_season)\ndf_mp_teams_all['season'] = df_mp_teams_all['season'].map(fix_mp_season)\n\n\n##restrict seasons; 20082009 to 20192020 is the range common to all 3 df's\nseasons = []\nfor n in range(2008,2020):\n seasons.append(int(str(n)+str(n+1)))\n\n#check seasons look ok\nprint(seasons)\n\n#restrict seasons:\n\n##note: In notebook I checked that value counts of the seasons did not change after this\n#restriction\n\n\ndf_betting = df_betting.loc[df_game['season'].isin(seasons), :].copy()\ndf_game = df_game.loc[df_game['season'].isin(seasons), :].copy()\ndf_mp_teams = df_mp_teams.loc[df_mp_teams['season'].isin(seasons), 
:].copy()\ndf_mp_teams_all = df_mp_teams_all.loc[df_mp_teams_all['season'].isin(seasons), :].copy()\n\n##here is a count of how many games in each df ... approx the same ... so looks likely there \n##should be close to full overlap in game_id's\n\n\n## the index is no longer consecutive so we reset:\n\ndf_betting.reset_index(drop = True, inplace = True)\ndf_game.reset_index(drop = True, inplace = True)\ndf_mp_teams.reset_index(drop = True, inplace = True)\ndf_mp_teams_all.reset_index(drop = True, inplace = True)\n\nfor seas in seasons:\n print(seas, len(df_mp_teams_all.loc[df_mp_teams['season']==seas])/2, len(df_game.loc[df_game['season']==seas]), len(df_betting.loc[df_betting['season']==seas])/2)", "_____no_output_____" ], [ "## check that df_mp and df_game have similar games ... ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
cbb86508bf9b8d75c870dd6e8604d8f494601277
145,677
ipynb
Jupyter Notebook
Tissue_DNA-FISH/CTP12_marker-gene/20220316-PreProcess_P_brain_CTP11_3color_Franklin.ipynb
shiwei23/Chromatin_Analysis_Scripts
909b9b81de8fcf04dd4c39ac21a84864ce2003ff
[ "MIT" ]
null
null
null
Tissue_DNA-FISH/CTP12_marker-gene/20220316-PreProcess_P_brain_CTP11_3color_Franklin.ipynb
shiwei23/Chromatin_Analysis_Scripts
909b9b81de8fcf04dd4c39ac21a84864ce2003ff
[ "MIT" ]
null
null
null
Tissue_DNA-FISH/CTP12_marker-gene/20220316-PreProcess_P_brain_CTP11_3color_Franklin.ipynb
shiwei23/Chromatin_Analysis_Scripts
909b9b81de8fcf04dd4c39ac21a84864ce2003ff
[ "MIT" ]
null
null
null
87.651625
61,851
0.711979
[ [ [ "# Analysis of DNA-MERFISH for CTP11\n\nby Pu Zheng\n\n2022.02.15\n\nanalysis for dataset:\n\n\\\\10.245.74.158\\Chromatin_NAS_1\\20220307-P_brain_CTP11_from_0303\n\nThis data is DNA of uncleared MERFISH RNA:\n \\\\10.245.74.158\\Chromatin_NAS_0\\20220303-P_brain_M1_nonclear_adaptor\n", "_____no_output_____" ] ], [ [ "%run \"..\\..\\Startup_py3.py\"\nsys.path.append(r\"..\\..\\..\\..\\Documents\")\n\nimport ImageAnalysis3 as ia\n%matplotlib notebook\n\nfrom ImageAnalysis3 import *\nprint(os.getpid())\n\nimport h5py\nfrom ImageAnalysis3.classes import _allowed_kwds\nimport ast", "19800\n" ] ], [ [ "# 0. fov parameters", "_____no_output_____" ] ], [ [ "reload(ia)\nreload(classes)\nreload(classes.batch_functions)\nreload(classes.field_of_view)\nreload(io_tools.load)\nreload(get_img_info)\nreload(visual_tools)\nreload(ia.correction_tools)\nreload(ia.correction_tools.alignment)\nreload(ia.spot_tools.matching)\nreload(ia.segmentation_tools.chromosome)\nreload(ia.spot_tools.fitting)\n\nfov_param = {'data_folder':[r'\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304',\n r'\\\\10.245.74.212\\Chromatin_NAS_2\\20220316-P_brain_CTP11-12-13_from_0304'],\n 'save_folder':r'\\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304',\n 'experiment_type': 'DNA',\n 'num_threads': 16,\n 'correction_folder':r'\\\\10.245.74.158\\Chromatin_NAS_0\\Corrections\\20210621-Corrections_lumencor_from_60_to_50',\n 'shared_parameters':{\n 'single_im_size':[50,2048,2048],\n 'distance_zxy': [250, 108, 108],\n 'corr_channels':['750','647','561'],\n 'num_empty_frames': 0, \n 'num_buffer_frames':0,\n 'corr_hot_pixel':True,\n 'corr_Z_shift':False,\n 'corr_bleed':True,\n 'min_num_seeds':5,\n 'max_num_seeds': 20000,\n 'spot_seeding_th': 1000,\n 'normalize_intensity_local':False,\n 'normalize_intensity_background':False,\n 'corr_gaussian_highpass':False,\n }, \n }", "_____no_output_____" ] ], [ [ "# 1. 
Process Fov", "_____no_output_____" ] ], [ [ "_overwrite = False\n\n_save_images = True\n\n_fit_spots = True\n\nsel_fov_ids = np.arange(20, 21) # batch1 in franklin\n\nfor _fov_id in sel_fov_ids:\n \n if 'bad_fovs_ids' in locals() and _fov_id in bad_fovs_ids:\n continue\n \n fov = classes.field_of_view.Field_of_View(fov_param, _fov_id=_fov_id,\n _color_info_kwargs={\n '_color_filename':'Color_Usage',\n }, \n _prioritize_saved_attrs=False,\n _save_info_to_file=True, # whether overwrite\n )\n # 2. Process image into candidate spots\n fov.parallel = True\n setattr(fov, \"combo_ref_id\", 0)\n fov._process_image_to_spots('combo', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n try:\n fov._save_to_file('relabeled_combo')\n except:\n pass\n setattr(fov, \"relabeled_combo_ref_id\", 0)\n fov._process_image_to_spots('relabeled_combo', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n # 3. unique\n setattr(fov, \"unique_ref_id\", 0)\n fov._process_image_to_spots('unique', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n fov.parallel = False\n fov._save_to_file('relabeled_unique')\n setattr(fov, \"relabeled_unique_ref_id\", 0)\n fov._process_image_to_spots('relabeled_unique', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n # 4. 
Process DAPI image\n fov._load_dapi_image()", "Get Folder Names: (ia.get_img_info.get_folders)\n- Number of folders: 57\n- Number of field of views: 168\nGet Folder Names: (ia.get_img_info.get_folders)\n- Number of folders: 20\n- Number of field of views: 169\n- Importing csv format color_usage file: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\Analysis\\Color_Usage.csv\n- header: ['Hyb', '750', '647', '561', '488', '405']\ndict_keys(['H0U1', 'H1C2', 'H2C3', 'H3C4', 'H4C5', 'H5C6', 'H6C7', 'H7C8', 'H8C9', 'H9C10', 'H10C11', 'H11C12', 'H12C13', 'H13C14', 'H14C15', 'H15C16', 'H16C17', 'H17C18', 'H18C19', 'H19C20', 'H20C21', 'H21C22', 'H22C23', 'H23C24', 'H24C25', 'H25C26', 'H26C27', 'H27C28', 'H28C29', 'H29C30', 'H30C31', 'H31C32', 'H32C33', 'H33C34', 'H34C35', 'H35C36', 'H36C37', 'H37C38', 'H38C39', 'H39C40', 'H40C41', 'H41C42', 'H42C43', 'H43C44', 'H44C45', 'H45C46', 'H46C47', 'H47C48', 'H48C49', 'H49C50', 'H50C51', 'H51C52', 'H52C53', 'H53C45_rep', 'H54C54', 'H55C55', 'H56C56', 'H57C57', 'H58C58', 'H59C59', 'H60C60', 'H61C61', 'H62C62', 'H63C63', 'H64C64', 'H65C65', 'H66C66', 'H67C53_rep', 'H68U2', 'H69U3', 'H70U4', 'H71U5', 'H72U6', 'H73U7', 'H74U8', 'H75U9', 'H76U10'])\n- 77 folders are found according to color-usage annotation.\n+ loading fov_info from file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n++ base attributes loaded:['combo_ref_im', 'dapi_im', 'relabeled_combo_ref_im'] in 2.578s.\n+ loading correction from file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n++ load bleed correction profile directly from savefile.\n++ load chromatic correction profile directly from savefile.\n++ load chromatic_constants correction profile directly from savefile.\n++ load illumination correction profile directly from savefile.\n+ loading segmentation from file: 
\\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n++ base attributes loaded:[] in 0.016s.\n-- saving fov_info to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n++ base attributes saved:['analysis_folder', 'annotated_folders', 'channels', 'color_dic', 'color_filename', 'color_format', 'combo_ref_im', 'correction_folder', 'dapi_channel', 'dapi_channel_index', 'dapi_im', 'data_folder', 'debug', 'drift', 'drift_channel', 'drift_filename', 'drift_folder', 'experiment_folder', 'folders', 'fov_id', 'fov_name', 'map_folder', 'num_threads', 'parallel', 'ref_filename', 'ref_id', 'relabeled_combo_ref_im', 'rotation', 'save_filename', 'save_folder', 'segmentation_dim', 'segmentation_folder', 'shared_parameters', 'use_dapi', 'verbose'] in 21.297s.\n-- folders not selected, allow processing all 77 folders\n-- checking combo, region:[ 1 2 67] in 0.047s.\n-- checking combo, region:[101 102 165] in 0.047s.\n-- checking combo, region:[ 3 4 68] in 0.047s.\n-- checking combo, region:[ 5 6 69] in 0.031s.\n-- checking combo, region:[ 7 8 70] in 0.064s.\n-- checking combo, region:[ 9 10 71] in 0.046s.\n-- checking combo, region:[11 12 72] in 0.078s.\n-- checking combo, region:[13 14 73] in 0.047s.\n-- checking combo, region:[15 16 74] in 0.049s.\n-- checking combo, region:[17 18 75] in 0.061s.\n-- checking combo, region:[19 20 76] in 0.031s.\n-- checking combo, region:[21 22 77] in 0.047s.\n-- checking combo, region:[23 24 78] in 0.047s.\n-- checking combo, region:[25 26 79] in 0.047s.\n-- checking combo, region:[27 28 80] in 0.039s.\n-- checking combo, region:[29 30 81] in 0.055s.\n-- checking combo, region:[31 32 82] in 0.047s.\n-- checking combo, region:[33 34 83] in 0.047s.\n-- checking combo, region:[35 36 84] in 0.047s.\n-- checking combo, region:[37 38 85] in 0.047s.\n-- checking combo, region:[39 40 86] in 0.047s.\n-- checking combo, region:[41 42 87] in 0.031s.\n-- checking combo, 
region:[43 44 88] in 0.047s.\n-- checking combo, region:[45 46 89] in 0.031s.\n-- checking combo, region:[47 48 90] in 0.054s.\n-- checking combo, region:[49 50 91] in 0.056s.\n-- checking combo, region:[51 52 92] in 0.047s.\n-- checking combo, region:[53 54 93] in 0.047s.\n-- checking combo, region:[55 56 94] in 0.047s.\n-- checking combo, region:[57 58 95] in 0.032s.\n-- checking combo, region:[59 60 96] in 0.047s.\n-- checking combo, region:[61 62 97] in 0.046s.\n-- checking combo, region:[63 64 98] in 0.033s.\n-- checking combo, region:[65 66 99] in 0.046s.\n-- checking combo, region:[151 152 190] in 0.031s.\n-- checking combo, region:[153 154 191] in 0.047s.\n-- checking combo, region:[155 156 192] in 0.047s.\n-- checking combo, region:[157 158 193] in 0.047s.\n-- checking combo, region:[159 160 194] in 0.047s.\n-- checking combo, region:[161 162 195] in 0.055s.\n-- checking combo, region:[163 164] in 0.038s.\n-- checking combo, region:[103 104 166] in 0.047s.\n-- checking combo, region:[105 106 167] in 0.047s.\n-- checking combo, region:[107 108 168] in 0.047s.\n-- checking combo, region:[109 110 169] in 0.047s.\n-- checking combo, region:[111 112 170] in 0.051s.\n-- checking combo, region:[113 114 171] in 0.043s.\n-- checking combo, region:[115 116 172] in 0.047s.\n-- checking combo, region:[117 118 173] in 0.047s.\n-- checking combo, region:[119 120 174] in 0.040s.\n-- checking combo, region:[121 122 175] in 0.038s.\n-- checking combo, region:[123 124 176] in 0.047s.\n-- checking combo, region:[125 126 177] in 0.047s.\n-- checking combo, region:[127 128 178] in 0.047s.\n-- checking combo, region:[129 130 179] in 0.047s.\n-- checking combo, region:[131 132 180] in 0.031s.\n-- checking combo, region:[133 134 181] in 0.047s.\n-- checking combo, region:[135 136 182] in 0.047s.\n-- checking combo, region:[137 138 183] in 0.058s.\n-- checking combo, region:[139 140 184] in 0.037s.\n-- checking combo, region:[141 142 185] in 0.046s.\n-- checking combo, region:[143 
144 186] in 0.062s.\n-- checking combo, region:[145 146 187] in 0.031s.\n-- checking combo, region:[147 148 188] in 0.062s.\n-- checking combo, region:[149 150 189] in 0.031s.\n- No combo images and spots requires processing, skip.\n-- saving relabeled_combo to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n-- folders not selected, allow processing all 77 folders\n-- checking relabeled_combo, region:[175 168] in 0.028s.\n-- checking relabeled_combo, region:[179 176] in 0.016s.\n- No relabeled_combo images and spots requires processing, skip.\n-- folders not selected, allow processing all 77 folders\n+ load reference image from file:\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_020.dax\n- correct the whole fov for image: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_020.dax\n-- loading illumination correction profile from file:\n\t 488 illumination_correction_488_2048x2048.npy\n-- loading image from file:\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_020.dax in 8.984s\n-- removing hot pixels for channels:['488'] in 4.844s\n-- illumination correction for channels: 488, in 1.703s\n-- -- generate translation function with drift:[0. 0. 0.] 
in 0.000s\n-- finish correction in 16.297s\n-- saving fov_info to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n++ base attributes saved:['unique_ref_im'] in 5.125s.\n-- checking unique, region:[1 2 3] in 0.018s.\n-- checking unique, region:[4 5 6] in 0.016s.\n-- checking unique, region:[7 8 9] in 0.013s.\n-- checking unique, region:[10 11 12] in 0.015s.\n-- checking unique, region:[13 14 15] in 0.016s.\n-- checking unique, region:[16 17 18] in 0.016s.\n-- checking unique, region:[19 20 21] in 0.031s.\n-- checking unique, region:[22 23 24] in 0.016s.\n-- checking unique, region:[25 26 27] in 0.016s.\n-- checking unique, region:[28] in 0.016s.\n+ Start multi-processing of pre-processing for 10 images with 16 threads\n++ processing unique ids: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24\n 25 26 27 28] , finish in 2614.19s.\n-- saving relabeled_unique to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_020.hdf5\n--- relabeled_unique attributes updated:['ids', 'channels', 'ims', 'spots', 'raw_spots', 'drifts', 'flags'] in 0.094s.\n-- folders not selected, allow processing all 77 folders\n+ load reference image from file:\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_020.dax\n- correct the whole fov for image: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_020.dax\n-- loading illumination correction profile from file:\n\t 488 illumination_correction_488_2048x2048.npy\n" ], [ "_overwrite = False\n\n_save_images = False\n\n_fit_spots = True\n\n#sel_fov_ids = np.arange(21, 81) # batch1 in franklin\nsel_fov_ids = np.arange(0, 18) # batch1 in franklin\n\nfor _fov_id in sel_fov_ids:\n \n if 'bad_fovs_ids' in locals() and _fov_id in bad_fovs_ids:\n continue\n \n fov = classes.field_of_view.Field_of_View(fov_param, _fov_id=_fov_id,\n _color_info_kwargs={\n 
'_color_filename':'Color_Usage',\n }, \n _prioritize_saved_attrs=False,\n _save_info_to_file=True, # whether overwrite\n )\n # 2. Process image into candidate spots\n fov.parallel = True\n setattr(fov, \"combo_ref_id\", 0)\n fov._process_image_to_spots('combo', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n try:\n fov._save_to_file('relabeled_combo')\n except:\n pass\n setattr(fov, \"relabeled_combo_ref_id\", 0)\n fov._process_image_to_spots('relabeled_combo', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n # 3. unique\n setattr(fov, \"unique_ref_id\", 0)\n fov._process_image_to_spots('unique', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n fov.parallel = False\n fov._save_to_file('relabeled_unique')\n setattr(fov, \"relabeled_unique_ref_id\", 0)\n fov._process_image_to_spots('relabeled_unique', \n _load_common_reference=True, _load_with_multiple=False,\n _save_images=_save_images,\n _warp_images=False, _fit_spots=True,\n _overwrite_drift=False, _overwrite_image=_overwrite,\n _overwrite_spot=_overwrite)\n # 4. 
Process DAPI image\n fov._load_dapi_image()", "Get Folder Names: (ia.get_img_info.get_folders)\n- Number of folders: 57\n- Number of field of views: 168\nGet Folder Names: (ia.get_img_info.get_folders)\n- Number of folders: 20\n- Number of field of views: 169\n- Importing csv format color_usage file: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\Analysis\\Color_Usage.csv\n- header: ['Hyb', '750', '647', '561', '488', '405']\ndict_keys(['H0U1', 'H1C2', 'H2C3', 'H3C4', 'H4C5', 'H5C6', 'H6C7', 'H7C8', 'H8C9', 'H9C10', 'H10C11', 'H11C12', 'H12C13', 'H13C14', 'H14C15', 'H15C16', 'H16C17', 'H17C18', 'H18C19', 'H19C20', 'H20C21', 'H21C22', 'H22C23', 'H23C24', 'H24C25', 'H25C26', 'H26C27', 'H27C28', 'H28C29', 'H29C30', 'H30C31', 'H31C32', 'H32C33', 'H33C34', 'H34C35', 'H35C36', 'H36C37', 'H37C38', 'H38C39', 'H39C40', 'H40C41', 'H41C42', 'H42C43', 'H43C44', 'H44C45', 'H45C46', 'H46C47', 'H47C48', 'H48C49', 'H49C50', 'H50C51', 'H51C52', 'H52C53', 'H53C45_rep', 'H54C54', 'H55C55', 'H56C56', 'H57C57', 'H58C58', 'H59C59', 'H60C60', 'H61C61', 'H62C62', 'H63C63', 'H64C64', 'H65C65', 'H66C66', 'H67C53_rep', 'H68U2', 'H69U3', 'H70U4', 'H71U5', 'H72U6', 'H73U7', 'H74U8', 'H75U9', 'H76U10'])\n- 77 folders are found according to color-usage annotation.\n+ loading fov_info from file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n++ base attributes loaded:['combo_ref_im', 'relabeled_combo_ref_im', 'unique_ref_im'] in 3.532s.\n+ loading correction from file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n++ load bleed correction profile directly from savefile.\n++ load chromatic correction profile directly from savefile.\n++ load chromatic_constants correction profile directly from savefile.\n++ load illumination correction profile directly from savefile.\n+ loading segmentation from file: 
\\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n++ base attributes loaded:[] in 0.013s.\n-- saving fov_info to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n++ base attributes saved:['analysis_folder', 'annotated_folders', 'channels', 'color_dic', 'color_filename', 'color_format', 'combo_ref_im', 'correction_folder', 'dapi_channel', 'dapi_channel_index', 'data_folder', 'debug', 'drift', 'drift_channel', 'drift_filename', 'drift_folder', 'experiment_folder', 'folders', 'fov_id', 'fov_name', 'map_folder', 'num_threads', 'parallel', 'ref_filename', 'ref_id', 'relabeled_combo_ref_im', 'rotation', 'save_filename', 'save_folder', 'segmentation_dim', 'segmentation_folder', 'shared_parameters', 'unique_ref_im', 'use_dapi', 'verbose'] in 20.392s.\n-- folders not selected, allow processing all 77 folders\n-- checking combo, region:[ 1 2 67] in 0.063s.\n-- checking combo, region:[101 102 165] in 0.043s.\n-- checking combo, region:[ 3 4 68] in 0.034s.\n-- checking combo, region:[ 5 6 69] in 0.031s.\n-- checking combo, region:[ 7 8 70] in 0.031s.\n-- checking combo, region:[ 9 10 71] in 0.047s.\n-- checking combo, region:[11 12 72] in 0.031s.\n-- checking combo, region:[13 14 73] in 0.062s.\n-- checking combo, region:[15 16 74] in 0.032s.\n-- checking combo, region:[17 18 75] in 0.031s.\n-- checking combo, region:[19 20 76] in 0.047s.\n-- checking combo, region:[21 22 77] in 0.038s.\n-- checking combo, region:[23 24 78] in 0.026s.\n-- checking combo, region:[25 26 79] in 0.030s.\n-- checking combo, region:[27 28 80] in 0.032s.\n-- checking combo, region:[29 30 81] in 0.039s.\n-- checking combo, region:[31 32 82] in 0.022s.\n-- checking combo, region:[33 34 83] in 0.047s.\n-- checking combo, region:[35 36 84] in 0.038s.\n-- checking combo, region:[37 38 85] in 0.035s.\n-- checking combo, region:[39 40 86] in 0.023s.\n-- checking combo, region:[41 42 87] in 0.045s.\n-- checking combo, 
region:[43 44 88] in 0.031s.\n-- checking combo, region:[45 46 89] in 0.031s.\n-- checking combo, region:[47 48 90] in 0.031s.\n-- checking combo, region:[49 50 91] in 0.031s.\n-- checking combo, region:[51 52 92] in 0.031s.\n-- checking combo, region:[53 54 93] in 0.047s.\n-- checking combo, region:[55 56 94] in 0.031s.\n-- checking combo, region:[57 58 95] in 0.043s.\n-- checking combo, region:[59 60 96] in 0.035s.\n-- checking combo, region:[61 62 97] in 0.031s.\n-- checking combo, region:[63 64 98] in 0.032s.\n-- checking combo, region:[65 66 99] in 0.046s.\n-- checking combo, region:[151 152 190] in 0.031s.\n-- checking combo, region:[153 154 191] in 0.031s.\n-- checking combo, region:[155 156 192] in 0.046s.\n-- checking combo, region:[157 158 193] in 0.032s.\n-- checking combo, region:[159 160 194] in 0.031s.\n-- checking combo, region:[161 162 195] in 0.031s.\n-- checking combo, region:[163 164] in 0.031s.\n-- checking combo, region:[103 104 166] in 0.031s.\n-- checking combo, region:[105 106 167] in 0.051s.\n-- checking combo, region:[107 108 168] in 0.027s.\n-- checking combo, region:[109 110 169] in 0.044s.\n-- checking combo, region:[111 112 170] in 0.037s.\n-- checking combo, region:[113 114 171] in 0.028s.\n-- checking combo, region:[115 116 172] in 0.031s.\n-- checking combo, region:[117 118 173] in 0.047s.\n-- checking combo, region:[119 120 174] in 0.035s.\n-- checking combo, region:[121 122 175] in 0.043s.\n-- checking combo, region:[123 124 176] in 0.031s.\n-- checking combo, region:[125 126 177] in 0.031s.\n-- checking combo, region:[127 128 178] in 0.031s.\n-- checking combo, region:[129 130 179] in 0.047s.\n-- checking combo, region:[131 132 180] in 0.031s.\n-- checking combo, region:[133 134 181] in 0.033s.\n-- checking combo, region:[135 136 182] in 0.046s.\n-- checking combo, region:[137 138 183] in 0.037s.\n-- checking combo, region:[139 140 184] in 0.025s.\n-- checking combo, region:[141 142 185] in 0.046s.\n-- checking combo, region:[143 
144 186] in 0.032s.\n-- checking combo, region:[145 146 187] in 0.031s.\n-- checking combo, region:[147 148 188] in 0.047s.\n-- checking combo, region:[149 150 189] in 0.036s.\n- No combo images and spots requires processing, skip.\n-- saving relabeled_combo to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n--- relabeled_combo attributes updated:['relabeled_combo_ref_im'] in 1.180s.\n-- folders not selected, allow processing all 77 folders\n-- checking relabeled_combo, region:[175 168] in 0.029s.\n-- checking relabeled_combo, region:[179 176] in 0.017s.\n- No relabeled_combo images and spots requires processing, skip.\n-- folders not selected, allow processing all 77 folders\n-- checking unique, region:[1 2 3] in 0.018s.\n-- checking unique, region:[4 5 6] in 0.015s.\n-- checking unique, region:[7 8 9] in 0.033s.\n-- checking unique, region:[10 11 12] in 0.013s.\n-- checking unique, region:[13 14 15] in 0.039s.\n-- checking unique, region:[16 17 18] in 0.028s.\n-- checking unique, region:[19 20 21] in 0.027s.\n-- checking unique, region:[22 23 24] in 0.016s.\n-- checking unique, region:[25 26 27] in 0.031s.\n-- checking unique, region:[28] in 0.016s.\n- No unique images and spots requires processing, skip.\n-- saving relabeled_unique to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n--- relabeled_unique attributes updated:[] in 0.016s.\n-- folders not selected, allow processing all 77 folders\n+ load reference image from file:\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_017.dax\n- correct the whole fov for image: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_017.dax\n-- loading illumination correction profile from file:\n\t 488 illumination_correction_488_2048x2048.npy\n-- loading image from 
file:\\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H0U1\\Conv_zscan_017.dax in 7.156s\n-- removing hot pixels for channels:['488'] in 4.750s\n-- illumination correction for channels: 488, in 1.672s\n-- -- generate translation function with drift:[0. 0. 0.] in 0.000s\n-- finish correction in 14.086s\n-- saving fov_info to file: \\\\franklin\\SSD_01\\Pu_Temp\\20220316-P_brain_CTP11-12-13_from_0304\\Conv_zscan_017.hdf5\n++ base attributes saved:['relabeled_unique_ref_im'] in 5.248s.\n-- checking relabeled_unique, region:[3] in 0.022s.\n+ Start sequential pre-processing for 1 images\n++ processed relabeled_unique ids: [3] + batch process image: \\\\10.245.74.158\\Chromatin_NAS_4\\20220316-P_brain_CTP11-12-13_from_0304\\H76U10\\Conv_zscan_017.dax for channels:['647']\n- loading relabeled_unique info from file:Conv_zscan_017.hdf5 in 0.124s.\n" ], [ "sel_ids = np.array([3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,68,69,70,71,72,73,74,75,76,101,102,165])\n\nwith h5py.File(fov.save_filename, 'r') as _f:\n _grp = _f.require_group('combo')\n _ids = list(_grp['ids'][:])\n sel_ims = []\n sel_raw_spots = []\n for _id in sel_ids:\n _ind = _ids.index(_id)\n sel_ims.append(_grp['ims'][_ind])\n sel_raw_spots.append(_grp['raw_spots'][_ind])", "_____no_output_____" ], [ "coord_dict = {\n 'coords': [],#np.fliplr(sel_spots[:,1:4]),\n 'class_ids': [],#sel_ids,\n}\n\nfor _i, _spots in enumerate(sel_raw_spots):\n _spots = _spots[_spots[:,0]>0]\n if len(_spots) > 0:\n coord_dict['coords'].extend(list(np.fliplr(_spots[:,1:4])))\n #coord_dict['coords'].extend(list(np.fliplr(_crop.crop_coords(_spots.to_coords()[_sel_inds]))))\n coord_dict['class_ids'].extend(list(np.ones(len(_spots),dtype=np.int32) * _i))\n ", "_____no_output_____" ], [ "visual_tools.imshow_mark_3d_v2(sel_ims, given_dic=coord_dict, image_names=sel_ids)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cbb872064de16eb2a7fe4b158f43e855460168e9
20,745
ipynb
Jupyter Notebook
wandb/run-20211009_203938-1lx9jtvc/tmp/code/00.ipynb
Programmer-RD-AI/Coronavirus-tweets-NLP-Text-Classification
78778fc737a3faa7cbf9ab7c388493d910e04862
[ "Apache-2.0" ]
1
2021-10-09T14:21:53.000Z
2021-10-09T14:21:53.000Z
wandb/run-20211009_203938-1lx9jtvc/tmp/code/00.ipynb
Programmer-RD-AI/Coronavirus-tweets-NLP-Text-Classification
78778fc737a3faa7cbf9ab7c388493d910e04862
[ "Apache-2.0" ]
null
null
null
wandb/run-20211009_203938-1lx9jtvc/tmp/code/00.ipynb
Programmer-RD-AI/Coronavirus-tweets-NLP-Text-Classification
78778fc737a3faa7cbf9ab7c388493d910e04862
[ "Apache-2.0" ]
null
null
null
30.824666
265
0.479537
[ [ [ "import wandb\nimport nltk\nfrom nltk.stem.porter import *\nfrom torch.nn import *\nfrom torch.optim import *\nimport numpy as np\nimport pandas as pd\nimport torch,torchvision\nimport random\nfrom tqdm import *\nfrom torch.utils.data import Dataset,DataLoader\nstemmer = PorterStemmer()\nPROJECT_NAME = 'kickstarter-NLP-v3'\ndevice = 'cuda'", "_____no_output_____" ], [ "def tokenize(sentence):\n return nltk.word_tokenize(sentence.lower())", "_____no_output_____" ], [ "tokenize('$100')", "_____no_output_____" ], [ "def stem(word):\n return stemmer.stem(word.lower())", "_____no_output_____" ], [ "stem('organic')", "_____no_output_____" ], [ "def bag_of_words(tokenized_words,all_words):\n tokenized_words = [stem(w) for w in tokenized_words]\n bag = np.zeros(len(all_words))\n for idx,w in enumerate(all_words):\n if w in tokenized_words:\n bag[idx] = 1.0\n return bag", "_____no_output_____" ], [ "bag_of_words(['hi'],['hi','how','hi'])", "_____no_output_____" ], [ "data = pd.read_csv('./data.csv',encoding='latin-1')[:5000]", "_____no_output_____" ], [ "data", "_____no_output_____" ], [ "X = data['OriginalTweet']\ny = data['Sentiment']", "_____no_output_____" ], [ "words = []\ndata = []\nidx = 0\nlabels = {}\nlabels_r = {}", "_____no_output_____" ], [ "for label in y:\n if label not in list(labels.keys()):\n idx += 1\n labels[label] = idx\n labels_r[idx] = label", "_____no_output_____" ], [ "for X_batch,y_batch in tqdm(zip(X,y)):\n X_batch = tokenize(X_batch)\n new_X = []\n for Xb in X_batch:\n new_X.append(stem(Xb))\n words.extend(new_X)\n data.append([\n new_X,\n np.eye(labels[y_batch],len(labels))[labels[y_batch]-1]\n ])", "5000it [00:03, 1580.99it/s]\n" ], [ "words = sorted(set(words))\nnp.random.shuffle(words)", "_____no_output_____" ], [ "np.random.shuffle(data)", "_____no_output_____" ], [ "X = []\ny = []", "_____no_output_____" ], [ "for X_batch,y_batch in tqdm(data):\n X.append(bag_of_words(X_batch,words))\n y.append(y_batch)", 
"100%|██████████████████████████████████████| 5000/5000 [00:23<00:00, 210.17it/s]\n" ], [ "from sklearn.model_selection import *\nX_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False)\nX_train = torch.from_numpy(np.array(X_train)).to(device).float()\ny_train = torch.from_numpy(np.array(y_train)).to(device).float()\nX_test = torch.from_numpy(np.array(X_test)).to(device).float()\ny_test = torch.from_numpy(np.array(y_test)).to(device).float()", "_____no_output_____" ], [ "def get_loss(model,X,y,criterion):\n preds = model(X)\n loss = criterion(preds,y)\n return loss.item()", "_____no_output_____" ], [ "def get_accuracy(model,X,y):\n preds = model(X)\n correct = 0\n total = 0\n for pred,yb in zip(preds,y):\n pred = int(torch.argmax(pred))\n yb = int(torch.argmax(yb))\n if pred == yb:\n correct += 1\n total += 1\n acc = round(correct/total,3)*100\n return acc", "_____no_output_____" ], [ "class Model(Module):\n def __init__(self):\n super().__init__()\n self.activation = ReLU()\n self.iters = 2\n self.dropout = Dropout()\n self.hidden = 512\n self.linear1 = Linear(len(words),self.hidden)\n self.linear2 = Linear(self.hidden,self.hidden)\n self.linear3 = Linear(self.hidden,self.hidden)\n self.linear4 = Linear(self.hidden,self.hidden)\n self.linear5 = Linear(self.hidden,self.hidden)\n self.output = Linear(self.hidden,len(labels))\n \n def forward(self,X):\n preds = self.linear1(X)\n preds = self.activation(self.linear2(preds))\n for _ in range(self.iters):\n preds = self.dropout(self.activation(self.linear3(preds)))\n preds = self.activation(self.linear4(preds))\n preds = self.activation(self.linear5(preds))\n preds = self.output(preds)\n return preds", "_____no_output_____" ], [ "model = Model().to(device)\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32", "_____no_output_____" ], [ 
"torch.save(model,'model-custom.pt')\ntorch.save(model,'model-custom.pth')\ntorch.save(model.state_dict(),'model-custom-sd.pt')\ntorch.save(model.state_dict(),'model-custom-sd.pth')\ntorch.save(words,'words.pt')\ntorch.save(words,'words.pth')\ntorch.save(data,'data.pt')\ntorch.save(data,'data.pth')\ntorch.save(labels,'labels.pt')\ntorch.save(labels,'labels.pth')\ntorch.save(idx,'idx.pt')\ntorch.save(idx,'idx.pth')\ntorch.save(y_train,'y_train.pt')\ntorch.save(y_test,'y_test.pth')", "_____no_output_____" ], [ "wandb.init(project=PROJECT_NAME,name='baseline')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion)/2)})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Acc':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()\ntorch.cuda.empty_cache()", "\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mranuga-d\u001b[0m (use `wandb login --relogin` to force relogin)\n\u001b[34m\u001b[1mwandb\u001b[0m: wandb version 0.12.4 is available! 
To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n" ], [ "torch.save(model,'model.pt')\ntorch.save(model,'model.pth')\ntorch.save(model.state_dict(),'model-sd.pt')\ntorch.save(model.state_dict(),'model-sd.pth')\ntorch.save(words,'words.pt')\ntorch.save(words,'words.pth')\ntorch.save(data,'data.pt')\ntorch.save(data,'data.pth')\ntorch.save(labels,'labels.pt')\ntorch.save(labels,'labels.pth')\ntorch.save(idx,'idx.pt')\ntorch.save(idx,'idx.pth')\ntorch.save(y_train,'y_train.pt')\ntorch.save(y_test,'y_test.pth')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb88325d18cb2244d315554269eb56ace75135f
461,872
ipynb
Jupyter Notebook
03_getting-started-with-transformers.ipynb
SaturdaysAI/education-toolkit
7f0d44ea4b0f0b36915fda5ab1947e8a1016e54b
[ "Apache-2.0" ]
1
2022-02-22T18:57:15.000Z
2022-02-22T18:57:15.000Z
03_getting-started-with-transformers.ipynb
SaturdaysAI/education-toolkit
7f0d44ea4b0f0b36915fda5ab1947e8a1016e54b
[ "Apache-2.0" ]
null
null
null
03_getting-started-with-transformers.ipynb
SaturdaysAI/education-toolkit
7f0d44ea4b0f0b36915fda5ab1947e8a1016e54b
[ "Apache-2.0" ]
null
null
null
47.4835
40,369
0.61396
[ [ [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/03_getting-started-with-transformers.ipynb)\n\n💡 **Welcome!**\n\nWe’ve assembled a toolkit that university instructors and organizers can use to easily prepare labs, homework, or classes. The content is designed in a self-contained way such that it can easily be incorporated into the existing curriculum. This content is free and uses widely known Open Source technologies (`transformers`, `gradio`, etc).\n\nAlternatively, you can request for someone on the Hugging Face team to run the tutorials for your class via the [ML demo.cratization tour](https://huggingface2.notion.site/ML-Demo-cratization-tour-with-66847a294abd4e9785e85663f5239652) initiative!\n\nYou can find all the tutorials and resources we’ve assembled [here](https://huggingface2.notion.site/Education-Toolkit-7b4a9a9d65ee4a6eb16178ec2a4f3599). ", "_____no_output_____" ], [ "# Tutorial: Getting Started with Transformers", "_____no_output_____" ], [ "**Learning goals:** The goal of this tutorial is to learn how:\n\n1. Transformer neural networks can be used to tackle a wide range of tasks in natural language processing and beyond.\n3. Transfer learning allows one to adapt Transformers to specific tasks.\n2. The `pipeline()` function from the `transformers` library can be used to run inference with models from the [Hugging Face Hub](https://huggingface.co/models).\n\nThis tutorial is based on the first of our O'Reilly book [_Natural Language Processing with Transformers_](https://transformersbook.com/) - check it out if you want to dive deeper into the topic!\n\n**Duration**: 30-45 minutes\n\n**Prerequisites:** Knowledge of Python and basic familiarity with machine learning \n\n\n**Author**: [Lewis Tunstall](https://twitter.com/_lewtun) (feel free to ping me with any questions about this tutorial)\n\nAll of these steps can be done for free! 
All you need is an Internet browser and a place where you can write Python 👩‍💻", "_____no_output_____" ], [ "## 0. Why Transformers?", "_____no_output_____" ], [ "Deep learning is currently undergoing a period of rapid progress across a wide variety of domains, including: \n\n* 📖 Natural language processing\n* 👀 Computer vision\n* 🔊 Audio\n* 🧬 Biology\n* and many more!\n\nThe main driver of these breakthroughs is the **Transformer** -- a novel **neural network** developed by Google researchers in 2017. In short, if you’re into deep learning, you need Transformers!\n\nHere's a few examples of what Transformers can do:\n\n* 💻 They can **generate code** as in products like [GitHub Copilot](https://copilot.github.com/), which is based on OpenAI's family of [GPT models](https://huggingface.co/gpt2?text=My+name+is+Clara+and+I+am).\n* ❓ They can be used to **improve search engines**, like [Google did](https://www.blog.google/products/search/search-language-understanding-bert/) with a Transformer called [BERT](https://huggingface.co/bert-base-uncased).\n* 🗣️ They can **process speech in multiple languages** to perform speech recognition, speech translation, and language identification. For example, Facebook's [XLS-R model](https://huggingface.co/spaces/facebook/XLS-R-2B-22-16) can automatically transcribe audio in one language to another!\n\nTraining these models **from scratch** involves **a lot of resources**: you need large amounts of compute, data, and days to train for 😱.\n\nFortunately, you don't need to do this in most cases! Thanks to a technique known as **transfer learning**, it is possible to adapt a model that has been trained from scratch (usually called a **pretrained model**), to a variety of downstream tasks. 
This process is called **fine-tuning** and can typically be carried out with a single GPU and a dataset of the size that you're likely to find in your university or company.\n\nThe models that we'll be looking at in this tutorial are all examples of fine-tuned models, and you can learn more about the transfer learning process in the video below:\n", "_____no_output_____" ] ], [ [ "from IPython.display import YouTubeVideo\n\nYouTubeVideo('BqqfQnyjmgg')", "_____no_output_____" ] ], [ [ "Now, Transformers are the coolest kids in town, but how can we use them? If only there was a library that could help us ... oh wait, there is! The [Hugging Face Transformers library](https://github.com/huggingface/transformers) provides a unified API across dozens of Transformer architectures, as well as the means to train models and run inference with them. So to get started, let's install the library with the following command:", "_____no_output_____" ] ], [ [ "%%capture\n%pip install transformers[sentencepiece]", "_____no_output_____" ] ], [ [ "Now that we've installed the library, let's take a look at some applications!", "_____no_output_____" ], [ "## 1. Pipelines for Transformers", "_____no_output_____" ], [ "The fastest way to learn what Transformers can do is via the `pipeline()` function. This function loads a model from the Hugging Face Hub and takes care of all the preprocessing and postprocessing steps that are needed to convert inputs into predictions:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/pipeline.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=800>", "_____no_output_____" ], [ "In the next few sections we'll see how these steps are combined for different applications. If you want to learn more about what is happening under the hood, then check out the video below:", "_____no_output_____" ] ], [ [ "YouTubeVideo('1pedAIvTWXk')", "_____no_output_____" ] ], [ [ "## 2. 
Text classification", "_____no_output_____" ], [ "Let's start with one of the most common tasks in NLP: text classification. We need a snippet of text for our models to analyze, so let's use the following (fictious!) customer feedback about a certain online order:", "_____no_output_____" ] ], [ [ "text = \"\"\"Dear Amazon, last week I ordered an Optimus Prime action figure \\\nfrom your online store in Germany. Unfortunately, when I opened the package, \\\nI discovered to my horror that I had been sent an action figure of Megatron \\\ninstead! As a lifelong enemy of the Decepticons, I hope you can understand my \\\ndilemma. To resolve the issue, I demand an exchange of Megatron for the \\\nOptimus Prime figure I ordered. Enclosed are copies of my records concerning \\\nthis purchase. I expect to hear from you soon. Sincerely, Bumblebee.\"\"\"", "_____no_output_____" ] ], [ [ "While we're at it, let's create a simple wrapper so that we can pretty print out texts:", "_____no_output_____" ] ], [ [ "import textwrap\n\nwrapper = textwrap.TextWrapper(width=80, break_long_words=False, break_on_hyphens=False)\nprint(wrapper.fill(text))", "Dear Amazon, last week I ordered an Optimus Prime action figure from your online\nstore in Germany. Unfortunately, when I opened the package, I discovered to my\nhorror that I had been sent an action figure of Megatron instead! As a lifelong\nenemy of the Decepticons, I hope you can understand my dilemma. To resolve the\nissue, I demand an exchange of Megatron for the Optimus Prime figure I ordered.\nEnclosed are copies of my records concerning this purchase. I expect to hear\nfrom you soon. Sincerely, Bumblebee.\n" ] ], [ [ "Now suppose that we'd like to predict the _sentiment_ of this text, i.e. whether the feedback is positive or negative. This is a special type of text classification that is often used in industry to aggregate customer feedback across products or services. 
The example below shows how a Transformer like BERT converts the inputs into atomic chunks called **tokens** which are then fed through the network to produce a single prediction:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/clf_arch.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=600>", "_____no_output_____" ], [ "Loading a Transformer model for this task is quite simple. We just need to specify the task in the `pipeline()` function as follows:", "_____no_output_____" ] ], [ [ "from transformers import pipeline\n\nsentiment_pipeline = pipeline('text-classification')", "No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)\n" ] ], [ [ "When you run this code, you'll see a message about which Hub model is being used by default. In this case, the `pipeline()` function loads the `distilbert-base-uncased-finetuned-sst-2-english` model, which is a small BERT variant trained on [SST-2](https://paperswithcode.com/sota/sentiment-analysis-on-sst-2-binary), a sentiment analysis dataset.", "_____no_output_____" ], [ "💡 The first time you execute the code, the model will be automatically downloaded from the Hub and cached for later use! ", "_____no_output_____" ], [ "Now we are ready to run our example through the pipeline and look at some predictions:", "_____no_output_____" ] ], [ [ "sentiment_pipeline(text)", "_____no_output_____" ] ], [ [ "The model predicts negative sentiment with a high confidence which makes sense given that we have a disgruntled customer. You can also see that the pipeline returns a list of Python dictionaries with the predictions. 
We can also pass several texts at the same time, in which case we would get one dict in the list for each text.", "_____no_output_____" ], [ "⚡ **Your turn!** Feed a list of texts with different types of sentiment to the `sentiment_pipeline` object. Do the predictions always make sense?", "_____no_output_____" ], [ "## 3. Named entity recognition", "_____no_output_____" ], [ "Let's now do something a little more sophisticated. Instead of just finding the overall sentiment, let's see if we can extract **entities** such as organizations, locations, or individuals from the text. This task is called named entity recognition, or NER for short. Instead of predicting just a class for the whole text, **a class is predicted for each token**, as shown in the example below:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/ner_arch.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=600>", "_____no_output_____" ], [ "Again, we just load a pipeline for NER without specifying a model. This will load a default BERT model that has been trained on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset:", "_____no_output_____" ] ], [ [ "ner_pipeline = pipeline('ner')", "No model was supplied, defaulted to dbmdz/bert-large-cased-finetuned-conll03-english (https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english)\n" ] ], [ [ "When we pass our text through the model, we now get a long list of Python dictionaries, where each dictionary corresponds to one detected entity. 
Since multiple tokens can correspond to a single entity, we can apply an aggregation strategy that merges entities if the same class appears in consecutive tokens:", "_____no_output_____" ] ], [ [ "entities = ner_pipeline(text, aggregation_strategy=\"simple\")\nprint(entities)", "[{'entity_group': 'ORG', 'score': 0.8790103, 'word': 'Amazon', 'start': 5, 'end': 11}, {'entity_group': 'MISC', 'score': 0.9908588, 'word': 'Optimus Prime', 'start': 36, 'end': 49}, {'entity_group': 'LOC', 'score': 0.9997547, 'word': 'Germany', 'start': 90, 'end': 97}, {'entity_group': 'MISC', 'score': 0.5565697, 'word': 'Mega', 'start': 208, 'end': 212}, {'entity_group': 'PER', 'score': 0.59025633, 'word': '##tron', 'start': 212, 'end': 216}, {'entity_group': 'ORG', 'score': 0.6696923, 'word': 'Decept', 'start': 253, 'end': 259}, {'entity_group': 'MISC', 'score': 0.49834913, 'word': '##icons', 'start': 259, 'end': 264}, {'entity_group': 'MISC', 'score': 0.77536184, 'word': 'Megatron', 'start': 350, 'end': 358}, {'entity_group': 'MISC', 'score': 0.98785394, 'word': 'Optimus Prime', 'start': 367, 'end': 380}, {'entity_group': 'PER', 'score': 0.81209636, 'word': 'Bumblebee', 'start': 502, 'end': 511}]\n" ] ], [ [ "This isn't very easy to read, so let's clean up the outputs a bit:", "_____no_output_____" ] ], [ [ "for entity in entities:\n    print(f\"{entity['word']}: {entity['entity_group']} ({entity['score']:.2f})\")", "Amazon: ORG (0.88)\nOptimus Prime: MISC (0.99)\nGermany: LOC (1.00)\nMega: MISC (0.56)\n##tron: PER (0.59)\nDecept: ORG (0.67)\n##icons: MISC (0.50)\nMegatron: MISC (0.78)\nOptimus Prime: MISC (0.99)\nBumblebee: PER (0.81)\n" ] ], [ [ "That's much better! It seems that the model found most of the named entities but was confused about \"Megatron\" and \"Decepticons\", which are characters in the transformers franchise. This is no surprise since the original dataset probably did not contain many transformer characters. 
For this reason it makes sense to further fine-tune a model on your own dataset!\n\nNow that we've seen an example of text and token classification using Transformers, let's look at an interesting application called **question answering**.", "_____no_output_____" ], [ "## 4. Question answering", "_____no_output_____" ], [ "In this task, the model is given a **question** and a **context** and needs to find the answer to the question within the context. This problem can be rephrased as a classification problem: For each token the model needs to predict whether it is the start or the end of the answer. In the end we can extract the answer by looking at the span between the token with the highest start probability and highest end probability:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/qa_arch.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=600>", "_____no_output_____" ], [ "You can imagine that this requires quite a bit of pre- and post-processing logic. Good thing that the pipeline takes care of all that! As usual, we load the model by specifying the task in the `pipeline()` function:", "_____no_output_____" ] ], [ [ "qa_pipeline = pipeline(\"question-answering\")", "No model was supplied, defaulted to distilbert-base-cased-distilled-squad (https://huggingface.co/distilbert-base-cased-distilled-squad)\n" ] ], [ [ "This default model is trained on the famous [SQuAD dataset](https://huggingface.co/datasets/squad). Let's see if we can ask it what the customer wants:", "_____no_output_____" ] ], [ [ "question = \"What does the customer want?\"\n\noutputs = qa_pipeline(question=question, context=text)\noutputs", "_____no_output_____" ] ], [ [ "Awesome, that sounds about right!", "_____no_output_____" ], [ "## 5. 
Text summarization", "_____no_output_____" ], [ "Let's see if we can go beyond these natural language understanding tasks (NLU) where BERT excels and delve into the generative domain. Note that generation is much more computationally demanding since we usually generate one token at a time and need to run this several times. An example for how this process works is shown below:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/gen_steps.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=600>", "_____no_output_____" ], [ "A popular task involving generation is summarization. Let's see if we can use a transformer to generate a summary for us:", "_____no_output_____" ] ], [ [ "summarization_pipeline = pipeline(\"summarization\")", "No model was supplied, defaulted to sshleifer/distilbart-cnn-12-6 (https://huggingface.co/sshleifer/distilbart-cnn-12-6)\n" ] ], [ [ "This model is trained was trained on the [CNN/Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail) to summarize news articles.", "_____no_output_____" ] ], [ [ "outputs = summarization_pipeline(text, max_length=45, clean_up_tokenization_spaces=True)\nprint(wrapper.fill(outputs[0]['summary_text']))", "Your min_length=56 must be inferior than your max_length=45.\n" ] ], [ [ "That's not too bad! We can see the model was able to get the main gist of the customer feedback and even identified the author as \"Bumblebee\".", "_____no_output_____" ], [ "## 6. Translation", "_____no_output_____" ], [ "But what if there is no model in the language of my data? You can still try to translate the text. The [Helsinki NLP team](https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=Helsinkie-NLP) has provided over 1,000 language pair models for translation 🤯. 
Here we load one that translates English to German:", "_____no_output_____" ] ], [ [ "translator = pipeline(\"translation_en_to_de\", model=\"Helsinki-NLP/opus-mt-en-de\")", "_____no_output_____" ] ], [ [ "Let's translate our text to German:", "_____no_output_____" ] ], [ [ "outputs = translator(text, clean_up_tokenization_spaces=True, min_length=100)\nprint(wrapper.fill(outputs[0]['translation_text']))", "Sehr geehrter Amazon, letzte Woche habe ich eine Optimus Prime Action Figur aus\nIhrem Online-Shop in Deutschland bestellt. Leider, als ich das Paket öffnete,\nentdeckte ich zu meinem Entsetzen, dass ich stattdessen eine Action Figur von\nMegatron geschickt worden war! Als lebenslanger Feind der Decepticons, Ich\nhoffe, Sie können mein Dilemma verstehen. Um das Problem zu lösen, Ich fordere\neinen Austausch von Megatron für die Optimus Prime Figur habe ich bestellt.\nAnbei sind Kopien meiner Aufzeichnungen über diesen Kauf. Ich erwarte, bald von\nIhnen zu hören. Aufrichtig, Bumblebee.\n" ] ], [ [ "We can see that the text is clearly not perfectly translated, but the core meaning stays the same. Another cool application of translation models is data augmentation via backtranslation!", "_____no_output_____" ], [ "## 7. Zero-shot classification", "_____no_output_____" ], [ "As a last example let's have a look at a cool application showing the versatility of transformers: zero-shot classification. In zero-shot classification the model receives a text and a list of candidate labels and determines which labels are compatible with the text. Instead of having fixed classes, this allows for flexible classification without any labelled data! Usually this is a good first baseline!", "_____no_output_____" ] ], [ [ "zero_shot_classifier = pipeline(\"zero-shot-classification\",\n                                model=\"vicgalle/xlm-roberta-large-xnli-anli\")", "_____no_output_____" ] ], [ [ "Let's have a look at an example:", "_____no_output_____" ] ], [ [ "text = 'Dieser Tutorial ist großartig! 
Ich hoffe, dass jemand von Hugging Face meine Universität besuchen wird :)'\nclasses = ['Treffen', 'Arbeit', 'Digital', 'Reisen']", "_____no_output_____" ], [ "zero_shot_classifier(text, classes, multi_label=True)", "_____no_output_____" ] ], [ [ "This seems to have worked really well on this short example. Naturally, for longer and more domain-specific examples this approach might suffer.", "_____no_output_____" ], [ "## 8. Going beyond text", "_____no_output_____" ], [ "As mentioned at the start of this tutorial, Transformers can also be used for domains other than NLP! For these domains, there are many more pipelines that you can experiment with. Look at the following list for an overview:", "_____no_output_____" ] ], [ [ "from transformers import pipelines\nfor task in pipelines.SUPPORTED_TASKS:\n    print(task)", "audio-classification\nautomatic-speech-recognition\nfeature-extraction\ntext-classification\ntoken-classification\nquestion-answering\ntable-question-answering\nfill-mask\nsummarization\ntranslation\ntext2text-generation\ntext-generation\nzero-shot-classification\nconversational\nimage-classification\nimage-segmentation\nobject-detection\n" ] ], [ [ "Let's have a look at an application involving images!", "_____no_output_____" ], [ "### Computer vision", "_____no_output_____" ], [ "Recently, transformer models have also entered computer vision. Check out the DETR model on the [Hub](https://huggingface.co/facebook/detr-resnet-101-dc5):", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/object_detection.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=400>", "_____no_output_____" ], [ "### Audio", "_____no_output_____" ], [ "Another promising area is audio processing. For Speech2Text especially, there have been some promising advancements recently. 
See for example the [wav2vec2 model](https://huggingface.co/facebook/wav2vec2-base-960h):", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/speech2text.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=400>", "_____no_output_____" ], [ "### Table QA", "_____no_output_____" ], [ "Finally, a lot of real-world data is still in the form of tables. Being able to query tables is very useful and with [TAPAS](https://huggingface.co/google/tapas-large-finetuned-wtq) you can do tabular question-answering:", "_____no_output_____" ], [ "<img src=\"https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/tapas.png?raw=1\" alt=\"Alt text that describes the graphic\" title=\"Title text\" width=400>", "_____no_output_____" ], [ "## 9. Where to next?", "_____no_output_____" ], [ "Hopefully this tutorial has given you a taste of what Transformers can do and you're now excited to learn more! Here's a few resources you can use to dive deeper into the topic and the Hugging Face ecosystem:\n\n🤗 **A Tour through the Hugging Face Hub**\n\nIn this tutorial, you get to:\n- Explore the over 30,000 models shared in the Hub.\n- Learn efficient ways to find the right model and datasets for your own task.\n- Learn how to contribute and work collaboratively in your ML workflows\n\n***Duration: 20-40 minutes***\n\n👉 [click here to access the tutorial](https://www.notion.so/Workshop-A-Tour-through-the-Hugging-Face-Hub-2098e4bae9ba4288857e85c87ff1c851)\n\n✨ **Build and Host Machine Learning Demos with Gradio & Hugging Face**\n\nIn this tutorial, you get to:\n- Explore ML demos created by the community.\n- Build a quick demo for your machine learning model in Python using the `gradio` library\n- Host the demos for free with Hugging Face Spaces\n- Add your demo to the Hugging Face org for your class or conference\n\n***Duration: 20-40 minutes***\n\n👉 [click here to access the 
tutorial](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/02_ml-demos-with-gradio.ipynb)\n\n🎓 **The Hugging Face Course**\n\nThis course teaches you about applying Transformers to various tasks in natural language processing and beyond. Along the way, you'll learn how to use the Hugging Face ecosystem — 🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, and 🤗 Accelerate — as well as the Hugging Face Hub. It's completely free too!", "_____no_output_____" ] ], [ [ "YouTubeVideo('00GKzGyWFEs')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
cbb8ae3688b1092cad5190ccf7bde26c2d8f796a
34,823
ipynb
Jupyter Notebook
notebooks/adversarial_training_mnist.ipynb
aqsaimtiaz/adversarial-robustness-toolbox
2b086ecb4f5b6bd01b034052e75c387e7b6f14e4
[ "MIT" ]
1
2020-12-01T19:40:47.000Z
2020-12-01T19:40:47.000Z
notebooks/adversarial_training_mnist.ipynb
aqsaimtiaz/adversarial-robustness-toolbox
2b086ecb4f5b6bd01b034052e75c387e7b6f14e4
[ "MIT" ]
105
2020-08-24T06:15:43.000Z
2022-03-24T08:03:16.000Z
notebooks/adversarial_training_mnist.ipynb
honeypotz-eu/adversarial-robustness-toolbox
ddfb4e29329891731ff2f7d36b114be784db0d03
[ "MIT" ]
1
2020-09-28T12:58:01.000Z
2020-09-28T12:58:01.000Z
69.646
19,528
0.792149
[ [ [ "<table style=\"border: none\" align=\"center\">\n <tr style=\"border: none\">\n <th style=\"border: none\"><font face=\"verdana\" size=\"4\" color=\"black\"><b> Demonstrate adversarial training using ART </b></font></font></th>\n </tr> \n</table>", "_____no_output_____" ], [ "In this notebook we demonstrate adversarial training using ART on the MNIST dataset.\n\n\n## Contents\n\n1.\t[Load prereqs and data](#prereqs)\n2. [Train and evaluate a baseline classifier](#classifier)\n3. [Adversarially train a robust classifier](#adv_training)\n4.\t[Evaluate the robust classifier](#evaluation)", "_____no_output_____" ], [ "<a id=\"prereqs\"></a>\n## 1. Load prereqs and data", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')\nfrom keras.models import load_model\n\nfrom art.config import ART_DATA_PATH\nfrom art.utils import load_dataset, get_file\nfrom art.estimators.classification import KerasClassifier\nfrom art.attacks.evasion import FastGradientMethod\nfrom art.attacks.evasion import BasicIterativeMethod\nfrom art.defences.trainer import AdversarialTrainer\n\nimport numpy as np\n\n%matplotlib inline\nimport matplotlib.pyplot as plt", "Using TensorFlow backend.\n" ], [ "(x_train, y_train), (x_test, y_test), min_, max_ = load_dataset('mnist')", "_____no_output_____" ] ], [ [ "<a id=\"classifier\"></a>\n## 2. 
Train and evaluate a baseline classifier", "_____no_output_____" ], [ "Load the classifier model:", "_____no_output_____" ] ], [ [ "path = get_file('mnist_cnn_original.h5', extract=False, path=ART_DATA_PATH,\n url='https://www.dropbox.com/s/p2nyzne9chcerid/mnist_cnn_original.h5?dl=1')\nclassifier_model = load_model(path)\nclassifier = KerasClassifier(clip_values=(min_, max_), model=classifier_model, use_logits=False)", "_____no_output_____" ], [ "classifier_model.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 11, 11, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 1600) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 128) 204928 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 225,034\nTrainable params: 225,034\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "Evaluate the classifier performance on the first 100 original test samples:", "_____no_output_____" ] ], [ [ "x_test_pred = np.argmax(classifier.predict(x_test[:100]), axis=1)\nnb_correct_pred = np.sum(x_test_pred == np.argmax(y_test[:100], axis=1))\n\nprint(\"Original test data (first 100 images):\")\nprint(\"Correctly classified: {}\".format(nb_correct_pred))\nprint(\"Incorrectly classified: 
{}\".format(100-nb_correct_pred))", "Original test data (first 100 images):\nCorrectly classified: 100\nIncorrectly classified: 0\n" ] ], [ [ "Generate some adversarial samples:", "_____no_output_____" ] ], [ [ "attacker = FastGradientMethod(classifier, eps=0.5)\nx_test_adv = attacker.generate(x_test[:100])", "_____no_output_____" ] ], [ [ "And evaluate performance on those:", "_____no_output_____" ] ], [ [ "x_test_adv_pred = np.argmax(classifier.predict(x_test_adv), axis=1)\nnb_correct_adv_pred = np.sum(x_test_adv_pred == np.argmax(y_test[:100], axis=1))\n\nprint(\"Adversarial test data (first 100 images):\")\nprint(\"Correctly classified: {}\".format(nb_correct_adv_pred))\nprint(\"Incorrectly classified: {}\".format(100-nb_correct_adv_pred))", "Adversarial test data (first 100 images):\nCorrectly classified: 21\nIncorrectly classified: 79\n" ] ], [ [ "<a id=\"adv_training\"></a>\n## 3. Adversarially train a robust classifier", "_____no_output_____" ] ], [ [ "path = get_file('mnist_cnn_robust.h5', extract=False, path=ART_DATA_PATH,\n url='https://www.dropbox.com/s/yutsncaniiy5uy8/mnist_cnn_robust.h5?dl=1')\nrobust_classifier_model = load_model(path)\nrobust_classifier = KerasClassifier(clip_values=(min_, max_), model=robust_classifier_model, use_logits=False)", "_____no_output_____" ] ], [ [ "Note: the robust classifier has the same architecture as above, except the first dense layer has **1024** instead of **128** units. (This was recommend by Madry et al. 
(2017), *Towards Deep Learning Models Resistant to Adversarial Attacks*)", "_____no_output_____" ] ], [ [ "robust_classifier_model.summary()", "Model: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_3 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 13, 13, 32) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 11, 11, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\nflatten_2 (Flatten) (None, 1600) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 1024) 1639424 \n_________________________________________________________________\ndense_4 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 1,668,490\nTrainable params: 1,668,490\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "Also as recommended by Madry et al., we use BIM/PGD attacks during adversarial training:", "_____no_output_____" ] ], [ [ "attacks = BasicIterativeMethod(robust_classifier, eps=0.3, eps_step=0.01, max_iter=40)", "_____no_output_____" ] ], [ [ "Perform adversarial training:", "_____no_output_____" ] ], [ [ "# We had performed this before, starting with a randomly initialized model.\n# Adversarial training takes about 80 minutes on an NVIDIA V100.\n# The resulting model is the one loaded from mnist_cnn_robust.h5 above.\n\n# Here is the command we had used for the Adversarial Training\n\n# trainer = AdversarialTrainer(robust_classifier, attacks, ratio=1.0)\n# trainer.fit(x_train, y_train, nb_epochs=83, batch_size=50)", 
"_____no_output_____" ] ], [ [ "<a id=\"evaluation\"></a>\n## 4. Evaluate the robust classifier", "_____no_output_____" ], [ "Evaluate the robust classifier's performance on the original test data:", "_____no_output_____" ] ], [ [ "x_test_robust_pred = np.argmax(robust_classifier.predict(x_test[:100]), axis=1)\nnb_correct_robust_pred = np.sum(x_test_robust_pred == np.argmax(y_test[:100], axis=1))\n\nprint(\"Original test data (first 100 images):\")\nprint(\"Correctly classified: {}\".format(nb_correct_robust_pred))\nprint(\"Incorrectly classified: {}\".format(100-nb_correct_robust_pred))", "Original test data (first 100 images):\nCorrectly classified: 99\nIncorrectly classified: 1\n" ] ], [ [ "Evaluate the robust classifier's performance on the adversarial test data (**white-box** setting):", "_____no_output_____" ] ], [ [ "attacker_robust = FastGradientMethod(robust_classifier, eps=0.5)\nx_test_adv_robust = attacker_robust.generate(x_test[:100])", "_____no_output_____" ], [ "x_test_adv_robust_pred = np.argmax(robust_classifier.predict(x_test_adv_robust), axis=1)\nnb_correct_adv_robust_pred = np.sum(x_test_adv_robust_pred == np.argmax(y_test[:100], axis=1))\n\nprint(\"Adversarial test data (first 100 images):\")\nprint(\"Correctly classified: {}\".format(nb_correct_adv_robust_pred))\nprint(\"Incorrectly classified: {}\".format(100-nb_correct_adv_robust_pred))", "Adversarial test data (first 100 images):\nCorrectly classified: 79\nIncorrectly classified: 21\n" ] ], [ [ "Compare the performance of the original and the robust classifier over a range of `eps` values:", "_____no_output_____" ] ], [ [ "eps_range = [0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]\nnb_correct_original = []\nnb_correct_robust = []\n\nfor eps in eps_range:\n attacker.set_params(**{'eps': eps})\n attacker_robust.set_params(**{'eps': eps})\n x_test_adv = attacker.generate(x_test[:100])\n x_test_adv_robust = attacker_robust.generate(x_test[:100])\n \n x_test_adv_pred 
= np.argmax(classifier.predict(x_test_adv), axis=1)\n nb_correct_original += [np.sum(x_test_adv_pred == np.argmax(y_test[:100], axis=1))]\n \n x_test_adv_robust_pred = np.argmax(robust_classifier.predict(x_test_adv_robust), axis=1)\n nb_correct_robust += [np.sum(x_test_adv_robust_pred == np.argmax(y_test[:100], axis=1))]\n\neps_range = [0] + eps_range\nnb_correct_original = [nb_correct_pred] + nb_correct_original\nnb_correct_robust = [nb_correct_robust_pred] + nb_correct_robust", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.plot(np.array(eps_range), np.array(nb_correct_original), 'b--', label='Original classifier')\nax.plot(np.array(eps_range), np.array(nb_correct_robust), 'r--', label='Robust classifier')\n\nlegend = ax.legend(loc='upper center', shadow=True, fontsize='large')\nlegend.get_frame().set_facecolor('#00FFCC')\n\nplt.xlabel('Attack strength (eps)')\nplt.ylabel('Correct predictions')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
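The adversarial-robustness notebook above compares a standard and an adversarially trained classifier under FGSM attacks of increasing `eps`. The core FGSM idea — perturb each input feature by `eps` in the direction of the sign of the loss gradient — can be sketched without ART or Keras on a toy linear model. Everything below is a hypothetical illustration (hand-picked weights and inputs), not the ART API:

```python
# Minimal FGSM sketch on a tiny linear "classifier" (hypothetical, not ART):
# score(x) = w . x, label = 1 if score > 0 else 0. For this linear model the
# gradient direction that hurts a positive example is simply -sign(w), so the
# attack shifts every feature by eps against the weight sign.

def predict(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0

def fgsm(w, x, y, eps):
    # move each feature eps in the direction that lowers the correct-class score
    direction = -1 if y == 1 else 1
    return [xi + direction * eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.25, 1.0]
x = [1.0, -1.0, 0.5]                 # clean, correctly classified positive example
print(predict(w, x))                 # 1: clean input classified correctly
print(predict(w, fgsm(w, x, 1, 0.2)))  # 1: small eps does not flip the label
print(predict(w, fgsm(w, x, 1, 0.8)))  # 0: larger eps flips the prediction
```

As in the notebook's `eps_range` sweep, a small perturbation leaves the prediction intact while a sufficiently large one flips it; adversarial training aims to push that flipping threshold higher.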
cbb8c612ac0bce326e9a661e3a7ab07dbe75a97d
41,078
ipynb
Jupyter Notebook
Evaluation.ipynb
shanmukh98/tep-fault-detection
ee14d2f9ba30f9077f99f492caf5a793053d36e1
[ "MIT" ]
5
2019-11-06T10:48:41.000Z
2021-01-21T12:00:43.000Z
Evaluation.ipynb
shanmukh98/tep-fault-detection
ee14d2f9ba30f9077f99f492caf5a793053d36e1
[ "MIT" ]
1
2020-02-01T03:57:46.000Z
2020-02-01T03:57:46.000Z
Evaluation.ipynb
shanmukh98/tep-fault-detection
ee14d2f9ba30f9077f99f492caf5a793053d36e1
[ "MIT" ]
null
null
null
82.156
6,644
0.755149
[ [ [ "import numpy as np\nimport tensorflow as tf\nimport pyreadr\nimport pandas as pd\nimport keras\nfrom keras.layers import Dense,Dropout,BatchNormalization\nfrom keras.models import Sequential,Model\nfrom keras.callbacks import ModelCheckpoint,EarlyStopping,ReduceLROnPlateau\nfrom keras.optimizers import Adam\nfrom keras.regularizers import l1\nfrom sklearn.preprocessing import StandardScaler\nfrom keras.models import load_model\nfrom sklearn.covariance import MinCovDet,EmpiricalCovariance\nfrom matplotlib.pyplot import hist\nfrom matplotlib import pyplot as plt\nfrom sklearn.metrics import accuracy_score\n%matplotlib inline", "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a 
future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 
1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nUsing TensorFlow backend.\n" ], [ "from keras.backend.tensorflow_backend import set_session\nimport tensorflow as tf\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU\n# config.log_device_placement = True # to log device placement (on which device the operation ran)\nsess = tf.Session(config=config)\nset_session(sess) # set this TensorFlow session as the default session for Keras", "_____no_output_____" ], [ "# Set seeds for random number generators for reproducable results\nseed = 0\nnp.random.seed(seed)\ntf.set_random_seed(seed)", "_____no_output_____" ], [ "# Load training data\ndata = pyreadr.read_r(\"/home/shanmukh/Documents/IICT/tep-fault-detection/dataset/TEP_FaultFree_Training.RData\")\ndf = data['fault_free_training']\ntraining_data = df.drop([\"faultNumber\",\"simulationRun\",\"sample\"],axis=1)", "_____no_output_____" ], [ "# Standard Normalization\n# 0 mean \n# 1 std\nscaler = StandardScaler()\nscaler.fit(training_data)\ntraining_data = scaler.transform(training_data)", "_____no_output_____" ], [ "model = load_model(\"/home/shanmukh/Documents/IICT/tep-fault-detection/models/weights-55-0.09.hdf5\")\nencoder = Model(inputs=model.input,outputs=model.get_layer('latent_space').output)\n # model.summary()\n # encoder.summary()", "WARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. 
Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nWARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nWARNING:tensorflow:From /home/shanmukh/.local/lib/python3.6/site-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\n" ], [ "# Get outputs\npredictions = model.predict(training_data,batch_size=512)\nlatent = encoder.predict(training_data,batch_size=512)\n# Set Percentile Tresholds\npercentile_treshold = 95", "_____no_output_____" ], [ "# SPE statistic\nspe = np.sum((training_data - predictions)**2,axis=1)\ncutoff_spe = np.percentile(spe,percentile_treshold)\nnp.savetxt(\"spe_train.dat\",spe)\n_ = hist(spe,bins=100)\nprint (cutoff_spe)", "8.317551184141351\n" ], [ "# Mahalanobis distance\ncov = EmpiricalCovariance().fit(latent)\nmd = cov.mahalanobis(latent)\ncutoff_md = np.percentile(md,percentile_treshold)\n_ = hist(md,bins=100)\nnp.savetxt(\"T2_train.dat\",md)\nprint (cutoff_md)", "46.18094748901319\n" ], [ "# Unified Index\nui = spe/cutoff_spe + md/cutoff_md\ncutoff_ui = np.percentile(ui,percentile_treshold)\n_ = hist(ui,bins=100)\nprint (cutoff_ui)\nnp.savetxt(\"Unified_index_train.dat\",ui)", "1.7770198359405076\n" ], [ "# Hotelling's T^2 Statistic\n# covariance = cov.covariance_\n# # pseudo inverse\n# inv = np.linalg.pinv(covariance)\n# t2 = 
[np.matmul(np.matmul(np.matrix(i),np.matrix(inv)),np.matrix(i).T) for i in latent]\n# t2 = np.array(t2).squeeze()", "_____no_output_____" ], [ "# Load and normalize Testing Data\ntest_files = []\nfor i in range(22):\n test_files.append('d'+format(i, '02d')+\"_te.dat\")\npath_to_test = \"/home/shanmukh/Documents/IICT/tep-fault-detection/dataset/TE_process/\"\ntest_data = []\ntest_data_normalized = []\nfor i in test_files:\n test_data.append(np.loadtxt(path_to_test+i))\n test_data_normalized.append(scaler.transform(test_data[-1]))\ntruth = np.ones(shape = (800,))", "_____no_output_____" ], [ "# Metrics\nspe_all = []\nmd_all = []\nui_all = []\nmissed_detection_rates = []\nx = np.array(list(range(960)))\ntemp = 0\nfor i in test_data_normalized:\n predictions_test = model.predict(i,batch_size=480)\n latent_test = encoder.predict(i,batch_size=480)\n spe_test = np.sum((i - predictions_test)**2,axis=1)\n md_test = cov.mahalanobis(latent_test)\n ui_test = spe_test/cutoff_spe + md_test/cutoff_md\n spe_y = np.zeros_like(spe_test)\n spe_y[spe_test>cutoff_spe] = 1 \n md_y = np.zeros_like(md_test)\n md_y[md_test>cutoff_md] = 1\n ui_y = np.zeros_like(ui_test)\n ui_y[ui_test>cutoff_ui] = 1\n np.savetxt(\"indices/spe_\"+test_files[temp],spe_test)\n np.savetxt(\"indices/T2_\"+test_files[temp],md_test)\n np.savetxt(\"indices/Unified_\"+test_files[temp],ui_test)\n# plt.plot(x,spe_test)\n print (temp,\",\",1-accuracy_score(spe_y[160:],truth),\",\",1-accuracy_score(md_y[160:],truth),\",\",1-accuracy_score(ui_y[160:],truth))\n missed_detection_rates.append(1-accuracy_score(ui_y[160:],truth))\n temp+=1\n ", "0 , 0.94125 , 0.93125 , 0.9299999999999999\n1 , 0.0024999999999999467 , 0.0050000000000000044 , 0.0024999999999999467\n2 , 0.015000000000000013 , 0.012499999999999956 , 0.01375000000000004\n3 , 0.94375 , 0.93625 , 0.93125\n4 , 0.0 , 0.35624999999999996 , 0.0\n5 , 0.63625 , 0.70375 , 0.6\n6 , 0.0 , 0.010000000000000009 , 0.0\n7 , 0.0 , 0.0 , 0.0\n8 , 0.02375000000000005 , 
0.02375000000000005 , 0.01749999999999996\n9 , 0.94 , 0.935 , 0.92625\n10 , 0.47750000000000004 , 0.5900000000000001 , 0.39\n11 , 0.30374999999999996 , 0.385 , 0.17500000000000004\n12 , 0.03500000000000003 , 0.011249999999999982 , 0.008750000000000036\n13 , 0.04500000000000004 , 0.04749999999999999 , 0.04500000000000004\n14 , 0.0012499999999999734 , 0.0012499999999999734 , 0.0\n15 , 0.94125 , 0.925 , 0.915\n16 , 0.51125 , 0.76125 , 0.4675\n17 , 0.036250000000000004 , 0.1725 , 0.040000000000000036\n18 , 0.09499999999999997 , 0.09999999999999998 , 0.09375\n19 , 0.7462500000000001 , 0.78625 , 0.6675\n20 , 0.39749999999999996 , 0.6275 , 0.36\n21 , 0.51875 , 0.5587500000000001 , 0.48624999999999996\n" ], [ "np.mean(missed_detection_rates[1:])", "_____no_output_____" ], [ "x", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
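The fault-detection notebook above flags a sample as faulty when its SPE — the squared prediction error between the sample and its autoencoder reconstruction — exceeds a percentile cutoff computed on fault-free training data. The index itself is simple enough to sketch in pure Python with made-up numbers (the notebook uses a trained Keras autoencoder and `np.percentile`; the nearest-rank percentile here is a simplification):

```python
import math

def spe(x, x_hat):
    # squared prediction error between a sample and its reconstruction
    return sum((a - b) ** 2 for a, b in zip(x, x_hat))

def nearest_rank_percentile(values, pct):
    # simple 1-based nearest-rank percentile, adequate for a cutoff sketch
    ordered = sorted(values)
    k = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[k - 1]

# fault-free residuals are small; the cutoff is their 95th percentile,
# mirroring percentile_treshold = 95 in the notebook
train_spe = [spe([0.1 * i], [0.0]) for i in range(10)]   # 0.0 .. 0.81
cutoff = nearest_rank_percentile(train_spe, 95)
faulty_spe = spe([2.0], [0.0])                           # reconstruction fails badly
print(faulty_spe > cutoff)   # True: sample flagged as a fault
```

The notebook's unified index follows the same pattern, summing SPE and Mahalanobis distance after normalizing each by its own training cutoff.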
cbb8c997ae36e822c52a1cb3d798471d18cad605
269,831
ipynb
Jupyter Notebook
Azure ML Train/FacemaskAzureTrain.ipynb
altaga/Azure-Facemask-Detector
90615e921456c084e452e5f3b198bd671fe7e4e5
[ "MIT" ]
null
null
null
Azure ML Train/FacemaskAzureTrain.ipynb
altaga/Azure-Facemask-Detector
90615e921456c084e452e5f3b198bd671fe7e4e5
[ "MIT" ]
null
null
null
Azure ML Train/FacemaskAzureTrain.ipynb
altaga/Azure-Facemask-Detector
90615e921456c084e452e5f3b198bd671fe7e4e5
[ "MIT" ]
1
2021-05-24T08:23:10.000Z
2021-05-24T08:23:10.000Z
191.777541
6,558
0.446739
[ [ [ "!git clone https://github.com/altaga/Facemask-Opt-Dataset", "Cloning into 'Facemask-Opt-Dataset'...\nremote: Enumerating objects: 50, done.\u001b[K\nremote: Counting objects: 100% (50/50), done.\u001b[K\nremote: Compressing objects: 100% (47/47), done.\u001b[K\nremote: Total 2048 (delta 3), reused 48 (delta 3), pack-reused 1998\u001b[K\nReceiving objects: 100% (2048/2048), 109.06 MiB | 30.96 MiB/s, done.\nResolving deltas: 100% (4/4), done.\nUpdating files: 100% (1901/1901), done.\n" ], [ "import numpy as np\r\nimport matplotlib\r\nimport matplotlib.pyplot as plt\r\n\r\nimport azureml\r\nfrom azureml.core import Workspace, Run\r\nws = Workspace.from_config()\r\n\r\nexperiment_name = 'deeplearning_facemask'\r\nfrom azureml.core import Experiment\r\nexp = Experiment(workspace=ws, name=experiment_name)\r\nprint(exp)", "Experiment(Name: deeplearning_facemask,\nWorkspace: azureai)\n" ], [ "from azureml.core.compute import AmlCompute\r\nfrom azureml.core.compute import ComputeTarget\r\nimport os\r\n\r\n# choose a name for your cluster\r\ncompute_name = os.environ.get(\"AML_COMPUTE_CLUSTER_NAME\", \"cpu-cluster\")\r\ncompute_min_nodes = os.environ.get(\"AML_COMPUTE_CLUSTER_MIN_NODES\", 0)\r\ncompute_max_nodes = os.environ.get(\"AML_COMPUTE_CLUSTER_MAX_NODES\", 4)\r\n\r\n# This example uses CPU VM. 
For using GPU VM, set SKU to STANDARD_NC6\r\nvm_size = os.environ.get(\"AML_COMPUTE_CLUSTER_SKU\", \"STANDARD_D2_V2\")\r\n\r\n\r\nif compute_name in ws.compute_targets:\r\n compute_target = ws.compute_targets[compute_name]\r\n if compute_target and type(compute_target) is AmlCompute:\r\n print(\"found compute target: \" + compute_name)\r\nelse:\r\n print(\"creating new compute target...\")\r\n provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size,\r\n min_nodes = compute_min_nodes, \r\n max_nodes = compute_max_nodes)\r\n\r\n # create the cluster\r\n compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\r\n \r\n # can poll for a minimum number of nodes and for a specific timeout. \r\n # if no min node count is provided it will use the scale settings for the cluster\r\n compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\r\n \r\n # For a more detailed view of current AmlCompute status, use get_status()\r\n print(compute_target.get_status().serialize())", "found compute target: cpu-cluster\n" ], [ "from azureml.core.environment import Environment\r\nfrom azureml.core.conda_dependencies import CondaDependencies\r\n\r\n# to install required packages\r\nenv = Environment('facemaskenv')\r\ncd = CondaDependencies.create(pip_packages=['Pillow','azureml-defaults',\"tensorflow\",\"scikit-learn\",\"opencv-python-headless\"])\r\nenv.python.conda_dependencies = cd\r\n\r\n# Register environment to re-use later\r\nenv.register(workspace = ws)", "_____no_output_____" ], [ "from azureml.core import ScriptRunConfig\r\n\r\nsrc = ScriptRunConfig(source_directory='',\r\n compute_target=compute_target,\r\n script='train.py',\r\n environment=env)", "_____no_output_____" ], [ "run = Experiment(workspace=ws, name='TF-Facemask').submit(src)\r\nrun.wait_for_completion(show_output=True)", "Submitting /mnt/batch/tasks/shared/LS_root/mounts/clusters/azureaitrain/code/Users/victor.altamirano directory for 
run. The size of the directory >= 25 MB, so it can take a few minutes.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
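The compute-cluster cell in the Azure ML notebook above follows a get-or-create pattern: reuse the target if its name is already registered, otherwise provision a new one. The pattern can be sketched generically with a plain dict standing in for the workspace registry (the factory and field names below are hypothetical, not the `azureml` SDK):

```python
# Generic "get-or-create" sketch mirroring the compute-target logic above
# (plain dict registry and a hypothetical factory, not the azureml SDK):

def get_or_create(registry, name, factory):
    if name in registry:                 # found an existing resource: reuse it
        return registry[name], False
    registry[name] = factory(name)       # otherwise provision a new one
    return registry[name], True

targets = {}
cluster, created = get_or_create(
    targets, "cpu-cluster",
    lambda n: {"name": n, "vm_size": "STANDARD_D2_V2"})
same, created_again = get_or_create(targets, "cpu-cluster", lambda n: {"name": n})
print(created, created_again)   # True False: second call reuses the cluster
```

Keeping creation idempotent this way is what lets the notebook cell be re-run safely, printing "found compute target" instead of provisioning again.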
cbb8cf03e2266df55ad7c41dd9443eaf74d974cd
115,252
ipynb
Jupyter Notebook
docs/advanced/v1 cookbook/memtest.ipynb
pnewstein/pyABF
19969f86740373b17cea876a8c8d451b45f7944f
[ "MIT" ]
74
2017-11-06T17:53:48.000Z
2022-03-27T12:14:46.000Z
docs/advanced/v1 cookbook/memtest.ipynb
pnewstein/pyABF
19969f86740373b17cea876a8c8d451b45f7944f
[ "MIT" ]
116
2018-01-16T21:36:29.000Z
2022-03-31T11:46:04.000Z
docs/advanced/v1 cookbook/memtest.ipynb
pnewstein/pyABF
19969f86740373b17cea876a8c8d451b45f7944f
[ "MIT" ]
30
2018-06-28T13:19:53.000Z
2022-03-25T02:52:48.000Z
394.69863
90,658
0.923906
[ [ [ "# The pyabf Cookbook: Using `ABF.memtest`\n\nThis page demonstrates how to access the abf membrane test data. For theoretical details about membrane properties, how to measure them, and how to computationally create and analyze membrane test data see the [membrane test theory and simulation](memtest-simulation.ipynb) page.\n\nFor more resources, see the pyABF project website: http://www.GitHub.com/swharden/pyABF\n\n### Common variables:\n* $ I_{h} $ - average clamp current at the holding voltage (a.k.a. holding current)\n* $ C_{m} $ - membrane capacitance\n* $ R_{a} $ - access resistance (synonymous with series resistance)\n* $ R_{m} $ - membrane resistance (the true property of the cell membrane)\n* $ \\tau $ - (tau) the time constant of the decay curve of a current transient in response to a voltage step", "_____no_output_____" ], [ "### Prepare the Environment:", "_____no_output_____" ] ], [ [ "# prepare the environment\nimport numpy as np\nnp.set_printoptions(precision=3)\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn')\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Load the ABF Class", "_____no_output_____" ] ], [ [ "import sys\nsys.path.insert(0, '../src/')\nimport pyabf\npyabf.info()", "pyabf 0.1.4 was imported from C:\\Users\\scott\\Documents\\GitHub\\pyABF\\src\\pyabf\n" ] ], [ [ "### Import a Recording\n_Membrane tests can be analyzed from any episodic voltage clamp recording with a hyperpolarizing current step at the start of every sweep_", "_____no_output_____" ] ], [ [ "abf=pyabf.ABF(\"../data/16d05007_vc_tags.abf\")\nprint(\"This ABF has %d sweeps\"%abf.sweepCount)\nplt.plot(abf.dataX,abf.dataY)\nabf.plotDecorate()", "This ABF has 187 sweeps\n" ] ], [ [ "### Calculate $I_{h}$, $R_{m}$, $R_{a}$, $C_{m}$, and $\\tau$ for Every Sweep", "_____no_output_____" ] ], [ [ "abf.memtestAnalyzeAll()", "_____no_output_____" ] ], [ [ "### Display Memtest Averages", "_____no_output_____" ] ], [ [ "print(\"Ih:\", abf.memtest.Ih.average, 
abf.memtest.Ih.units)\nprint(\"Ra:\", abf.memtest.Ra.average, abf.memtest.Ra.units)\nprint(\"Rm:\", abf.memtest.Rm.average, abf.memtest.Rm.units)\nprint(\"Cm:\", abf.memtest.Cm.average, abf.memtest.Cm.units)\nprint(\"Tau:\", abf.memtest.Tau.average, abf.memtest.Tau.units)", "Ih: 35.0228064772 pA\nRa: 7.05271050172 MOhm\nRm: 165.566419111 MOhm\nCm: 226.252527971 pF\nTau: 1.59364790122 ms\n" ] ], [ [ "### Display Memtest Values per Sweep", "_____no_output_____" ] ], [ [ "print(abf.memtest.Ih)", "array([ 3.597, 5.85 , 6.705, 10.278, 10.42 , 7.291, 5.605,\n 17.65 , 14.525, 11.832, 17.731, 23.289, 20.621, 22.801,\n 24.929, 27.924, 23.792, 25.982, 26.719, 27.073, 31.428,\n 29.898, 30.746, 32.62 , 33.483, 33.558, 34.334, 33.765,\n 34.783, 35.515, 36.696, 36.926, 35.68 , 33.658, 37.676,\n 36.16 , 36.8 , 34.581, 34.859, 36.574, 37.926, 34.758,\n 38.247, 39.671, 35.643, 30.394, 34.903, 33.555, 32.702,\n 39.729, 35.832, 31.962, 36.729, 38.073, 30.859, 36.36 ,\n 36.835, 36.835, 35.743, 36.578, 37.191, 34.768, 38.811,\n 38.98 , 37.651, 39.971, 38.384, 36.567, 36.309, 37.382,\n 39.664, 41.707, 36.771, 36.984, 39.821, 34.745, 38.037,\n 39.115, 38.617, 37.513, 35.79 , 37.499, 35.796, 37.698,\n 38.427, 38.451, 38.552, 36.978, 38.49 , 35.712, 38.369,\n 38.786, 37.912, 43.17 , 41.452, 35.871, 40.341, 42.63 ,\n 38.451, 38.872, 37.022, 41.007, 37.195, 37.262, 40.233,\n 38.377, 40.309, 42.553, 40.983, 40.205, 39.104, 42.14 ,\n 41.85 , 40.399, 40.142, 40.619, 35.868, 38.327, 40.633,\n 37.136, 35.519, 38.303, 35.513, 35.135, 35.133, 34.479,\n 39.539, 34.499, 34.691, 34.26 , 34.355, 33.868, 35.977,\n 35.031, 34.699, 35.319, 37.226, 36.092, 35.462, 32.833,\n 36.718, 35.746, 36.645, 43.925, 38.027, 37.807, 35.685,\n 36.611, 34.175, 33.874, 34.835, 36.122, 40.199, 37.071,\n 37.933, 39.001, 44.499, 43.152, 40.516, 37.803, 38.573,\n 34.595, 39.458, 39.636, 40.927, 39.453, 39.096, 38.236,\n 40.768, 35.79 , 36.278, 37.097, 37.508, 37.211, 38.547,\n 34.509, 38.691, 39.585, 38.512, 41.334, 38.135, 
38.296,\n 36.827, 38.573, 34.698, 34.715, 36.66 ])\n" ] ], [ [ "### Plot Memtest Information", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(6,8))\n\nax1=plt.subplot(411)\nplt.plot(abf.sweepTimesMin,abf.memtest.Ih)\nplt.title(abf.memtest.Ih.desc, fontweight='bold')\nplt.ylabel(abf.memtest.Ih.label)\n\nplt.subplot(412,sharex=ax1)\nplt.plot(abf.sweepTimesMin,abf.memtest.Ra)\nplt.title(abf.memtest.Ra.desc, fontweight='bold')\nplt.ylabel(abf.memtest.Ra.label)\n\nplt.subplot(413,sharex=ax1)\nplt.plot(abf.sweepTimesMin,abf.memtest.Rm)\nplt.title(abf.memtest.Rm.desc, fontweight='bold')\nplt.ylabel(abf.memtest.Rm.label)\n\nplt.subplot(414,sharex=ax1)\nplt.plot(abf.sweepTimesMin,abf.memtest.Cm)\nplt.title(abf.memtest.Cm.desc, fontweight='bold')\nplt.ylabel(abf.memtest.Cm.label)\nplt.xlabel(\"Experiment Duration (minutes)\")\n\nplt.margins(0,.1)\nplt.tight_layout()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
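The memtest notebook above reports $I_h$, $R_a$, $R_m$, $C_m$, and $\tau$ per sweep via `abf.memtestAnalyzeAll()`. Under the common single-compartment approximation (and with $R_m \gg R_a$, so the current transient decays with $\tau \approx R_a C_m$), the quantities relate by Ohm's law. A back-of-envelope check with made-up numbers of the same order as the averages printed above (these inputs are illustrative, not taken from the ABF file):

```python
# Back-of-envelope sketch of the passive-membrane relations named above.
# A voltage step dV produces a peak transient Ipeak and a steady-state current
# change Iss; assuming Rm >> Ra (so tau ~= Ra * Cm):
#   Ra ~= dV / Ipeak,   Rm ~= dV / Iss - Ra,   Cm ~= tau / Ra

dV = -10e-3        # V   (hyperpolarizing step size)
Ipeak = -1.25e-9   # A   (instantaneous transient peak, hypothetical)
Iss = -60e-12      # A   (steady-state current change, hypothetical)
tau = 1.6e-3       # s   (fitted decay time constant, hypothetical)

Ra = dV / Ipeak            # ~8 MOhm
Rm = dV / Iss - Ra         # ~159 MOhm
Cm = tau / Ra              # ~200 pF
print(round(Ra / 1e6), round(Rm / 1e6), round(Cm / 1e-12))
```

Note that the notebook's own averages are mutually consistent with the last relation: tau/Ra = 1.59 ms / 7.05 MOhm is about 226 pF, matching the reported Cm.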
cbb8d24636ed82d9c01614980b68e662af889216
1,413
ipynb
Jupyter Notebook
Launcher/CustomCode/SkyLight/Clone of/Research.ipynb
Synchronicity89/Lean
564af47ea980cf0524874643c7190da82236bcfb
[ "Apache-2.0" ]
null
null
null
Launcher/CustomCode/SkyLight/Clone of/Research.ipynb
Synchronicity89/Lean
564af47ea980cf0524874643c7190da82236bcfb
[ "Apache-2.0" ]
1
2020-08-25T03:02:47.000Z
2020-08-25T03:02:47.000Z
Launcher/CustomCode/SkyLight/Clone of/Research.ipynb
Synchronicity89/Lean
564af47ea980cf0524874643c7190da82236bcfb
[ "Apache-2.0" ]
null
null
null
20.478261
88
0.524416
[ [ [ "![QuantConnect Logo](https://cdn.quantconnect.com/web/i/icon.png)\n<hr>", "_____no_output_____" ] ], [ [ "// QuantBook C# Research Environment\n// For more information see https://www.quantconnect.com/docs/research/overview\n#load \"../QuantConnect.csx\"\nvar qb = new QuantBook();\nvar spy = qb.AddEquity(\"SPY\");\nvar history = qb.History(qb.Securities.Keys, 360, Resolution.Daily);", "_____no_output_____" ], [ "foreach(var slice in history.Take(5)) {\n Console.WriteLine(slice.Bars[spy.Symbol].ToString());\n}", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ] ]
cbb8d4d122b4b981c475d334a1874f928790dc83
1,167
ipynb
Jupyter Notebook
lab.ipynb
josh-hdz/cloud-computing-foundations
c06db10f8ef63e80be04a41b245627f54dd4ecfb
[ "MIT" ]
null
null
null
lab.ipynb
josh-hdz/cloud-computing-foundations
c06db10f8ef63e80be04a41b245627f54dd4ecfb
[ "MIT" ]
null
null
null
lab.ipynb
josh-hdz/cloud-computing-foundations
c06db10f8ef63e80be04a41b245627f54dd4ecfb
[ "MIT" ]
null
null
null
22.442308
238
0.486718
[ [ [ "<a href=\"https://colab.research.google.com/github/josh-hdz/cloud-computing-foundations/blob/master/lab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "print(\"hello world\")", "_____no_output_____" ] ], [ [ "# Title\nplus some text", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbb8d6d2b263b2f308c0dc3798d6ccb42d10ae19
2,060
ipynb
Jupyter Notebook
git.ipynb
marcoaversa/conditionalDDPM
e28e26febe8d21ec9793ba9b888ca80707eee10e
[ "MIT" ]
null
null
null
git.ipynb
marcoaversa/conditionalDDPM
e28e26febe8d21ec9793ba9b888ca80707eee10e
[ "MIT" ]
null
null
null
git.ipynb
marcoaversa/conditionalDDPM
e28e26febe8d21ec9793ba9b888ca80707eee10e
[ "MIT" ]
1
2021-11-15T11:15:50.000Z
2021-11-15T11:15:50.000Z
26.075949
146
0.587379
[ [ [ "!git config --global user.email [email protected]\n!git config --global user.name marcoaversa", "_____no_output_____" ], [ "!git config --list", "[email protected]\r\nuser.name=marcoaversa\r\ncore.repositoryformatversion=0\r\ncore.filemode=true\r\ncore.bare=false\r\ncore.logallrefupdates=true\r\nremote.origin.url=https://github.com/marcoaversa/conditionalDDPM.git\r\nremote.origin.fetch=+refs/heads/*:refs/remotes/origin/*\r\nbranch.master.remote=origin\r\nbranch.master.merge=refs/heads/master\r\ncredential.helper=cache --timeout=8000\r\n" ], [ "#When running this command, the first time you pull or push from the remote repository, you'll get asked about the username and password.\n#Afterwards, for consequent communications with the remote repository you don't have to provide the username and password.\n!git config --global credential.helper store\n!git config --global credential.helper 'cache --timeout=8000'\n\"\"\"Doesn't work anymore!\"\"\"", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbb8f0078fbe05889311fdc538ca2247e20d1f51
644,225
ipynb
Jupyter Notebook
training-autoencoder.ipynb
carlosnatalino/2020-JOCN-efficient-ML
dce5db6f95550ead6d5fdbaf5cb3271b5545b548
[ "MIT" ]
1
2020-11-24T16:35:19.000Z
2020-11-24T16:35:19.000Z
training-autoencoder.ipynb
carlosnatalino/2020-JOCN-efficient-ML
dce5db6f95550ead6d5fdbaf5cb3271b5545b548
[ "MIT" ]
null
null
null
training-autoencoder.ipynb
carlosnatalino/2020-JOCN-efficient-ML
dce5db6f95550ead6d5fdbaf5cb3271b5545b548
[ "MIT" ]
null
null
null
41.55486
200
0.506879
[ [ [ "# Title of the work", "_____no_output_____" ] ], [ [ "import pickle\nimport logging\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\n\nfrom matplotlib import rcParams\nrcParams['font.size'] = 14\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\n\n# logging.getLogger('tensorflow').setLevel(logging.INFO)\nprint('Tensorflow version:', tf.__version__)", "Tensorflow version: 2.2.0\n" ] ], [ [ "## Definitions", "_____no_output_____" ] ], [ [ "number_components = [x for x in range(1, 9)]\n\nencoder_layers = [\n [40],\n [100, 40],\n [400, 100, 40],\n]\nlr = 0.01\n# lr = 0.001\n\noptimizer = tf.keras.optimizers.SGD(learning_rate=lr)\n# optimizer = tf.keras.optimizers.RMSprop(learning_rate=lr)\n\n# dataset_filter = 'all' # done\ndataset_filter = 'normal' # doing now\n\n\nseed = 42\nnp.random.seed(seed)\nnumber_epochs = 600\ntest_size = 0.5 # proportion of the number of samples used for testing, i.e., (1-test_size) used for training\n\nfigure_format = 'svg'", "_____no_output_____" ], [ "folder = '/nobackup/carda/datasets/ml-simulation-optical/2019-ecoc-demo'", "_____no_output_____" ] ], [ [ "## Importing dataset", "_____no_output_____" ] ], [ [ "with open(folder + '/compiled-dataset.h5', 'rb') as file:\n final_dataframe, scaled_dataframe, class_columns, class_names = pickle.load(file)\ninput_dim = final_dataframe.shape[1] - 3 # the last three columns are classes", "_____no_output_____" ] ], [ [ "## Auxiliary functions", "_____no_output_____" ] ], [ [ "def build_model(data_dim, layers, optimizer='sgd', loss='mse', metrics=['mse', 'msle']):\n model = tf.keras.Sequential(name='encoder_' + '-'.join(str(x) for x in layers))\n model.add(tf.keras.layers.Dense(layers[0], input_shape=(data_dim,), name='input_and_0'))\n for i in range(1, len(layers)-1):\n 
model.add(tf.keras.layers.Dense(layers[i], name=f'encoder_{i}'))\n print('enc:', layers[i], i)\n# model.add(tf.keras.layers.Dense(layers[len(layers)-1], name=f'encoder_{len(layers)-1}', activation='tanh'))\n for i in range(len(layers)-1, -1, -1):\n model.add(tf.keras.layers.Dense(layers[i], name=f'decoder_{i}'))\n print('dec:', layers[i], i)\n# model.add(DenseTied(model.layers[i], name=f'decoder_{i}'))\n \n model.add(tf.keras.layers.Dense(data_dim, name=f'output'))\n model.compile(optimizer=optimizer, loss=loss, metrics=metrics)\n return model", "_____no_output_____" ] ], [ [ "## Building training and testing datasets", "_____no_output_____" ] ], [ [ "if dataset_filter == 'normal':\n normal_conditions = scaled_dataframe[(scaled_dataframe['attack'] == 0)].values\nelse:\n normal_conditions = scaled_dataframe.values\nx_train, x_test, y_train, y_test = train_test_split(normal_conditions[:, :input_dim], normal_conditions[:, -1], test_size=test_size, random_state=seed)", "_____no_output_____" ] ], [ [ "## Training the autoencoders", "_____no_output_____" ] ], [ [ "histories = []\nfor layer in encoder_layers:\n for n_components in number_components:\n final_layer = layer + [n_components]\n print(final_layer)\n model = build_model(input_dim, final_layer, optimizer=optimizer)\n model.summary()\n \n # saving a graphical representation\n tf.keras.utils.plot_model(model, to_file=f'./models/{dataset_filter}_{optimizer._name}_{lr}_{model.name}-model.png', show_shapes=True, show_layer_names=False)\n\n history = model.fit(x_train, x_train, epochs=number_epochs, batch_size=64, verbose=0, validation_data=(x_test, x_test))\n model.save(f'./models/{dataset_filter}_{optimizer._name}_{lr}_{model.name}-model.h5')\n histories.append(history.history)", "[40, 1]\ndec: 1 1\ndec: 40 0\nModel: \"encoder_40-1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 
(Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 1) 41 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 80 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 2,672\nTrainable params: 2,672\nNon-trainable params: 0\n_________________________________________________________________\n[40, 2]\ndec: 2 1\ndec: 40 0\nModel: \"encoder_40-2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 2) 82 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 120 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 2,753\nTrainable params: 2,753\nNon-trainable params: 0\n_________________________________________________________________\n[40, 3]\ndec: 3 1\ndec: 40 0\nModel: \"encoder_40-3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 3) 123 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 160 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 2,834\nTrainable params: 2,834\nNon-trainable 
params: 0\n_________________________________________________________________\n[40, 4]\ndec: 4 1\ndec: 40 0\nModel: \"encoder_40-4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 4) 164 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 200 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 2,915\nTrainable params: 2,915\nNon-trainable params: 0\n_________________________________________________________________\n[40, 5]\ndec: 5 1\ndec: 40 0\nModel: \"encoder_40-5\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 5) 205 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 240 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 2,996\nTrainable params: 2,996\nNon-trainable params: 0\n_________________________________________________________________\n[40, 6]\ndec: 6 1\ndec: 40 0\nModel: \"encoder_40-6\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 6) 246 
\n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 280 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 3,077\nTrainable params: 3,077\nNon-trainable params: 0\n_________________________________________________________________\n[40, 7]\ndec: 7 1\ndec: 40 0\nModel: \"encoder_40-7\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 7) 287 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 320 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 3,158\nTrainable params: 3,158\nNon-trainable params: 0\n_________________________________________________________________\n[40, 8]\ndec: 8 1\ndec: 40 0\nModel: \"encoder_40-8\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 40) 1280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 8) 328 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 40) 360 \n_________________________________________________________________\noutput (Dense) (None, 31) 1271 \n=================================================================\nTotal params: 3,239\nTrainable params: 3,239\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 1]\nenc: 40 1\ndec: 1 2\ndec: 40 
1\ndec: 100 0\nModel: \"encoder_100-40-1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 1) 41 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 80 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,592\nTrainable params: 14,592\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 2]\nenc: 40 1\ndec: 2 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 2) 82 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 120 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,673\nTrainable params: 14,673\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 3]\nenc: 
40 1\ndec: 3 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 3) 123 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 160 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,754\nTrainable params: 14,754\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 4]\nenc: 40 1\ndec: 4 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 4) 164 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 200 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,835\nTrainable params: 14,835\nNon-trainable params: 
0\n_________________________________________________________________\n[100, 40, 5]\nenc: 40 1\ndec: 5 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-5\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 5) 205 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 240 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,916\nTrainable params: 14,916\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 6]\nenc: 40 1\ndec: 6 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-6\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 6) 246 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 280 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 14,997\nTrainable params: 
14,997\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 7]\nenc: 40 1\ndec: 7 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-7\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 7) 287 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 320 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 15,078\nTrainable params: 15,078\nNon-trainable params: 0\n_________________________________________________________________\n[100, 40, 8]\nenc: 40 1\ndec: 8 2\ndec: 40 1\ndec: 100 0\nModel: \"encoder_100-40-8\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 100) 3200 \n_________________________________________________________________\nencoder_1 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 8) 328 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 40) 360 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 100) 4100 \n_________________________________________________________________\noutput (Dense) (None, 31) 3131 \n=================================================================\nTotal params: 
15,159\nTrainable params: 15,159\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 1]\nenc: 100 1\nenc: 40 2\ndec: 1 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 1) 41 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 80 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 113,992\nTrainable params: 113,992\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 2]\nenc: 100 1\nenc: 40 2\ndec: 2 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 
(Dense) (None, 2) 82 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 120 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,073\nTrainable params: 114,073\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 3]\nenc: 100 1\nenc: 40 2\ndec: 3 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 3) 123 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 160 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,154\nTrainable params: 114,154\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 4]\nenc: 100 1\nenc: 40 2\ndec: 4 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: 
\"encoder_400-100-40-4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 4) 164 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 200 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,235\nTrainable params: 114,235\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 5]\nenc: 100 1\nenc: 40 2\ndec: 5 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-5\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 5) 205 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 240 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 
100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,316\nTrainable params: 114,316\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 6]\nenc: 100 1\nenc: 40 2\ndec: 6 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-6\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 6) 246 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 280 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,397\nTrainable params: 114,397\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 7]\nenc: 100 1\nenc: 40 2\ndec: 7 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-7\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) 
(None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 7) 287 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 320 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 \n=================================================================\nTotal params: 114,478\nTrainable params: 114,478\nNon-trainable params: 0\n_________________________________________________________________\n[400, 100, 40, 8]\nenc: 100 1\nenc: 40 2\ndec: 8 3\ndec: 40 2\ndec: 100 1\ndec: 400 0\nModel: \"encoder_400-100-40-8\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_and_0 (Dense) (None, 400) 12800 \n_________________________________________________________________\nencoder_1 (Dense) (None, 100) 40100 \n_________________________________________________________________\nencoder_2 (Dense) (None, 40) 4040 \n_________________________________________________________________\ndecoder_3 (Dense) (None, 8) 328 \n_________________________________________________________________\ndecoder_2 (Dense) (None, 40) 360 \n_________________________________________________________________\ndecoder_1 (Dense) (None, 100) 4100 \n_________________________________________________________________\ndecoder_0 (Dense) (None, 400) 40400 \n_________________________________________________________________\noutput (Dense) (None, 31) 12431 
\n=================================================================\nTotal params: 114,559\nTrainable params: 114,559\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "metrics = [x for x in histories[0].keys() if 'val' not in x]\nfor i, metric in enumerate(metrics):\n plt.figure(figsize=(12, 4.5))\n plt.subplot(1, 2, 1)\n plt.title(f'Optm: {optimizer._name} / lr: {lr}')\n \n for j, layer in enumerate(encoder_layers):\n for n_components in number_components:\n layers = layer + [n_components]\n ls = '-'\n if len(layers) == 2:\n ls = '-'\n elif len(layers) == 3:\n ls = ':'\n elif len(layers) == 4:\n ls = '--'\n plt.semilogy(histories[j][metric], label='-'.join(str(x) for x in layers), linestyle=ls)\n plt.xlabel('Epoch')\n plt.ylabel(metric)\n \n plt.subplot(1, 2, 2)\n for j, layer in enumerate(encoder_layers):\n for n_components in number_components:\n layers = layer + [n_components]\n ls = '-'\n if len(layers) == 2:\n ls = '-'\n elif len(layers) == 3:\n ls = ':'\n elif len(layers) == 4:\n ls = '--'\n diff = np.array(histories[j]['val_' + metric]) - np.array(histories[j][metric])\n print(j, np.sum(diff), np.mean(diff))\n plt.semilogy(histories[j]['val_' + metric], label='-'.join(str(x) for x in layers), linestyle=ls)\n \n plt.xlabel('Epoch')\n plt.ylabel('val ' + metric)\n# plt.xlim([-5, 50])\n plt.legend(ncol=2)\n plt.tight_layout()\n plt.savefig(f'./figures/{dataset_filter}_{optimizer._name}_{lr}_{\"-\".join(str(x) for x in layers)}-accuracy-{metric}.{figure_format}')\n plt.show()", "0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n0 -1.1904021427035332 -0.0019840035711725552\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 
-0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n1 -0.8452026695013046 -0.0014086711158355076\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n2 -1.433691293001175 -0.0023894854883352917\n" ], [ "with open(f'./models/{dataset_filter}_histories.h5', 'wb') as file:\n pickle.dump({'histories': histories}, file)\nprint('done')", "done\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbb8fa502f12c0bf33bd4a67a34d731113b2ad1d
16,957
ipynb
Jupyter Notebook
docs/feature-store/end-to-end-demo/03-deploy-serving-model.ipynb
adiso75/mlrun
0da2e72a1e2aa189074bd2ec059f2bc452f349cf
[ "Apache-2.0" ]
null
null
null
docs/feature-store/end-to-end-demo/03-deploy-serving-model.ipynb
adiso75/mlrun
0da2e72a1e2aa189074bd2ec059f2bc452f349cf
[ "Apache-2.0" ]
null
null
null
docs/feature-store/end-to-end-demo/03-deploy-serving-model.ipynb
adiso75/mlrun
0da2e72a1e2aa189074bd2ec059f2bc452f349cf
[ "Apache-2.0" ]
null
null
null
40.182464
2,332
0.565961
[ [ [ "# Part 3: Serving\nIn this part we will use MLRun's **serving runtime** to deploy our trained models from the previous stage as a `Voting Ensemble` using **max vote** logic. \nWe will also use MLRun's **Feature store** to receive the online **Feature Vector** we defined in the previous stage.\n\nWe will:\n- Define a model class to load our models, run preprocessing and predict on the data\n- Define a Voting Ensemble function on top of our models\n- Test the serving function locally using our `mock server`\n- Deploy the function to the cluster and test it live\n\n<img src=\"../../_static/images/feature_store_demo_diagram.png\" width=\"600px\" />", "_____no_output_____" ], [ "## Environment Setup\n\nSince our work is done in this project scope, we will first want to define the project itself for all our MLRun work in this notebook.", "_____no_output_____" ] ], [ [ "import mlrun\nproject, _ = mlrun.set_environment(project='fsdemo', user_project=True)", "_____no_output_____" ] ], [ [ "## Define Model Class\n- Load models\n- Predict from the FS Online service via the `patient_id` key", "_____no_output_____" ] ], [ [ "# nuclio: start-code", "_____no_output_____" ], [ "from cloudpickle import load\nimport numpy as np\nimport mlrun\nimport os\n\nclass ClassifierModel(mlrun.serving.V2ModelServer):\n    \n    def load(self):\n        \"\"\"load and initialize the model and/or other elements\"\"\"\n        model_file, extra_data = self.get_model('.pkl')\n        self.model = load(open(model_file, 'rb'))\n        \n        # Setup FS Online service\n        self.feature_service = mlrun.feature_store.get_online_feature_service('patient-deterioration')\n        \n        # Get feature vector statistics for imputing\n        self.feature_stats = self.feature_service.vector.get_stats_table()\n        \n    def preprocess(self, body: dict, op) -> list:\n        # Get patient feature vector \n        # from the patient_id given in the request\n        vectors = self.feature_service.get([{'patient_id': patient_id} for patient_id in body['inputs']])\n        \n        # Impute inf's in the 
data to the feature's mean value\n # using the collected statistics from the Feature store\n feature_vectors = []\n for fv in vectors:\n new_vec = []\n for f, v in fv.items():\n if np.isinf(v):\n new_vec.append(self.feature_stats.loc[f, 'mean'])\n else:\n new_vec.append(v)\n feature_vectors.append(new_vec)\n \n # Set the final feature vector as our inputs\n # to pass to the predict function\n body['inputs'] = feature_vectors\n return body\n\n def predict(self, body: dict) -> list:\n \"\"\"Generate model predictions from sample\"\"\"\n feats = np.asarray(body['inputs'])\n result: np.ndarray = self.model.predict(feats)\n return result.tolist()", "_____no_output_____" ], [ "# nuclio: end-code", "_____no_output_____" ] ], [ [ "## Define Serving Function\n- Gather ClassifierModel code from this notebook\n- Define `VotingEnsemble` - Max-Vote based ensemble\n- Add downloaded models to the ensemble", "_____no_output_____" ] ], [ [ "# Create the serving function from our code above\nfn = mlrun.code_to_function('patient-prediction', kind='serving', image=\"mlrun/mlrun\")\n\n# Set the Voting Ensemble as our router\nfn.set_topology('router', 'mlrun.serving.VotingEnsemble', name='PatientDeterioration')\n\n# Add the three previously trained models to the ensemble\nmodels_dir = os.path.abspath('models')\nfor name in ['1-patient_det_rf', '2-patient_det_xgboost', '3-patient_det_adaboost']:\n fn.add_model(name, class_name=\"ClassifierModel\", model_path=f\"store://artifacts/{project}/training_model:latest\")\n \n# Plot the ensemble configuration\nfn.spec.graph.plot()", "_____no_output_____" ] ], [ [ "## Test the server locally", "_____no_output_____" ] ], [ [ "# Create a mock server from the serving function\nserver = fn.to_mock_server()", "> 2021-05-06 15:32:30,709 [info] model 1-patient_det_rf was loaded\n> 2021-05-06 15:32:30,963 [info] model 2-patient_det_xgboost was loaded\n> 2021-05-06 15:32:31,260 [info] model 3-patient_det_adaboost was loaded\n> 2021-05-06 15:32:31,261 
[info] Loaded ['1-patient_det_rf', '2-patient_det_xgboost', '3-patient_det_adaboost']\n" ] ], [ [ "**Test the server locally with a sample id**", "_____no_output_____" ] ], [ [ "resp = server.test('/v2/models/infer', body={'inputs': ['622-37-0180']})\nresp", "_____no_output_____" ] ], [ [ "## Deploy the function to the cluster (using Nuclio) and test it ", "_____no_output_____" ] ], [ [ "# Enable MLRun's model monitoring \nfn.set_tracking()\n\n# Add the system mount to the function so\n# it will have access to our models\nfn.apply(mlrun.mount_v3io())\n\n# Deploy the function to the cluster\nfn.deploy()", "> 2021-05-06 15:32:31,370 [info] Starting remote function deploy\n2021-05-06 15:32:31 (info) Deploying function\n2021-05-06 15:32:31 (info) Building\n2021-05-06 15:32:31 (info) Staging files and preparing base images\n2021-05-06 15:32:31 (info) Building processor image\n2021-05-06 15:32:35 (info) Build complete\n2021-05-06 15:32:45 (info) Function deploy complete\n> 2021-05-06 15:32:45,264 [info] function deployed, address=default-tenant.app.yh30.iguazio-c0.com:32093\n" ] ], [ [ "**Test the function on the cluster using the `invoke` mechanism**", "_____no_output_____" ] ], [ [ "import json\nfn.invoke('/v2/models/infer', body={'inputs': ['622-37-0180']})", "_____no_output_____" ], [ "import json\nfn.invoke(path='/v2/models/infer', body=json.dumps({'inputs': ['622-37-0180']}))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb910458bb53e8e92b832c2a626f5fd6058f099
99,412
ipynb
Jupyter Notebook
doc/source/notebooks/advanced/ordinal_regression.ipynb
liuxu97531/GPflow
125846f4872c9ad1c5bd2020ba21d549245a8001
[ "Apache-2.0" ]
1
2020-01-27T19:05:28.000Z
2020-01-27T19:05:28.000Z
doc/source/notebooks/advanced/ordinal_regression.ipynb
liuxu97531/GPflow
125846f4872c9ad1c5bd2020ba21d549245a8001
[ "Apache-2.0" ]
null
null
null
doc/source/notebooks/advanced/ordinal_regression.ipynb
liuxu97531/GPflow
125846f4872c9ad1c5bd2020ba21d549245a8001
[ "Apache-2.0" ]
null
null
null
334.720539
55,396
0.932513
[ [ [ "Ordinal Regression\n--\nOrdinal regression aims at fitting a model to some data $(X, Y)$, where $Y$ is an ordinal variable. To do so, we use a `VPG` model with a specific likelihood (`gpflow.likelihoods.Ordinal`).", "_____no_output_____" ] ], [ [ "import gpflow\nimport numpy as np\nimport matplotlib\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (12, 6)\nplt = matplotlib.pyplot", "_____no_output_____" ], [ "#make a one dimensional ordinal regression problem\n\n# This function generates a set of inputs X, \n# quantitative output f (latent) and ordinal values Y\n\ndef generate_data(num_data):\n # First generate random inputs\n X = np.random.rand(num_data, 1)\n \n # Now generate values of a latent GP\n kern = gpflow.kernels.RBF(1, lengthscales=0.1)\n K = kern.compute_K_symm(X)\n f = np.random.multivariate_normal(mean=np.zeros(num_data), cov=K).reshape(-1, 1)\n \n # Finally convert f values into ordinal values Y\n Y = np.round((f + f.min())*3)\n Y = Y - Y.min()\n Y = np.asarray(Y, np.float64)\n\n return X, f, Y\n\nnp.random.seed(1)\nnum_data = 20\nX, f, Y = generate_data(num_data)\n\nplt.figure(figsize=(11, 6))\nplt.plot(X, f, '.')\nplt.ylabel('latent function value')\n\nplt.twinx()\nplt.plot(X, Y, 'kx', mew=1.5)\nplt.ylabel('observed data value')", "_____no_output_____" ], [ "# construct ordinal likelihood - bin_edges is the same as unique(Y) but centered\nbin_edges = np.array(np.arange(np.unique(Y).size + 1), dtype=float)\nbin_edges = bin_edges - bin_edges.mean()\nlikelihood=gpflow.likelihoods.Ordinal(bin_edges)\n\n# build a model with this likelihood\nm = gpflow.models.VGP(X, Y, \n kern=gpflow.kernels.Matern32(1),\n likelihood=likelihood)\n\n# fit the model\ngpflow.train.ScipyOptimizer().minimize(m)", "INFO:tensorflow:Optimization terminated with:\n Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'\n Objective function value: 25.484796\n Number of iterations: 199\n Number of functions evaluations: 220\n" ], [ "# here we'll plot the 
expected value of Y +- 2 std deviations, as if the distribution were Gaussian\nplt.figure(figsize=(11, 6))\nXtest = np.linspace(m.X.read_value().min(), m.X.read_value().max(), 100).reshape(-1, 1)\nmu, var = m.predict_y(Xtest)\nline, = plt.plot(Xtest, mu, lw=2)\ncol=line.get_color()\nplt.plot(Xtest, mu+2*np.sqrt(var), '--', lw=2, color=col)\nplt.plot(Xtest, mu-2*np.sqrt(var), '--', lw=2, color=col)\nplt.plot(m.X.read_value(), m.Y.read_value(), 'kx', mew=2)", "_____no_output_____" ], [ "# to see the predictive density, try predicting every possible discrete value for Y.\ndef pred_density(m):\n Xtest = np.linspace(m.X.read_value().min(), m.X.read_value().max(), 100).reshape(-1, 1)\n ys = np.arange(m.Y.read_value().max()+1)\n densities = []\n for y in ys:\n Ytest = np.ones_like(Xtest) * y\n # Predict the log density\n densities.append(m.predict_density(Xtest, Ytest))\n return np.hstack(densities).T", "_____no_output_____" ], [ "fig = plt.figure(figsize=(14, 6))\nplt.imshow(np.exp(pred_density(m)), interpolation='nearest',\n extent=[m.X.read_value().min(), m.X.read_value().max(), -0.5, m.Y.read_value().max()+0.5],\n origin='lower', aspect='auto', cmap=plt.cm.viridis)\nplt.colorbar()\nplt.plot(X, Y, 'kx', mew=2, scalex=False, scaley=False)", "_____no_output_____" ], [ "# Predictive density for a single input x=0.5\nx_new = 0.5\nys = np.arange(np.max(m.Y.value+1)).reshape([-1, 1])\nx_new_vec = x_new*np.ones_like(ys)\n# for predict_density x and y need to have the same number of rows\ndens_new = np.exp(m.predict_density(x_new_vec, ys))\nfig = plt.figure(figsize=(8, 4))\nplt.bar(x=ys.flatten(), height=dens_new.flatten())", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
cbb91ce717eab5f65a81f9ef294b696d7681d665
97,653
ipynb
Jupyter Notebook
lab13.ipynb
harmisbt/ia241
cf737c85563bb3688f3d92c0f8765030ccc7504e
[ "MIT" ]
null
null
null
lab13.ipynb
harmisbt/ia241
cf737c85563bb3688f3d92c0f8765030ccc7504e
[ "MIT" ]
null
null
null
lab13.ipynb
harmisbt/ia241
cf737c85563bb3688f3d92c0f8765030ccc7504e
[ "MIT" ]
null
null
null
120.708282
27,888
0.808997
[ [ [ "# Visualize Covid19 Data in Python", "_____no_output_____" ], [ "## data source ", "_____no_output_____" ], [ "The data is from [European Centre for Disease Prevention and Control](https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide)", "_____no_output_____" ], [ " ![covid image](https://www.jmu.edu/_images/news/2020/Covid-19Dashboard-06.png)", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pandas", "_____no_output_____" ] ], [ [ "## a quick view of the data", "_____no_output_____" ] ], [ [ "df = pandas.read_excel('s3://harmison-ia241-2021-spring/coid lab13.xls')\ndf[:10]", "_____no_output_____" ] ], [ [ "## trend of the number of cases ", "_____no_output_____" ] ], [ [ "sum_cases_per_day=df.groupby('dateRep').sum()['cases']", "_____no_output_____" ], [ "sum_cases_per_day.plot()", "_____no_output_____" ] ], [ [ "## the top 10 countries with the highest deaths", "_____no_output_____" ] ], [ [ "sum_death_per_country=df.groupby('countriesAndTerritories').sum()['deaths']", "_____no_output_____" ], [ "sum_death_per_country.nlargest(10).plot.bar()", "_____no_output_____" ] ], [ [ "## list of all countries", "_____no_output_____" ] ], [ [ "pandas.unique(df['countriesAndTerritories'])", "_____no_output_____" ] ], [ [ "## The USA data", "_____no_output_____" ] ], [ [ "usa_data = df.loc[ df['countriesAndTerritories']=='United_States_of_America' ]\nusa_data[:10]", "_____no_output_____" ] ], [ [ "## how the # death is related to the # case in the USA", "_____no_output_____" ] ], [ [ "usa_data.plot.scatter(x='cases',y='deaths',c='month')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb923bf11e2d40cc514f9967e02db2247cb5fce
43,902
ipynb
Jupyter Notebook
Welcome_To_Colaboratory.ipynb
noorhaq/Machine-Learning-Projects
9c155b3ca2c3eab5e28583c1b10a14cb69ce3bc7
[ "MIT" ]
2
2020-12-03T05:20:47.000Z
2021-12-28T06:39:27.000Z
Welcome_To_Colaboratory.ipynb
noorhaq/Machine-Learning-Projects
9c155b3ca2c3eab5e28583c1b10a14cb69ce3bc7
[ "MIT" ]
null
null
null
Welcome_To_Colaboratory.ipynb
noorhaq/Machine-Learning-Projects
9c155b3ca2c3eab5e28583c1b10a14cb69ce3bc7
[ "MIT" ]
null
null
null
146.34
31,886
0.851784
[ [ [ "<a href=\"https://colab.research.google.com/github/noorhaq/Google_Colab/blob/master/Welcome_To_Colaboratory.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<p><img alt=\"Colaboratory logo\" height=\"45px\" src=\"/img/colab_favicon.ico\" align=\"left\" hspace=\"10px\" vspace=\"0px\"></p>\n\n<h1>What is Colaboratory?</h1>\n\nColaboratory, or \"Colab\" for short, allows you to write and execute Python in your browser, with \n- Zero configuration required\n- Free access to GPUs\n- Easy sharing\n\nWhether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!", "_____no_output_____" ], [ "## **Getting started**\n\nThe document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.\n\nFor example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:", "_____no_output_____" ] ], [ [ "seconds_in_a_day = 24 * 60 * 90\nseconds_in_a_day", "_____no_output_____" ] ], [ [ "To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut \"Command/Ctrl+Enter\". To edit the code, just click the cell and start editing.\n\nVariables that you define in one cell can later be used in other cells:", "_____no_output_____" ] ], [ [ "seconds_in_a_week = 7 * seconds_in_a_day\nseconds_in_a_week", "_____no_output_____" ] ], [ [ "Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. 
You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).\n\nColab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).", "_____no_output_____" ], [ "## Data science\n\nWith Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom matplotlib import pyplot as plt\n\nys = 200 + np.random.randn(100)\nx = [x for x in range(len(ys))]\n\nplt.plot(x, ys, '-')\nplt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)\n\nplt.title(\"Sample Visualization\")\nplt.show()", "_____no_output_____" ] ], [ [ "You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).", "_____no_output_____" ], [ "## Machine learning\n\nWith Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). 
Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.", "_____no_output_____" ], [ "Colab is used extensively in the machine learning community with applications including:\n- Getting started with TensorFlow\n- Developing and training neural networks\n- Experimenting with TPUs\n- Disseminating AI research\n- Creating tutorials\n\nTo see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.", "_____no_output_____" ], [ "## More Resources\n\n### Working with Notebooks in Colab\n- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)\n- [Guide to Markdown](/notebooks/markdown_guide.ipynb)\n- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)\n- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)\n- [Interactive forms](/notebooks/forms.ipynb)\n- [Interactive widgets](/notebooks/widgets.ipynb)\n- <img src=\"/img/new.png\" height=\"20px\" align=\"left\" hspace=\"4px\" alt=\"New\"></img>\n [TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)\n\n<a name=\"working-with-data\"></a>\n### Working with Data\n- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb) \n- [Charts: visualizing data](/notebooks/charts.ipynb)\n- [Getting started with BigQuery](/notebooks/bigquery.ipynb)\n\n### Machine Learning Crash Course\nThese are a few of the notebooks from Google's online Machine Learning course. 
See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.\n- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)\n- [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)\n- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)\n- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)\n- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)\n\n<a name=\"using-accelerated-hardware\"></a>\n### Using Accelerated Hardware\n- [TensorFlow with GPUs](/notebooks/gpu.ipynb)\n- [TensorFlow with TPUs](/notebooks/tpu.ipynb)", "_____no_output_____" ], [ "<a name=\"machine-learning-examples\"></a>\n\n## Machine Learning Examples\n\nTo see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from [TensorFlow Hub](https://tfhub.dev).\n\nA few featured examples:\n\n- [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.\n- [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.\n- [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.\n- [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.\n- [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbb9291c889613b6f4e1b907767baed80420cb12
6,102
ipynb
Jupyter Notebook
examples/reference/elements/matplotlib/QuadMesh.ipynb
jewfro-cuban/holoviews
c59f847c3d05b6eea1b05d3e8162d9ea80428587
[ "BSD-3-Clause" ]
1
2019-01-02T20:20:09.000Z
2019-01-02T20:20:09.000Z
examples/reference/elements/matplotlib/QuadMesh.ipynb
jonmmease/holoviews
27407e1a5d8020c39c135fa3f8c4fdeb11fea5c0
[ "BSD-3-Clause" ]
null
null
null
examples/reference/elements/matplotlib/QuadMesh.ipynb
jonmmease/holoviews
27407e1a5d8020c39c135fa3f8c4fdeb11fea5c0
[ "BSD-3-Clause" ]
1
2021-10-31T05:26:08.000Z
2021-10-31T05:26:08.000Z
36.538922
353
0.608489
[ [ [ "<div class=\"contentcontainer med left\" style=\"margin-left: -50px;\">\n<dl class=\"dl-horizontal\">\n <dt>Title</dt> <dd> QuadMesh Element</dd>\n <dt>Dependencies</dt> <dd>Matplotlib</dd>\n <dt>Backends</dt> <dd><a href='./QuadMesh.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/QuadMesh.ipynb'>Bokeh</a></dd>\n</dl>\n</div>", "_____no_output_____" ] ], [ [ "import numpy as np\nimport holoviews as hv\nhv.extension('matplotlib')", "_____no_output_____" ] ], [ [ "A ``QuadMesh`` represents 2D rectangular grid expressed as x- and y-coordinates defined as 1D or 2D arrays. Unlike the Image type, a QuadMesh may be regularly or irregularly spaced and contain either bin edges or bin centers. If bin edges are supplied, the shape of the x/y-coordinate arrays should be one greater than the value array shape.\n\nThe default interface expects data to be specified in the form:\n\n QuadMesh((X, Y, Z))\n\nwhere ``X`` and ``Y`` may be 1D or 2D arrays of the shape ``N(+1)`` and ``M(+1)`` respectively or ``N(+1)xM(+1)`` and the ``Z`` value array should be of shape NxM. Other gridded formats such as xarray are also supported if installed.\n\nThe grid orientation follows the standard matrix convention: An array ``Z`` with shape (nrows, ncolumns) is plotted with the column number as ``X`` and the row number as ``Y``. 
See the [Gridded Datasets](../../../user_guide/08-Gridded_Datasets.ipynb) user guide for all the other accepted data formats.\n\nHere is a simple ``QuadMesh`` with logarithmic sampling along the 'x' dimensions:", "_____no_output_____" ] ], [ [ "n = 8 # Number of bins in each direction\nxs = np.logspace(1, 3, n)\nys = np.linspace(1, 10, n)\nzs = np.arange((n-1)**2).reshape(n-1, n-1)\nprint('Shape of x-coordinates:', xs.shape)\nprint('Shape of y-coordinates:', ys.shape)\nprint('Shape of value array:', zs.shape)\nhv.QuadMesh((xs, ys, zs))", "_____no_output_____" ] ], [ [ "The coordinate system of a ``QuadMesh`` is defined by the bin edges, therefore any index falling into a binned region will return the appropriate value. As the bin edges have continuous values, you can use non-linear axes such as log axes:", "_____no_output_____" ] ], [ [ "%%opts QuadMesh [xticks=[10, 100,1000]] QuadMesh.LogScale [logx=True]\nhv.QuadMesh((xs, ys, zs), group='LinearScale') + hv.QuadMesh((xs, ys, zs), group='LogScale')", "_____no_output_____" ] ], [ [ "Unlike ``Image`` objects, slices must be inclusive of the bin edges but otherwise the slicing semantics are the same. 
The reason for this difference is that ``QuadMesh`` is really a two-dimensional histogram and when slicing, you only want to see the bins that fall within the specified slice ranges.\n\nIn the next example, we specify a slice along the x- and y-axis to extract the lower corner and we set the z-dimension range to maintain the full color range of the colormap:", "_____no_output_____" ] ], [ [ "qmesh = hv.QuadMesh((xs, ys, zs))\nqmesh[20:400, :8].redim.range(z=qmesh.range('z'))", "_____no_output_____" ] ], [ [ "To use an interactive hover tool to inspect the sample values, you can use ``QuadMesh`` with the hover tool in the [Bokeh backend](../bokeh/QuadMesh.ipynb).\n\nFor full documentation and the available style and plot options, use ``hv.help(hv.QuadMesh).``", "_____no_output_____" ], [ "## Irregular meshes\n\nIn addition to axis aligned meshes like those we worked with above, a ``QuadMesh`` may also be used to represent irregular or unstructured meshes. In this example we will create an irregular mesh consisting of 2D X, Y and Z arrays defining the position and value of each simplex in the mesh:", "_____no_output_____" ] ], [ [ "n=20\ncoords = np.linspace(-1.5,1.5,n)\nX,Y = np.meshgrid(coords, coords);\nQx = np.cos(Y) - np.cos(X)\nQy = np.sin(Y) + np.sin(X)\nZ = np.sqrt(X**2 + Y**2)\n\nprint('Shape of x-coordinates:', Qx.shape)\nprint('Shape of y-coordinates:', Qy.shape)\nprint('Shape of value array:', Z.shape)\n\nqmesh = hv.QuadMesh((Qx, Qy, Z))\nqmesh", "_____no_output_____" ] ], [ [ "To illustrate irregular meshes a bit further we will randomly jitter the mesh coordinates along both dimensions, demonstrating that ``QuadMesh`` may be used to represent completely arbitrary meshes. 
It may also be used to represent overlapping meshes, however the behavior during slicing and other operations may not be well defined in such cases.", "_____no_output_____" ] ], [ [ "np.random.seed(13)\nxs, ys = np.meshgrid(np.linspace(-20, 20, 10), np.linspace(0, 30, 8))\nxs += xs/10 + np.random.rand(*xs.shape)*4\nys += ys/10 + np.random.rand(*ys.shape)*4\n\nzs = np.arange(80).reshape(8, 10)\nhv.QuadMesh((xs, ys, zs))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb9330ec178ce90d620ad0a920539ed163a0ec0
10,304
ipynb
Jupyter Notebook
docs/examples/exercises.ipynb
sairina/ricecooker
0c13e74ee8cd720715679c361ffab259b5ffdb62
[ "MIT" ]
14
2017-01-10T09:33:03.000Z
2021-11-28T12:11:27.000Z
docs/examples/exercises.ipynb
sairina/ricecooker
0c13e74ee8cd720715679c361ffab259b5ffdb62
[ "MIT" ]
174
2016-09-29T17:32:54.000Z
2022-03-29T15:02:48.000Z
docs/examples/exercises.ipynb
sairina/ricecooker
0c13e74ee8cd720715679c361ffab259b5ffdb62
[ "MIT" ]
41
2016-08-29T23:26:17.000Z
2021-11-29T17:12:03.000Z
43.846809
398
0.539887
[ [ [ "# `ricecooker` exercises\n\n\nThis mini-tutorial will walk you through the steps of running a simple chef script `ExercisesChef` that creates two exercises nodes, and four exercises questions.\n\n\n### Running the notebooks\nTo follow along and run the code in this notebook, you'll need to clone the `ricecooker` repository, crate a virtual environement, install `ricecooker` using `pip install ricecooker`, install Jypyter notebook using `pip install jupyter`, then start the jupyter notebook server by running `jupyter notebook`. You will then be able to run all the code sections in this notebook and poke around.", "_____no_output_____" ], [ "### Creating a Sushi Chef class\n\n", "_____no_output_____" ] ], [ [ "from ricecooker.chefs import SushiChef\nfrom ricecooker.classes.nodes import TopicNode, ExerciseNode\nfrom ricecooker.classes.questions import SingleSelectQuestion, MultipleSelectQuestion, InputQuestion, PerseusQuestion\nfrom ricecooker.classes.licenses import get_license\nfrom le_utils.constants import licenses\nfrom le_utils.constants import exercises\nfrom le_utils.constants.languages import getlang\n\n\nclass ExercisesChef(SushiChef):\n channel_info = {\n 'CHANNEL_TITLE': 'Sample Exercises',\n 'CHANNEL_SOURCE_DOMAIN': '<yourdomain.org>', # where you got the content\n 'CHANNEL_SOURCE_ID': '<unique id for channel>', # channel's unique id CHANGE ME\n 'CHANNEL_LANGUAGE': 'en', # le_utils language code\n 'CHANNEL_DESCRIPTION': 'A test channel with different types of exercise questions', # (optional)\n 'CHANNEL_THUMBNAIL': None, # (optional)\n }\n\n def construct_channel(self, **kwargs):\n channel = self.get_channel(**kwargs)\n topic = TopicNode(title=\"Math Exercises\", source_id=\"folder-id\")\n channel.add_child(topic)\n\n exercise_node = ExerciseNode(\n source_id='<some unique id>',\n title='Basic questions',\n author='LE content team',\n description='Showcase of the simple question type supported by Ricecooker and Studio',\n 
language=getlang('en').code,\n license=get_license(licenses.PUBLIC_DOMAIN),\n thumbnail=None,\n exercise_data={\n 'mastery_model': exercises.M_OF_N, # \\\n 'm': 2, # learners must get 2/3 questions correct to complete exercise\n 'n': 3, # /\n 'randomize': True, # show questions in random order\n },\n questions=[\n MultipleSelectQuestion(\n id='sampleEX_Q1',\n question = \"Which of the following numbers are even?\",\n correct_answers = [\"2\", \"4\",],\n all_answers = [\"1\", \"2\", \"3\", \"4\", \"5\"],\n hints=['Even numbers are divisible by 2.'],\n ),\n SingleSelectQuestion(\n id='sampleEX_Q2',\n question = \"What is 2 times 3?\",\n correct_answer = \"6\",\n all_answers = [\"2\", \"3\", \"5\", \"6\"],\n hints=['Multiplication of $a$ by $b$ is like computing the area of a rectangle with length $a$ and width $b$.'],\n ),\n InputQuestion(\n id='sampleEX_Q3',\n question = \"Name one of the *factors* of 10.\",\n answers = [\"1\", \"2\", \"5\", \"10\"],\n hints=['The factors of a number are the divisors of the number that leave a whole remainder.'],\n )\n ]\n )\n topic.add_child(exercise_node)\n\n # LOAD JSON DATA (as string) FOR PERSEUS QUESTIONS \n RAW_PERSEUS_JSON_STR = open('../../examples/exercises/chefdata/perseus_graph_question.json', 'r').read()\n # or\n # import requests\n # RAW_PERSEUS_JSON_STR = requests.get('https://raw.githubusercontent.com/learningequality/sample-channels/master/contentnodes/exercise/perseus_graph_question.json').text\n exercise_node2 = ExerciseNode(\n source_id='<another unique id>',\n title='An exercise containing a perseus question',\n author='LE content team',\n description='An example exercise with a Perseus question',\n language=getlang('en').code,\n license=get_license(licenses.CC_BY, copyright_holder='Copyright holder name'),\n thumbnail=None,\n exercise_data={\n 'mastery_model': exercises.M_OF_N,\n 'm': 1,\n 'n': 1,\n },\n questions=[\n PerseusQuestion(\n id='ex2bQ4',\n raw_data=RAW_PERSEUS_JSON_STR,\n 
source_url='https://github.com/learningequality/sample-channels/blob/master/contentnodes/exercise/perseus_graph_question.json'\n ),\n ]\n )\n topic.add_child(exercise_node2)\n\n return channel\n", "_____no_output_____" ] ], [ [ "### Running the chef\n", "_____no_output_____" ], [ "Run of you chef by creating an instance of the chef class and calling it's `run` method:", "_____no_output_____" ] ], [ [ "chef = ExercisesChef()\nargs = {\n 'command': 'dryrun', # use 'uploadchannel' for real run\n 'verbose': True,\n 'token': 'YOURTOKENHERE9139139f3a23232'\n}\noptions = {}\n\nchef.run(args, options)", "\u001b[32mINFO \u001b[0m \u001b[34mIn SushiChef.run method. args={'command': 'dryrun', 'reset': True, 'verbose': True, 'token': 'YOURTO...'} options={}\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m\n\n***** Starting channel build process *****\n\n\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34mCalling construct_channel... \u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Setting up initial channel structure... 
\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Validating channel structure...\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Sample Exercises (ChannelNode): 3 descendants\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Math Exercises (TopicNode): 2 descendants\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Basic questions (ExerciseNode): 3 questions\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m An exercise containing a perseus question (ExerciseNode): 1 question\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m Tree is valid\n\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34mDownloading files...\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34mProcessing content...\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m\t*** Processing images for exercise: Basic questions\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m\t*** Images for Basic questions have been processed\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m\t*** Processing images for exercise: An exercise containing a perseus question\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m\t*** Images for An exercise containing a perseus question have been processed\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34m All files were successfully downloaded\u001b[0m\n\u001b[32mINFO \u001b[0m \u001b[34mCommand is dryrun so we are not uploading chanel.\u001b[0m\n" ] ], [ [ "Congratulations, you put some math exercises on the internet!", "_____no_output_____" ], [ "**Note**: you will need to change the value of `CHANNEL_SOURCE_ID` if you\nbefore you try running this script with `{'command': 'uploadchannel', ...}`.\nThe combination of source domain and source id are used to compute the `channel_id`\nfor the Kolibri channel you're creating. If you keep the lines above unchanged,\nyou'll get an error because you don't have edit rights on that channel.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbb93596b3439005dae72cdb865e5e50ba64b7e5
10,788
ipynb
Jupyter Notebook
notebooks/league_classifier_liveclient.ipynb
oracle-devrel/leagueoflegends-optimizer
3f833ee50f7bd39688007308f33612faa21f286b
[ "UPL-1.0" ]
5
2021-10-30T04:40:33.000Z
2022-02-15T19:36:30.000Z
notebooks/league_classifier_liveclient.ipynb
oracle-devrel/leagueoflegends-optimizer
3f833ee50f7bd39688007308f33612faa21f286b
[ "UPL-1.0" ]
8
2021-08-23T18:40:44.000Z
2022-03-29T18:40:59.000Z
notebooks/league_classifier_liveclient.ipynb
oracle-devrel/leagueoflegends-optimizer
3f833ee50f7bd39688007308f33612faa21f286b
[ "UPL-1.0" ]
2
2021-12-02T09:06:08.000Z
2022-03-26T06:14:43.000Z
26.121065
135
0.550519
[ [ [ "import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport json\nimport cx_Oracle\nimport os\nimport pandas as pd\n\n\nos.environ['TNS_ADMIN'] = '/home/opc/adj_esportsdb'", "_____no_output_____" ], [ "\n!pip install dataprep\n!pip install dask\n!pip install pandas_profiling\n## install packages\n!pip install -q scikit-learn\n!pip install -U setuptools wheel\n!pip install -U \"mxnet<2.0.0\"\n!pip install autogluon", "_____no_output_____" ], [ "import cx_Oracle\nimport yaml\nimport os\nfrom pathlib import Path\nhome = str(Path.home())\n\ndef process_yaml():\n\twith open(\"../config.yaml\") as file:\n\t\treturn yaml.safe_load(file)\n\n\nclass OracleJSONDatabaseConnection:\n def __init__(self, data=process_yaml()):\n # wallet location (default is HOME/wallets/wallet_X)\n os.environ['TNS_ADMIN'] = '{}/{}'.format(home, process_yaml()['WALLET_DIR'])\n print(os.environ['TNS_ADMIN'])\n self.pool = cx_Oracle.SessionPool(data['db']['username'], data['db']['password'], data['db']['dsn'],\n min=1, max=4, increment=1, threaded=True,\n getmode=cx_Oracle.SPOOL_ATTRVAL_WAIT\n )\n print('Connection successful.')\n\n\n\n def close_pool(self):\n self.pool.close()\n print('Connection pool closed.')\n\n\n\n def insert(self, collection_name, json_object_to_insert):\n connection = self.pool.acquire()\n connection.autocommit = True\n soda = connection.getSodaDatabase()\n x_collection = soda.createCollection(collection_name)\n\n try:\n x_collection.insertOne(json_object_to_insert)\n print('[DBG] INSERT {} OK'.format(json_object_to_insert))\n except cx_Oracle.IntegrityError:\n print('[DBG] INSERT {} ERR'.format(json_object_to_insert))\n return 0\n self.pool.release(connection)\n return 1\n\n\n\n def delete(self, collection_name, on_column, on_value):\n connection = self.pool.acquire()\n soda = connection.getSodaDatabase()\n x_collection = soda.createCollection(collection_name)\n qbe = {on_column: on_value}\n 
x_collection.find().filter(qbe).remove()\n self.pool.release(connection)\n\n\n\n def get_connection(self):\n return self.pool.acquire() \n\n\n\n def close_connection(self, conn_object):\n self.pool.release(conn_object)\n\n\n\n def get_collection_names(self):\n connection = self.pool.acquire()\n returning_object = connection.getSodaDatabase().getCollectionNames(startName=None, limit=0)\n self.pool.release(connection)\n return returning_object\n\n\n\n def open_collection(self, collection_name):\n connection = self.pool.acquire()\n returning_object = self.pool.acquire().getSodaDatabase().openCollection(collection_name)\n self.pool.release(connection)\n return returning_object\n\n\n\ndef test_class():\n object = OracleJSONDatabaseConnection()\n print(object.pool)\n object.close_pool()", "_____no_output_____" ], [ "print(os.environ['TNS_ADMIN'])", "_____no_output_____" ], [ "db = OracleJSONDatabaseConnection()\nprint(db.get_collection_names())", "_____no_output_____" ], [ "data = db.open_collection('predictor_liveclient')\nall_data = list()\nfor doc in data.find().getCursor():\n content = doc.getContent()\n all_data.append(content)\n\nprint('Data length: {}'.format(len(all_data)))", "_____no_output_____" ], [ "df = pd.read_json(json.dumps(all_data), orient='records')\n\ndf.head(5)\n", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "from pandas_profiling import ProfileReport", "_____no_output_____" ], [ "report = ProfileReport(df)\nreport #uncomment to display all.", "_____no_output_____" ], [ "from autogluon.tabular import TabularPredictor, TabularDataset", "_____no_output_____" ], [ "df = TabularDataset(df)\n\n# drop columns we don't want (constant values + identifier)\ndf = df.drop(columns=['bonusArmorPenetrationPercent', 'bonusMagicPenetrationPercent',\n 'identifier', 'cooldownReduction', 'armorPenetrationFlat'])\n\ntrain = df.sample(frac=0.8,random_state=200) #random state is a seed value\ntest = df.drop(train.index)\n\ntrain.head(5)", 
"_____no_output_____" ], [ "label = 'winner'", "_____no_output_____" ], [ "save_path = './autogluon_trained_models_liveclient_classifier' # specifies folder to store trained models\npredictor = TabularPredictor(label=label, path=save_path).fit(train)", "_____no_output_____" ], [ "y_test = test[label] # values to predict\ntest_data_nolabel = test.drop(columns=[label]) # delete label column to prove we're not cheating, also drop identifier column\ntest_data_nolabel.head(5)", "_____no_output_____" ], [ "predictor = TabularPredictor.load(save_path)\n\ny_pred = predictor.predict(test_data_nolabel)\nprint(\"Predictions: \\n\", y_pred)\nperf = predictor.evaluate_predictions(y_true=y_test, y_pred=y_pred, auxiliary_metrics=True)\n", "_____no_output_____" ], [ "predictor.leaderboard(test, silent=False)", "_____no_output_____" ], [ "predictor.feature_importance(test)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbb9406a75d32aa1e7355ea5a1e972c2bf2232ec
30,303
ipynb
Jupyter Notebook
tutorial/PENSA_Tutorial_GPCRmd_MOR.ipynb
jmcavity/pensa
5f05dd02f4e48287b49987572cd597158ac4f1cf
[ "MIT" ]
55
2020-11-18T07:03:46.000Z
2022-03-29T02:47:10.000Z
tutorial/PENSA_Tutorial_GPCRmd_MOR.ipynb
jmcavity/pensa
5f05dd02f4e48287b49987572cd597158ac4f1cf
[ "MIT" ]
11
2020-11-18T16:43:43.000Z
2022-02-22T20:02:22.000Z
tutorial/PENSA_Tutorial_GPCRmd_MOR.ipynb
jmcavity/pensa
5f05dd02f4e48287b49987572cd597158ac4f1cf
[ "MIT" ]
11
2020-11-19T04:34:36.000Z
2022-03-01T23:48:57.000Z
37.503713
675
0.5842
[ [ [ "# PENSA Tutorial Using GPCRmd Trajectories\nHere we show some common functions included in PENSA, using trajectories of a G protein-coupled receptor (GPCR). We retrieve the molecular dynamics trajectories for this tutorial from [GPCRmd](https://submission.gpcrmd.org/home/), an online platform for collection and curation of GPCR simulations. It is described in more detail [here](https://www.nature.com/articles/s41592-020-0884-y).\n\n<p align=\"center\">\n<img src=\"https://pbs.twimg.com/media/Ej8-VJ5WkAAbgJc?format=jpg&name=large\" width=\"500\">\n</p>\n\n\nThe example system is the mu-opioid receptor (mOR), once in its apo form and once bound to the ligand [BU72](https://www.guidetopharmacology.org/GRAC/LigandDisplayForward?ligandId=9363). The structure of this GPCR (G protein-coupled receptor) is reported by [*Huang et al (2015)*](https://www.nature.com/articles/nature14886). \nWe are going to compare the structural ensembles of the receptor in these two conditions.\n\nThis tutorial assumes that you can download the trajectories (see below). 
If you can't, you can use any other system you have available and adapt the file names and residue selections accordingly.\n\nWe only need to import the module \"os\" and all functions from PENSA itself which in turn loads all the modules it needs.", "_____no_output_____" ] ], [ [ "import os\nfrom pensa import *", "_____no_output_____" ] ], [ [ "## Download\n\nPENSA has a predefined function to download GPCRmd trajectories.", "_____no_output_____" ] ], [ [ "# Define where to save the GPCRmd files\nroot_dir = './mor-data'\n# Define which files to download\nmd_files = ['11427_dyn_151.psf','11426_dyn_151.pdb', # MOR-apo\n '11423_trj_151.xtc','11424_trj_151.xtc','11425_trj_151.xtc',\n '11580_dyn_169.psf','11579_dyn_169.pdb', # MOR-BU72\n '11576_trj_169.xtc','11577_trj_169.xtc','11578_trj_169.xtc']\n# Download all the files that do not exist yet\nfor file in md_files:\n if not os.path.exists(os.path.join(root_dir,file)):\n download_from_gpcrmd(file,root_dir)", "_____no_output_____" ] ], [ [ "## Preprocessing \n\nTo work with the protein coordinates, we first need to extract them from the simulation, i.e., remove the solvent, lipids etc. This is the hardest part but you usually only have to do it once and can then play with your data. Preprocessing can handle many common trajectory formats (as it is based on MDAnalysis) but the internal featurization (based on PyEMMA) is a bit more restrictive, so we will always write xtc trajectories. For large trajectories, you might want to use the scripts provided in the PENSA repository, e.g., to run them on the computing cluster and then download the processed data. Once you know how PENSA works, you can write your own scripts.\n\nIn the following, we define the necessary files. For each simulation, we need a reference file (.psf for AMBER), a PDB file, and the trajetory. \n\nMake sure to adapt the root directory such that it links to wherever you have mounted Oak. 
If you cannot access the Sherlock cluster at Stanford, use any other simulation that you would like to compare. \n\nTo run this tutorial on another system, you'll have to adapt the file paths and names in the following box and, in case you need them, the residue selections in the folder ```selections```. We explain how they work further below. Note that for some PENSA functions it is sufficient that the derived features are the same while for others (especially those that involve trajectory manipulation), all atoms need to be the same. In our particular example, we exclude hydrogen atoms because residue Asp114 is protonated in the BU72 simulation but not in the apo simulation.", "_____no_output_____" ] ], [ [ "root_dir = './mor-data'\n# Simulation A\nref_file_a = root_dir+'/11427_dyn_151.psf'\npdb_file_a = root_dir+'/11426_dyn_151.pdb'\ntrj_file_a = [root_dir+'/11423_trj_151.xtc',\n              root_dir+'/11424_trj_151.xtc',\n              root_dir+'/11425_trj_151.xtc']\n# Simulation B\nref_file_b = root_dir+'/11580_dyn_169.psf'\npdb_file_b = root_dir+'/11579_dyn_169.pdb'\ntrj_file_b = [root_dir+'/11576_trj_169.xtc',\n              root_dir+'/11577_trj_169.xtc',\n              root_dir+'/11578_trj_169.xtc']\n# Base for the selection string for each simulation\nsel_base_a = \"(not name H*) and protein\"\nsel_base_b = \"(not name H*) and protein\"\n# Names of the output files\nout_name_a = \"traj/condition-a\"\nout_name_b = \"traj/condition-b\"\nout_name_combined=\"traj/combined\"", "_____no_output_____" ] ], [ [ "For this tutorial, we will save the processed trajectories in the subfolder ```traj```. We also create subfolders for other results that we will generate.", "_____no_output_____" ] ], [ [ "for subdir in ['traj','plots','vispdb','pca','clusters','results']:\n    if not os.path.exists(subdir):\n        os.makedirs(subdir)", "_____no_output_____" ] ], [ [ "We have to ensure that from both simulations, we use the exact same parts of the receptor for the analysis. 
Often, this will be easy and you just provide a simple selection string for the corresponding segment. For more complicated cases, we can use the function ```load_selection()``` to generate a complete residue list from a plain text file. This file should provide in each line the first and the last residue to be considered for a part of the protein. \n\nIn the first case, we will extract all protein residues, assuming (correctly) that the same ones are present in both simulations.", "_____no_output_____" ] ], [ [ "# Extract the coordinates of the receptor from the trajectory\nextract_coordinates(ref_file_a, pdb_file_a, trj_file_a, out_name_a+\"_receptor\", sel_base_a)\nextract_coordinates(ref_file_b, pdb_file_b, trj_file_b, out_name_b+\"_receptor\", sel_base_b)", "_____no_output_____" ] ], [ [ "In many cases, you probably have several runs of the same simulation that you want to combine to one structural ensemble. This is why the trajectory argument takes a list as arguments, e.g.\n\n extract_coordinates(system.psf, system.pdb, ['run1.nc','run2.nc','run3.nc'], \n 'rho_receptor', 'protein', start_frame=1000)\n \nWith the option ```start_frame```, you can exclude the equilibrium phase already at this stage. Be aware that in combined simulations, there is no straightforward way to exclude it later as it would require bookkeeping about how long each simulation was etc.", "_____no_output_____" ], [ "For some analysis types, we only want to use the part of the receptor that is inside the membrane. In this way, very flexible loops outside the membrane cannot distort the analysis result. We can manually construct a selection string in MDAnalysis format or load the selections from a file. We call this file ```mor_tm.txt``` and generate it on the fly so we can demonstrate the loader function. We use selections based on the definitions of transmembrane helices in the [GPCRdb](https://gpcrdb.org/protein/oprm_human/).", "_____no_output_____" ] ], [ [ "! 
echo \"76 98\\n105 133\\n138 173\\n182 208\\n226 264\\n270 308\\n315 354\" > mor_tm.txt\n! cat mor_tm.txt", "_____no_output_____" ], [ "# Load the selection and generate the strings\nsel_string_a = load_selection(\"mor_tm.txt\", sel_base_a+\" and \")\nprint('Selection A:\\n', sel_string_a, '\\n')\nsel_string_b = load_selection(\"mor_tm.txt\", sel_base_b+\" and \")\nprint('Selection B:\\n', sel_string_b, '\\n')\n# Extract the coordinates of the transmembrane region from the trajectory\nextract_coordinates(ref_file_a, pdb_file_a, [trj_file_a], out_name_a+\"_tm\", sel_string_a)\nextract_coordinates(ref_file_b, pdb_file_b, [trj_file_b], out_name_b+\"_tm\", sel_string_b)", "_____no_output_____" ] ], [ [ "### Generalization\nIf you want to combine data from different simulation conditions, you can use the ```_combined``` version of the extraction function: ```extract_coordinates_combined()```. It takes lists as arguments for the topology files, too. To use the same selection, \"multiply\" a list of one string, as demonstrated below. For this to work, the two selections need to have the exactly same atoms. 
", "_____no_output_____" ] ], [ [ "extract_coordinates_combined([ref_file_a]*3 + [ref_file_b]*3,\n trj_file_a + trj_file_b, \n [sel_string_a]*3 + [sel_string_b]*3, \n 'traj/combined_tm.xtc', \n start_frame=400)", "_____no_output_____" ] ], [ [ "## Featurization\n\nThe analysis is not performed on the coordinates directly but on features derived from these coordinates.\nPENSA uses the featurization provided by PyEMMA, so far including:\n - backbone torsions: ```'bb-torsions'```, \n - backbone C-alpha distances: ```'bb-distances'```, and \n - sidechain torsions: ```'sc-torsions'```.\n\nYou can combine these with any other function implemented in PyEMMA, even if it is not included in PENSA.", "_____no_output_____" ], [ "In case the equilibration phase has not been already excluded during preprocessing, we can exclude it here by setting the start frame to a value greater than 0.", "_____no_output_____" ] ], [ [ "feature_start_frame = 400", "_____no_output_____" ] ], [ [ "The function ```get_structure_features``` loads the names of the features and their values separately ", "_____no_output_____" ] ], [ [ "sim_a_rec = get_structure_features(\"traj/condition-a_receptor.gro\", \n \"traj/condition-a_receptor.xtc\", \n feature_start_frame)\nsim_a_rec_feat, sim_a_rec_data = sim_a_rec", "_____no_output_____" ], [ "sim_b_rec = get_structure_features(\"traj/condition-b_receptor.gro\",\n \"traj/condition-b_receptor.xtc\", \n feature_start_frame)\nsim_b_rec_feat, sim_b_rec_data = sim_b_rec", "_____no_output_____" ] ], [ [ "Having a look at the shape of the loaded data, we see that the first dimension is the number of frames. The second dimension is the number of features. 
It must be the same for both simulations.", "_____no_output_____" ] ], [ [ "for k in sim_a_rec_data.keys(): \n print(k, sim_a_rec_data[k].shape)", "_____no_output_____" ], [ "for k in sim_b_rec_data.keys(): \n print(k, sim_b_rec_data[k].shape)", "_____no_output_____" ] ], [ [ "Now do the same only for the transmembrane region.", "_____no_output_____" ] ], [ [ "sim_a_tmr = get_structure_features(\"traj/condition-a_tm.gro\", \n \"traj/condition-a_tm.xtc\", \n feature_start_frame)\nsim_b_tmr = get_structure_features(\"traj/condition-b_tm.gro\", \n \"traj/condition-b_tm.xtc\", \n feature_start_frame)\nsim_a_tmr_feat, sim_a_tmr_data = sim_a_tmr\nsim_b_tmr_feat, sim_b_tmr_data = sim_b_tmr\n\nfor k in sim_a_rec_data.keys(): \n print(k, sim_a_rec_data[k].shape)\nfor k in sim_b_rec_data.keys(): \n print(k, sim_b_rec_data[k].shape)", "_____no_output_____" ] ], [ [ "## Comparison of Structural Ensembles\n\nHere we compare the two ensembles using measures for the relative entropy.\n\nYou can as well calculate the Kolmogorov-Smirnov metric and the corresponding p value using the function ```kolmogorov_smirnov_analysis()```. \n\nAnother possibility is to compare only the means and standard deviations of the distributions using ```mean_difference_analysis()```.", "_____no_output_____" ], [ "### Backbone Torsions\n\nWe start with the backbone torsions, which we can select via ```'bb-torsions'```. 
To do the same analysis on sidechain torsions, replace ```'bb-torsions'``` with ```'sc-torsions'```.", "_____no_output_____" ] ], [ [ "# Relative Entropy analysis with torsions\nrelen = relative_entropy_analysis(sim_a_rec_feat['bb-torsions'], \n sim_b_rec_feat['bb-torsions'], \n sim_a_rec_data['bb-torsions'], \n sim_b_rec_data['bb-torsions'],\n bin_num=10, verbose=False)\nnames_bbtors, jsd_bbtors, kld_ab_bbtors, kld_ba_bbtors = relen ", "_____no_output_____" ] ], [ [ "The above function also returns the Kullback-Leibler divergences of A with respect to B and vice versa.", "_____no_output_____" ], [ "To find out where the ensembles differ the most, let's print out the most different features and the corresponding value.", "_____no_output_____" ] ], [ [ "# Print the features with the 12 highest values\nsf = sort_features(names_bbtors, jsd_bbtors)\nfor f in sf[:12]: print(f[0], f[1])", "_____no_output_____" ] ], [ [ "To get an overview of how strongly the ensembles differ in which region, we can plot the maximum deviation of the features related to a certain residue.", "_____no_output_____" ] ], [ [ "# Plot the maximum Jensen-Shannon distance per residue as \"B factor\" in a PDB file\nref_filename = \"traj/condition-a_receptor.gro\"\nout_filename = \"receptor_bbtors-deviations_tremd\"\nvis = residue_visualization(names_bbtors, jsd_bbtors, ref_filename, \n \"plots/\"+out_filename+\"_jsd.pdf\", \n \"vispdb/\"+out_filename+\"_jsd.pdb\",\n y_label='max. JS dist. of BB torsions')\n", "_____no_output_____" ], [ "# Save the corresponding data\nnp.savetxt('results/'+out_filename+'_relen.csv', \n np.array(relen).T, fmt='%s', delimiter=',', \n header='Name, JSD(A,B), KLD(A,B), KLD(B,A)')\nnp.savetxt('results/'+out_filename+'_jsd.csv', \n np.array(vis).T, fmt='%s', delimiter=',', \n header='Residue, max. 
JSD(A,B)')", "_____no_output_____" ] ], [ [ "### Backbone C-alpha Distances\n\nAnother common representation for the overall structure of a protein are the distances between the C-alpha atoms. We can perform the same analysis on them.", "_____no_output_____" ] ], [ [ "# Relative entropy analysis for C-alpha distances\nrelen = relative_entropy_analysis(sim_a_rec_feat['bb-distances'], \n sim_b_rec_feat['bb-distances'], \n sim_a_rec_data['bb-distances'], \n sim_b_rec_data['bb-distances'],\n bin_num=10, verbose=False)\nnames_bbdist, jsd_bbdist, kld_ab_bbdist, kld_ba_bbdist = relen ", "_____no_output_____" ], [ "# Print the features with the 12 highest values\nsf = sort_features(names_bbdist, jsd_bbdist)\nfor f in sf[:12]: print(f[0], f[1])", "_____no_output_____" ] ], [ [ "To visualize distances, we need a two-dimensional representation with the residues on each axis. \nWe color each field with the value of the Jensen-Shannon distance (but could as well use Kullback-Leibler divergence, Kolmogorov-Smirnov statistic etc. instead).", "_____no_output_____" ] ], [ [ "# Visualize the deviations in a matrix plot\nmatrix = distances_visualization(names_bbdist, jsd_bbdist, \n \"plots/receptor_jsd-bbdist.pdf\",\n vmin = 0.0, vmax = 1.0,\n cbar_label='JSD')", "_____no_output_____" ] ], [ [ "## Principal Component Analysis", "_____no_output_____" ], [ "Here we show how to calculate the principal components in the space of backbone torsions. It is also common to calculate principal components in the space of backbone distances. For the latter, again just change ```'bb-torsions'``` to ```'bb-distances'```. 
As mentioned above, we only consider the transmembrane region here, so flexible loops outside the membrane do not distort the more important slow motions in the receptor core.", "_____no_output_____" ], [ "#### Combined PCA\n\nIn the spirit of comparing two simulations, we calculate the principal components of their joint ensemble of structures.", "_____no_output_____" ] ], [ [ "# Combine the data of the different simulations\ncombined_data_tors = np.concatenate([sim_a_tmr_data['bb-torsions'],sim_b_tmr_data['bb-torsions']],0)", "_____no_output_____" ] ], [ [ "We can now calculate the principal components of this combined dataset. The corresponding function returns a PyEMMA PCA object, so you can combine it with all functionality in PyEMMA to perform more advanced or specialized analysis.", "_____no_output_____" ] ], [ [ "pca_combined = calculate_pca(combined_data_tors)", "_____no_output_____" ] ], [ [ "To find out how relevant each PC is, let's have a look at their eigenvalues.", "_____no_output_____" ] ], [ [ "pca_eigenvalues_plot(pca_combined, num=12, plot_file='plots/combined_tmr_eigenvalues.pdf')", "_____no_output_____" ] ], [ [ "Let us now have a look at the most relevant features of the first three principal components. \nHere, we define a feature as important if its correlation with the respective PC is above a threshold of 0.4.\nThe function also plots the correlation analysis for each PC.", "_____no_output_____" ] ], [ [ "pca_features(pca_combined,sim_a_tmr_feat['bb-torsions'], 3, 0.4)", "_____no_output_____" ] ], [ [ "Now we can compare how the frames of each ensemble are distributed along the principal components.", "_____no_output_____" ] ], [ [ "compare_projections(sim_a_tmr_data['bb-torsions'],\n sim_b_tmr_data['bb-torsions'],\n pca_combined,\n label_a='A', \n label_b='B')", "_____no_output_____" ] ], [ [ "To get a better glimpse on what the Principal components look like, we would like to visualize them. 
\nFor that purpose, let us sort the structures from the trajectories along the principal components instead of along simulation time.\nWe can then look at the resulting PC trajectories with a molecular visualization program like VMD.\n\nThe trajectory to be sorted does not have to be the same subsystem from which we calculated the PCA. Here, we are going to write frames with the entire receptor, sorted by the PCs of the transmembrane region.", "_____no_output_____" ] ], [ [ "_ = sort_trajs_along_common_pc(sim_a_tmr_data['bb-torsions'],\n                               sim_b_tmr_data['bb-torsions'],\n                               feature_start_frame,\n                               \"traj/condition-a_receptor.gro\",\n                               \"traj/condition-b_receptor.gro\",\n                               \"traj/condition-a_receptor.xtc\",\n                               \"traj/condition-b_receptor.xtc\",\n                               \"pca/receptor_by_tmr\",\n                               num_pc=3)", "_____no_output_____" ] ], [ [ "The above function deals with the special case of two input trajectories. We also provide the functions for a single one (see below). You use these to calculate PCA for any number of combined simulations and then sort the single or combined simulations.", "_____no_output_____" ], [ "#### Single simulation", "_____no_output_____" ], [ "Here are the major steps of a PCA demonstrated for a single simulation.", "_____no_output_____" ] ], [ [ "sim_a_tmr_data['bb-torsions'].shape", "_____no_output_____" ], [ "pca_a = calculate_pca(sim_a_tmr_data['bb-torsions'])", "_____no_output_____" ], [ "pca_features(pca_a, sim_a_tmr_feat['bb-torsions'], 3, 0.4)", "_____no_output_____" ], [ "_, __ = sort_traj_along_pc(sim_a_tmr_data['bb-torsions'], \n                           pca_a, feature_start_frame, \n                           \"traj/condition-a_receptor.gro\", \n                           \"traj/condition-a_receptor.xtc\", \n                           \"pca/condition-a_receptor_by_tmr\", num_pc=3)", "_____no_output_____" ] ], [ [ "## Clustering\n\nTo identify important states of an ensemble, we can use clustering algorithms. Here we show how to cluster a combined ensemble from two simulations into three clusters using k-means clustering. 
The plot shows how many frames from which simulation were sorted in which cluster.", "_____no_output_____" ] ], [ [ "cc = obtain_combined_clusters(sim_a_tmr_data['bb-torsions'],sim_b_tmr_data['bb-torsions'],\n                              label_a='A', label_b='B', start_frame=0,\n                              algorithm='kmeans', max_iter=100, num_clusters=3, min_dist=12,\n                              saveas='plots/combined_clust_bbtors.pdf')\ncidx, cond, oidx, wss, centroids = cc", "_____no_output_____" ], [ "np.savetxt('results/combined-cluster-indices.csv', \n           np.array([cidx, cond, oidx], dtype=int).T,\n           delimiter=',', fmt='%i',\n           header='Cluster, Condition, Index within condition')", "_____no_output_____" ] ], [ [ "We can sort the frames from each ensemble into these clusters, writing them as separate trajectory files. As with principal components, we can look at them using VMD.", "_____no_output_____" ] ], [ [ "name = \"condition-a_tm\"\nwrite_cluster_traj(cidx[cond==0], \"traj/\"+name+\".gro\",\"traj/\"+name+\".xtc\", \n                   \"clusters/\"+\"combined_clust_bbtors_\"+name, feature_start_frame )\n\nname = \"condition-b_tm\"\nwrite_cluster_traj(cidx[cond==1], \"traj/\"+name+\".gro\",\"traj/\"+name+\".xtc\", \n                   \"clusters/\"+\"combined_clust_bbtors_\"+name, feature_start_frame )", "_____no_output_____" ] ], [ [ "A common method to obtain the optimal number of clusters is the elbow plot. We plot the within-sum-of-squares (WSS) for a few repetitions for an increasing number of clusters. Then we look for the \"elbow\" in the resulting plot. 
Unfortunately, sometimes there is no clear result though.", "_____no_output_____" ] ], [ [ "wss_avg, wss_std = wss_over_number_of_combined_clusters(sim_a_tmr_data['bb-torsions'], \n sim_b_tmr_data['bb-torsions'],\n label_a='A', label_b='B', \n start_frame=feature_start_frame,\n algorithm='kmeans', \n max_iter=100, num_repeats = 5, \n max_num_clusters = 12, \n plot_file = None)", "_____no_output_____" ] ], [ [ "Of course, we can also cluster a single simulation", "_____no_output_____" ] ], [ [ "_ci, _wss, _centroids = obtain_clusters( sim_a_tmr_data['bb-torsions'], num_clusters=5 )\nname = \"condition-a_tm\"\nwrite_cluster_traj( _ci, \"traj/\"+name+\".gro\",\"traj/\"+name+\".xtc\", \n \"clusters/\"+\"clust_bbtors_\"+name, feature_start_frame )", "_____no_output_____" ], [ "wss_avg, wss_std = wss_over_number_of_clusters(sim_a_tmr_data['bb-torsions'],\n algorithm='kmeans', \n max_iter=100, num_repeats = 5, \n max_num_clusters = 12, \n plot_file = None)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb945c77357f0c3cc88fc078081bdfffdddc8cc
168,970
ipynb
Jupyter Notebook
Pandas cheat sheet.ipynb
cjwinchester/pandas-cheat-sheet
a02f2121581305f9725936b1e2d2d621af23f9f1
[ "Unlicense" ]
2
2021-03-03T18:15:17.000Z
2021-05-27T00:12:27.000Z
Pandas cheat sheet.ipynb
cjwinchester/pandas-cheat-sheet
a02f2121581305f9725936b1e2d2d621af23f9f1
[ "Unlicense" ]
null
null
null
Pandas cheat sheet.ipynb
cjwinchester/pandas-cheat-sheet
a02f2121581305f9725936b1e2d2d621af23f9f1
[ "Unlicense" ]
1
2020-02-27T20:17:16.000Z
2020-02-27T20:17:16.000Z
30.839569
348
0.37702
[ [ [ "# Pandas cheat sheet\n\nThis notebook has some common data manipulations you might do while working in the popular Python data analysis library [`pandas`](https://pandas.pydata.org/). It assumes you're already are set up to analyze data in pandas using Python 3.\n\n(If you're _not_ set up, [here's IRE's guide](https://docs.google.com/document/d/1cYmpfZEZ8r-09Q6Go917cKVcQk_d0P61gm0q8DAdIdg/edit#) to setting up Python. [Hit me up](mailto:[email protected]) if you get stuck.)\n\n### Topics\n- [Importing pandas](#Importing-pandas)\n- [Creating a dataframe from a CSV](#Creating-a-dataframe-from-a-CSV)\n- [Checking out the data](#Checking-out-the-data)\n- [Selecting columns of data](#Selecting-columns-of-data)\n- [Getting unique values in a column](#Getting-unique-values-in-a-column)\n- [Running basic summary stats](#Running-basic-summary-stats)\n- [Sorting your data](#Sorting-your-data)\n- [Filtering rows of data](#Filtering-rows-of-data)\n- [Filtering text columns with string methods](#Filtering-text-columns-with-string-methods)\n- [Filtering against multiple values](#Filtering-against-multiple-values)\n- [Exclusion filtering](#Exclusion-filtering)\n- [Adding a calculated column](#Adding-a-calculated-column)\n- [Filtering for nulls](#Filtering-for-nulls)\n- [Grouping and aggregating data](#Grouping-and-aggregating-data)\n- [Pivot tables](#Pivot-tables)\n- [Applying a function across rows](#Applying-a-function-across-rows)\n- [Joining data](#Joining-data)", "_____no_output_____" ], [ "### Importing pandas\n\nBefore we can use pandas, we need to import it. The most common way to do this is:", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "### Creating a dataframe from a CSV\n\nTo begin with, let's import a CSV of Major League Baseball player salaries on opening day. 
The file, which is in the same directory as this notebook, is called `mlb.csv`.\n\nPandas has a `read_csv()` method that we can use to get this data into a [dataframe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) (it has methods to read other file types, too). At minimum, you need to tell this method where the file lives:", "_____no_output_____" ] ], [ [ "mlb = pd.read_csv('mlb.csv')", "_____no_output_____" ] ], [ [ "### Checking out the data\n\nWhen you first load up your data, you'll want to get a sense of what's in there. A pandas dataframe has several useful things to help you get a quick read of your data:\n\n- `.head()`: Shows you the first 5 records in the data frame (optionally, if you want to see a different number of records, you can pass in a number)\n- `.tail()`: Same as `head()`, but it pull records from the end of the dataframe\n- `.sample(n)` will give you a sample of *n* rows of the data -- just pass in a number\n- `.info()` will give you a count of non-null values in each column -- useful for seeing if any columns have null values\n- `.describe()` will compute summary stats for numeric columns\n- `.columns` will list the column names\n- `.dtypes` will list the data types of each column\n- `.shape` will give you a pair of numbers: _(number of rows, number of columns)_", "_____no_output_____" ] ], [ [ "mlb.head()", "_____no_output_____" ], [ "mlb.tail()", "_____no_output_____" ], [ "mlb.sample(5)", "_____no_output_____" ], [ "mlb.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 868 entries, 0 to 867\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 NAME 868 non-null object\n 1 TEAM 868 non-null object\n 2 POS 868 non-null object\n 3 SALARY 868 non-null int64 \n 4 START_YEAR 868 non-null int64 \n 5 END_YEAR 868 non-null int64 \n 6 YEARS 868 non-null int64 \ndtypes: int64(4), object(3)\nmemory usage: 47.6+ KB\n" ], [ "mlb.describe()", 
"_____no_output_____" ], [ "mlb.columns", "_____no_output_____" ], [ "mlb.dtypes", "_____no_output_____" ], [ "mlb.shape", "_____no_output_____" ] ], [ [ "To get the number of records in a dataframe, you can access the first item in the `shape` pair, or you can just use the Python function `len()`:", "_____no_output_____" ] ], [ [ "len(mlb)", "_____no_output_____" ] ], [ [ "### Selecting columns of data\n\nIf you need to select just one column of data, you can use \"dot notation\" (`mlb.SALARY`) as long as your column name doesn't have spaces and it isn't the name of a dataframe method (e.g., `product`). Otherwise, you can use \"bracket notation\" (`mlb['SALARY']`).\n\nSelecting one column will return a [`Series`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).\n\nIf you want to select multiple columns of data, use bracket notation and pass in a _list_ of columns that you want to select. In Python, a list is a collection of items enclosed in square brackets, separated by commas: `['SALARY', 'NAME']`.\n\nSelecting multiple columns will return a [`DataFrame`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).", "_____no_output_____" ] ], [ [ "# select one column of data\nteams = mlb.TEAM\n\n# bracket notation would do the same thing -- note the quotes around the column name\n# teams = mlb['TEAM']\n\nteams.head()", "_____no_output_____" ], [ "type(teams)", "_____no_output_____" ], [ "# select multiple columns of data\nsalaries_and_names = mlb[['SALARY', 'NAME']]", "_____no_output_____" ], [ "salaries_and_names.head()", "_____no_output_____" ], [ "type(salaries_and_names)", "_____no_output_____" ] ], [ [ "### Getting unique values in a column\n\nAs you evaluate your data, you'll often want to get a list of unique values in a column (for cleaning, filtering, grouping, etc.).\n\nTo do this, you can use the Series method `unique()`. 
If you wanted to get a list of baseball positions, you could do:", "_____no_output_____" ] ], [ [ "mlb.POS.unique()", "_____no_output_____" ] ], [ [ "If useful, you could also sort the results alphabetically with the Python [`sorted()`](https://docs.python.org/3/library/functions.html#sorted) function:", "_____no_output_____" ] ], [ [ "sorted(mlb.POS.unique())", "_____no_output_____" ] ], [ [ "Sometimes you just need the _number_ of unique values in a column. To do this, you can use the pandas method `nunique()`:", "_____no_output_____" ] ], [ [ "mlb.POS.nunique()", "_____no_output_____" ] ], [ [ "(You can also run `nunique()` on an entire dataframe:)", "_____no_output_____" ] ], [ [ "mlb.nunique()", "_____no_output_____" ] ], [ [ "If you want to count up the number of times a value appears in a column of data -- the equivalent of doing a pivot table in Excel and aggregating by count -- you can use the Series method [`value_counts()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.value_counts.html).\n\nTo get a list of MLB teams and the number of times each one appears in our salary data -- in other words, the roster count for each team -- we could do:", "_____no_output_____" ] ], [ [ "mlb.TEAM.value_counts()", "_____no_output_____" ] ], [ [ "### Running basic summary stats\n\nSome of this already surfaced with `describe()`, but in some cases you'll want to compute these stats manually:\n- `sum()`\n- `mean()`\n- `median()`\n- `max()`\n- `min()`\n\nYou can run these on a Series (e.g., a column of data), or on an entire DataFrame.", "_____no_output_____" ] ], [ [ "mlb.SALARY.sum()", "_____no_output_____" ], [ "mlb.SALARY.mean()", "_____no_output_____" ], [ "mlb.SALARY.median()", "_____no_output_____" ], [ "mlb.SALARY.max()", "_____no_output_____" ], [ "mlb.SALARY.min()", "_____no_output_____" ], [ "# entire dataframe\nmlb.mean()", "_____no_output_____" ] ], [ [ "### Sorting your data\n\nYou can use the 
[`sort_values()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) method to sort a dataframe by one or more columns. The default is to sort the values ascending; if you want your results sorted descending, specify `ascending=False`.\n\nLet's sort our dataframe by `SALARY` descending:", "_____no_output_____" ] ], [ [ "mlb.sort_values('SALARY', ascending=False).head()", "_____no_output_____" ] ], [ [ "To sort by multiple columns, pass a list of columns to the `sort_values()` method -- the sorting will happen in the order you specify in the list. You'll also need to pass a list to the `ascending` keyword argument, otherwise both will sort ascending.\n\nLet's sort our dataframe first by `TEAM` ascending, then by `SALARY` descending:", "_____no_output_____" ] ], [ [ "mlb.sort_values(['TEAM', 'SALARY'], ascending=[True, False]).head()", "_____no_output_____" ] ], [ [ "### Filtering rows of data\n\nTo filter your data by some criteria, you'd pass your filtering condition(s) to a dataframe using bracket notation.\n\nYou can use Python's [comparison operators](https://docs.python.org/3/reference/expressions.html#comparisons) in your filters, which include:\n- `>` greater than\n- `<` less than\n- `>=` greater than or equal to\n- `<=` less than or equal to\n- `==` equal to\n- `!=` not equal to\n\nExample: You want to filter your data to keep records where the `TEAM` value is 'ARI':", "_____no_output_____" ] ], [ [ "diamondbacks = mlb[mlb.TEAM == 'ARI']", "_____no_output_____" ], [ "diamondbacks.head()", "_____no_output_____" ] ], [ [ "We could filter to get all records where the `TEAM` value is _not_ 'ARI':", "_____no_output_____" ] ], [ [ "non_diamondbacks = mlb[mlb.TEAM != 'ARI']", "_____no_output_____" ], [ "non_diamondbacks.head()", "_____no_output_____" ] ], [ [ "We could filter our data to just grab the players that make at least $1 million:", "_____no_output_____" ] ], [ [ "million_a_year = mlb[mlb.SALARY >= 1000000]", 
"_____no_output_____" ], [ "million_a_year.head()", "_____no_output_____" ] ], [ [ "### Filtering against multiple values\n\nYou can use the `isin()` method to test a value against multiple matches -- just hand it a _list_ of values to check against.\n\nExample: Let's say we wanted to filter to get just players in Texas (in other words, just the Texas Rangers and the Houston Astros):", "_____no_output_____" ] ], [ [ "tx = mlb[mlb.TEAM.isin(['TEX', 'HOU'])]", "_____no_output_____" ], [ "tx.head()", "_____no_output_____" ] ], [ [ "### Exclusion filtering\n\nSometimes it's easier to specify what records you _don't_ want returned. To flip the meaning of a filter condition, prepend a tilde `~`.\n\nFor instance, if we wanted to get all players who are _not_ from Texas, we'd use the same filter condition we just used to get the TX players but add a tilde at the beginning:", "_____no_output_____" ] ], [ [ "not_tx = mlb[~mlb.TEAM.isin(['TEX', 'HOU'])]", "_____no_output_____" ], [ "not_tx.head()", "_____no_output_____" ] ], [ [ "### Filtering text columns with string methods\n\nYou can access the text values in a column with `.str`, and you can use any of Python's native string functions to manipulate them.\n\nFor our purposes, though, the pandas [`str.contains()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.str.contains.html) method is useful for filtering data by matching text patterns.\n\nIf we wanted to get every player with 'John' in their name, we could do something like this:", "_____no_output_____" ] ], [ [ "johns = mlb[mlb.NAME.str.contains('John', case=False)]", "_____no_output_____" ], [ "johns.head()", "_____no_output_____" ] ], [ [ "Note the `case=False` keyword argument -- we're telling pandas to match case-insensitive. 
And if the pattern you're trying to match is more complex, the method is set up to support [regular expressions](https://docs.python.org/3/howto/regex.html) by default.", "_____no_output_____" ], [ "### Multiple filters\n\nSometimes you have multiple filters to apply to your data. Lots of the time, it makes sense to break the filters out into separate statements.\n\nFor instance, if you wanted to get all Texas players who make at least $1 million, you might do this:", "_____no_output_____" ] ], [ [ "tx = mlb[mlb.TEAM.isin(['TEX', 'HOU'])]\n\n# note that I'm filtering the dataframe I just created, not the original `mlb` dataframe\ntx_million_a_year = tx[tx.SALARY >= 1000000]", "_____no_output_____" ], [ "tx_million_a_year.head()", "_____no_output_____" ] ], [ [ "But sometimes you want to chain your filters together into one statement. Use `|` for \"or\" and `&` for \"and\" rather than Python's built-in `or` and `and` statements, and use grouping parentheses around each statement.\n\nThe same filter in one statement:", "_____no_output_____" ] ], [ [ "tx_million_a_year = mlb[(mlb.TEAM.isin(['TEX', 'HOU'])) & (mlb.SALARY >= 1000000)]", "_____no_output_____" ], [ "tx_million_a_year.head()", "_____no_output_____" ] ], [ [ "Do what works for you and makes sense in context, but I find the first version a little easier to read.", "_____no_output_____" ], [ "### Adding a calculated column\n\nTo add a new column to a dataframe, use bracket notation to supply the name of the new column (in quotes, or apostrophes, as long as they match), then set it equal to a value -- maybe a calculation derived from other data in your dataframe.\n\nFor example, let's create a new column, `contract_total`, that multiplies the annual salary by the number of contract years:", "_____no_output_____" ] ], [ [ "mlb['contract_total'] = mlb['SALARY'] * mlb['YEARS']", "_____no_output_____" ], [ "mlb.head()", "_____no_output_____" ] ], [ [ "### Filtering for nulls\n\nYou can use the `isnull()` method to 
get records that are null, or `notnull()` to get records that aren't. The most common use I've seen for these methods is during filtering to see how many records you're missing (and, therefore, how that affects your analysis).\n\nThe MLB data is complete, so to demonstrate this, let's load up a new data set: A cut of the [National Inventory of Dams](https://ire.org/nicar/database-library/databases/national-inventory-of-dams/) database, courtesy of the NICAR data library. (We'll need to specify the `encoding` on this CSV because it's not UTF-8.)", "_____no_output_____" ] ], [ [ "dams = pd.read_csv('dams.csv',\n encoding='latin-1')", "_____no_output_____" ], [ "dams.head()", "_____no_output_____" ] ], [ [ "Maybe we're interested in looking at the year the dam was completed (the `Year_Comp`) column. Running `.info()` on the dataframe shows that we're missing some values:", "_____no_output_____" ] ], [ [ "dams.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2482 entries, 0 to 2481\nData columns (total 42 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 NIDID 2482 non-null object \n 1 Dam_Name 2480 non-null object \n 2 Insp_Date 1093 non-null object \n 3 Submit_Date 2482 non-null object \n 4 River 2264 non-null object \n 5 City_02 1407 non-null object \n 6 County 2477 non-null object \n 7 State 2482 non-null object \n 8 Cong_Dist 2445 non-null object \n 9 Cong_Rep 2445 non-null object \n 10 Party 2445 non-null object \n 11 Owner_Type 2482 non-null object \n 12 Owner_Name 2199 non-null object \n 13 Year_Comp 1663 non-null float64\n 14 Year_Mod 438 non-null object \n 15 Private_Dam 2482 non-null object \n 16 NPDP_Hazard 1487 non-null object \n 17 Permit_Auth 2482 non-null object \n 18 Insp_Auth 2482 non-null object \n 19 Enfrc_Auth 2482 non-null object \n 20 Juris_Dam 2482 non-null object \n 21 NID_Height 2468 non-null float64\n 22 NID_Storage 2453 non-null float64\n 23 Dam_Length 1813 non-null float64\n 24 Max_Discharge 831 
non-null float64\n 25 Drain_Area 1188 non-null float64\n 26 Dam_Designer 831 non-null object \n 27 EAP 2482 non-null object \n 28 Insp_Freq 2482 non-null int64 \n 29 St_Reg_Dam 2482 non-null object \n 30 St_Reg_Agncy 2374 non-null object \n 31 Volume 530 non-null float64\n 32 Fed_Fund 219 non-null object \n 33 Fed_Design 578 non-null object \n 34 Fed_Con 221 non-null object \n 35 Fed_Reg 132 non-null object \n 36 Fed_Insp 146 non-null object \n 37 Srce_Agncy 2482 non-null object \n 38 Oth_StrucID 17 non-null object \n 39 Num_Struc 2482 non-null int64 \n 40 Longitude 2482 non-null float64\n 41 Latitude 2482 non-null float64\ndtypes: float64(9), int64(2), object(31)\nmemory usage: 814.5+ KB\n" ] ], [ [ "We can filter for `isnull()` to take a closer look:", "_____no_output_____" ] ], [ [ "no_year_comp = dams[dams.Year_Comp.isnull()]", "_____no_output_____" ], [ "no_year_comp.head()", "_____no_output_____" ] ], [ [ "How many are we missing? That will help us determine whether the analysis would be valid:", "_____no_output_____" ] ], [ [ "# calculate the percentage of records with no Year_Comp value\n# (part / whole) * 100\n\n(len(no_year_comp) / len(dams)) * 100", "_____no_output_____" ] ], [ [ "So this piece of our analysis would exclude one-third of our records -- something you'd need to explain to your audience, if indeed your reporting showed that the results of your analysis would still be meaningful.\n\nTo get records where the `Year_Comp` is not null, we'd use `notnull()`:", "_____no_output_____" ] ], [ [ "has_year_comp = dams[dams.Year_Comp.notnull()]", "_____no_output_____" ], [ "has_year_comp.head()", "_____no_output_____" ] ], [ [ "What years remain? 
Let's use `value_counts()` to find out:", "_____no_output_____" ] ], [ [ "has_year_comp.Year_Comp.value_counts()", "_____no_output_____" ] ], [ [ "(To sort by year, not count, we could tack on a `sort_index()`:)", "_____no_output_____" ] ], [ [ "has_year_comp.Year_Comp.value_counts().sort_index()", "_____no_output_____" ] ], [ [ "### Grouping and aggregating data\n\nYou can use the `groupby()` method to group and aggregate data in pandas, similar to what you'd get by running a pivot table in Excel or a `GROUP BY` query in SQL. We'll also provide the aggregate function to use.\n\nLet's group our baseball salary data by team to see which teams have the biggest payrolls -- in other words, we want to use `sum()` as our aggregate function:", "_____no_output_____" ] ], [ [ "grouped_mlb = mlb.groupby('TEAM').sum()", "_____no_output_____" ], [ "grouped_mlb.head()", "_____no_output_____" ] ], [ [ "If you don't specify what columns you want, it will run `sum()` on every numeric column. Typically I select just the grouping column and the column I'm running the aggregation on:", "_____no_output_____" ] ], [ [ "grouped_mlb = mlb[['TEAM', 'SALARY']].groupby('TEAM').sum()", "_____no_output_____" ], [ "grouped_mlb.head()", "_____no_output_____" ] ], [ [ "... and we can sort descending, with `head()` to get the top payrolls:", "_____no_output_____" ] ], [ [ "grouped_mlb.sort_values('SALARY', ascending=False).head(10)", "_____no_output_____" ] ], [ [ "You can use different aggregate functions, too. Let's say we wanted to get the top median salaries by team:", "_____no_output_____" ] ], [ [ "mlb[['TEAM', 'SALARY']].groupby('TEAM').median().sort_values('SALARY', ascending=False).head(10)", "_____no_output_____" ] ], [ [ "You can group by multiple columns by passing a list. 
Here, we'll select our columns of interest and group by `TEAM`, then by `POS`, using `sum()` as our aggregate function:", "_____no_output_____" ] ], [ [ "mlb[['TEAM', 'POS', 'SALARY']].groupby(['TEAM', 'POS']).sum()", "_____no_output_____" ] ], [ [ "### Pivot tables\n\nSometimes you need a full-blown pivot table, and [pandas has a function to make one](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html).\n\nFor this example, we'll look at some foreign trade data -- specifically, eel product imports from 2010 to mid-2017:", "_____no_output_____" ] ], [ [ "eels = pd.read_csv('eels.csv')", "_____no_output_____" ], [ "eels.head()", "_____no_output_____" ] ], [ [ "Let's run a pivot table where the grouping column is `country`, the values are the sum of `kilos`, and the columns are the year:", "_____no_output_____" ] ], [ [ "pivoted_sums = pd.pivot_table(eels,\n index='country',\n columns='year',\n values='kilos',\n aggfunc=sum)", "_____no_output_____" ], [ "pivoted_sums.head()", "_____no_output_____" ] ], [ [ "Let's sort by the `2017` value. While we're at it, let's fill in null values (`NaN`) with zeroes using the [`fillna()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html) method.", "_____no_output_____" ] ], [ [ "pivoted_sums.sort_values(2017, ascending=False).fillna(0)", "_____no_output_____" ] ], [ [ "### Applying a function across rows\n\nOften, you'll want to calculate a value for every row, but it won't be that simple, and you'll write a separate function that accepts one row of data, does some calculations and returns a value. 
We'll use the [`apply()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.apply.html) method to accomplish this.\n\nFor this example, we're going to load up a CSV of gators killed by hunters in Florida:", "_____no_output_____" ] ], [ [ "gators = pd.read_csv('gators.csv')", "_____no_output_____" ], [ "gators.head()", "_____no_output_____" ] ], [ [ "We want to find the longest gator in our data, of course, but there's a problem: right now, the carcass size value is being stored as text: `{} ft. {} in.`. The pattern is predictable, though, and we can use some Python to turn those values into consistent numbers -- inches -- that we can then sort on. Here's our function:", "_____no_output_____" ] ], [ [ "def get_inches(row):\n '''Accepts a row from our dataframe, calculates carcass length in inches and returns that value'''\n\n # get the value in the 'Carcass Size' column\n carcass_size = row['Carcass Size']\n \n # split the text on 'ft.'\n # the result is a list\n size_split = carcass_size.split('ft.')\n \n # strip whitespace from the first item ([0]) in the resulting list -- the feet --\n # and coerce it to an integer with the Python `int()` function\n feet = int(size_split[0].strip())\n \n # in the second item ([1]) in the resulting list -- the inches -- replace 'in.' with nothing,\n # strip whitespace and coerce to an integer\n inches = int(size_split[1].replace('in.', '').strip())\n \n # add the feet times 12 plus the inches and return that value\n return inches + (feet * 12)", "_____no_output_____" ] ], [ [ "Now we're going to create a new column, `length_in`, and use the [`apply()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.apply.html) method to apply our function to every row. 
The `axis=1` keyword argument means that we're applying our function row-wise, not column-wise.", "_____no_output_____" ] ], [ [ "gators['length_in'] = gators.apply(get_inches, axis=1)", "_____no_output_____" ], [ "gators.sort_values('length_in', ascending=False).head()", "_____no_output_____" ] ], [ [ "### Joining data\n\nYou can use [`merge()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.merge.html) to join data in pandas.\n\nIn this simple example, we're going to take a CSV of country population data in which each country is represented by an [ISO 3166-1 numeric country code](https://en.wikipedia.org/wiki/ISO_3166-1_numeric) and join it to a CSV that's basically a lookup table with the ISO codes and the names of the countries to which they refer.\n\nSome of the country codes have leading zeroes, so we're going to use the `dtype` keyword when we import each CSV to specify that the `'code'` column in each dataset should be treated as a string (text), not a number.", "_____no_output_____" ] ], [ [ "pop_csv = pd.read_csv('country-population.csv', dtype={'code': str})", "_____no_output_____" ], [ "pop_csv.head()", "_____no_output_____" ], [ "code_csv = pd.read_csv('country-codes.csv', dtype={'code': str})", "_____no_output_____" ], [ "code_csv.head()", "_____no_output_____" ] ], [ [ "Now we'll use `merge()` to join them.\n\nThe `on` keyword argument tells the method what column to join on. If the names of the columns were different, you'd use `left_on` and `right_on`, with the \"left\" dataframe being the first one you hand to the `merge()` function.\n\nThe `how` keyword argument tells the method what type of join to use -- the default is `'inner'`.", "_____no_output_____" ] ], [ [ "joined_data = pd.merge(pop_csv,\n code_csv,\n on='code',\n how='left')", "_____no_output_____" ], [ "joined_data.head()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cbb94ba4a4690bf41a47a4f74a01a0d7b2a481ef
12,761
ipynb
Jupyter Notebook
Lectures/01-Introduction/03-Magic-Commands.ipynb
kiarapashko/Python-Bootcamp
a7f7c8ebe95e82af0467a9685a37b43879f3bae9
[ "MIT" ]
7
2019-02-03T15:43:22.000Z
2022-01-29T15:04:49.000Z
Lectures/01-Introduction/03-Magic-Commands.ipynb
kiarapashko/Python-Bootcamp
a7f7c8ebe95e82af0467a9685a37b43879f3bae9
[ "MIT" ]
null
null
null
Lectures/01-Introduction/03-Magic-Commands.ipynb
kiarapashko/Python-Bootcamp
a7f7c8ebe95e82af0467a9685a37b43879f3bae9
[ "MIT" ]
18
2019-02-03T15:47:46.000Z
2022-01-29T15:24:00.000Z
12,761
12,761
0.706528
[ [ [ "# IPython Magic Commands", "_____no_output_____" ], [ "Here we'll begin discussing some of the enhancements that IPython adds on top of the normal Python syntax.\nThese are known in IPython as *magic commands*, and are prefixed by the ``%`` character.\nThese magic commands are designed to succinctly solve various common problems in standard data analysis.\nMagic commands come in two flavors: *line magics*, which are denoted by a single ``%`` prefix and operate on a single line of input, and *cell magics*, which are denoted by a double ``%%`` prefix and operate on multiple lines of input.\nWe'll demonstrate and discuss a few brief examples here, and come back to more focused discussion of several useful magic commands later in the chapter.", "_____no_output_____" ], [ "## Running External Code: ``%run``\nAs you begin developing more extensive code, you will likely find yourself working in both IPython for interactive exploration, as well as a text editor to store code that you want to reuse.\nRather than running this code in a new window, it can be convenient to run it within your IPython session.\nThis can be done with the ``%run`` magic.\n\nFor example, let's create a ``myscript.py`` file with the following contents (note that we are using the `%%bash` magic to write bash code in a notebook):", "_____no_output_____" ] ], [ [ "%%bash\necho \"\"\"\n'''square functions'''\n\ndef square(x):\n '''square a number'''\n return x ** 2\n \nfor N in range(1, 4):\n print(N, 'squared is', square(N))\"\"\" > myscript.py", "_____no_output_____" ] ], [ [ "We can see the content of this file either from the Files tab on the left bar or using a terminal command such as `cat`:", "_____no_output_____" ] ], [ [ "%%bash\ncat myscript.py", "\n'''square functions'''\n\ndef square(x):\n '''square a number'''\n return x ** 2\n \nfor N in range(1, 4):\n print(N, 'squared is', square(N))\n" ] ], [ [ "You can execute this from your IPython session as follows:", "_____no_output_____" ] ], 
[ [ "%run myscript.py", "1 squared is 1\n2 squared is 4\n3 squared is 9\n" ] ], [ [ "Note that after you've run this script, any functions defined within it are available for use in your IPython session:", "_____no_output_____" ] ], [ [ "square(5)", "_____no_output_____" ], [ "square??", "_____no_output_____" ] ], [ [ "There are several options to fine-tune how your code is run; you can see the documentation in the normal way, by typing **``%run?``** in the IPython interpreter.", "_____no_output_____" ], [ "## Timing Code Execution: ``%timeit``\nAnother example of a useful magic function is ``%timeit``, which will automatically determine the execution time of the single-line Python statement that follows it.\nFor example, we may want to check the performance of a list comprehension:", "_____no_output_____" ] ], [ [ "%timeit L = [n ** 2 for n in range(1000)]", "1000 loops, best of 5: 251 µs per loop\n" ] ], [ [ "The benefit of ``%timeit`` is that for short commands it will automatically perform multiple runs in order to attain more robust results.\nFor multi-line statements, adding a second ``%`` sign will turn this into a cell magic that can handle multiple lines of input.\nFor example, here's the equivalent construction with a ``for``-loop:", "_____no_output_____" ] ], [ [ "%%timeit\nL = []\nfor n in range(1000):\n L.append(n ** 2)", "1000 loops, best of 5: 288 µs per loop\n" ] ], [ [ "We can immediately see that list comprehensions are about 15% faster than the equivalent ``for``-loop construction in this case.", "_____no_output_____" ], [ "## Help on Magic Functions: ``?``, ``%magic``, and ``%lsmagic``\n\nLike normal Python functions, IPython magic functions have docstrings, and this useful\ndocumentation can be accessed in the standard manner.\nSo, for example, to read the documentation of the ``%timeit`` magic simply type this:", "_____no_output_____" ] ], [ [ "%timeit?", "_____no_output_____" ] ], [ [ "Documentation for other functions can be accessed 
similarly.\nTo access a general description of available magic functions, including some examples, you can type this:", "_____no_output_____" ] ], [ [ "%magic", "_____no_output_____" ] ], [ [ "For a quick and simple list of all available magic functions, type this:", "_____no_output_____" ] ], [ [ "%lsmagic", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbb94f6b1c35b4887a282d7e18ae2b283c0d2e01
21,632
ipynb
Jupyter Notebook
how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb
rndazurescript/MachineLearningNotebooks
c520bd1d4130d9a01ee46e0937459e2de95d15ec
[ "MIT" ]
1
2020-03-02T12:40:42.000Z
2020-03-02T12:40:42.000Z
how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb
rndazurescript/MachineLearningNotebooks
c520bd1d4130d9a01ee46e0937459e2de95d15ec
[ "MIT" ]
null
null
null
how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb
rndazurescript/MachineLearningNotebooks
c520bd1d4130d9a01ee46e0937459e2de95d15ec
[ "MIT" ]
1
2021-06-02T06:31:15.000Z
2021-06-02T06:31:15.000Z
38.976577
495
0.562084
[ [ [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.png)", "_____no_output_____" ], [ "Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.", "_____no_output_____" ], [ "# Azure ML Hardware Accelerated Models Quickstart", "_____no_output_____" ], [ "This tutorial will show you how to deploy an image recognition service based on the ResNet 50 classifier using the Azure Machine Learning Accelerated Models service. Get more information about our service from our [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas), [API reference](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel?view=azure-ml-py), or [forum](https://aka.ms/aml-forum).\n\nWe will use an accelerated ResNet50 featurizer running on an FPGA. Our Accelerated Models Service handles translating deep neural networks (DNN) into an FPGA program.\n\nFor more information about using other models besides Resnet50, see the [README](./README.md).\n\nThe steps covered in this notebook are: \n1. [Set up environment](#set-up-environment)\n* [Construct model](#construct-model)\n * Image Preprocessing\n * Featurizer (Resnet50)\n * Classifier\n * Save Model\n* [Register Model](#register-model)\n* [Convert into Accelerated Model](#convert-model)\n* [Create Image](#create-image)\n* [Deploy](#deploy-image)\n* [Test service](#test-service)\n* [Clean-up](#clean-up)", "_____no_output_____" ], [ "<a id=\"set-up-environment\"></a>\n## 1. Set up environment", "_____no_output_____" ] ], [ [ "import os\nimport tensorflow as tf", "_____no_output_____" ] ], [ [ "### Retrieve Workspace\nIf you haven't created a Workspace, please follow [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) to do so. 
If you have, run the code block below to retrieve it. ", "_____no_output_____" ] ], [ [ "from azureml.core import Workspace\n\nws = Workspace.from_config()\nprint(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')", "_____no_output_____" ] ], [ [ "<a id=\"construct-model\"></a>\n## 2. Construct model\n\nThere are three parts to the model we are deploying: pre-processing, a featurizer (ResNet50), and a classifier trained on the ImageNet dataset. Then we will save this complete TensorFlow model graph locally before registering it to your Azure ML Workspace.\n\n### 2.a. Image preprocessing\nWe'd like our service to accept JPEG images as input. However, the input to ResNet50 is a tensor. So we need code that decodes JPEG images and does the preprocessing required by ResNet50. The Accelerated AI service can execute TensorFlow graphs as part of the service and we'll use that ability to do the image preprocessing. This code defines a TensorFlow graph that preprocesses an array of JPEG images (as strings) and produces a tensor that is ready to be featurized by ResNet50.\n\n**Note:** Expect to see TF deprecation warnings until we port our SDK over to use TensorFlow 2.0.", "_____no_output_____" ] ], [ [ "# Input images as a two-dimensional tensor containing an arbitrary number of images represented as strings\nimport azureml.accel.models.utils as utils\ntf.reset_default_graph()\n\nin_images = tf.placeholder(tf.string)\nimage_tensors = utils.preprocess_array(in_images)\nprint(image_tensors.shape)", "_____no_output_____" ] ], [ [ "### 2.b. Featurizer\nWe use ResNet50 as a featurizer. In this step we initialize the model. 
This downloads a TensorFlow checkpoint of the quantized ResNet50.", "_____no_output_____" ] ], [ [ "from azureml.accel.models import QuantizedResnet50\nsave_path = os.path.expanduser('~/models')\nmodel_graph = QuantizedResnet50(save_path, is_frozen = True)\nfeature_tensor = model_graph.import_graph_def(image_tensors)\nprint(model_graph.version)\nprint(feature_tensor.name)\nprint(feature_tensor.shape)", "_____no_output_____" ] ], [ [ "### 2.c. Classifier\nThe model we downloaded includes a classifier which takes the output of the ResNet50 and identifies an image. This classifier is trained on the ImageNet dataset. We are going to use this classifier for our service. The next [notebook](./accelerated-models-training.ipynb) shows how to train a classifier for a different data set. The input to the classifier is a tensor matching the output of our ResNet50 featurizer.", "_____no_output_____" ] ], [ [ "classifier_output = model_graph.get_default_classifier(feature_tensor)\nprint(classifier_output)", "_____no_output_____" ] ], [ [ "### 2.d. Save Model\nNow that we loaded all three parts of the tensorflow graph (preprocessor, resnet50 featurizer, and the classifier), we can save the graph and associated variables to a directory which we can register as an Azure ML Model.", "_____no_output_____" ] ], [ [ "# model_name must be lowercase\nmodel_name = \"resnet50\"\nmodel_save_path = os.path.join(save_path, model_name)\nprint(\"Saving model in {}\".format(model_save_path))\n\nwith tf.Session() as sess:\n model_graph.restore_weights(sess)\n tf.saved_model.simple_save(sess, model_save_path,\n inputs={'images': in_images},\n outputs={'output_alias': classifier_output})", "_____no_output_____" ] ], [ [ "### 2.e. Important! 
Save names of input and output tensors\n\nThese input and output tensors that were created during the preprocessing and classifier steps are also going to be used when **converting the model** to an Accelerated Model that can run on FPGA's and for **making an inferencing request**. It is very important to save this information! You can see our defaults for all the models in the [README](./README.md).\n\nBy default for Resnet50, these are the values you should see when running the cell below: \n* input_tensors = \"Placeholder:0\"\n* output_tensors = \"classifier/resnet_v1_50/predictions/Softmax:0\"", "_____no_output_____" ] ], [ [ "input_tensors = in_images.name\noutput_tensors = classifier_output.name\n\nprint(input_tensors)\nprint(output_tensors)", "_____no_output_____" ] ], [ [ "<a id=\"register-model\"></a>\n## 3. Register Model", "_____no_output_____" ], [ "You can add tags and descriptions to your models. Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric.", "_____no_output_____" ] ], [ [ "from azureml.core.model import Model\n\nregistered_model = Model.register(workspace = ws,\n model_path = model_save_path,\n model_name = model_name)\n\nprint(\"Successfully registered: \", registered_model.name, registered_model.description, registered_model.version, sep = '\\t')", "_____no_output_____" ] ], [ [ "<a id=\"convert-model\"></a>\n## 4. Convert Model", "_____no_output_____" ], [ "For conversion you need to provide names of input and output tensors. This information can be found from the model_graph you saved in step 2.e. 
above.\n\n**Note**: Conversion may take a while; on average, an FPGA model takes about 1-3 minutes, depending on the model type.", "_____no_output_____" ] ], [ [ "from azureml.accel import AccelOnnxConverter\n\nconvert_request = AccelOnnxConverter.convert_tf_model(ws, registered_model, input_tensors, output_tensors)\n\nif convert_request.wait_for_completion(show_output = False):\n    # If the above call succeeded, get the converted model\n    converted_model = convert_request.result\n    print(\"\\nSuccessfully converted: \", converted_model.name, converted_model.url, converted_model.version, \n          converted_model.id, converted_model.created_time, '\\n')\nelse:\n    print(\"Model conversion failed. Showing output.\")\n    convert_request.wait_for_completion(show_output = True)", "_____no_output_____" ] ], [ [ "<a id=\"create-image\"></a>\n## 5. Package the model into an Image", "_____no_output_____" ], [ "You can add tags and descriptions to the image. Also, for FPGA models an image can only contain a **single** model.\n\n**Note**: The following command can take a few minutes. ", "_____no_output_____" ] ], [ [ "from azureml.core.image import Image\nfrom azureml.accel import AccelContainerImage\n\nimage_config = AccelContainerImage.image_configuration()\n# Image name must be lowercase\nimage_name = \"{}-image\".format(model_name)\n\nimage = Image.create(name = image_name,\n                     models = [converted_model],\n                     image_config = image_config, \n                     workspace = ws)\nimage.wait_for_creation(show_output = False)", "_____no_output_____" ] ], [ [ "<a id=\"deploy-image\"></a>\n## 6. Deploy\nOnce you have an Azure ML Accelerated Image in your Workspace, you can deploy it to two destinations: a Databox Edge machine or an AKS cluster. \n\n### 6.a. Databox Edge Machine using IoT Hub\nSee the sample [here](https://github.com/Azure-Samples/aml-real-time-ai/) for using the Azure IoT CLI extension for deploying your Docker image to your Databox Edge Machine.\n\n### 6.b. 
Azure Kubernetes Service (AKS) using Azure ML Service\nWe are going to create an AKS cluster with FPGA-enabled machines, then deploy our service to it. For more information, see [AKS official docs](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aks).\n\n#### Create AKS ComputeTarget", "_____no_output_____" ] ], [ [ "from azureml.core.compute import AksCompute, ComputeTarget\n\n# Uses the specific FPGA enabled VM (sku: Standard_PB6s)\n# Standard_PB6s are available in: eastus, westus2, westeurope, southeastasia\nprov_config = AksCompute.provisioning_configuration(vm_size = \"Standard_PB6s\",\n                                                    agent_count = 1, \n                                                    location = \"eastus\")\n\naks_name = 'my-aks-pb6'\n# Create the cluster\naks_target = ComputeTarget.create(workspace = ws, \n                                  name = aks_name, \n                                  provisioning_configuration = prov_config)", "_____no_output_____" ] ], [ [ "Provisioning an AKS cluster might take a while (15 or so minutes), and we want to wait until it's successfully provisioned before we can deploy a service to it. If you interrupt this cell, provisioning of the cluster will continue. 
You can also check the status in your Workspace under Compute.", "_____no_output_____" ] ], [ [ "%%time\naks_target.wait_for_completion(show_output = True)\nprint(aks_target.provisioning_state)\nprint(aks_target.provisioning_errors)", "_____no_output_____" ] ], [ [ "#### Deploy AccelContainerImage to AKS ComputeTarget", "_____no_output_____" ] ], [ [ "%%time\nfrom azureml.core.webservice import Webservice, AksWebservice\n\n# Set the web service configuration (for creating a test service, we don't want autoscale enabled)\n# Authentication is enabled by default, but for testing we specify False\naks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,\n num_replicas=1,\n auth_enabled = False)\n\naks_service_name ='my-aks-service-1'\n\naks_service = Webservice.deploy_from_image(workspace = ws,\n name = aks_service_name,\n image = image,\n deployment_config = aks_config,\n deployment_target = aks_target)\naks_service.wait_for_deployment(show_output = True)", "_____no_output_____" ] ], [ [ "<a id=\"test-service\"></a>\n## 7. Test the service", "_____no_output_____" ], [ "### 7.a. Create Client\nThe image supports gRPC and the TensorFlow Serving \"predict\" API. We will create a PredictionClient from the Webservice object that can call into the docker image to get predictions. If you do not have the Webservice object, you can also create [PredictionClient](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel.predictionclient?view=azure-ml-py) directly.\n\n**Note:** If you chose to use auth_enabled=True when creating your AksWebservice, see documentation [here](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py#get-keys--) on how to retrieve your keys and use either key as an argument to PredictionClient(...,access_token=key).\n**WARNING:** If you are running on Azure Notebooks free compute, you will not be able to make outgoing calls to your service. 
Try locating your client on a different machine to consume it.", "_____no_output_____" ] ], [ [ "# Using the grpc client in AzureML Accelerated Models SDK\nfrom azureml.accel import client_from_service\n\n# Initialize AzureML Accelerated Models client\nclient = client_from_service(aks_service)", "_____no_output_____" ] ], [ [ "You can adapt the client [code](https://github.com/Azure/aml-real-time-ai/blob/master/pythonlib/amlrealtimeai/client.py) to meet your needs. There is also an example C# [client](https://github.com/Azure/aml-real-time-ai/blob/master/sample-clients/csharp).\n\nThe service provides an API that is compatible with TensorFlow Serving. There are instructions to download a sample client [here](https://www.tensorflow.org/serving/setup).", "_____no_output_____" ], [ "### 7.b. Serve the model\nTo understand the results we need a mapping to the human readable imagenet classes", "_____no_output_____" ] ], [ [ "import requests\nclasses_entries = requests.get(\"https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt\").text.splitlines()", "_____no_output_____" ], [ "# Score image with input and output tensor names\nresults = client.score_file(path=\"./snowleopardgaze.jpg\", \n input_name=input_tensors, \n outputs=output_tensors)\n\n# map results [class_id] => [confidence]\nresults = enumerate(results)\n# sort results by confidence\nsorted_results = sorted(results, key=lambda x: x[1], reverse=True)\n# print top 5 results\nfor top in sorted_results[:5]:\n print(classes_entries[top[0]], 'confidence:', top[1])", "_____no_output_____" ] ], [ [ "<a id=\"clean-up\"></a>\n## 8. Clean-up\nRun the cell below to delete your webservice, image, and model (must be done in that order). 
In the [next notebook](./accelerated-models-training.ipynb) you will learn how to train a classifier on a new dataset using transfer learning and fine-tune the weights.", "_____no_output_____" ] ], [ [ "aks_service.delete()\naks_target.delete()\nimage.delete()\nregistered_model.delete()\nconverted_model.delete()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbb9579cc2af8d8edd5443f9deb523b8a2aa790f
41,417
ipynb
Jupyter Notebook
codes/Ques2.ipynb
faisalAkhtar/ML
ada21ffae65ff212a7a87f25adf34382d701c203
[ "MIT" ]
null
null
null
codes/Ques2.ipynb
faisalAkhtar/ML
ada21ffae65ff212a7a87f25adf34382d701c203
[ "MIT" ]
null
null
null
codes/Ques2.ipynb
faisalAkhtar/ML
ada21ffae65ff212a7a87f25adf34382d701c203
[ "MIT" ]
1
2021-11-21T21:26:26.000Z
2021-11-21T21:26:26.000Z
43.097815
15,876
0.64319
[ [ [ "## Faisal Akhtar\n## College Roll No.: 17/1409", "_____no_output_____" ], [ "Q2)<br>\nIris plants dataset (already available in Scikit Learn) has the following characteristics:<br>\nNumber of Instances: 150 (50 in each of three classes)<br>\nNumber of Attributes: 4 numeric, predictive attributes and the class<br>\nAttribute Information: sepal length in cm, sepal width in cm, petal length in cm, petal width in cm Class: Iris-Setosa, Iris-Versicolour, Iris-Virginica<br>\nMissing Attribute Values: None<br><br>\nWrite a program using Scikit Learn that utilizes Logistic regression to build a classification model using all the four features to predict the class of a plant. Print the confusion matrix, accuracy, precision and recall for the model.<br><br>\nAlso, build a classification model in Scikit Learn using Neural Networks using all the features to predict the class a plant belongs to. Print the confusion matrix, accuracy, precision and recall for the model and compare its performance with the model created using Logistic regression. 
", "_____no_output_____" ], [ "# Classification Model using Logistic Regression", "_____no_output_____" ], [ "### Loading data", "_____no_output_____" ] ], [ [ "import pandas as pd\ndata=pd.read_csv('../input/IRIS.csv')\ndata.head()", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ] ], [ [ "### Preparing the training set", "_____no_output_____" ] ], [ [ "# X = feature values, all the columns except the last column\nX=data.iloc[:,:-1]\n\n# Y = target values, last column of the data frame\nY=data.iloc[:,-1]", "_____no_output_____" ] ], [ [ "### Plotting the relation of each feature with each species", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nplt.xlabel('Features')\nplt.ylabel('Species')\n\npltX = data.loc[:, 'sepal_length']\npltY = data.loc[:,'species']\nplt.scatter(pltX, pltY, color='blue', label='sepal_length')\n\npltX = data.loc[:, 'sepal_width']\npltY = data.loc[:,'species']\nplt.scatter(pltX, pltY, color='green', label='sepal_width')\n\npltX = data.loc[:, 'petal_length']\npltY = data.loc[:,'species']\nplt.scatter(pltX, pltY, color='red', label='petal_length')\n\npltX = data.loc[:, 'petal_width']\npltY = data.loc[:,'species']\nplt.scatter(pltX, pltY, color='black', label='petal_width')\n\nplt.legend(loc=4, prop={'size':8})\nplt.show()", "_____no_output_____" ] ], [ [ "### Splitting the data into 80% training and 20% testing", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=50)", "_____no_output_____" ], [ "print(\"x_train shape : \", x_train.shape) \nprint(\"x_test shape : \", x_test.shape) \nprint(\"y_train shape : \", y_train.shape) \nprint(\"y_test shape : \", y_test.shape)", "x_train shape : (120, 4)\nx_test shape : (30, 4)\ny_train shape : (120,)\ny_test shape : (30,)\n" ] ], [ [ "### Logistic Regression", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import 
LogisticRegression\n\nmodel = LogisticRegression()\nmodel.fit(x_train, y_train)", "/opt/conda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n" ], [ "predictions = model.predict(x_test)\nprint(predictions)", "['Iris-versicolor' 'Iris-versicolor' 'Iris-setosa' 'Iris-setosa'\n 'Iris-virginica' 'Iris-virginica' 'Iris-virginica' 'Iris-setosa'\n 'Iris-setosa' 'Iris-versicolor' 'Iris-setosa' 'Iris-virginica'\n 'Iris-setosa' 'Iris-virginica' 'Iris-versicolor' 'Iris-setosa'\n 'Iris-versicolor' 'Iris-setosa' 'Iris-versicolor' 'Iris-virginica'\n 'Iris-virginica' 'Iris-versicolor' 'Iris-setosa' 'Iris-virginica'\n 'Iris-versicolor' 'Iris-virginica' 'Iris-versicolor' 'Iris-versicolor'\n 'Iris-versicolor' 'Iris-virginica']\n" ] ], [ [ "### Confusion matrix and accuracy of the model", "_____no_output_____" ] ], [ [ "from sklearn.metrics import confusion_matrix \n\ncm = confusion_matrix(y_test, predictions)\nprint(\"Confusion Matrix : \\n\",cm)", "Confusion Matrix : \n [[ 9 0 0]\n [ 0 11 1]\n [ 0 0 9]]\n" ], [ "from sklearn.metrics import accuracy_score\n\nprint('Accuracy Score :',accuracy_score(y_test, predictions))", "Accuracy Score : 0.9666666666666667\n" ] ], [ [ "### Checking precision, recall and f1-score of the model", "_____no_output_____" ] ], [ [ "from sklearn.metrics import classification_report\n\nprint( classification_report(y_test, predictions) )", " precision recall f1-score support\n\n Iris-setosa 1.00 1.00 1.00 9\nIris-versicolor 1.00 0.92 0.96 12\n Iris-virginica 0.90 1.00 0.95 9\n\n accuracy 0.97 30\n 
macro avg 0.97 0.97 0.97 30\n weighted avg 0.97 0.97 0.97 30\n\n" ] ], [ [ "# Classification model using Neural Network", "_____no_output_____" ] ], [ [ "data.head()", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ] ], [ [ "### Preprocessing", "_____no_output_____" ], [ "### Encoding Categorical Values", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelEncoder\nlabelencoder = LabelEncoder()\ndata[\"species\"] = labelencoder.fit_transform(data[\"species\"])\nspecies = pd.DataFrame({'species': ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']})", "_____no_output_____" ], [ "import numpy as np\nprint ('Class labels:', np.unique(data[\"species\"]))", "Class labels: [0 1 2]\n" ] ], [ [ "### Test Train split", "_____no_output_____" ] ], [ [ "# X = feature values, all the columns except the last column\nX=data.iloc[:,:-1]\n\n# Y = target values, last column of the data frame\nY=data.iloc[:,-1]", "_____no_output_____" ], [ "x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=50)", "_____no_output_____" ], [ "print(\"x_train shape : \", x_train.shape) \nprint(\"x_test shape : \", x_test.shape) \nprint(\"y_train shape : \", y_train.shape) \nprint(\"y_test shape : \", y_test.shape)", "x_train shape : (120, 4)\nx_test shape : (30, 4)\ny_train shape : (120,)\ny_test shape : (30,)\n" ], [ "print ('Lables count in y:', np.bincount(Y))\nprint('Lables counts in y_train:', np.bincount(y_train))\nprint('Lables counts in y_test:', np.bincount(y_test))", "Lables count in y: [50 50 50]\nLables counts in y_train: [41 38 41]\nLables counts in y_test: [ 9 12 9]\n" ] ], [ [ "### Scaling IRIS data with StandardScaler", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nsc.fit(x_train)\nx_train_std = sc.transform(x_train)\nx_test_std = sc.transform(x_test)", "_____no_output_____" ] ], [ [ "### Perceptron", "_____no_output_____" ] ], [ [ "from sklearn.linear_model 
import Perceptron\nppn = Perceptron(max_iter = 40, eta0=0.1, random_state=1)\nppn.fit(x_train_std, y_train)", "_____no_output_____" ], [ "ypred = ppn.predict(x_test_std)", "_____no_output_____" ] ], [ [ "### Confusion matrix and accuracy of the model", "_____no_output_____" ] ], [ [ "cm = confusion_matrix(y_test, ypred)\nprint(\"Confusion Matrix : \\n\",cm)", "Confusion Matrix : \n [[9 0 0]\n [1 9 2]\n [0 0 9]]\n" ], [ "print('Accuracy Score :',accuracy_score(y_test, ypred))", "Accuracy Score : 0.9\n" ] ], [ [ "### Checking precision, recall and f1-score of the model", "_____no_output_____" ] ], [ [ "print( classification_report(y_test, ypred) )", " precision recall f1-score support\n\n 0 0.90 1.00 0.95 9\n 1 1.00 0.75 0.86 12\n 2 0.82 1.00 0.90 9\n\n accuracy 0.90 30\n macro avg 0.91 0.92 0.90 30\nweighted avg 0.92 0.90 0.90 30\n\n" ] ], [ [ "# Comparing performances", "_____no_output_____" ], [ "<table>\n <tr>\n <th>MODEL</th>\n <th>Accuracy</th>\n <th>Precision</th>\n <th>Recall</th>\n <th>F1-Score</th>\n </tr>\n <tr>\n <td>Logistic regression</td>\n <td>0.9666666666666667</td>\n <td>0.97</td>\n <td>0.97</td>\n <td>0.97</td>\n </tr>\n <tr>\n <td>Perceptron</td>\n <td>0.9</td>\n <td>0.92</td>\n <td>0.9</td>\n <td>0.9</td>\n </tr>\n</table>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbb95d397a16175e564b7707a75a0ee3e572179e
7,535
ipynb
Jupyter Notebook
DAT-02-14-main/Homework/Unit1/project-eda-chipotle.ipynb
bweiss415/my-ga-repo
bce8d1a7a18d227dd339b302375a898d6c48c812
[ "MIT" ]
1
2022-02-15T02:26:50.000Z
2022-02-15T02:26:50.000Z
DAT-02-14-main/Homework/Unit1/project-eda-chipotle.ipynb
bweiss415/my-ga-repo
bce8d1a7a18d227dd339b302375a898d6c48c812
[ "MIT" ]
null
null
null
DAT-02-14-main/Homework/Unit1/project-eda-chipotle.ipynb
bweiss415/my-ga-repo
bce8d1a7a18d227dd339b302375a898d6c48c812
[ "MIT" ]
null
null
null
20.201072
357
0.529263
[ [ [ "<img src=\"http://imgur.com/1ZcRyrc.png\" style=\"float: left; margin: 20px; height: 55px\">\n\n# Project 2: Analyzing Chipotle Data\n\n_Author: Joseph Nelson (DC)_\n\n---", "_____no_output_____" ], [ "For Project 2, you will complete a series of exercises exploring [order data from Chipotle](https://github.com/TheUpshot/chipotle), compliments of _The New York Times'_ \"The Upshot.\"\n\nFor these exercises, you will conduct basic exploratory data analysis (Pandas not required) to understand the essentials of Chipotle's order data: how many orders are being made, the average price per order, how many different ingredients are used, etc. These allow you to practice business analysis skills while also becoming comfortable with Python.", "_____no_output_____" ], [ "---\n\n## Basic Level", "_____no_output_____" ], [ "### Part 1: Read in the file with `csv.reader()` and store it in an object called `file_nested_list`.\n\nHint: This is a TSV (tab-separated value) file, and `csv.reader()` needs to be told [how to handle it](https://docs.python.org/2/library/csv.html).", "_____no_output_____" ] ], [ [ "import csv\nfrom collections import namedtuple # Convenient to store the data rows\n\nDATA_FILE = './data/chipotle.tsv'", "_____no_output_____" ] ], [ [ "### Part 2: Separate `file_nested_list` into the `header` and the `data`.\n", "_____no_output_____" ], [ "---\n\n## Intermediate Level", "_____no_output_____" ], [ "### Part 3: Calculate the average price of an order.\n\nHint: Examine the data to see if the `quantity` column is relevant to this calculation.\n\nHint: Think carefully about the simplest way to do this!", "_____no_output_____" ], [ "### Part 4: Create a list (or set) named `unique_sodas` containing all of unique sodas and soft drinks that Chipotle sells.\n\nNote: Just look for `'Canned Soda'` and `'Canned Soft Drink'`, and ignore other drinks like `'Izze'`.", "_____no_output_____" ], [ "---\n\n## Advanced Level\n", "_____no_output_____" ], [ "### Part 5: 
Calculate the average number of toppings per burrito.\n\nNote: Let's ignore the `quantity` column to simplify this task.\n\nHint: Think carefully about the easiest way to count the number of toppings!\n", "_____no_output_____" ], [ "### Part 6: Create a dictionary. Let the keys represent chip orders and the values represent the total number of orders.\n\nExpected output: `{'Chips and Roasted Chili-Corn Salsa': 18, ... }`\n\nNote: Please take the `quantity` column into account!\n\nOptional: Learn how to use `.defaultdict()` to simplify your code.", "_____no_output_____" ], [ "---\n\n## Bonus: Craft a problem statement about this data that interests you, and then answer it!\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]