Columns: repo_name (string), path (string), license (string, 15 classes), cells (sequence), types (sequence)
karlstroetmann/Formal-Languages
Python/FSM-2-Dot.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css', 'r') as file:\n css = file.read()\nHTML(css)", "The function dfa2string converts the given deterministic <span style=\"font-variant:small-caps;\">Fsm</span> into a string.", "def dfa2string(Fsm):\n states, sigma, delta, q0, final = Fsm\n result = ''\n n = 0\n statesToNames = {}\n for q in states:\n statesToNames[q] = f'S{n}'\n n += 1\n result += 'states: {S0, ..., ' + f'S{n-1}' + '}\\n\\n' \n result += f'start state: {statesToNames[q0]}' + '\\n\\n'\n result += 'state encoding:\\n'\n for q in states:\n result += f'{statesToNames[q]} = {q}' + '\\n'\n result += '\\ntransitions:\\n'\n for q in states:\n for c in sigma: \n print(q, c, delta.get((q, c)))\n if delta.get((q, c)) != None:\n result += f'delta({statesToNames[q]}, {c}) = {statesToNames[delta[(q, c)]]}' + '\\n'\n result += '\\nset of accepting states: {'\n result += ', '.join({ statesToNames[q] for q in final })\n result += '}\\n'\n return result\n\nimport graphviz as gv", "The function dfa2dot converts the given deterministic <span style=\"font-variant:small-caps;\">Fsm</span> into a graph in dot-format.", "def dfa2dot(dfa):\n states, sigma, delta, q0, final = dfa\n dot = gv.Digraph('Deterministic FSM')\n dot.graph_attr['rankdir'] = 'LR'\n n = 0 # used to assign names to states\n statesToNames = {} # assigns a name to every state\n for q in states:\n statesToNames[q] = f'S{n}'\n n += 1\n startName = statesToNames[q0]\n dot.node('1', label='', width='0.1', height='0.1', style='filled', color='blue')\n dot.edge('1', startName)\n for q in states:\n if q in final:\n dot.node(statesToNames[q], peripheries='2')\n else:\n dot.node(statesToNames[q])\n for q in states:\n for c in sigma:\n p = delta.get((q, c))\n if p != None:\n dot.edge(statesToNames[q], statesToNames[p], label = c)\n return dot, statesToNames", "The function nfa2string converts a non-deterministic finite state machine nfa into a string.", "def nfa2string(nfa):\n states, sigma, delta, q0, final = nfa\n n = 0\n result = ''\n result += f'states: {states}' + '\\n\\n' \n result += f'start state: {q0}' + '\\n\\n'\n result += 'transitions:\\n'\n for q in states:\n for c in sigma:\n S = delta.get((q, c))\n if S != None:\n for p in S:\n result += f'[{q}, {c}] |-> {p}' + '\\n'\n S = delta.get((q, ''))\n if S != None:\n for p in S:\n result += f'[{q}, \"\"] |-> {p}' + '\\n'\n result += '\\n' + f'set of accepting states: {final}' + '\\n'\n return result", "The function nfa2dot takes a non-deterministic finite state machine and converts it \ninto a a dot graph.", "def nfa2dot(nfa):\n states, sigma, delta, q0, final = nfa\n result = ''\n n = 0\n startName = str(q0)\n dot = gv.Digraph('Non-Deterministic FSM')\n dot.graph_attr['rankdir'] = 'LR'\n dot.node('0', label='', width='0.1', height='0.1', style='filled', color='blue')\n dot.edge('0', startName)\n for q in states:\n if q in final:\n dot.node(str(q), peripheries='2')\n else:\n dot.node(str(q))\n for q in states:\n S = delta.get((q, ''))\n if S != None:\n for p in S:\n dot.edge(str(q), str(p), label='๐œ€', weight='0.1')\n for q in states:\n for c in sigma:\n S = delta.get((q, c))\n if S != None:\n for p in S:\n dot.edge(str(q), str(p), label=c, weight='10')\n return dot" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rickiepark/tfk-notebooks
first-contact-with-tensorflow/chapter5_convolution_neural_network.ipynb
mit
[ "simple_neural_network ์˜ˆ์ œ์—์„œ MNIST ๋ฐ์ดํ„ฐ๋ฅผ ์ด๋ฏธ ๋‹ค์šด ๋ฐ›์•˜์œผ๋ฏ€๋กœ ๋‹ค์‹œ ๋‹ค์šด ๋ฐ›์ง€ ์•Š์Šต๋‹ˆ๋‹ค.", "from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\nimport tensorflow as tf", "x, y_ ํ”Œ๋ ˆ์ด์Šคํ™€๋”๋ฅผ ์ง€์ •ํ•˜๊ณ  x ๋ฅผ 28x28x1 ํฌ๊ธฐ๋กœ ์ฐจ์›์„ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค.", "x = tf.placeholder(\"float\", shape=[None, 784])\ny_ = tf.placeholder(\"float\", shape=[None, 10])\n\nx_image = tf.reshape(x, [-1,28,28,1])\nprint(\"x_image=\", x_image)", "๊ฐ€์ค‘์น˜๋ฅผ ํ‘œ์ค€ํŽธ์ฐจ 0.1์„ ๊ฐ–๋Š” ๋‚œ์ˆ˜๋กœ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜์™€ ๋ฐ”์ด์–ด์Šค๋ฅผ 0.1๋กœ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.", "def weight_variable(shape):\n initial = tf.truncated_normal(shape, stddev=0.1)\n return tf.Variable(initial)\n\ndef bias_variable(shape):\n initial = tf.constant(0.1, shape=shape)\n return tf.Variable(initial)", "stride๋Š” 1๋กœ ํ•˜๊ณ  ํŒจ๋”ฉ์€ 0์œผ๋กœ ํ•˜๋Š” ์ฝ˜๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด๋ฅผ ๋งŒ๋“œ๋Š” ํ•จ์ˆ˜์™€ 2x2 ๋งฅ์Šค ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ์œ„ํ•œ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.", "def conv2d(x, W):\n return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n\ndef max_pool_2x2(x):\n return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')", "์ฒซ๋ฒˆ์งธ ์ฝ˜๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ๊ฐ€์ค‘์น˜์™€ ๋ฐ”์ด์–ด์Šค ํ…์„œ๋ฅผ ๋งŒ๋“ค๊ณ  ํ™œ์„ฑํ™”ํ•จ์ˆ˜๋Š” ๋ ๋ฃจ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ฝ˜๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด ๋’ค์— ๋งฅ์Šค ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค.", "W_conv1 = weight_variable([5, 5, 1, 32])\nb_conv1 = bias_variable([32])\n\nh_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)\nh_pool1 = max_pool_2x2(h_conv1)", "SAME ํŒจ๋”ฉ์ด๋ฏ€๋กœ ์ฝ˜๋ณผ๋ฃจ์…˜์œผ๋กœ๋Š” ์ฐจ์›์ด ๋ณ€๊ฒฝ๋˜์ง€ ์•Š๊ณ  ํ’€๋ง ๋‹จ๊ณ„์—์„œ ์ŠคํŠธ๋ผ์ด๋“œ์— ๋”ฐ๋ผ ์ฐจ์›์ด ๋ฐ˜์œผ๋กœ ์ค„์–ด๋“ ๋‹ค.", "print(x_image.get_shape())\nprint(h_conv1.get_shape())\nh_pool1.get_shape()", "๋‘๋ฒˆ์งธ ์ฝ˜๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด์™€ ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ฒซ๋ฒˆ์งธ ์ฝ˜๋ณผ๋ฃจ์…˜์˜ ํ•„ํ„ฐ๊ฐ€ 32๊ฐœ๋ผ ๋‘๋ฒˆ์งธ ์ฝ˜๋ณผ๋ฃจ์…˜์˜ ์ปฌ๋Ÿฌ ์ฑ„๋„์ด 32๊ฐœ๊ฐ€ ๋˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์€ ํšจ๊ณผ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค.", "W_conv2 = weight_variable([5, 5, 32, 64])\nb_conv2 = bias_variable([64])\n\nh_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\nh_pool2 = max_pool_2x2(h_conv2)", "SAME ํŒจ๋”ฉ์ด๋ฏ€๋กœ ์ฝ˜๋ณผ๋ฃจ์…˜์œผ๋กœ๋Š” ์ฐจ์›์ด ๋ณ€๊ฒฝ๋˜์ง€ ์•Š๊ณ  ํ’€๋ง ๋‹จ๊ณ„์—์„œ ์ŠคํŠธ๋ผ์ด๋“œ์— ๋”ฐ๋ผ ์ฐจ์›์ด ๋ฐ˜์œผ๋กœ ์ค„์–ด๋“ ๋‹ค.", "print(h_conv2.get_shape())\nh_pool2.get_shape()", "๋งˆ์ง€๋ง‰ ์†Œํ”„ํŠธ๋งฅ์Šค ๋ ˆ์ด์–ด์— ์—ฐ๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์™„์ „์—ฐ๊ฒฐ ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 
์ด์ „ ์ฝ˜๋ณผ๋ฃจ์…˜์˜ ๋ ˆ์ด์–ด์˜ ๊ฒฐ๊ณผ ํ…์„œ๋ฅผ ๋‹ค์‹œ 1์ฐจ์› ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ ๋ฃจ ํ™œ์„ฑํ™” ํ•จ์ˆ˜์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.", "W_fc1 = weight_variable([7 * 7 * 64, 1024])\nb_fc1 = bias_variable([1024])\n\nh_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])\nh_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)", "๋“œ๋กญ์•„์›ƒ๋˜์ง€ ์•Š์„ ํ™•๋ฅ  ๊ฐ’์„ ์ €์žฅํ•  ํ”Œ๋ ˆ์ด์Šคํ™€๋”๋ฅผ ๋งŒ๋“ค๊ณ  ๋“œ๋กญ์•„์›ƒ ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค.", "keep_prob = tf.placeholder(\"float\")\nh_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)", "๋งˆ์ง€๋ง‰์œผ๋กœ ์†Œํ”„ํŠธ๋งฅ์Šค ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค.", "W_fc2 = weight_variable([1024, 10])\nb_fc2 = bias_variable([10])\n\ny_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)", "ํฌ๋กœ์Šค์—”ํŠธ๋กœํ”ผ์™€ ์ตœ์ ํ™”์•Œ๊ณ ๋ฆฌ์ฆ˜, ํ‰๊ฐ€๋ฅผ ์œ„ํ•œ ์—ฐ์‚ฐ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.", "cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))\ntrain_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\ncorrect_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))", "์„ธ์…˜์„ ์‹œ์ž‘ํ•˜๊ณ  ๋ณ€์ˆ˜๋ฅผ ์ดˆ๊ธฐํ™” ํ•ฉ๋‹ˆ๋‹ค.", "sess = tf.Session()\nsess.run(tf.initialize_all_variables())", "20,000๋ฒˆ ๋ฐ˜๋ณต์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค.", "for i in range(20000):\n batch = mnist.train.next_batch(50)\n if i % 1000 == 0:\n train_accuracy = sess.run(accuracy, feed_dict={\n x:batch[0], y_: batch[1], keep_prob: 1.0})\n print(\"step %d, training accuracy %g\"%(i, train_accuracy))\n sess.run(train_step,feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})", "์ตœ์ข… ์ •ํ™•๋„๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค.", "print(\"test accuracy %g\"% sess.run(\n accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tritemio/FRETBursts
notebooks/Example - Customize the us-ALEX histogram.ipynb
gpl-2.0
[ "Example - Customize the ฮผs-ALEX histogram\nThis notebook is part of smFRET burst analysis software FRETBursts.\n\nIn this notebook shows how to plot different styles of ฮผs-ALEX histograms and $E$ and $S$ marginal distributions.\nFor a complete tutorial on burst analysis see \nFRETBursts - us-ALEX smFRET burst analysis.", "from fretbursts import *\n\nsns = init_notebook(apionly=True)\nprint('seaborn version: ', sns.__version__)\n\n# Tweak here matplotlib style\nimport matplotlib as mpl\nmpl.rcParams['font.sans-serif'].insert(0, 'Arial')\nmpl.rcParams['font.size'] = 12\n%config InlineBackend.figure_format = 'retina'", "Get and process data", "url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'\ndownload_file(url, save_dir='./data')\nfull_fname = \"./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5\"\n\nd = loader.photon_hdf5(full_fname)\nloader.alex_apply_period(d)\nd.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))\nd.burst_search(L=10, m=10, F=6)\nds = d.select_bursts(select_bursts.size, add_naa=True, th1=30)", "ALEX joint plot\nThe alex_jointplot function allows plotting an ALEX histogram with marginals.\nThis is how it looks by default:", "alex_jointplot(ds)", "The inner plot in an hexbin plot, basically a 2D histogram with hexagonal bins.\nThis kind of histograms resembles a scatter plot when sample size is small,\nand is immune from grid artifacts typical of rectangular grids.\nFor more info for hexbin see this document.\nThe marginal plots are histograms with an overlay KDE plot. \nThe same FRETBursts function that plots standalone E and S histograms \nis used here to plot the marginals in the joint plot.\nBelow I show how to customize appearance and type of this plot.\nChanging colors\nBy default the colormap range is computed on the range S=[0.2, 0.8],\nso that the FRET populations (S ~ 0.5) have more contrast.\nTo normalize the colormap to the whole data use the vmax argument:", "alex_jointplot(ds, vmax_fret=False)\n\nalex_jointplot(ds, vmax_fret=False, marginal_color=8)\n\nalex_jointplot(ds, vmax_fret=False, marginal_color=7)\n\nalex_jointplot(ds, kind='kde')", "Or you can manually choose the max value mapped by the colormap (vmax):", "alex_jointplot(ds, vmax=40)", "Changing the colormap will affect both inner and marginal plots:", "alex_jointplot(ds, cmap='plasma')", "To pick a different color from the colormap for the marginal histograms use histcolor_id:", "alex_jointplot(ds, cmap='plasma', marginal_color=83)", "Kinds of joint-plots\nThe inner plot can be changed to a scatter plot or a KDE plot:", "alex_jointplot(ds, kind='scatter')\n\nalex_jointplot(ds, kind='kde')\n\ndsf = ds.select_bursts(select_bursts.naa, th1=40)\nalex_jointplot(dsf, kind='kde',\n joint_kws={'shade': False, 'n_levels': 12, 'bw': 0.04})", "No marginals\nFinally, we can plot only the hexbin 2D histogram without marginals:", "plt.figure(figsize=(5,5))\nhexbin_alex(ds)", "Figure layout\nYou can get an handle of the different axes in the figure for layout customization:", "g = alex_jointplot(ds)\ng.ax_marg_x.grid(False)\ng.ax_marg_y.grid(False)\ng.ax_joint.set_xlim(-0.1, 1.1)\ng.ax_joint.set_ylim(-0.1, 1.1)", "alex_jointplot returns g which contains the axis handles (g.ax_join, g.ax_marg_x, g.ax_marg_y).\nThe object g is a seaborn.JointGrid.", "g = alex_jointplot(ds)\ng.ax_marg_x.grid(False)\ng.ax_marg_y.grid(False)\ng.ax_joint.set_xlim(-0.19, 1.19)\ng.ax_joint.set_ylim(-0.19, 1.19)\nplt.subplots_adjust(wspace=0, 
hspace=0)\ng.ax_marg_y.spines['bottom'].set_visible(True)\ng.ax_marg_x.spines['left'].set_visible(True)\ng.ax_marg_y.tick_params(reset=True, bottom=True, top=False, right=False, labelleft=False)\ng.ax_marg_x.tick_params(reset=True, left=True, top=False, right=False, labelbottom=False)\n\ng = alex_jointplot(ds)\ng.ax_marg_x.grid(False)\ng.ax_marg_y.grid(False)\ng.ax_joint.set_xlim(-0.19, 1.19)\ng.ax_joint.set_ylim(-0.19, 1.19)\nplt.subplots_adjust(wspace=0, hspace=0)\ng.ax_marg_y.tick_params(reset=True, bottom=True, top=False, right=False, labelleft=False)\ng.ax_marg_x.tick_params(reset=True, left=True, top=False, right=False, labelbottom=False)\n\ng = alex_jointplot(ds)\ng.ax_marg_x.grid(False, axis='x')\ng.ax_marg_y.grid(False, axis='y')\ng.ax_joint.set_xlim(-0.19, 1.19)\ng.ax_joint.set_ylim(-0.19, 1.19)\nplt.subplots_adjust(wspace=0, hspace=0)", "Arguments of inner plots\nAdditional arguments can be passed to the inner or marginal plots passing \na dictionary to joint_kws and marginal_kws respectively.\nThe marginal plots are created by hist_burst_data \nwhich is the same function used to plot standalone E and S histograms\nin FRETBursts. \nFor example, we can remove the KDE overlay like this:", "alex_jointplot(ds, marginal_kws={'show_kde': False})", "Interactive plot", "from ipywidgets import widgets, interact, interactive, fixed\nfrom IPython.display import display, display_png, display_svg, clear_output\nfrom IPython.core.pylabtools import print_figure\n\ncmaps = ['viridis', 'plasma', 'inferno', 'magma',\n 'afmhot', 'Blues', 'BuGn', 'BuPu', 'GnBu', 'YlGnBu',\n 'coolwarm', 'RdYlBu', 'RdYlGn', 'Spectral',]# 'icefire'] uncomment if using seaborn 0.8\n\n@interact(overlay = widgets.RadioButtons(options=['fit model', 'KDE'], value='KDE'),\n binwidth = widgets.FloatText(value=0.03, min=0.01, max=1),\n bandwidth = widgets.FloatText(value=0.03, min=0.01, max=1),\n gridsize = (10, 100),\n min_size=(10, 500, 5),\n cmap=widgets.Dropdown(value='Spectral', options=cmaps),\n reverse_cmap = True,\n vmax_fret = True,\n )\ndef plot_(min_size=50, overlay='KDE', binwidth=0.03, bandwidth=0.03, \n gridsize=50, cmap='Spectral', reverse_cmap=False, \n vmax_fret=True):\n dx = d.select_bursts(select_bursts.size, add_naa=True, th1=min_size)\n bext.bursts_fitter(dx, 'E', binwidth=binwidth, bandwidth=bandwidth, \n model=mfit.factory_three_gaussians())\n bext.bursts_fitter(dx, 'S', binwidth=binwidth, bandwidth=bandwidth, \n model=mfit.factory_two_gaussians()) \n \n if reverse_cmap: cmap += '_r'\n\n if binwidth < 0.01: binwidth = 0.01\n if bandwidth < 0.01: bandwidth = 0.01\n if overlay == 'fit model':\n marginal_kws = dict(binwidth=binwidth, show_model=True, pdf=True, \n show_kde=False)\n else:\n marginal_kws = dict(binwidth=binwidth, show_kde=True, \n bandwidth=bandwidth)\n alex_jointplot(dx, cmap=cmap, gridsize=gridsize, vmax_fret=vmax_fret, \n marginal_kws=marginal_kws,)\n \n fig = gcf()\n plt.close()\n display(fig)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
StingraySoftware/notebooks
Simulator/Concepts/Inverse Transform Sampling.ipynb
mit
[ "Inverse Transform Sampling\nThis notebook will conceptualize how inverse transform sampling works", "import numpy as np\nfrom matplotlib import pyplot as plt\nimport numpy.random as ra\n\n%matplotlib inline", "Below is a spectrum which follows an almost bell-curve type distribution (anyway, the specific type of distribution is not important here).", "spectrum = [[1, 2, 3, 4, 5, 6],[2000, 4040, 6500, 6000, 4020, 2070]]\nenergies = np.array(spectrum[0])\nfluxes = np.array(spectrum[1])\nspectrum", "Below, first we compute probabilities of flux. Afterwards, we compute the cumulative probability.", "prob = fluxes/float(sum(fluxes))\ncum_prob = np.cumsum(prob)\ncum_prob", "We draw ten thousand numbers from uniform random distribution.", "N = 10000\nR = ra.uniform(0, 1, N)\nR[1:10]", "We assign energies to events corresponding to the random number drawn.\nNote: The command below finds bin interval using a single command. I am not sure though that it's very readble. Would\nwe want to split that in multiple lines and maybe use explicit loops to make it more readable? Or is it fine as it is?\nComments?", "gen_energies = [int(energies[np.argwhere(cum_prob == min(cum_prob[(cum_prob - r) > 0]))]) for r in R]\ngen_energies[1:10]", "Histogram energies to get shape approximation.", "gen_energies = ((np.array(gen_energies) - 1) / 1).astype(int)\ntimes = np.arange(1, 6, 1)\nlc = np.bincount(gen_energies, minlength=len(times))\nlc\n\nplot1, = plt.plot(lc/float(sum(lc)), 'r--', label='Assigned energies')\nplot2, = plt.plot(prob,'g',label='Original Spectrum')\nplt.xlabel('Energies')\nplt.ylabel('Probability')\nplt.legend(handles=[plot1,plot2])\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
spacedrabbit/PythonBootcamp
Advanced Python Objects - Test.ipynb
mit
[ "Advanced Python Objects Test\nAdvanced Numbers\nProblem 1: Convert 1024 to binary and hexadecimal representation:", "print bin(1024)\nprint hex(1024)", "Problem 2: Round 5.23222 to two decimal places", "print round(5.2322, 2)", "Advanced Strings\nProblem 3: Check if every letter in the string s is lower case", "s = 'hello how are you Mary, are you feeling okay?'\nprint 'Yup' if s.islower() else 'Nope'", "Problem 4: How many times does the letter 'w' show up in the string below?", "s = 'twywywtwywbwhsjhwuwshshwuwwwjdjdid'\nprint s.count('w')", "Advanced Sets\nProblem 5: Find the elements in set1 that are not in set2:", "set1 = {2,3,1,5,6,8}\nset2 = {3,1,7,5,6,8}\n\nprint set1.difference(set2) # in set 1 but not set 2\nprint set2.difference(set1) # in set 2 but not set 1", "Problem 6: Find all elements that are in either set:", "print set1.union(set2) # all unique elements in either set\nprint set1.intersection(set2) # all elements in both sets", "Advanced Dictionaries\nProblem 7: Create this dictionary:\n{0: 0, 1: 1, 2: 8, 3: 27, 4: 64}\n using dictionary comprehension.", "{x:x**3 for x in range(5)}", "Advanced Lists\nProblem 8: Reverse the list below:", "l = [1,2,3,4]\nl.reverse() # reverses in place, call the list again to check\nl", "Problem 9: Sort the list below", "l = [3,4,2,5,1]\nl.sort()\nl", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
diegocavalca/Studies
programming/Python/tensorflow/exercises/Seq2Seq_solutions.ipynb
cc0-1.0
[ "from __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ntf.__version__\n\nnp.__version__\n\nauthor = \"kyubyong. https://github.com/Kyubyong/tensorflow-exercises\"\n\nnp.random.seed(0)", "Q1. Let's practice the seq2seq framework with a simple example. In this example, we will take the last state of the encoder as the initial state of the decoder. Complete the code.", "# Inputs and outputs: ten digits\nx = tf.placeholder(tf.int32, shape=(32, 10))\ny = tf.placeholder(tf.int32, shape=(32, 10))\n\n# One-hot encoding\nenc_inputs = tf.one_hot(x, 10)\ndec_inputs = tf.concat((tf.zeros_like(y[:, :1]), y[:, :-1]), -1)\ndec_inputs = tf.one_hot(dec_inputs, 10)\n\n# encoder\nencoder_cell = tf.contrib.rnn.GRUCell(128)\nmemory, last_state = tf.nn.dynamic_rnn(encoder_cell, enc_inputs, dtype=tf.float32, scope=\"encoder\")\n\n# decoder\ndecoder_cell = tf.contrib.rnn.GRUCell(128)\noutputs, _ = tf.nn.dynamic_rnn(decoder_cell, dec_inputs, initial_state=last_state, scope=\"decoder\")\n\n# Readout\nlogits = tf.layers.dense(outputs, 10)\npreds = tf.argmax(logits, -1, output_type=tf.int32)\n\n# Evaluation\nhits = tf.reduce_sum(tf.to_float(tf.equal(preds, y)))\nacc = hits / tf.to_float(tf.size(x))\n\n# Loss and train\nloss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)\nmean_loss = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001)\ntrain_op = opt.minimize(mean_loss)\n\n# Session\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n losses, accs = [], []\n for step in range(2000):\n # Data design\n # We feed sequences of random digits in the `x`,\n # and take its reverse as the target.\n _x = np.random.randint(0, 10, size=(32, 10), dtype=np.int32)\n _y = _x[:, ::-1] # Reverse\n _, _loss, _acc = sess.run([train_op, mean_loss, acc], {x:_x, y:_y})\n losses.append(_loss)\n accs.append(_acc)\n \n # Plot\n plt.plot(losses, label=\"loss\")\n plt.plot(accs, label=\"accuracy\")\n plt.legend()\n plt.grid()\n plt.show()", "Q2. At this time, we will use the Bahdanau attention mechanism. 
Complete the code.", "tf.reset_default_graph()\n# Inputs and outputs: ten digits\nx = tf.placeholder(tf.int32, shape=(32, 10))\ny = tf.placeholder(tf.int32, shape=(32, 10))\n\n# One-hot encoding\nenc_inputs = tf.one_hot(x, 10)\ndec_inputs = tf.concat((tf.zeros_like(y[:, :1]), y[:, :-1]), -1)\ndec_inputs = tf.one_hot(dec_inputs, 10)\n\n# encoder\nencoder_cell = tf.contrib.rnn.GRUCell(128)\nmemory, last_state = tf.nn.dynamic_rnn(encoder_cell, enc_inputs, dtype=tf.float32, scope=\"encoder\")\n\n# decoder\nattention_mechanism = tf.contrib.seq2seq.BahdanauAttention(128, memory) \ndecoder_cell = tf.contrib.rnn.GRUCell(128)\ncell_with_attention = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, \n attention_mechanism, \n attention_layer_size=256,\n alignment_history=True,\n output_attention=False)\noutputs, state = tf.nn.dynamic_rnn(cell_with_attention, dec_inputs, dtype=tf.float32)\nalignments = tf.transpose(state.alignment_history.stack(),[1,2,0])\n\n# Readout\nlogits = tf.layers.dense(outputs, 10)\npreds = tf.argmax(logits, -1, output_type=tf.int32)\n\n# Evaluation\nhits = tf.reduce_sum(tf.to_float(tf.equal(preds, y)))\nacc = hits / tf.to_float(tf.size(x))\n\n# Loss and train\nloss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)\nmean_loss = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001)\ntrain_op = opt.minimize(mean_loss)\n\n# Session\ndef plot_alignment(alignment):\n fig, ax = plt.subplots()\n im=ax.imshow(alignment, cmap='Greys', interpolation='none')\n fig.colorbar(im, ax=ax)\n plt.xlabel('Decoder timestep')\n plt.ylabel('Encoder timestep')\n plt.show()\n \nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n losses, accs = [], []\n for step in range(2000):\n # Data design\n # We feed sequences of random digits in the `x`,\n # and take its reverse as the target.\n _x = np.random.randint(0, 10, size=(32, 10), dtype=np.int32)\n _y = _x[:, ::-1] # Reverse\n _, _loss, _acc = sess.run([train_op, mean_loss, acc], {x:_x, y:_y})\n losses.append(_loss)\n accs.append(_acc)\n \n if step % 100 == 0:\n print(\"step=\", step)\n _alignments = sess.run(alignments, {x: _x, y: _y})\n plot_alignment(_alignments[0])\n \n # Plot\n plt.plot(losses, label=\"loss\")\n plt.plot(accs, label=\"accuracy\")\n plt.legend()\n plt.grid()\n plt.show()\n " ]
[ "code", "markdown", "code", "markdown", "code" ]
Cyb3rWard0g/HELK
docker/helk-jupyter/notebooks/tutorials/04-Intro_pyspark_sparkSQL.ipynb
gpl-3.0
[ "Introduction to Spark SQL via PySpark\n\nGoals:\n\nGet familiarized with the basics of Spark SQL and PySpark\nLearn to create a SparkSession\nVerify if Jupyter can talk to Spark Master\n\nReferences:\n* https://spark.apache.org/docs/latest/api/python/pyspark.html\n* https://spark.apache.org/docs/latest/sql-getting-started.html\n* https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.SparkSession\n* https://jaceklaskowski.gitbooks.io/mastering-spark-sql/\n* http://people.csail.mit.edu/matei/papers/2015/sigmod_spark_sql.pdf\nWhat is Spark SQL?\n\nIt is a Spark module that leverages Sparks functional programming APIs to allow SQL relational processing tasks on (semi)structured data.\nSpark SQL provides Spark with more information about the structure of both the data and the computation being performed\n\nWhat is PySpark?\nPySpark is the Python API for Spark.\nHow do I start using Pyspark and SparkSQL?\n\nStart by importing the PySpark SQL SparkSession Class and creating a SparkSession instance .\nA SparkSession class is considered the entry point to programming Spark with the Dataset and DataFrame API.\nA SparkSession can be used create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files.\n\nImport SparkSession Class", "from pyspark.sql import SparkSession", "What is a SparkSession?\n\nIt is the driver process that controls a spark application\nA SparkSession instance is responsible for executing the driver programโ€™s commands (code) across executors (in a cluster) to complete a given task.\nYou can have as many SparkSessions as you want in a single Spark application.\n\nHow do I create a SparkSession?\n\nYou can use the SparkSession class attribute called Builder.\nThe class attribute builder allows you to run some of the following functions:\nappName: Sets a name for the application\nmaster: URL for the Spark master (Local or Spark standalone cluster)\nenableHiveSupport: Enables Hive support, including connectivity to a persistent Hive metastore, support for Hive serdes, and Hive user-defined functions.\ngetOrCreate:Gets an existing SparkSession or, if there is no existing one, creates a new one based on the options set in this builder.\n\n\n\nCreate a SparkSession instance\n\nDefine a spark variable\nPass values to the appName and master functions\nFor the master function, we are going to use the HELK's Spark Master container (helk-spark-master)", "spark = SparkSession.builder \\\n .appName(\"Python Spark SQL basic example\") \\\n .master(\"spark://helk-spark-master:7077\") \\\n .enableHiveSupport() \\\n .getOrCreate()", "Check the SparkSession variable", "spark", "What is a Dataframe?\n\nIn Spark, a dataframe is the most common Structured API, and it is used to represent data in a table format with rows and columns.\nThink of a dataframe as a spreadsheet with headers. The difference is that one Spark Dataframe can be distributed across several computers due to its large size or high computation requirements for faster analysis.\nThe list of column names from a dataframe with its respective data types is called the schema\n\nIs a Spark Dataframe the same as a Python Pandas Dataframe?\n\nA Python dataframe sits on one computer whereas a Spark Dataframe, once again, can be distributed across several computers.\nPySpark allows the conversion from Python Pandas dataframes to Spark dataframes. 
\n\nCreate your first Dataframe\nLet's create our first dataframe by using range and toDF functions.\n* One column named numbers\n* 10 rows containing numbers from 0-9\nrange(start, end=None, step=1, numPartitions=None)\n* Create a DataFrame with single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.\ntoDF(*cols)\n* Returns a new class:DataFrame that with new specified column names", "first_df = spark.range(10).toDF(\"numbers\")\n\nfirst_df.show()", "Create another Dataframe\ncreateDataFrame(data, schema=None, samplingRatio=None, verifySchema=True)\n\nCreates a DataFrame from an RDD, a list or a pandas.DataFrame.\nWhen schema is a list of column names, the type of each column will be inferred from data.\nWhen schema is None, it will try to infer the schema (column names and types) from data, which should be an RDD of Row, or namedtuple, or dict.", "dog_data=[['Pedro','Doberman',3],['Clementine','Golden Retriever',8],['Norah','Great Dane',6]\\\n ,['Mabel','Austrailian Shepherd',1],['Bear','Maltese',4],['Bill','Great Dane',10]]\ndog_df=spark.createDataFrame(dog_data, ['name','breed','age'])\n\ndog_df.show()", "Check the Dataframe schema\n\nWe are going to do apply a concept called schema inference which lets spark takes its best guess at figuring out the schema.\nSpark reads part of the dataframe and then tries to parse the types of data in each row. \nYou can also define a strict schema when you read in data which does not let Spark guess. This is recommended for production use cases. \n\nschema\n* Returns the schema of this DataFrame as a pyspark.sql.types.StructType.", "dog_df.schema", "printSchema()\n* Prints out the schema in the tree format", "dog_df.printSchema()", "Access Dataframe Columns\nselect(*cols)\n* Projects a set of expressions and returns a new DataFrame.\nAccess Dataframes's columns by attribute (df.name):", "dog_df.select(\"name\").show()", "Access Dataframe's columns by indexing (df['name']). \n* According to Sparks documentation, the indexing form is the recommended one because it is future proof and wonโ€™t break with column names that are also attributes on the DataFrame class.", "dog_df.select(dog_df[\"name\"]).show()", "Filter Dataframe\nfilter(condition)\n* Filters rows using the given condition.\nSelect dogs that are older than 4 years", "dog_df.filter(dog_df[\"age\"] > 4).show()", "Group Dataframe\ngroupBy(*cols)\n* Groups the DataFrame using the specified columns, so we can run aggregation on them. See GroupedData for all the available aggregate functions.\ngroup dogs and count them by their age", "dog_df.groupBy(dog_df[\"age\"]).count().show()", "Run SQL queries on your Dataframe\ncreateOrReplaceTempView(name)\n* Creates or replaces a local temporary view with this DataFrame.\n* The lifetime of this temporary table is tied to the SparkSession that was used to create this DataFrame.\nRegister the current Dataframe as a SQL temporary view", "dog_df.createOrReplaceTempView(\"dogs\")\n\nsql_dog_df = spark.sql(\"SELECT * FROM dogs\")\nsql_dog_df.show()\n\nsql_dog_df = spark.sql(\"SELECT * FROM dogs WHERE name='Pedro'\")\nsql_dog_df.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
opengeostat/pygslib
pygslib/Ipython_templates/broken/vtk_tools.ipynb
mit
[ "VTK tools\nPygslib use VTK:\n\nas data format and data converting tool\nto plot in 3D\nas a library with some basic computational geometry functions, for example to know if a point is inside a surface\n\nSome of the functions in VTK were obtained or modified from Adamos Kyriakou at https://pyscience.wordpress.com/", "import pygslib \nimport numpy as np", "Functions in vtktools", "help(pygslib.vtktools)", "Load a cube defined in an stl file and plot it\nSTL is a popular mesh format included an many non-commercial and commercial software, example: Paraview, Datamine Studio, etc.", "#load the cube \nmycube=pygslib.vtktools.loadSTL('../datasets/stl/cube.stl')\n\n# see the information about this data... Note that it is an vtkPolyData\nprint mycube\n\n# Create a VTK render containing a surface (mycube)\nrenderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.50, background=(1,1,1))\n# Now we plot the render\npygslib.vtktools.vtk_show(renderer, camera_position=(-20,20,20), camera_focalpoint=(0,0,0))", "Ray casting to find intersections of a lines with the cube\nThis is basically how we plan to find points inside solid and to define blocks inside solid", "# we have a line, for example a block model row \n# defined by two points or an infinite line passing trough a dillhole sample \npSource = [-50.0, 0.0, 0.0]\npTarget = [50.0, 0.0, 0.0]\n\n# now we want to see how this looks like\npygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))\npygslib.vtktools.vtk_show(renderer) # the camera position was already defined \n\n\n# now we find the point coordinates of the intersections \nintersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)\n\nprint \"the line intersects? \", intersect==1\nprint \"the line is over the surface?\", intersect==-1\n\n# list of coordinates of the points intersecting \nprint points\n\n\n#Now we plot the intersecting points\n\n# To do this we add the points to the renderer \nfor p in points: \n pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))\n\npygslib.vtktools.vtk_show(renderer) \n ", "Test line on surface", "# we have a line, for example a block model row \n# defined by two points or an infinite line passing trough a dillhole sample \npSource = [-50.0, 5.01, 0]\npTarget = [50.0, 5.01, 0]\n\n# now we find the point coordinates of the intersections \nintersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)\n\nprint \"the line intersects? 
\", intersect==1\nprint \"the line is over the surface?\", intersect==-1\n\n# list of coordinates of the points intersecting \nprint points\n\n# now we want to see how this looks like\npygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))\n\nfor p in points: \n pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))\n\npygslib.vtktools.vtk_show(renderer) # the camera position was already defined \n\n# note that there is a tolerance of about 0.01", "Finding points", "#using same cube but generation arbitrary random points\nx = np.random.uniform(-10,10,150)\ny = np.random.uniform(-10,10,150)\nz = np.random.uniform(-10,10,150)", "Find points inside a solid", "# selecting all inside the solid\n# This two methods are equivelent but test=4 also works with open surfaces \ninside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=1)\ninside1,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=4)\nerr=inside==inside1\n#print inside, tuple(p)\nprint x[~err]\nprint y[~err]\nprint z[~err]\n\n# here we prepare to plot the solid, the x,y,z indicator and we also \n# plot the line (direction) used to ray trace\n\n# convert the data in the STL file into a renderer and then we plot it\nrenderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))\n# add indicator (r->x, g->y, b->z)\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))\n\n# add ray to see where we are pointing\npygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))\n\n\n# here we plot the points selected and non-selected in different color and size\n# add the points selected\nfor i in range(len(inside)):\n p=[x[i],y[i],z[i]]\n \n if inside[i]!=0:\n #inside\n pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))\n else:\n pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))\n\n \n#lets rotate a bit this\npygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))", "Find points over a surface", "# selecting all over a solid (test = 2) \ninside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=2)\n\n# here we prepare to plot the solid, the x,y,z indicator and we also \n# plot the line (direction) used to ray trace\n\n# convert the data in the STL file into a renderer and then we plot it\nrenderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))\n# add indicator (r->x, g->y, b->z)\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))\n\n# add ray to see where we are pointing\npygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(-p), color=(0, 0, 0))\n\n\n# here we plot the points selected and non-selected in different color and size\n# add the points selected\nfor i in range(len(inside)):\n p=[x[i],y[i],z[i]]\n \n if inside[i]!=0:\n #inside\n pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))\n else:\n pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))\n\n \n#lets rotate a bit this\npygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), 
camera_focalpoint=(0,0,0))", "Find points below a surface", "# selecting all over a solid (test = 2) \ninside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=3)\n\n# here we prepare to plot the solid, the x,y,z indicator and we also \n# plot the line (direction) used to ray trace\n\n# convert the data in the STL file into a renderer and then we plot it\nrenderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))\n# add indicator (r->x, g->y, b->z)\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))\npygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))\n\n# add ray to see where we are pointing\npygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))\n\n\n# here we plot the points selected and non-selected in different color and size\n# add the points selected\nfor i in range(len(inside)):\n p=[x[i],y[i],z[i]]\n \n if inside[i]!=0:\n #inside\n pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))\n else:\n pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))\n\n \n#lets rotate a bit this\npygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))", "Export points to a VTK file", "data = {'inside': inside}\n\npygslib.vtktools.points2vtkfile('points', x,y,z, data)", "The results can be ploted in an external viewer, for example mayavi or paraview:\n<img src=\"figures/Fig_paraview.png\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bmanubay/open-forcefield-tools
examples/substructure_linking.ipynb
mit
[ "Linking molecule fragments via reaction SMIRKS\nTo test/develop SMIRFF force fields, it is extremely helpful to be able to generate molecules containing assorted combinations of various molecular fragments or substructures (i.e. substructures containing particular SMIRKS). Here, I experiment with using reaction SMIRKS to link substructures to generate libraries of molecules.\nLibrary generation can be done by legitimate chemical reactions in order to expand libraries. However, here, I just want to link together sets of fragments with pre-specified attachment points. This can be done more simply by involving dummy elements which are present only to react. Here I'll make a B and F as caps of groups I want to link together in a library.", "# Import OpenEye stuff\nimport openeye.oechem as oechem\nimport openeye.oedepict as oedepict\nfrom IPython.display import display\nimport openeye.oeomega as oeomega\n\n# Add utility function for depiction\ndef depict(mol, width=500, height=200):\n from IPython.display import Image\n dopt = oedepict.OEPrepareDepictionOptions()\n dopt.SetDepictOrientation( oedepict.OEDepictOrientation_Horizontal)\n oedepict.OEPrepareDepiction(mol, dopt)\n opts = oedepict.OE2DMolDisplayOptions(width, height, oedepict.OEScale_AutoScale)\n disp = oedepict.OE2DMolDisplay(mol, opts)\n ofs = oechem.oeosstream()\n oedepict.OERenderMolecule(ofs, 'png', disp)\n ofs.flush()\n return Image(data = \"\".join(ofs.str()))\n", "Try out a simple reaction to link two SMILES strings", "# Test by linking two molecules - anything with a C or O followed by a B can react with a C or O followed by an F to form \n# a bond betwen the two C or O atoms, dropping the B or F. \nlibgen = oechem.OELibraryGen(\"[C,O:1][B:2].[C,O:3][F:4]>>[C,O:1][C,O:3]\") \nmol = oechem.OEGraphMol()\noechem.OESmilesToMol(mol, 'COCCB')\nlibgen.SetStartingMaterial(mol, 0)\nmol.Clear()\noechem.OESmilesToMol(mol, 'COCOCOCF')\nlibgen.SetStartingMaterial(mol, 1)\n\nmols = []\nfor product in libgen.GetProducts():\n print(\"product smiles= %s\" %oechem.OEMolToSmiles(product))\n mols.append(oechem.OEMol(product))\n \n# Depict result\ndepict(mols[0])", "Proceed to library generation\nFirst, build some sets of molecules to link, capped by our \"reactant\" groups", "# Build two small libraries of molecules for linking\n\n# Build a first set of molecules\nimport itertools\nsmileslist1 = []\n#Take all two-item combinations of entries in the list\nfor item in itertools.permutations(['C','O','c1ccccc1','CC', 'COC', 'CCOC', 'CCCOC', 'C1CC1', 'C1CCC1', 'C1CCCC1', 'C1CCCCC1','C1OCOCC1'], 2): \n smileslist1.append( ''.join(item))\n#Now cap all of them terminally with a reaction site\nsmileslist1_rxn = [ smi+'B' for smi in smileslist1]\n\n# Build a second set of molecules in the same manner\nsmileslist2 = []\nfor item in itertools.permutations(['c1ccccc1OC','c1ccccc1COC','c1ccccc1O(CO)C','C(O)C','C(OCO)', 'C1OOC1','C1OCOC1', 'C1CCCCCCOC1','CO(COCO)C', 'COCO(O)OC'],2):\n smileslist2.append( ''.join(item))\n# Cap all with reaction site\nsmileslist2_rxn = [smi + 'F' for smi in smileslist2]", "Now, generate our library", "# Build overall set of reactants\nlibgen = oechem.OELibraryGen(\"[C,O:1][B:2].[C,O:3][F:4]>>[C,O:1][C,O:3]\") \nmol = oechem.OEGraphMol()\nfor idx, smi in enumerate(smileslist1_rxn):\n oechem.OESmilesToMol(mol, smi)\n libgen.AddStartingMaterial(mol, 0)\n mol.Clear()\nfor idx, smi in enumerate(smileslist2_rxn):\n oechem.OESmilesToMol(mol, smi)\n libgen.AddStartingMaterial(mol, 1)\n mol.Clear()\n \nproducts = [ 
oechem.OEMol(product) for product in libgen.GetProducts() ]\nprint len(products)\ndepict(products[0])\n\n\ndepict(products[4])\n", "Generate a conformer for each and write out", "omega = oeomega.OEOmega()\nomega.SetMaxConfs(1)\nomega.SetStrictStereo(False)\n# First do just the first 10 molecules\n#products = products[0:10]\nofs = oechem.oemolostream('linked_substructures.oeb')\nfor oemol in products:\n omega(oemol)\n oechem.OETriposAtomNames(mol)\n oechem.OEWriteMolecule(ofs, oemol)\nofs.close()\n\n\n# Make sure I can read and write to mol2\nifs = oechem.oemolistream('linked_substructures.oeb')\nofs = oechem.oemolostream('linked_substructures_sample.mol2')\nct = 0\nmol = oechem.OEMol()\nwhile oechem.OEReadMolecule(ifs, mol):\n oechem.OEWriteMolecule(ofs, mol)\n ct+=1\n mol=oechem.OEMol()\n if ct > 10: break #Don't eat up tons of space, just test\nifs.close()\nofs.close()\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/kfp-tekton
samples/katib/early-stopping.ipynb
apache-2.0
[ "Kubeflow Pipelines with Katib component\nIn this notebook you will:\n- Create Katib Experiment using random algorithm.\n- Use median stopping rule as an early stopping algorithm.\n- Use Kubernetes Job with mxnet mnist training container as a Trial template.\n- Create Pipeline to get the optimal hyperparameters.\nReference documentation:\n- https://kubeflow.org/docs/components/katib/experiment/#random-search\n- https://kubeflow.org/docs/components/katib/early-stopping/\n- https://kubeflow.org/docs/pipelines/overview/concepts/component/\nInstall required package\nKubeflow Pipelines SDK and Kubeflow Katib SDK.", "# Update the PIP version.\n!python -m pip install --upgrade pip\n!pip install kfp==1.7.2\n!pip install kubeflow-katib==0.11.1\n!pip install kfp-tekton==1.0.0\n!pip install kubernetes==12.0.1", "Restart the Notebook kernel to use the SDK packages", "from IPython.display import display_html\ndisplay_html(\"<script>Jupyter.notebook.kernel.restart()</script>\",raw=True)", "Import required packages", "import kfp\nimport kfp.dsl as dsl\nfrom kfp import components\n\nfrom kubeflow.katib import ApiClient\nfrom kubeflow.katib import V1beta1ExperimentSpec\nfrom kubeflow.katib import V1beta1AlgorithmSpec\nfrom kubeflow.katib import V1beta1EarlyStoppingSpec\nfrom kubeflow.katib import V1beta1EarlyStoppingSetting\nfrom kubeflow.katib import V1beta1ObjectiveSpec\nfrom kubeflow.katib import V1beta1ParameterSpec\nfrom kubeflow.katib import V1beta1FeasibleSpace\nfrom kubeflow.katib import V1beta1TrialTemplate\nfrom kubeflow.katib import V1beta1TrialParameterSpec", "Define an Experiment\nYou have to create an Experiment object before deploying it. This Experiment is similar to this YAML.", "# Experiment name and namespace.\nexperiment_name = \"median-stop\"\n# for multi user deployment, please specify your own namespace instead of \"anonymous\"\nexperiment_namespace = \"anonymous\"\n\n# Trial count specification.\nmax_trial_count = 18\nmax_failed_trial_count = 3\nparallel_trial_count = 2\n\n# Objective specification.\nobjective=V1beta1ObjectiveSpec(\n type=\"maximize\",\n goal= 0.99,\n objective_metric_name=\"Validation-accuracy\",\n additional_metric_names=[\n \"Train-accuracy\"\n ]\n)\n\n# Algorithm specification.\nalgorithm=V1beta1AlgorithmSpec(\n algorithm_name=\"random\",\n)\n\n# Early Stopping specification.\nearly_stopping=V1beta1EarlyStoppingSpec(\n algorithm_name=\"medianstop\",\n algorithm_settings=[\n V1beta1EarlyStoppingSetting(\n name=\"min_trials_required\",\n value=\"2\"\n )\n ]\n)\n\n\n# Experiment search space.\n# In this example we tune learning rate, number of layer and optimizer.\n# Learning rate has bad feasible space to show more early stopped Trials.\nparameters=[\n V1beta1ParameterSpec(\n name=\"lr\",\n parameter_type=\"double\",\n feasible_space=V1beta1FeasibleSpace(\n min=\"0.01\",\n max=\"0.3\"\n ),\n ),\n V1beta1ParameterSpec(\n name=\"num-layers\",\n parameter_type=\"int\",\n feasible_space=V1beta1FeasibleSpace(\n min=\"2\",\n max=\"5\"\n ),\n ),\n V1beta1ParameterSpec(\n name=\"optimizer\",\n parameter_type=\"categorical\",\n feasible_space=V1beta1FeasibleSpace(\n list=[\n \"sgd\", \n \"adam\",\n \"ftrl\"\n ]\n ),\n ),\n]\n", "Define a Trial template\nIn this example, the Trial's Worker is the Kubernetes Job.", "# JSON template specification for the Trial's Worker Kubernetes Job.\ntrial_spec={\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"spec\": {\n \"template\": {\n \"metadata\": {\n \"annotations\": {\n \"sidecar.istio.io/inject\": \"false\"\n }\n },\n 
\"spec\": {\n \"containers\": [\n {\n \"name\": \"training-container\",\n \"image\": \"docker.io/kubeflowkatib/mxnet-mnist:v1beta1-45c5727\",\n \"command\": [\n \"python3\",\n \"/opt/mxnet-mnist/mnist.py\",\n \"--batch-size=64\",\n \"--lr=${trialParameters.learningRate}\",\n \"--num-layers=${trialParameters.numberLayers}\",\n \"--optimizer=${trialParameters.optimizer}\"\n ]\n }\n ],\n \"restartPolicy\": \"Never\"\n }\n }\n }\n}\n\n# Configure parameters for the Trial template.\n# We set the retain parameter to \"True\" to not clean-up the Trial Job's Kubernetes Pods.\ntrial_template=V1beta1TrialTemplate(\n retain=True,\n primary_container_name=\"training-container\",\n trial_parameters=[\n V1beta1TrialParameterSpec(\n name=\"learningRate\",\n description=\"Learning rate for the training model\",\n reference=\"lr\"\n ),\n V1beta1TrialParameterSpec(\n name=\"numberLayers\",\n description=\"Number of training model layers\",\n reference=\"num-layers\"\n ),\n V1beta1TrialParameterSpec(\n name=\"optimizer\",\n description=\"Training model optimizer (sdg, adam or ftrl)\",\n reference=\"optimizer\"\n ),\n ],\n trial_spec=trial_spec\n)\n ", "Define an Experiment specification\nCreate an Experiment specification from the above parameters.", "experiment_spec=V1beta1ExperimentSpec(\n max_trial_count=max_trial_count,\n max_failed_trial_count=max_failed_trial_count,\n parallel_trial_count=parallel_trial_count,\n objective=objective,\n algorithm=algorithm,\n early_stopping=early_stopping,\n parameters=parameters,\n trial_template=trial_template\n)", "Create a Pipeline using Katib component\nThe best hyperparameters are printed after Experiment is finished.\nThe Experiment is not deleted after the Pipeline is finished.", "# Get the Katib launcher.\nkatib_experiment_launcher_op = components.load_component_from_url(\n \"https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/katib-launcher/component.yaml\")\n\nPRINT_STR = \"\"\"\nname: print\ndescription: print msg\ninputs:\n - {name: message, type: JsonObject}\nimplementation:\n container:\n image: library/bash:4.4.23\n command:\n - sh\n - -c\n args:\n - |\n echo \"Best HyperParameters: $0\"\n - {inputValue: message}\n\"\"\"\n\nprint_op = components.load_component_from_text(PRINT_STR)\n\[email protected](\n name=\"launch-katib-early-stopping-experiment\",\n description=\"An example to launch Katib Experiment with early stopping\"\n)\ndef median_stop():\n \n # Katib launcher component.\n # Experiment Spec should be serialized to a valid Kubernetes object.\n op = katib_experiment_launcher_op(\n experiment_name=experiment_name,\n experiment_namespace=experiment_namespace,\n experiment_spec=ApiClient().sanitize_for_serialization(experiment_spec),\n experiment_timeout_minutes=60,\n delete_finished_experiment=False)\n \n # Output container to print the results.\n print_op(op.output)", "Run the Pipeline\nYou can check the Katib Experiment info in the Katib UI.\nIf you run this in a multi-user deployment, you need to follow the instructions\nhere: https://github.com/kubeflow/kfp-tekton/tree/master/guides/kfp-user-guide#2-upload-pipelines-using-the-kfp_tektontektonclient-in-python\nCheck the multi tenant section and create TektonClient with host and cookies arguments.\nFor example:\nTektonClient(\n host='http://&lt;Kubeflow_public_endpoint_URL&gt;/pipeline',\n cookies='authservice_session=xxxxxxx'\n )\nYou also need to specify namespace argument when calling create_run_from_pipeline_func function", "from kfp_tekton._client import 
TektonClient \n# Example code for multi user deployment:\n# TektonClient(\n# host='http://<Kubeflow_public_endpoint_URL>/pipeline',\n# cookies='authservice_session=xxxxxxx'\n# ).create_run_from_pipeline_func(median_stop, arguments={}, namespace='user namespace')\nTektonClient().create_run_from_pipeline_func(median_stop, arguments={})" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fionapigott/Data-Science-45min-Intros
python-decorators-101/python-decorators-101.ipynb
unlicense
[ "Python decorators\nJosh Montague, 2015-12\nBuilt on OS X, with IPython 3.0 on Python 2.7\nIn this session, we'll dig into some details of Python functions. The end goal will be to understand how and why you might want to create decorators with Python functions.\n\nNote: there is an Appendix at the end of this notebook that dives deeper into scope and Python namespaces. I wrote out the content because they're quite relevant to talking about decorators. But, ultimately, we only 45 minutes, and it couldn't all fit. If you're curious, take a few extra minutes to review that material, as well. \n\n\nFunctions\nA lot of this RST has to do with understanding the subtleties of Python functions. So, we're going to spend some time exploring them.\nIn Python, functions are objects\n\n[T]his means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures. (wiki)\n\nThis is not true (or at least not easy) in all programming languages. I don't have a ton of experience to back this up. But, many moons ago, I remember that Java functions only lived inside objects and classes. \nLet's take a moment to look at a relatively simple function and appreciate what it does and what we can do with it.", "def duplicator(str_arg):\n \"\"\"Create a string that is a doubling of the passed-in arg.\"\"\" \n # use the passed arg to create a larger string (double it, with a space between)\n internal_variable = ' '.join( [str_arg, str_arg] )\n \n return internal_variable\n\n# print (don't call) the function \nprint( duplicator )\n\n# equivalently (in IPython):\n#duplicator", "Remember that IPython and Jupyter will automatically (and conveniently!) call the __repr__ of an object if it is the last thing in the cell. But I'll use the print() function explicitly just to be clear. \nThis displays the string representation of the object. It usually includes: \n\nan object type (class)\nan object name\na memory location\n\nNow, let's actually call the function (which we do with use of the parentheses), and assign the return value (a string) to a new variable.", "# now *call* the function by using parens\noutput = duplicator('yo')\n\n# verify the expected behavior\nprint(output)", "Because functions are objects, they have attributes just like any other Python object.", "# the dir() built-in function displays the argument's attributes\ndir(duplicator)", "Because functions are objects, we can pass them around like any other data type. For example, we can assign them to other variables! If you occasionally still have dreams about the Enumerator, this will look familiar.", "# first, recall the normal behavior of useful_function() \nduplicator('ring')\n\n# now create a new variable and assign our function to it\nanother_duplicator = duplicator\n\n# now, we can use the *call* notation because the new variable is \n# assigned the original function\nanother_duplicator('giggity')\n\n# and we can verify that this is actually a reference to the \n# original function\nprint( \"original function: %s\" % duplicator )\nprint\nprint( \"new function: %s\" % another_duplicator )", "By looking at the memory location, we can see that the second function is just a pointer to the first function! Cool!\nFunctions inside functions\nWith an understanding of what's inside a function and what we can do with it, consider the case were we define a new function within another function. 
\nThis may seem overly complicated for a little while, but stick with me.\nIn the example below, we'll define an outer function which includes a local variable, then a local function definition. The inner function returns a string. The outer function calls the inner function, and returns the resulting value (a string).", "def speaker():\n \"\"\"\n Simply return a word (a string).\n \n Other than possibly asking 'why are you writing this simple \n function in such a complicated fashion?' this should \n hopefuly should be pretty clear.\n \"\"\"\n \n # define a local variable\n word='hello'\n \n def shout():\n \"\"\"Return a capitalized version of word.\"\"\"\n \n # though not in the innermost scope, this is in the namespace one \n # level out from here\n return word.upper()\n \n # call shout and then return the result of calling it (a string)\n return shout() \n\n# remember that the result is a string, now print it. the sequence:\n# - put word and shout in local namespace\n# - define shout()\n# - call shout()\n# - look for 'word', return it\n# - return the return value of shout()\nprint( speaker() )", "Now, this may be intuitive, but it's important to note that the inner function is not accessible outside of the outer function. The interpreter can always step out into larger (or \"more outer\") namespaces, but we can't dig deeper into smaller ones.", "try:\n # this function only exists in the local scope of the outer function\n shout()\nexcept NameError, e:\n print(e)", "Functions out of functions\nWhat if we'd like our outer function to return a function? For example, return the inner function instead of the return value of the inner function.", "def speaker_func():\n \"\"\"Similar to speaker(), but this time return the actual inner function!\"\"\"\n \n word = 'hello'\n \n def shout():\n \"\"\"Return an all-caps version of the passed word.\"\"\"\n return word.upper()\n \n # don't *call* shout(), just return it\n return shout\n\n# remember: our function returns another function \nprint( speaker_func() )", "Remember that the return value of the outer function is another function. And just like we saw earlier, we can print the function to see the name and memory location.\nNote that the name is that of the inner function. Makes sense, since that's what we returned.\nLike we said before, since this is an object, we can pass this function around and assign it to other variables.", "# this will assign to the variable new_shout, a value that is the shout function\nnew_shout = speaker_func()", "Which means we can also call it with parens, as usual.", "# which means we can *call* it\nnew_shout()", "Functions into functions\nIf functions are objects, we can just as easily pass a function into another function. 
You've probably seen this before in the context of sorting, or maybe using map:", "from operator import itemgetter\n\n# we might want to sort this by the first or second item\ntuple_list = [(1,5),(9,2),(5,4)]\n\n# itemgetter is a callable (like a function) that we pass in as an argument to sorted()\nsorted(tuple_list, key=itemgetter(1))\n\ndef tuple_add(tup):\n \"\"\"Sum the items in a tuple.\"\"\"\n return sum(tup)\n\n# now we can map the tuple_add() function across the tuple_list iterable.\n# note that we're passing a function as an argument!\nmap(tuple_add, tuple_list) ", "If we can pass functions into and out of other functions, then I propose that we can extend or modify the behavior of a function without actually editing the original function!\nDecorators\n๐ŸŽ‰๐Ÿ’ฅ๐ŸŽ‰๐Ÿ’ฅ๐ŸŽ‰๐Ÿ’ฅ๐ŸŽ‰๐Ÿ’ฅ๐ŸŽ‰๐Ÿ’ฅ\nFor example, say there's some previously-defined function in and you'd like it to be more verbose. For now, let's just assume that printing a bunch of information to stdout is our goal. \nBelow, we define a function verbose() that takes another function as an argument. It does other tasks both before and after actually calling the passed-in function.", "def verbose(func):\n \"\"\"Add some marginally annoying verbosity to the passed func.\"\"\"\n \n def inner():\n print(\"heeeeey everyone, I'm about to do a thing!\")\n print(\"hey hey hey, I'm about to call a function called: {}\".format(func.__name__))\n print\n # now call (and print) the passed function\n print func()\n print\n print(\"whoa, did everyone see that thing I just did?! SICK!!\")\n \n return inner", "Now, imagine we have a function that we wish had more of this type of \"logging.\" But, we don't want to jump in and add a bunch of code to the original function.", "# here's our original function (that we don't want to modify) \ndef say_hi():\n \"\"\"Return 'hi'.\"\"\"\n return '--> hi. <--'\n\n# understand the original behavior of the function\nsay_hi()", "Instead, we pass the original function as an arg to our verbose function. Remember that this returns the inner function, so we can assign it and then call it.", "# this is now a function...\nverbose_say_hi = verbose(say_hi)\n\n# which we can call...\nverbose_say_hi()", "Looking at the output, we can see that when we called verbose_say_hi(), all of the code in it ran:\n\ntwo print statements\nthen the passed function say_hi() was called \nit's return value was printed \nfinally, there was some other printing defined in the inner function\n\nWe'd now say that verbose_say_hi() is a decorated version of say_hi(). And, correspondingly, that verbose() is our decorator.\n\nA decorator is a callable that takes a function as an argument and returns a function (probably a modified version of the original function).\n\nNow, you may also decide that the modified version of the function is the only version you want around. And, further, you don't want to change any other code that may depend on this. In that case, you want to overwrite the namespace value for the original function!", "# this will clobber the existing namespace value (the original function def). \n# in it's place we have the verbose version!\nsay_hi = verbose(say_hi)\n\nsay_hi()", "Uneditable source code\nOne use-case where this technique can be useful is when you need to use an existing base of code that you can't edit. 
There's an existing library that defines classes and methods that are aligned with your needs, but you need a slight variation on them.\nImagine there is a library called (creatively) uneditable_lib that implements a Coordinate class (a point in two-dimensional space), and an add() method. The add() method allows you to add the vectors of two Coordinates together and returns a new Coordinate object. It has great documentation and you know the source Python source code looks like this:", "! cat _uneditable_lib.py", "BUT\nImagine you don't actually have the Python source, you have the compiled binary. Try opening this file in vi and see how it looks.", "! ls | grep .pyc\n\n# you can still *use* the compiled code\nfrom uneditable_lib import Coordinate, add\n\n# make a couple of coordinates using the existing library\ncoord_1 = Coordinate(x=100, y=200)\ncoord_2 = Coordinate(x=-500, y=400)\n\nprint( coord_1 )\n\nprint( add(coord_1, coord_2) )", "But, imagine that for our particular use-case, we need to confine the resulting coordinates to the first quadrant (that is, x &gt; 0 and y &gt; 0). We want any negative component in the coordinates to just be truncated to zero.\nWe can't edit the source code, but we can decorate (and modify) it!", "def coordinate_decorator(func): \n \"\"\"Decorates the pre-built source code for Coordinates.\n \n We need the resulting coordinates to only exist in the \n first quadrant, so we'll truncate negative values to zero.\n \"\"\"\n def checker(a, b): \n \"\"\"Enforces first-quadrant coordinates.\"\"\"\n ret = func(a, b)\n \n # check the result and make sure we're still in the \n # first quadrant at the end [ that is, x and y > 0 ]\n if ret.x < 0 or ret.y < 0:\n ret = Coordinate(ret.x if ret.x > 0 else 0, \n ret.y if ret.y > 0 else 0\n ) \n return ret\n \n return checker", "We can decorate the preexisting add() function with our new wrapper. 
And since we may be using other code from uneditable_lib with an API that expects the function to still be called add(), we can just overwrite that namespace variable.", "# first we decorate the existing function\nadd = coordinate_decorator(add)\n\n# then we can call it as before\nprint( add(coord_1, coord_2) )", "And, we now have a truncated Coordinate that lives in the first quadrant.", "from IPython.display import Image\nImage(url='http://i.giphy.com/8VrtCswiLDNnO.gif')", "If we are running out of time, this is an ok place to wrap up.\nExamples\nHere are some real examples you might run across in the wild:\n\nFlask (web framework) uses decorators really well\[email protected] is a decorator that lets you decorate an arbitrary Python function and turn it into a URL path.\n@login_required is a decorator that lets your function define the appropriate authentication.\n\n\nFabric (ops / remote server tasks) includes a number of \"helper decorators\" for task and hostname management.\n\nHere are some things we didn't cover\nIf you go home tonight and can't possibly wait to learn more about decorators, here are the next things to look up:\n\npassing arguments to a decorator \[email protected]\nimplementing a decorator as a class\n\nIf there is sufficient interest in a Decorators, Part Deux, those would be good starters.\nTHE END\n\nIs there still time?!\nIf so, here are a couple of other useful things worth saying, quickly...\nDecorating a function at definition (with @)\nYou might still want to use a decorator to modify a function that you wrote in your own code.\nYou might ask \"But if we're already writing the code, why not just make the function do what we want in the first place?\" Valid question. \nOne place where this comes up is in practicing DRY (Don't Repeat Yourself) software engineering practices. If an identical block of logic is to be used in many places, that code should ideally be written in only one place. \nIn our case, we could imagine making a bunch of different functions more verbose. Instead of adding the verbosity (print statements) to each of the functions, we should define that once and then decorate the other functions.\nAnother nice example is making your code easier to understand by separating necessary operational logic from the business logic.\nThere's a nice shorthand - some syntactic sugar - for this kind of statement. To illustrate it, let's just use a variation on a method from earlier. First, see how the original function behaves:", "def say_bye():\n \"\"\"Return 'bye'.\"\"\"\n return '--> bye. <--'\n\nsay_bye()", "Remember the verbose() decorator that we already created? If this function (and perhaps others) should be made verbose at the time they're defined, we can apply the decorator right then and there using the @ shorthand:", "@verbose\ndef say_bye():\n \"\"\"Return 'bye'.\"\"\"\n return '--> bye. <--'\n\nsay_bye()\n\nImage(url='http://i.giphy.com/piupi6AXoUgTe.gif')", "But that shouldn't actually blow your mind. Based on our discussion before, you can probably guess that the decorator notation is just shorthand for: \nsay_bye = verbose( say_bye )\nOne place where this shorthand can come in particularly handy is when you need to stack a bunch of decorators. 
In place of nested decorators like this:\npython\nmy_func = do_thing_a( add_numbers( subtract( verify( my_func ))))\nWe can write this as:\npython\n@do_thing_a\n@add_numbers\n@subtract\n@verify\ndef my_func():\n # something useful happens here\nNote that the order matters!\n\nOk, thank you, please come again.\nTHE END AGAIN\n\nOk, final round, I promise. \nAppendix\nThis is material that I originally intended to include in this RST (because it's relevant), but ultimately cut for clarity. You can come back and revisit it any time.\nScopes and namespaces\nRoughly speaking, scope and namespace are the reason you can type some code (like variable_1 = 'dog') and then transparently use variable_1 later in your code. Sounds obvious, but easy to take for granted!\nThe concepts of scope and namespace in Python are pretty interesting and complex. Some time when you're bored and want to learn some details, have a read of this nice explainer or the official docs on the Python Execution Model. \nA short way to think about them is the following:\n\nA namespace is a mapping from (variable) names to values; think about it like a Python dictionary, where our code can look up the keys (the variable names) and then use the corresponding values. You will generally have many namespaces in your code, they are usually nested (sort of like an onion), and they can even have identical keys (variable names). \nThe scope (at a particular location in the code) defines in which namespace we look for variables (dictionary keys) when our code is executing.\n\nWhile this RST isn't explicitly about scope, understanding these concepts will make it easier to read the code later on. Let's look at some examples.\nThere are two built-in functions that can aid in exploring the namespace at various points in your code: globals() and locals() return a dictionary of the names and values in their respective scope. \nSince the namespaces in IPython are often huge, let's use IPython's bash magic to call out to a normal Python session to test how globals() works:", "# -c option starts a new interpreter session in a subshell and evaluates the code in quotes. \n# here, we just assign the value 3 to the variable x and print the global namespace\n! python -c 'x=3; print( globals() )'", "Note that there are a bunch of other dunder names that are in the global namespace. In particular, note that '__name__' = '__main__' because we ran this code from the command line (a comparison that you've made many times in the past!). And you can see the variable x that we assigned the value of 3. \nWe can also look at the namespace in a more local scope with the locals() function. Inside the body of a function, the local namespace is limited to those variables defined within the function.", "# this var is defined at the \"outermost\" level of this code block\nz = 10\n\ndef printer(x):\n \"\"\"Print some things to stdout.\"\"\"\n \n # create a new var within the scope of this function\n animal = 'baboon'\n \n # ask about the namespace of the inner-most scope, \"local\" scope\n print('local namespace: {}\\n'.format(locals()))\n \n # now, what about this var, which is defined *outside* the function?\n print('variable defined *outside* the function: {}'.format(z))\n\nprinter(17)", "First, you can see that when our scope is 'inside the function', the namespace is very small. It's the local variables defined within the function, including the arg we passed the function. 
\nBut, you can also see that we can still \"see\" the variable z, which was defined outside the function. This is because even though z doesn't exist in the local namespace, this is just the \"innermost\" of a series of nested namespaces. When we failed to find z in locals(), the interpreter steps \"out\" a layer, and looks for a namespace key (variable name) that's defined outside of the function. If we look through this (and any larger) namespace and still fail to find a key (variable name) for z, the interpreter will raise a NameError.\nWhile the interpreter will always continue looking in larger or more outer scopes, it can't do the opposite. Since y is created and assigned within the scope of our function, it goes \"out of scope\" as soon as the function returns. Local variables defined within the scope of a function are only accessible from that same scope - inside the function.", "try:\n # remember that this var was created and assigned only within the function\n animal\nexcept NameError, e:\n print(e)", "Closures\nThis is all relevant, because part of the mechanism behind a decorator is the concept of a function closure. A function closure captures the enclosing state (namespace) at the time a non-global function is defined.\nTo see an example, consider the following code:", "def outer(x):\n def inner():\n print x\n return inner", "We saw earlier, that the variable x isn't directly accessible outside of the function outer() because it's created within the scope of that function. But, Python's function closures mean that because inner() is not defined in the global scope, it keeps track of the surrounding namespace wherein it was defined. We can verify this by inspecting an example object:", "o = outer(7)\n\no()\n\ntry: \n x\nexcept NameError, e:\n print(e)\n\nprint( dir(o) ) \n\nprint( o.func_closure ) ", "And, there in the repr of the object's func_closure attribute, we can see there is an int still stored! This is the value that we passed in during the creation of the function.\nLinks\n\nDecorators PEP\na big list of decorators (only marginally useful)\nSimeon Franklin's article\nThe Code Ship\nincredible SO explanation\nanother SO response\n\nfin" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dianafprieto/SS_2017
.ipynb_checkpoints/05_NB_VTKPython_Scalar-checkpoint.ipynb
mit
[ "<img src=\"imgs/header.png\">\nVisualization techniques for scalar fields in VTK + Python\nReminder: The VTK pipeline\n<img src=\"imgs/vtk_pipeline.png\", align=left>\n$~$\nVisualizing data within a recilinear grid\nThe following code snippets show step by step the how to create a pipeline to visualize the outline of a rectilinear grid.", "%gui qt\nimport vtk\nfrom vtkviewer import SimpleVtkViewer\n#help(vtk.vtkRectilinearGridReader())", "1. Data input (source)", "# do not forget to call \"Update()\" at the end of the reader\nrectGridReader = vtk.vtkRectilinearGridReader()\nrectGridReader.SetFileName(\"data/jet4_0.500.vtk\")\nrectGridReader.Update()", "2. Filters\n\nFilter 1: vtkRectilinearGridOutlineFilter() creates wireframe outline for a rectilinear grid.", "rectGridOutline = vtk.vtkRectilinearGridOutlineFilter()\nrectGridOutline.SetInputData(rectGridReader.GetOutput())", "3. Mappers\n\nMapper: vtkPolyDataMapper() maps vtkPolyData to graphics primitives.", "rectGridOutlineMapper = vtk.vtkPolyDataMapper()\nrectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())", "4. Actors", "outlineActor = vtk.vtkActor()\noutlineActor.SetMapper(rectGridOutlineMapper)\noutlineActor.GetProperty().SetColor(0, 0, 0)", "5. Renderers and Windows", "#Option 1: Default vtk render window\nrenderer = vtk.vtkRenderer()\nrenderer.SetBackground(0.5, 0.5, 0.5)\nrenderer.AddActor(outlineActor)\nrenderer.ResetCamera()\n\nrenderWindow = vtk.vtkRenderWindow()\nrenderWindow.AddRenderer(renderer)\nrenderWindow.SetSize(500, 500)\nrenderWindow.Render()\n\niren = vtk.vtkRenderWindowInteractor()\niren.SetRenderWindow(renderWindow)\niren.Start()\n\n#Option 2: Using the vtk-viewer for Jupyter to interactively modify the pipeline\nvtkSimpleWin = SimpleVtkViewer()\nvtkSimpleWin.resize(1000,800)\nvtkSimpleWin.hide_axes()\n\nvtkSimpleWin.add_actor(outlineActor)\nvtkSimpleWin.add_actor(gridGeomActor)\n\nvtkSimpleWin.ren.SetBackground(0.5, 0.5, 0.5)\nvtkSimpleWin.ren.ResetCamera()", "<font color='red'>Trick:</font> The autocomplete functionality in Jupyter is available by pressing the Tab button.\nUseful Resources\nhttp://www.vtk.org/Wiki/VTK/Examples/Python" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
smsaladi/murraylab_tools
examples/Echo Setup Usage Examples.ipynb
mit
[ "Welcome to the Echo package\nThe \"echo\" subpackage of murraylab_tools is designed to automate the setup of reactions (mostly TX-TL reactions) with the Echo. The EchoRun class can produce picklists for the Echo, as well as instructions for loading a source plate to go with those instructions, when appropriate. \nThere are currently three main modes of operations of the Echo package, distinguished mainly by their inputs:\n- TX-TL Setup Spreadsheet: Takes a pair of CSV spreadsheets saved from a TX-TL setup spreadsheet (version 2.X). Use this for most TX-TL experiments. \n- Programmatic Setup: You can build a TX-TL reaction (or, with a bit more difficulty, a non-TX-TL reaction) programmatically, without using a setup spreadsheet. There are a couple of functions for doing this, which can be combined:\n - build_dilution_series: Takes two materials and an array of concentrations for each, and builds a 2D dilution series out of the two. Can also be used to set up 1D dilution series (by setting one of the materials to a dummy material and giving it the concentration list [0]). \n - add_material_to_block: Adds a single ingredient to all wells in a rectangular block on the destination plate, at a fixed concentration.\n - add_material_to_well: Like add_material_to_block, but to a single well.\n\nAssociation Spreadsheet: Takes one or more spreadsheets describing the contents of an Echo source plate, plus a simple spreadsheet describing final concentrations of the materials of those source plates in each destination. Use this for non-TX-TL experiments, or if you have a TX-TL experiment with more source materials than can be handled in a TX-TL setup spreadsheet.\n\nA word of warning: The quick start examples will help you jump right into setting up an experiment, but they make a number of assumptions about your experiment. Some things you'll want to check before running your own:\n- Reaction Size: Default is 5 ยตL.\n- Buffer/Extract Fractions: Defaults are 0.42/0.33 (typical for French Press extracts).\n- Buffer/Extract Aliquot Size: Defaults are 30ยตL/37ยตL. \n- \"Master Mix Excess: For accounting for pipetting loss. Default is 1.1 (10% excess).\n- Master Mix Composition: Default is buffer and extract only. Really only relevant for dilution series.\n- Source Plate Type: Default is 384_PP. You probably want this.\n- Source Plate Material: Default is AQ_BP (buffer-like liquids).\n- Destination Plate Type: Default is \"Nunc_384_black_glassbottom\".\n- Destination Plate Size: The Echo package has no knowledge of the destination plate. There is no check to keep you from defining picks off the edge of the destination plate.\n- Controls: TX-TL experiments come with a negative control. Most TX-TL setup spreadsheets also define a positive control.\n- Dead Volume/Max Volume: Default dead volume is 21 ยตL, including loss to meniscus. Default max volume is 65 ยตL. \n- Volume/Aliquot of Buffer and Extract: Default is 30 ยตL extract/aliquot and 37 ยตL buffer/aliquot.\nAdditionally, be aware that the TX-TL setup scripts keep track of which source plate wells they've used in a .dat file. If you run those experiments repeatedly, they'll fill up the file and eventually error out when they run out of wells on the plate. \nYou can read more about how the echo package works in the next section, \"How it all works\", or you can skip to the example usage sections below that. 
For more information on tweaking settings, see Tweaking Settings at the end of this notebook.\nHow it all works\nEvery project begins with an EchoRun object. This object holds (nearly) all of the information about an experiment, and is ultimately responsible for coordinating plates and materials, and for actually writing picklists and experimental protocols. \nMost EchoRun objects need a SourcePlate object. This object is responsible for keeping track of which wells have been used, and for assigning new wells on the source plate to materials that you'll want to transfer. It will try to assign wells in a way that keeps the same material in a contiguous block, buffered on either side by an empty well for ease of pipetting. SourcePlate objects are also typically associated with a .dat tracking file that lists which wells have been used on the plate. That way, you can use the same source plate over many experiments without having to manually program in which wells are forbidden. The SourcePlate object will read this file to learn what it has available to it, and will automatically write back to the same file whenever the EchoRun object controlling it writes a picklist. \nMaterials (DNA, chemicals, water, TX-TL, etc) on a source plate are represented by EchoSourceMaterial objects. An EchoSourceMaterial has a concentration, which is always stored in nM. However, because dsDNA is usually measured in ng/uL, and dsDNA is one of the most common materials used, EchoSourceMaterials by default assume that the concentration set in their constructors is in units of ng/uL. Any EchoSourceMaterial with length > 0 will convert that ng/uL concentration into an internal nM concentration. Only if the length of the EchoSourceMaterial is set to 0 will it use its set concentration value directly. This convention appears in several other contexts in the echo package.\nAn EchoSourceMaterial is always associated with a SourcePlate. The EchoSourceMaterial keeps track of how much of itself has been used, and will request that wells be allocated on the SourcePlate. \nEchoRun objects can also have a MasterMix object, which defines what materials will be put into the master mix, and at what concentrations. Association lists don't currently support master mixes, and TX-TL reactions built from CSVs will always pull their master mix information from the CSV, so this is currently only useful if you're using the dilution series function(s).\nTo generate an Echo picklist, you will generally need to do three things:\n- Building an EchoRun object. This usually just means setting the name of a source plate tracking file\n- Describe the experiment. This almost always means one or more calls to build_picklist_from_txtl_setup_csvs, build_dilution_series, or build_picklist_from_association_spreadsheet from your EchoRun object. What this entails depends on your experiment; TX-TL setup with a setup spreadsheet and association file setups are almost completely defined by external files, while 2D dilution series experiments require some definition in your script. \n- Write the picklist, which is done with a call to write_picklist from your EchoRun object. \nFor more information, see the examples below.\nTX-TL Setup Spreadsheet\nQuick Start Example\nThe following creates an Echo picklist and experimental protocol for a variety of fluorescent protein mixes described in \"inputs/TX-TL_setup_example.xlsx\".", "import murraylab_tools.echo as mt_echo\nimport os.path\n\n# Relevant input and output files. 
Check these out for examples of input file format.\ntxtl_inputs = os.path.join(\"txtl_setup\", \"inputs\")\ntxtl_outputs = os.path.join(\"txtl_setup\", \"outputs\")\nstock_file = os.path.join(txtl_inputs, \"TX-TL_setup_example_stocks.csv\") # Source materials\nrecipe_file = os.path.join(txtl_inputs, \"TX-TL_setup_example_recipe.csv\") # Experimental setup \nplate_file = os.path.join(txtl_inputs, \"TX-TL_setup_example_plate.dat\") # Keeps track of wells used \noutput_name = os.path.join(txtl_outputs, \"TX-TL_setup_example\") # Output (both a picklist and a\n # small protocol for building the\n # source plate)\n\n# Build an EchoRun object\ntxtl_plate = mt_echo.SourcePlate(filename = plate_file)\ntxtl_echo_calculator = mt_echo.EchoRun(plate = txtl_plate)\n\n# Describe the experiment\ntxtl_echo_calculator.build_picklist_from_txtl_setup_csvs(stock_file, recipe_file)\n# Write results\ntxtl_echo_calculator.write_picklist(output_name)", "### More Information\nThe build_picklist_from_txtl_setup_csvs function is used for creating a picklist from a TX-TL setup spreadsheet (version 2.0 or later -- spreadsheets from before late 2016 will not work). You will need to feed it the names of two CSV files produced from the \"recipe\" sheet (the first sheet, containing explicit pipetting directions) and the \"stocks\" sheet (the second sheet, which details the concentrations of most of the materials used in the experiment). The standard workflow looks like:\n\nEdit your TX-TL setup Excel document (modifying the \"Stocks\" sheet and the \"Layout\" sheet, and the few cells shaded purple in the \"Recipe\" sheet).\nSave the \"Recipe\" and \"Stocks\" sheets from the xls/xlsx file as CSVs.\nRun build_picklist_from_txtl_setup_csvs, passing it the names of the two CSVs you just saved.\n\nReaction size and master mix excess ratio are read from the recipe spreadsheet. You probably should not mess with those settings \nNote that you must give each reaction a plate location, i.e. \"D4\" or \"E07\". In the Excel spreadsheet, plate locations can be added in the \"Layout\" tab, and will be automatically propagated to the \"Recipe\" tab. \nbuild_picklist_from_txtl_setup_csvs requires that the EchoRun object have a source plate object associated with a source plate file. It will automatically assign wells to that source plate for all the materials required to run that reaction.\nProgrammatic construction of TX-TL Reactions\nQuick Start Example\nThe following creates an Echo picklist and simple experimental protcol for a two-way dilution series of a reporter plasmid and inducer.", "import murraylab_tools.echo as mt_echo\nimport os.path\n\n# Relevant input and output files. 
Check these out for examples of input file format.\ndilution_inputs = os.path.join(\"2D_dilution_series\", \"inputs\")\ndilution_outputs = os.path.join(\"2D_dilution_series\", \"outputs\")\nplate_file = os.path.join(dilution_inputs, \"dilution_setup_example_plate.dat\") # Keeps track of wells used\noutput_name = os.path.join(dilution_outputs, \"dilution_setup_example\") # Output (both a picklist and a\n # small protocol for building the\n # source plate)\n\n# Build an EchoRun object\ndilution_plate = mt_echo.SourcePlate(filename = plate_file)\ndefault_master_mix = mt_echo.MasterMix(plate = dilution_plate)\ndilution_echo_calculator = mt_echo.EchoRun(plate = dilution_plate, master_mix = default_master_mix)\n\n# Set final concentrations of two materials\ngfp_final_concentrations = range(0,6,1) # in nM\natc_final_concentrations = range(0,100,10) # in ng/uL\n\n# Define reporter plasmid material\ngfp_conc = 294 # Concentration in ng/uL\ngfp_len = 3202 # Size of DNA in bp\ngfp = mt_echo.EchoSourceMaterial('GFP Plasmid', gfp_conc, gfp_len, dilution_plate)\n\n# Define inducer material\natc_conc = 1000 # Concentration in ng/uL (important that this matches the units of the final concentrations)\natc_len = 0 # This isn't dsDNA, so it has 0 length. \natc = mt_echo.EchoSourceMaterial(\"ATc\", atc_conc, atc_len, dilution_plate)\n\n# Plan out the experiment\nstarting_well = \"D2\"\ndilution_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,\n atc_final_concentrations, starting_well)\n# Write results\ndilution_echo_calculator.write_picklist(output_name)", "Note the warnings -- a lot of mistakes you might make will cause you to pipette 0 nL at a time, so the code will warn you if you do so. In this case, those 0 volume pipetting steps are normal -- you just added a material to 0 concentration. Similar warnings will appear if you under-fill a reaction.\nYou can also manually add a single material to a well (add_material_to_well) or a rectangular block of wells (add_material_to_block) by specifying the material (an EchoSourceMaterial object), a final concentration, and a location. For example, if we set up a dilution series using the variables above...", "# Build an EchoRun object\ndilution_plate = mt_echo.SourcePlate(filename = plate_file)\ndefault_master_mix = mt_echo.MasterMix(plate = dilution_plate)\ndilution_echo_calculator = mt_echo.EchoRun(plate = dilution_plate, master_mix = default_master_mix)\n\n# Plan out a dilution series\nstarting_well = \"D2\"\ndilution_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,\n atc_final_concentrations, starting_well)", "...then we can add, say, bananas, to a 2x2 square in the top-left corner of the reaction.", "# Bananas at 100 nM\nbananas = mt_echo.EchoSourceMaterial('Cavendish', 100, 0, None)\ndilution_echo_calculator.add_material_to_block(bananas, 3, 'D2', 'E3')", "You can also add to a single well, if you really need to.", "old_bananas = mt_echo.EchoSourceMaterial('Gros Michel', 100, 0, None)\ndilution_echo_calculator.add_material_to_well(old_bananas, 3, 'F5')", "More Information\nThe build_dilution_series function of EchoRun is useful for quickly building a grid of dilutions of two materials, one on each axis. This is useful for double titrations of, for example, plasmid vs. inducer, or titration of two inputs. If you want to do anything more complex, you'll probably need to move to a TX-TL setup spreadsheet. 
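As mentioned in the overview above, the same call can also produce a 1D dilution series by giving one of the two materials the single concentration list [0]. A minimal sketch, reusing gfp, atc and gfp_final_concentrations from the quick start above:\npython\n# titrate only the GFP plasmid; ATc is held at 0 in every well\ndilution_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations, [0], 'D2')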
\nThis function will always output reactions in solid blocks, with the upper-left-most well specified by the starting well argument of build_dilution_series (last argument). The function will also add a negative control well one row below the last row of the dilution series block, aligned with its first column. You will not get a positive control reaction -- you'll have to add that yourself, if you want it. \nNote that you must manually define source materials for 2D dilution setup. An EchoSourceMaterial object has four attributes -- a name, a concentration, a length, and a plate object. \n* The name can be whatever you want, but note that the names \"water\" and \"txtl_mm\" are reserved for water and TX-TL master mix, respectively. If you make one of your materials \"water\" or \"txtl_mm\", be aware that they're going to be assumed to actually be water and TX-TL master mix. \n* Concentration and length attributes of an EchoSourceMaterial follow specific unit conventions. In brief, if the material is dsDNA, length is the number of base pairs and concentration is in units of ng/ยตL; otherwise, length is 0 and concentration is in units of nM. See \"How it all works\" above for more details. \n* The EchoSourcePlate object should be the same EchoSourcePlate associated with the EchoRun object you're going to use. \nIf you want to set up multiple dilutions series, you can call build_dilution_series multiple times on the same EchoRun object. All dilution series will be put on the same plate, and source wells for materials with the same names (including the master mix and water) will be combined. Just make sure to start each dilution series in a different well! \nbuild_dilution_series requires that the EchoRun object have a source plate object associated with a source plate file. It will automatically assign wells to that source plate for all the materials required to run that reaction.\nAssociation Spreadsheet\nQuick Start Example\nThe following creates an Echo picklist for an automated PCR setup with three sets of primers to be applied individually to three different plasmids, as defined in a pair of CSV files (one defining the source plate, one describing what should go in the destination plates).", "import murraylab_tools.echo as mt_echo\nimport os.path\n\n# Relevant input and output files. 
Check these out for examples of input file format.\nassoc_inputs = os.path.join(\"association_list\", \"inputs\")\nassoc_outputs = os.path.join(\"association_list\", \"outputs\") \nstock_file = os.path.join(assoc_inputs, 'association_source_sheet.csv')\nassoc_file = os.path.join(assoc_inputs, 'association_final_sheet.csv')\nassoc_name = os.path.join(assoc_outputs, 'association_example')\n\n# Build an EchoRun object\nassoc_echo_calculator = mt_echo.EchoRun()\nassoc_echo_calculator.rxn_vol = 50000 # PCR is large-volume!\n\n# Define which column of the source file is what\nname_col = 'B'\nconc_col = 'C'\nlen_col = 'D'\nwell_col = 'A'\nplate_col = 'E'\n\n# Define the source plate based on the stock file.\nassoc_echo_calculator.load_source_plate(stock_file, name_col, conc_col,\n len_col, well_col, plate_col)\n\n# Build a protocol, based on the association file.\nassoc_echo_calculator.build_picklist_from_association_spreadsheet(assoc_file,\n well_col)\n\n# Write the picklist\nassoc_echo_calculator.write_picklist(assoc_name)", "More Information\nThe build_picklist_from_association_spreadsheet function is used for arbitrary mappings of source plate wells to destination wells, using 1) a spreadsheet describing the contents of each source plate, and 2) a second spreadsheet describing what materials should be put in what wells, and at what concentration. You must always set up a source plate first. This should be done by calling load_source_plate with the name of the source plate spreadsheet and information about its organization. Alternatively, you could build your own manually. That is not recommended. \nThe first 'source' spreadsheet (describing the source plate) is a CSV-format spreadsheet with, at minimum, columns with the following information. You can add any number of additional columns to the source plate spreadsheet, which will be ignored by this function. This is the spreadsheet read in with the load_source_plate command.\n* Location: This is the well number of the material, i.e. \"C4\" or \"E08\". If the same material is found in multiple wells, it will need one row for each well.\n* Name: Brief string describing the material. This name will be used in the recipe output of EchoRun. It can be any string, but be aware that the names \"water\" and \"txtl_mm\" are reserved for describing water and TX-TL master mix, respectively. For this function, that won't matter, but if you try to combine other setup commands with this one, you should avoid using \"water\" and \"txtl_mm\" as material names. \n* Concentration: If the material is dsDNA, this should be the concentration of the DNA in ng/ยตL. Otherwise, it should be the concentration of the material in whatever units you want (nM recommended).\n* Length: If the material is dsDNA, this should be the number of base pairs in the DNA. Otherwise, length should be 0. This is important for correct unit usage.\n* Plate: A single source plate spreadsheet can contain materials from different source plates, so a column is required to determine which plate the material is coming from. Name of the source plate. Put a number N here, and the plate will be auto-named \"Plate[N]\" (recommended usage). Alternatively, you can give the plate a custom name.\nThe second 'association' spreadsheet (describing the what materials go together) is also a CSV-format spreadsheet. This spreadsheet determines what goes into the destination well. One column of the association spreadsheet determines that row's well on the destination plate. 
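As a purely hypothetical illustration (these material names and concentrations are made up, not taken from the example files), a single row might look like D2,pTet-GFP,5,IPTG,100: the destination well in the well column, followed by the name/concentration pairs described next.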
The EchoRun object will scan through every column, ignoring the well column, taking pairs of columns from left to right. There can be any number of pairs of columns; each one will cause one material to be moved from the source plate to the destination plate. The first clumn in each pair holds the name of a material. This name must exactly match one of the material names listed in the source spreadsheet, and determines where material will be taken from. The second column in each pair describes the final concentration of that material. If the material is dsDNA (has non-zero length), the units of final concentration are assumed to be nM. Otherwise, the units of final concentration are the same as the units of concentration used for that material in the source plate.\nNote that unlike the other two experimental settings of EchoRun, build_picklist_from_association_spreadsheet does not require that its EchoRun object be associated with a source plate file or SourcePlate object prior to the function being called -- a new SourcePlate object will be manufactured from the input spreadsheet when you call load_source_plate. \nTweaking Settings", "import murraylab_tools.echo as mt_echo\nimport os.path\n\nplate_file = os.path.join(\"tweaking\", \"plate_file_example.dat\")\n\n# Build an EchoRun object\nexample_plate = mt_echo.SourcePlate(filename = plate_file)\nexample_master_mix = mt_echo.MasterMix(example_plate)\nexample_echo_calculator = mt_echo.EchoRun(plate = example_plate, master_mix = example_master_mix)", "To change the reaction volume:\nReaction volume is a property of an EchoRun object.", "example_echo_calculator.rxn_vol = 10.5 * 1e3 # Volume IN nL!!!", "Make sure to run this before running build_picklist_from_association_spreadsheet or build_dilution_series. You almost certainly shouldn't do this at all when using build_picklist_from_txtl_setup_csvs, because that function will automatically extract a reaction volume from the setup spreadsheet.\nTo change the master mix composition/extract fraction:\nThis is really only relevant for the 2D dilution series TX-TL setup (build_dilution_series) -- TX-TL setup from a spreadsheet pulls the extract fraction from the spreadsheet, and the association spreadsheet method has no knowledge of TX-TL. Accordingly, the extract fraction is an optional argument in build_dilution_series. To modify the master mix of a reaction, you'll have to set its MasterMix object, which is most safely done with the add_master_mix function. \nChanging the buffer/extract composition can be accomplished in the constructor of the new MasterMix object. Be sure to add the new MasterMix object before calling write_picklist! Also be sure that the new MasterMix object has the same reaction volume (rxn_vol) as the EchoRun object. Otherwise you'll get an error.", "new_master_mix = mt_echo.MasterMix(example_plate, extract_fraction = 0.40, \n rxn_vol = example_echo_calculator.rxn_vol)\nexample_echo_calculator.add_master_mix(new_master_mix)\nexample_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,\n atc_final_concentrations, starting_well)", "You can also add arbitrary components to the master mix. 
For example, the following code ads the dye DFHBI-1T to every well at a final concentration of 10 ยตM, from a 2 mM stock:", "new_master_mix = mt_echo.MasterMix(example_plate, \n rxn_vol = example_echo_calculator.rxn_vol)\ndfhbi = mt_echo.EchoSourceMaterial(\"DFHBI-1T\", 2000, 0, example_plate)\nnew_master_mix.add_material(dfhbi, 10)\nexample_echo_calculator.add_master_mix(new_master_mix)\nexample_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,\n atc_final_concentrations, starting_well)", "To change buffer/extract aliquot size:\nBuffer and extract aliquot size are controlled by the MasterMix object. Like extract percentage, aliquot sizes can be changed in the MasterMix's constructor.\nNote that both aliquot sizes are in units of nL, not uL.", "new_master_mix = mt_echo.MasterMix(example_plate,\n extract_per_aliquot = 50000,\n buffer_per_aliquot = 70000,\n rxn_vol = example_echo_calculator.rxn_vol)\nexample_echo_calculator.add_master_mix(new_master_mix)\nexample_echo_calculator.build_dilution_series(gfp, atc, gfp_final_concentrations,\n atc_final_concentrations, starting_well)\n# ...\n# calculator_with_odd_destination.write_picklist(...)\n#...", "To run a (dilution series) reaction without master mix:\nYou can also make a dilution series without any master mix by either creating a new EchoRun object with None for its MasterMix, or by removing the MasterMix on an existing EchoRun object with a call to remove_master_mix.", "dye1 = mt_echo.EchoSourceMaterial('A Dye', 100, 0, dilution_plate)\ndye2 = mt_echo.EchoSourceMaterial('Another Dye', 122, 0, dilution_plate)\ndye_concentrations = [x for x in range(10)]\n\nexample_echo_calculator.remove_master_mix()\nexample_echo_calculator.build_dilution_series(dye1, dye2, dye_concentrations, \n dye_concentrations, starting_well)", "To change the source plate type/material type:\nSource plate types and material types are set as optional arguments in the constructor of a SourcePlate object. The type and material of a source plate are both set by a string, which can be any string that the Echo Plate Reformat software will recognize.", "plate_type = \"384PP_AQ_BP\" # This is actually the default plate value\nretyped_example_plate = mt_echo.SourcePlate(filename = plate_file, SPtype = plate_type)\nanother_example_echo_calculator = mt_echo.EchoRun(plate = retyped_example_plate)", "To change the source plate name:\nSource plate names are set much like source plate types with the argument SPname. In addition, as a shorthand, you can set SPname to be a number N, in which case the plate will be named Source[N].", "plate_name = \"FirstPlate\"\nrenamed_example_plate = mt_echo.SourcePlate(filename = plate_file, SPname = plate_name)\nyet_another_example_echo_calculator = mt_echo.EchoRun(plate = renamed_example_plate)", "To change the destination plate type:\nDestination plate types are determined and stored directly in the EchoRun object. The destination plate type can be set by the optional argument DPtype in the constructor of an EchoRun object, or set manually any time before calling write_picklist on that EchoRun.", "calculator_with_odd_destination = mt_echo.EchoRun(plate = example_plate, DPtype = \"some_96_well_plate\")\ncalculator_with_odd_destination.DPtype = \"Nunc_384_black_glassbottom\" # Just kidding!\n# ...\n# calculator_with_odd_destination.write_picklist(...)\n#...", "To change dead volume and max volume:\nYou probably shouldn't do this. 
If you absolutely must squeeze every last bit of efficiency out of your source wells, you can set the dead_volume and max_volume variables, which are static variables in the murraylab_tools.echo package. If you change them, also make sure to set the static variable usable_volume, which defines the volume of material in a well that can actually be used by the Echo (this is normally calculated from dead_volume and max_volume at package import). Also, you should do this before running any experimental protocol function.", "from murraylab_tools.echo.echo_functions import dead_volume, max_volume, usable_volume\ndead_volume = 10000 # Volume in nL!\nmax_volume = 75000 # Volume in nL!\nusable_volume = max_volume - dead_volume # Don't forget to re-calculate this!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/mpi-esm-1-2-lr/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: MPI-ESM-1-2-LR\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. 
Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. 
Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "ยฉ2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Upward-Spiral-Science/spect-team
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
apache-2.0
[ "Subject Selection Experiments disorder data - Srinivas (handle: thewickedaxe)\nInitial Data Cleaning", "# Standard\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Dimensionality reduction and Clustering\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import MeanShift, estimate_bandwidth\nfrom sklearn import manifold, datasets\nfrom itertools import cycle\n\n# Plotting tools and classifiers\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import preprocessing\nfrom sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA\nfrom sklearn import cross_validation\nfrom sklearn.cross_validation import LeaveOneOut\n\n\n# Let's read the data in and clean it\n\ndef get_NaNs(df):\n columns = list(df.columns.get_values()) \n row_metrics = df.isnull().sum(axis=1)\n rows_with_na = []\n for i, x in enumerate(row_metrics):\n if x > 0: rows_with_na.append(i)\n return rows_with_na\n\ndef remove_NaNs(df):\n rows_with_na = get_NaNs(df)\n cleansed_df = df.drop(df.index[rows_with_na], inplace=False) \n return cleansed_df\n\ninitial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced_inv1.csv')\ncleansed_df = remove_NaNs(initial_data)\n\n# Let's also get rid of nominal data\nnumerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']\nX = cleansed_df.select_dtypes(include=numerics)\nprint X.shape\n\n# Let's now clean columns getting rid of certain columns that might not be important to our analysis\n\ncols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id', 'Baseline_Reading_id',\n 'Concentration_Reading_id']\nX = X.drop(cols2drop, axis=1, inplace=False)\nprint X.shape\n\n# For our studies children skew the data, it would be cleaner to just analyse adults\nX = X.loc[X['Age'] >= 18]\nY = X.loc[X['race_id'] == 1]\nX = X.loc[X['Gender_id'] == 1]\n\nprint X.shape\nprint Y.shape", "Extracting the samples we are interested in", "# Let's extract ADHd and Bipolar patients (mutually exclusive)\n\nADHD_men = X.loc[X['ADHD'] == 1]\nADHD_men = ADHD_men.loc[ADHD_men['Bipolar'] == 0]\n\nBP_men = X.loc[X['Bipolar'] == 1]\nBP_men = BP_men.loc[BP_men['ADHD'] == 0]\n\nADHD_cauc = Y.loc[Y['ADHD'] == 1]\nADHD_cauc = ADHD_cauc.loc[ADHD_cauc['Bipolar'] == 0]\n\nBP_cauc = Y.loc[Y['Bipolar'] == 1]\nBP_cauc = BP_cauc.loc[BP_cauc['ADHD'] == 0]\n\nprint ADHD_men.shape\nprint BP_men.shape\n\nprint ADHD_cauc.shape\nprint BP_cauc.shape\n\n# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions\nADHD_men = pd.DataFrame(ADHD_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))\nBP_men = pd.DataFrame(BP_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))\n\nADHD_cauc = pd.DataFrame(ADHD_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))\nBP_cauc = pd.DataFrame(BP_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))", "Dimensionality reduction\nManifold Techniques\nISOMAP", "combined1 = 
pd.concat([ADHD_men, BP_men])\ncombined2 = pd.concat([ADHD_cauc, BP_cauc])\n\nprint combined1.shape\nprint combined2.shape\n\ncombined1 = preprocessing.scale(combined1)\ncombined2 = preprocessing.scale(combined2)\n\ncombined1 = manifold.Isomap(20, 20).fit_transform(combined1)\nADHD_men_iso = combined1[:1056]\nBP_men_iso = combined1[1056:]\n\ncombined2 = manifold.Isomap(20, 20).fit_transform(combined2)\nADHD_cauc_iso = combined2[:1110]\nBP_cauc_iso = combined2[1110:]", "Clustering and other grouping experiments\nK-Means clustering - iso", "data1 = pd.concat([pd.DataFrame(ADHD_men_iso), pd.DataFrame(BP_men_iso)])\ndata2 = pd.concat([pd.DataFrame(ADHD_cauc_iso), pd.DataFrame(BP_cauc_iso)])\n\nprint data1.shape\nprint data2.shape\n\nkmeans = KMeans(n_clusters=2)\nkmeans.fit(data1.get_values())\nlabels1 = kmeans.labels_\ncentroids1 = kmeans.cluster_centers_\nprint('Estimated number of clusters: %d' % len(centroids1))\n\nfor label in [0, 1]:\n ds = data1.get_values()[np.where(labels1 == label)] \n plt.plot(ds[:,0], ds[:,1], '.') \n lines = plt.plot(centroids1[label,0], centroids1[label,1], 'o')\n\n\n\nkmeans = KMeans(n_clusters=2)\nkmeans.fit(data2.get_values())\nlabels2 = kmeans.labels_\ncentroids2 = kmeans.cluster_centers_\nprint('Estimated number of clusters: %d' % len(centroids2))\n\nfor label in [0, 1]:\n ds2 = data2.get_values()[np.where(labels2 == label)] \n plt.plot(ds2[:,0], ds2[:,1], '.') \n lines = plt.plot(centroids2[label,0], centroids2[label,1], 'o')", "As is evident from the above 2 experiments, no clear clustering is apparent. But there is some significant overlap, and there are 2 clear groups.\nClassification Experiments\nLet's experiment with a bunch of classifiers", "ADHD_men_iso = pd.DataFrame(ADHD_men_iso)\nBP_men_iso = pd.DataFrame(BP_men_iso)\n\nADHD_cauc_iso = pd.DataFrame(ADHD_cauc_iso)\nBP_cauc_iso = pd.DataFrame(BP_cauc_iso)\n\nBP_men_iso['ADHD-Bipolar'] = 0\nADHD_men_iso['ADHD-Bipolar'] = 1\n\nBP_cauc_iso['ADHD-Bipolar'] = 0\nADHD_cauc_iso['ADHD-Bipolar'] = 1\n\ndata1 = pd.concat([ADHD_men_iso, BP_men_iso])\ndata2 = pd.concat([ADHD_cauc_iso, BP_cauc_iso])\nclass_labels1 = data1['ADHD-Bipolar']\nclass_labels2 = data2['ADHD-Bipolar']\ndata1 = data1.drop(['ADHD-Bipolar'], axis = 1, inplace = False)\ndata2 = data2.drop(['ADHD-Bipolar'], axis = 1, inplace = False)\ndata1 = data1.get_values()\ndata2 = data2.get_values()\n\n# Leave one Out cross validation\ndef leave_one_out(classifier, values, labels):\n leave_one_out_validator = LeaveOneOut(len(values))\n classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)\n accuracy = classifier_metrics.mean()\n deviation = classifier_metrics.std()\n return accuracy, deviation\n\nrf = RandomForestClassifier(n_estimators = 22) \nqda = QDA()\nlda = LDA()\ngnb = GaussianNB()\nclassifier_accuracy_list = []\nclassifiers = [(rf, \"Random Forest\"), (lda, \"LDA\"), (qda, \"QDA\"), (gnb, \"Gaussian NB\")]\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, data1, class_labels1)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))\n\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, data2, class_labels2)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cosmoscalibur/herramientas_computacionales
Presentaciones/Notas/09_Extraccion_web.ipynb
mit
[ "Extracciรณn de datos web (Web scrapping)\nAnte la generaciรณn masiva a traves de la red es importante tener herramientas que permitan la extracciรณn de datos a partir de fuentes cuya ubicaciรณn es esta. De esto se trata el web scrapping. \nSe pueden tener elementos poco especificos mediante las mismas alternativas del procesamiento de texto para algunos casos, sin embargo esto no siempre serรก efectivo ni eficiente. Por ejemplo, podemos usar wget para descargar una pรกgina y hacer la bรบsqueda de elementos html en ella por medio de expresiones regulares, pero la descarga de la pรกgina implica que el contenido debio ser estatico. Igualmente, las expresiones regulares no son la mejor herramienta siempre, y es mรกs eficiente usar elementos especialmente diseรฑados para recorrer la estructura html sin depender de la generaciรณn de expresiones de coincidencia sino obedeciendo exclusivamente a los patrones que ya sabemos que existirรกn por defecto. \nHerramientas (en python)\nPara esta labor contamos con algunas herramientas como lo son: \n\nurllib: Modulo incluido en python para la recuperaciรณn de contenido de una url. \nwebbrowser: Modulo incluido en python para la apertura de url's en una instancia del navegador predefinido. \nhtml: Modulo incluido en python para el analisis sintactico html. \nRequest: Reemplazo externo para urllib con mayores caracteristicas. \nBeautiful Soup: Reemplazo externo para html con mayores caracteristicas. \nSelenium: Reemplazo externo para webbrowser con mayores caracteristicas. \nWget: Port de wget para python. \n\nInstalar requisitos\nPrimero que todo, partimos que ya tenemos instalado al menos un navegador (firefox por defecto en la mayor parte de las distribuciones linux). Se puede trabajar con otros navegadores, y es de especial interes PhantomJS, una opciรณn de navegador que no genera interface grรกfica, ideal para pruebas o automatizaciรณn (en caso de ser molesto que el navegador se vea abrir y cerrar, etc...). \npip install selenium beautifulsoup4 Requests\n\nAplicando", "import webbrowser", "Al usar la funciรณn open de webbrowser se abrirรก el navegador o una pestaรฑa nueva si el navegador ya estaba abierto. Ya que no indicamos el navegador, esto se realiza con el navegador configurado por defecto en nuestro sistema. Se puede usar para abrir pestaรฑas y ventanas nuevas, cerrarlo, y tambien usar un navegador especifico.", "webbrowser.open('http://github.com/')", "Sin embargo, la labor de extracciรณn web depende de obtener el cรณdigo fuente o elementos disponibles en las pรกginas, lo cual es imposible con solo abrir el navegador. Para este fin es posible usar urllib o como lo haremos en esta sesiรณn, con request.", "import requests\nres = requests.get('http://www.gutenberg.org/files/18251/18251-0.txt')\nres.status_code == requests.codes.ok # Validar cรณdigo 200 (ok)\ntype(res)\nlen(res.text)\nprint(res.text[:250])", "Ante un fallo en el proceso de obtenciรณn del cรณdigo con la funciรณn get, es posible generar una notificaciรณn del motivo de fallo.", "res = requests.get('http://github.com/yomeinventoesto')\nres.raise_for_status()", "Cuando obtenemos un elemento de una direcciรณn, este se encuentra como binario y no como texto plano. Esto nos facilita algunas cosas. Nos permite descargar contenido que no se solo texto plano (archivos de texto o cรณdigo fuente) sino tambien directamente archivos binarios como imagenes, ejecutables, videos, archivos de word y otros. 
Es importante aclarar, que si vamos a almacenar el archivo de texto plano, debemos hacerlo con creaciรณn de archivos binarios para no perder la codificaciรณn original que tenga el archivo.", "res = requests.get('http://www.programmableweb.com/sites/default/files/github-jupyter.jpg')\narchivo_imagen = open('github-jupyter.jpg', 'wb')\nfor bloques in res.iter_content(100000):\n archivo_imagen.write(bloques)\narchivo_imagen.close()", "En el bloque anterior, el mรฉtodo iter_content genera bloques del archivo con el tamaรฑo indicado en su argumento. Esto conviene para la escritura de archivos de gran tamaรฑo.", "import bs4", "Usarmos ahora bs4 (forma de importar Beautiful Soup), lo cual nos permitirรก la bรบsqueda de texto y estructuras html especificas. Este es mรกs conveniente que usar expresiones regulares directamente en el cรณdigo fuente. \nAl crear el objeto, debemos indicar el texto sobre el cual actuarรก (puede ser obtenido directamente de un archivo abierto tambien) y el tipo de analizador sintactico, en este caso lxml.", "res = requests.get('https://github.com/cosmoscalibur/herramientas_computacionales')\ngh = bs4.BeautifulSoup(res.text, \"lxml\")\ntype(gh)", "Ahora, buscaremos todas las estructuras td que tengan el atributo class con valor content.", "tabla_archivos = gh.find_all('td', {'class':'content'})\ntype(tabla_archivos)", "El resultado es una lista con todos los resultados obtenidos. Tambien es posible una bรบsqueda uno a uno, usando find en lugar de find_all.", "len(tabla_archivos)\n\nprint(tabla_archivos)", "En el filtrado anterior, ahora buscaremos todas las etiquetas a las cuales asociamos con la presencia del atributo href. De esta forma localizaremos la lista de archivos. Para obtener el texto al interior de una etiqueta, usamos la propiedad string y el valor de un atributo con el mรฉtodo get.", "for content in tabla_archivos:\n lineas_a = content('a')\n if lineas_a:\n texto = \"Se encontro el archivo '{}'\".format(lineas_a[0].string.encode(\"utf-8\"))\n texto += \" con enlace '{}'.\".format(lineas_a[0].get(\"href\"))\n print(texto)", "Nos vimos en la necesidad de usar encode(\"utf-8\") ya que la codificaciรณn de la pรกgina es utf-8 y no ascii (el usado por defecto en python). Podemos consultar los atributos de una etiqueta o si posee un atributo especifico, y no solo obtener el valor, de la siguiente forma.", "lineas_a[0].has_attr(\"href\") # Existencia de un atributo\n\nlineas_a[0].attrs # Atributos existentes\n\nfrom selenium import webdriver", "Invocar la instancia del controlador del navegador depende del navegador de interes. Hay que tener encuenta que no todos los navegadores son soportados. Podemos encontrar soporte para Chrome, Firefox, Opera, IE y PhantomJS. Este รบltimo permite realizar la labor sin la generaciรณn de una ventana para el navegador (en caso de ser necesario, incluso se puede generar capturas de pantalla para su validaciรณn con ayuda del controlador).\nAcorde a cada navegador, se puede tener requerimientos especificos. En el caso de firefox, se presenta la necesidad de indicar el directorio del perfil de usuario, en el caso de chrome se requiere indicar la ruta del controlador (se descarga ya que no viene incluido como si sucede en firefox o phantomjs).\nPodrรญa ser posible (no he verificado) usar otros navegadores si usan el mismo motor de navegaciรณn realizando la indicaciรณn explicita de la ruta del ejecutable. 
Por ejemplo, se podrรญa controlar vivaldi realizando el cambio de ruta de chrome (usan el mismo motor de navegaciรณn).", "browser = webdriver.Chrome(\"/home/cosmoscalibur/Downloads/chromedriver\")\nbrowser.get('http://github.com')\nusername = browser.find_element_by_id(\"user[login]\")\nusername.send_keys(\"[email protected]\")\ndar_click = browser.find_element_by_link_text(\"privacy policy\")\ndar_click.click()", "Resulta bastante รบtil el uso de selenium no tanto en los casos que requieran de interacciรณn sino en los casos donde los contenidos (incluye elementos de interacciรณn) son de generaciรณn dinรกmica o tras la interacciรณn el nuevo enlace o contenido tiene retrasos apreciables, lo cual evitarรญa que Request obtenga el cรณdigo adecuado. Podemos extraer el cรณdigo fuente de la pรกgina en la cual se encuentra el foco del navegador de la siguiente forma.", "codigo = browser.page_source\nprint(codigo)", "Si se desea hacer pasar algรบn navegador especifico o una solicitud por request o urllib como un navegador dado (por ejemplo, evitar bloqueos de contenido por navegador o por politicas contra la extracciรณn web), es necesario realizar la modificaciรณn del user-agent.\nBibliografรญa\n\nHow to fetch Internet Resources Using The urllib Package. \nStructured Markup Processing Tools. \nInternet Protocols and Support. \nAutomate the boring stuff. Chapter 11 โ€“ Web Scraping. \nFirst web scraper. \nHOW TO DOWNLOAD DYNAMICALLY LOADED CONTENT USING PYTHON." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
awagner-mainz/notebooks
gallery/DHD2019_Azpilcueta.ipynb
mit
[ "Multimodale Versuche der Alignierung historischer Texte\nAndreas Wagner und Manuela Bragagnolo, Max-Planck-Institut fรผr europรคische Rechtsgeschichte, Frankfurt/M.\n&lt;&#119;&#97;&#103;&#110;&#101;&#114;&#64;&#114;&#103;&#46;&#109;&#112;&#103;&#46;&#100;&#101;&gt; &lt;&#98;&#114;&#97;&#103;&#97;&#103;&#110;&#111;&#108;&#111;&#103;&#64;&#114;&#103;&#46;&#109;&#112;&#103;&#46;&#100;&#101;&gt;\nTable of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Multimodale-Versuche-der-Alignierung-historischer-Texte\" data-toc-modified-id=\"Multimodale-Versuche-der-Alignierung-historischer-Texte-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Multimodale Versuche der Alignierung historischer Texte</a></div><div class=\"lev2 toc-item\"><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Introduction</a></div><div class=\"lev1 toc-item\"><a href=\"#Preparations\" data-toc-modified-id=\"Preparations-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Preparations</a></div><div class=\"lev1 toc-item\"><a href=\"#TF/IDF-\" data-toc-modified-id=\"TF/IDF--3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>TF/IDF </a></div><div class=\"lev1 toc-item\"><a href=\"#Translations?\" data-toc-modified-id=\"Translations?-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Translations?</a></div><div class=\"lev2 toc-item\"><a href=\"#New-Approach:-Use-Aligner-from-Machine-Translation-Studies-\" data-toc-modified-id=\"New-Approach:-Use-Aligner-from-Machine-Translation-Studies--41\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>New Approach: Use Aligner from Machine Translation Studies </a></div><div class=\"lev1 toc-item\"><a href=\"#Similarity-\" data-toc-modified-id=\"Similarity--5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Similarity </a></div><div class=\"lev1 toc-item\"><a href=\"#Word-Clouds-\" data-toc-modified-id=\"Word-Clouds--6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Word Clouds </a></div>\n\n## Introduction\n\nThis file is the continuation of preceding work. Previously, I have worked my way through a couple of text-analysing approaches - such as tf/idf frequencies, n-grams and the like - in the context of a project concerned with Juan de Solรณrzano Pereira's *Politica Indiana*. This can be seen [here](TextProcessing_Solorzano.ipynb).\n\nIn the former context, I got somewhat stuck when I was trying to automatically align corresponding passages of two editions of the same work ... where the one edition would be a **translation** of the other and thus we would have two different languages. In vector terminology, two languages means two almost orthogonal vectors and it makes little sense to search for similarities there.\n\nThe present file takes this up, tries to refine an approach taken there and to find alternative ways of analysing a text across several languages. This time, the work concerned is Martรญn de Azpilcueta's *Manual de confesores*, a work of the 16th century that has seen very many editions and translations, quite a few of them even by the work's original author and it is the subject of the research project [\"Martรญn de Azpilcuetaโ€™s Manual for Confessors and the Phenomenon of Epitomisation\"](http://www.rg.mpg.de/research/martin-de-azpilcuetas-manual-for-confessors) by Manuela Bragagnolo. 
\n\n(There are a few DH-ey things about the project that are not directly of concern here, like a synoptic display of several editions or the presentation of the divergence of many actual translations of a given term. Such aspects are being treated with other software, like [HyperMachiavel](http://hyperprince.ens-lyon.fr/hypermachiavel) or [Lera](http://lera.uzi.uni-halle.de/).)\n\nAs in the previous case, the programming language used in the following examples is \"python\" and the tool used to get prose discussion and code samples together is called [\"jupyter\"](http://jupyter.org/). (A common way of installing both the language and the jupyter software, especially in windows, is by installing a python \"distribution\" like [Anaconda](https://www.anaconda.com/what-is-anaconda/).) In jupyter, you have a \"notebook\" that you can populate with text (if you want to use it, jupyter understands [markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html) code formatting) or code, and a program that pipes a nice rendering of the notebook to a web browser as you are reading right now. In many places in such a notebook, the output that the code samples produce is printed right below the code itself. Sometimes this can be quite a lot of output and depending on your viewing environment you might have to scroll quite some way to get to the continuation of the discussion.\n\nYou can save your notebook online (the current one is [here at github](https://github.com/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb)) and there is an online service, nbviewer, able to render any notebook that it can access online. So chances are you are reading this present notebook at the web address [https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb](https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb).\n\nA final word about the elements of this notebook:\n\n<div class=\"alert alertbox alert-success\">At some points I am mentioning things I consider to be important decisions or take-away messages for scholarly readers. E.g. whether or not to insert certain artefacts into the very transcription of your text, what the methodological ramifications of a certain approach or parameter are, what the implications of an example solution are, or what a possible interpretation of a certain result might be. I am highlighting these things in a block like this one here or at least in <font color=\"green\">**green bold font**</font>.</div>\n\n<div class=\"alert alertbox alert-danger\">**NOTE:** As I am continually improving the notebook on the side of the source text, wordlists and other parameters, it is sometimes hard to keep the prose description in sync. So while the actual descriptions still apply, the numbers that are mentioned in the prose (as where we have e.g. a \"table with 20 rows and 1.672 columns\") might no longer reflect the latest state of the sources, auxiliary files and parameters and you should take these with a grain of salt. Best double check them by reading the actual code ;-)\n\nI apologize for the inconsistency.</div>\n\n# Preparations", "from typing import Dict\nimport lxml\nfrom lxml import etree\n\ndocument=etree.fromstring(\"\"\"\n<TEI xmlns=\"http://www.tei-c.org/ns/1.0\">\n<text>\n <body>\n <div n=\"1\">\n <p>\n ... 
<milestone unit=\"number\" n=\"9\"/>aun que el amor de Dios ha de ser\n grandissimo ..., como despues de. S. Tho.\n <ref target=\"#nm-0406\">b</ref><note xml:id=\"nm-0406\"><p>1. Sec. quaestio\n 109. ar. 3.</p></note>, poco ha lo tratamos\n <ref target=\"#nm-0407\">c</ref><note xml:id=\"nm-0407\"><p>in addit. ca.\n Quoniam. de consec. disti. 1. nu. 10.</p></note>. Anadimos, (virtual)\n <milestone unit=\"number\" n=\"10\"/>porque aquella basta, ...\n <ref target=\"#nm-0408\">d</ref><note xml:id=\"nm-0408\"><p>in 4. dis. 14.\n q. 1. art. 3.</p></note>, que pone exemplo ..., que Gabriel sigue\n <ref target=\"#nm-0409\">e</ref><note xml:id=\"nm-0409\"><p>in 4. dis. 14.\n q. 1. col. 12. &amp; 13. &amp; in. 3. di. 27. q. 1. co. 15.</p></note>.\n <milestone unit=\"other\" rendition=\"#asterisk\"/> Y aun, aquel doctissimo,\n ... <ref target=\"#nm-040a\">f</ref><note xml:id=\"nm-040a\"><p>In Codice de\n poeni. q. 2.</p></note>, y con razon, ..., el martyrio atribuya esto\n <ref target=\"#nm-040b\">g</ref><note xml:id=\"nm-040b\"><p>Lib. 2. c. 16.\n de natu. &amp; gra.</p></note>, porque mas haze para esto el amor, ...\n que lo que se padece <ref target=\"#nm-040c\">h</ref><note xml:id=\"nm-040c\">\n <p>Arg. c. 13. 1. ad Corinth.</p></note>. Y puede ser que mas ame, ...,\n como lo prueua bien Medina\n <ref target=\"#nm-040d\">i</ref><note xml:id=\"nm-040d\"><p>in predi.\n q. 2.</p></note>. Por lo qual largamente paresce quan lexos esta esto\n dela opinion de Luthero<milestone unit=\"other\" rendition=\"#asterisk\"/>.\n De lo dicho se collige la razon, ..., segun Syluestro\n <ref target=\"#nm-040e\">k</ref><note xml:id=\"nm-040e\"><p>verb. Contritio.\n q. 1.</p></note>. Diximos <milestone unit=\"number\" n=\"11\"/> (auer\n pecado,) porque el arrepentimiento ...\n </p>\n </div>\n </body>\n</text>\n</TEI>\"\"\")\n\ndef segment(chapter: lxml.etree._Element) -> Dict[str, str]:\n segments = {} # this will be returned\n t = [] # this is a buffer\n chap_label = str(chapter.get(\"n\"))\n sect_label = \"0\"\n for element in chapter.iter():\n if element.get(\"unit\")==\"number\":\n # milestone: fill and close the previous segment:\n label = chap_label + \"_\" + sect_label\n segments[label] = \" \".join(t)\n # reset buffer\n t = []\n # if there is text after the milestone,\n # add it as first content to the buffer\n if element.tail:\n t.append(\" \".join(str.replace(element.tail, \"\\n\", \" \").strip().split()))\n # prepare for next labelmaking\n sect_label = str(element.get(\"n\"))\n else:\n if element.text:\n t.append(\" \".join(str.replace(element.text, \"\\n\", \" \").strip().split()))\n if element.tail:\n t.append(\" \".join(str.replace(element.tail, \"\\n\", \" \").strip().split()))\n # all elements are processed,\n # add text remainder/current text buffer content\n label = chap_label + \"_\" + sect_label\n segments[label] = \" \".join(t)\n return segments\n\nnsmap = {\"tei\": \"http://www.tei-c.org/ns/1.0\"}\nxp_divs = etree.XPath(\"(//tei:body/tei:div)\", namespaces = nsmap)\n\nsegmented = {}\ndivs = xp_divs(document)\nsegments = (segment(div) for div in divs)\nfor d in segments:\n print(d)\n\ndocument=etree.fromstring(\"\"\"\n<TEI xmlns=\"http://www.tei-c.org/ns/1.0\">\n<text><body>\n <div n=\"1\">\n <p>... 
<milestone unit=\"number\" n=\"9\"/>aa ab ac<ref target=\"#nm-0406\">ad</ref><note xml:id=\"nm-0406\"><p>ae af</p></note> ag\n <ref target=\"#nm-0407\">ah</ref><note xml:id=\"nm-0407\"><p>ai aj</p></note> ak\n <milestone unit=\"number\" n=\"10\"/>ba bb bc<ref target=\"#nm-0408\">bd</ref><note xml:id=\"nm-0408\"><p>be bf</p></note> bg\n <ref target=\"#nm-0409\">bh</ref><note xml:id=\"nm-0409\"><p>bi bj</p></note><milestone unit=\"other\" rendition=\"#asterisk\"/> bk bl<ref target=\"#nm-040a\">bm</ref><note xml:id=\"nm-040a\"><p>bn bo</p></note> bp\n <ref target=\"#nm-040b\">bq</ref><note xml:id=\"nm-040b\"><p>br bs</p></note> bt\n <ref target=\"#nm-040c\">bu</ref><note xml:id=\"nm-040c\"><p>bv</p></note> bw<milestone unit=\"other\" rendition=\"#asterisk\"/>bx. by <milestone unit=\"number\" n=\"11\"/>ca cb ...</p>\n </div>\n</body></text>\n</TEI>\"\"\")\n\nimport lxml\nfrom lxml import etree\n\ndef flatten(element: lxml.etree._Element):\n t = \"\"\n if element.text:\n t += \" \".join(str.replace(element.text, \"\\n\", \" \").strip().split())\n if element.get(\"unit\")==\"number\":\n t += t + \"+ms_\" + str(element.get(\"n\")) + \"+\"\n if element.tail:\n t += \" \".join(str.replace(element.tail, \"\\n\", \" \").strip().split())\n if element.getchildren():\n t += \" \".join((flatten(child)) for child in element.getchildren())\n if element.tail and not(element.get(\"unit\")==\"number\"):\n t += \" \".join(str.replace(element.tail, \"\\n\", \" \").strip().split())\n # all elements are processed, add text remainder/current text buffer content\n return t\n\nnsmap = {\"tei\": \"http://www.tei-c.org/ns/1.0\"}\nxp_divs = etree.XPath(\"(//tei:body/tei:div)\", namespaces = nsmap)\ndivs = xp_divs(document)\n\nsegments = \"\".join(flatten(div) for div in divs)\nprint(segments)", "Unlike in the previous case, where we had word files that we could export as plaintext, in this case Manuela has prepared a sample chapter with four editions transcribed in parallel in an office spreadsheet. So we first of all make sure that we have good UTF-8 comma-separated-value files, e.g. by uploading a csv export of our office program of choice to a CSV Linting service. (As a side remark, in my case, exporting with LibreOffice provided me with options to select UTF-8 encoding and choose the field delimiter and resulted in a valid csv file. MS Excel did neither of those.) Below, we expect the file at the following position:", "sourcePath = 'DHd2019/cap6_align_-_2018-01.csv'", "Then, we can go ahead and open the file in python's csv reader:", "import csv\n\nsourceFile = open(sourcePath, newline='', encoding='utf-8')\nsourceTable = csv.reader(sourceFile)", "And next, we read each line into new elements of four respective lists (since we're dealing with one sample chapter, we try to handle it all in memory first and see if we run into problems):\n(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero. Which means that if we end up with 20 segments, they are going to be called segment 0, segment 1, ..., segment 19. There is not going to be a segment bearing the number twenty, although we do have twenty segments. The first one has the number zero and the twentieth one has the number nineteen. 
Even for more experienced coders, this sometimes leads to mistakes, called \"off-by-one errors\".)", "import re\n\n# Initialize a list of lists, or two-dimensional list ...\nEditions = [[]]\n\n# ...with four sub-lists 0 to 3\nfor i in range(3):\n a = []\n Editions.append(a)\n\n# Now populate it from our sourceTable\nsourceFile.seek(0) # in repeated runs, restart from the beginning of the file\nfor row in sourceTable:\n for i, field in enumerate(row): # We normalize quite a bit here already:\n p = field.replace('ยถ', ' ยถ ') # spaces around ยถ \n p = re.sub(\"&([^c])\",\" & \\\\1\", p) # always spaces around &, except for &c\n p = re.sub(\"([,.:?/])(\\S)\",\"\\\\1 \\\\2\", p) # always a space after ',.:?/'\n p = re.sub(\"([0-9])([a-zA-Z])\", \"\\\\1 \\\\2\", p) # always a space between numbers and word characters\n p = re.sub(\"([a-z]) ?\\\\(\\\\1\\\\b\", \" (\\\\1\", p) # if a letter is repeated on its own in a bracketed\n # expression it's a note and we eliminate the character\n # from the preceding word\n p = \" \".join(p.split()) # always only one space\n Editions[i].append(p)\n\nprint(str(len(Editions[0])) + \" rows read.\\n\")\n\n# As an example, see the first seven sections of the third edition (1556 SPA):\nfor field in range(len(Editions[2])):\n print(Editions[2][field])", "Actually, let's define two more list variables to hold information about the different editions - language and year of print:", "numOfEds = 4\nlanguage = [\"PT\", \"PT\", \"ES\", \"LA\"] # I am using language codes that later on can be used in babelnet\nyear = [1549, 1552, 1556, 1573]", "TF/IDF <a name=\"tfidf\"></a>\nIn the previous (i.e. Solรณrzano) analyses, things like tokenization, lemmatization and stop-word lists filtering are explained step by step. Here, we rely on what we have found there and feed it all into functions that are ready-made and available in suitable libraries...\nFirst, we build our lemmatization resource and \"function\":", "lemma = [{} for i in range(numOfEds)]\n# lemma = {} # we build a so-called dictionary for the lookups\n\nfor i in range(numOfEds):\n \n wordfile_path = 'Azpilcueta/wordforms-' + language[i].lower() + '.txt'\n\n # open the wordfile (defined above) for reading\n wordfile = open(wordfile_path, encoding='utf-8')\n\n tempdict = []\n for line in wordfile.readlines():\n tempdict.append(tuple(line.split('>'))) # we split each line by \">\" and append\n # a tuple to a temporary list.\n\n lemma[i] = {k.strip(): v.strip() for k, v in tempdict} # for every tuple in the temp. 
list,\n # we strip whitespace and make a key-value\n # pair, appending it to our \"lemma\"\n # dictionary\n wordfile.close\n\n print(str(len(lemma[i])) + ' ' + language[i] + ' wordforms known to the system.')\n", "Again, a quick test: Let's see with which \"lemma\"/basic word the particular wordform \"diremos\" is associated, or, in other words, what value our lemma variable returns when we query for the key \"diremos\":", "lemma[language.index(\"PT\")]['diremos']", "And we are going to need the stopwords lists:", "stopwords = []\n\nfor i in range(numOfEds):\n \n stopwords_path = 'DHd2019/stopwords-' + language[i].lower() + '.txt'\n stopwords.append(open(stopwords_path, encoding='utf-8').read().splitlines())\n\n print(str(len(stopwords[i])) + ' ' + language[i]\n + ' stopwords known to the system, e.g.: ' + str(stopwords[i][100:119]) + '\\n')", "(In contrast to simpler numbers that have been filtered out by the stopwords filter, I have left numbers representing years like \"1610\" in place.)\nAnd, later on when we try sentence segmentation, we are going to need the list of abbreviations - words where a subsequent period not necessarily means a new sentence:", "abbreviations = [] # As of now, this is one for all languages :-(\n\nabbrs_path = 'DHd2019/abbreviations.txt'\nabbreviations = open(abbrs_path, encoding='utf-8').read().splitlines()\n\nprint(str(len(abbreviations)) + ' abbreviations known to the system, e.g.: ' + str(abbreviations[100:119]))", "Next, we should find some very characteristic words for each segment for each edition. (Let's say we are looking for the \"Top 20\".) We should build a vocabulary for each edition individually and only afterwards work towards a common vocabulary of several \"Top n\" sets.", "import re\nimport pandas as pd\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nnumTopTerms = 20\n\n# So first we build a tokenising and lemmatising function (per language) to work as\n# an input filter to the CountVectorizer function\ndef ourLaLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"LA\")][wordform].lower().strip() if wordform in lemma[language.index(\"LA\")] else wordform.lower().strip() for wordform in wordforms ]\ndef ourEsLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"ES\")][wordform].lower().strip() if wordform in lemma[language.index(\"ES\")] else wordform.lower().strip() for wordform in wordforms ]\ndef ourPtLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"PT\")][wordform].lower().strip() if wordform in lemma[language.index(\"PT\")] else wordform.lower().strip() for wordform in wordforms ]\n\ndef ourLemmatiser(lang):\n if (lang == \"LA\"):\n return ourLaLemmatiser\n if (lang == \"ES\"):\n return ourEsLemmatiser\n if (lang == \"PT\"):\n return ourPtLemmatiser\n\ndef ourStopwords(lang):\n if (lang == \"LA\"):\n return stopwords[language.index(\"LA\")]\n if (lang == \"ES\"):\n return stopwords[language.index(\"ES\")]\n if (lang == \"PT\"):\n return stopwords[language.index(\"PT\")]\n\ntopTerms = []\nfor i in range(numOfEds):\n\n topTermsEd = []\n # Initialize the library's function, specifying our\n # tokenizing function from above and our stopwords list.\n tfidf_vectorizer = TfidfVectorizer(stop_words=ourStopwords(language[i]), use_idf=True, tokenizer=ourLemmatiser(language[i]), norm='l2')\n\n # Finally, we feed our corpus to the function to build a new \"tfidf_matrix\" object\n tfidf_matrix = 
tfidf_vectorizer.fit_transform(Editions[i])\n\n # convert your matrix to an array to loop over it\n mx_array = tfidf_matrix.toarray()\n\n # get your feature names\n fn = tfidf_vectorizer.get_feature_names()\n\n # now loop through all segments and get the respective top n words.\n pos = 0\n for j in mx_array:\n # We have empty segments, i.e. none of the words in our vocabulary has any tf/idf score > 0\n if (j.max() == 0):\n topTermsEd.append([(\"\", 0)])\n # otherwise append (present) lemmatised words until numTopTerms or the number of words (-stopwords) is reached\n else:\n topTermsEd.append(\n [(fn[x], j[x]) for x in ((j*-1).argsort()) if j[x] > 0] \\\n [:min(numTopTerms, len(\n [word for word in re.split('\\W+', Editions[i][pos]) if ourLemmatiser(language[i])(word) not in stopwords]\n ))])\n pos += 1\n topTerms.append(topTermsEd)", "Translations?\nMaybe there is an approach to inter-lingual comparison after all. After a first unsuccessful try with conceptnet.io, I next want to try Babelnet in order to lookup synonyms, related terms and translations. I still have to study the API...\nFor example, let's take this single segment 19:", "segment_no = 18", "And then first let's see how this segment compares in the different editions:", "print(\"Comparing words from segments \" + str(segment_no) + \" ...\")\nprint(\" \")\nprint(\"Here is the segment in the four editions:\")\nprint(\" \")\nfor i in range(numOfEds):\n print(\"Ed. \" + str(i) + \":\")\n print(\"------\")\n print(Editions[i][segment_no])\n print(\" \")\n\nprint(\" \")\nprint(\" \")\n\n# Build List of most significant words for a segment\n\nprint(\"Most significant words in the segment:\")\nprint(\" \")\nfor i in range(numOfEds):\n print(\"Ed. \" + str(i) + \":\")\n print(\"------\")\n print(topTerms[i][segment_no])\n print(\" \")", "Now we look up the \"concepts\" associated to those words in babelnet. Then we look up the concepts associated with the words of the present segment from another edition/language, and see if the concepts are the same.\nBut we have to decide on some particular editions to get things started. 
Let's take the Spanish and Latin ones:", "startEd = 1\nsecondEd = 2", "And then we can continue...", "import urllib\nimport json\nfrom collections import defaultdict\n\nbabelAPIKey = '18546fd3-8999-43db-ac31-dc113506f825'\nbabelGetSynsetIdsURL = \"https://babelnet.io/v5/getSynsetIds?\" + \\\n \"targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&searchLang=\" + language[startEd] + \\\n \"&key=\" + babelAPIKey + \\\n \"&lemma=\"\n\n# Build lists of possible concepts\ntop_possible_conceptIDs = defaultdict(list)\nfor (word, val) in topTerms[startEd][segment_no]:\n concepts_uri = babelGetSynsetIdsURL + urllib.parse.quote(word)\n response = urllib.request.urlopen(concepts_uri)\n conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n for rel in conceptIDs:\n top_possible_conceptIDs[word].append(rel.get(\"id\"))\n\nprint(\" \")\nprint(\"For each of the '\" + language[startEd] + \"' words, here are possible synsets:\")\nprint(\" \")\n\nfor word in top_possible_conceptIDs:\n print(word + \":\" + \" \" + ', '.join(c for c in top_possible_conceptIDs[word]))\n print(\" \")\n\nprint(\" \")\nprint(\" \")\nprint(\" \")\n\nbabelGetSynsetIdsURL2 = \"https://babelnet.io/v5/getSynsetIds?\" + \\\n \"targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&searchLang=\" + language[secondEd] + \\\n \"&key=\" + babelAPIKey + \\\n \"&lemma=\"\n\n# Build list of 10 most significant words in the second language\ntop_possible_conceptIDs_2 = defaultdict(list)\nfor (word, val) in topTerms[secondEd][segment_no]:\n concepts_uri = babelGetSynsetIdsURL2 + urllib.parse.quote(word)\n response = urllib.request.urlopen(concepts_uri)\n conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n for rel in conceptIDs:\n top_possible_conceptIDs_2[word].append(rel.get(\"id\"))\n\nprint(\" \")\nprint(\"For each of the '\" + language[secondEd] + \"' words, here are possible synsets:\")\nprint(\" \")\nfor word in top_possible_conceptIDs_2:\n print(word + \":\" + \" \" + ', '.join(c for c in top_possible_conceptIDs_2[word]))\n print(\" \")\n\n# calculate number of overlapping terms\nvalues_a = set([item for sublist in top_possible_conceptIDs.values() for item in sublist])\nvalues_b = set([item for sublist in top_possible_conceptIDs_2.values() for item in sublist])\noverlaps = values_a & values_b\nprint(\"Overlaps: \" + str(overlaps))\n\nbabelGetSynsetInfoURL = \"https://babelnet.io/v5/getSynset?key=\" + babelAPIKey + \\\n \"&targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&id=\"\n\nfor c in overlaps:\n info_uri = babelGetSynsetInfoURL + c\n response = urllib.request.urlopen(info_uri)\n words = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n \n senses = words['senses']\n for result in senses[:1]:\n lemma = result['properties'].get('fullLemma')\n resultlang = result['properties'].get('language')\n print(c + \": \" + lemma + \" (\" + resultlang.lower() + \")\")\n\n# what's left: do a nifty ranking", "Actually I think this is somewhat promising - an overlap of four independent, highly meaning-bearing words, or of forty-something related concepts. At first glance, they should be capable of distinguishing this section from all the other ones. 
However, getting this result was made possible by quite a bit of manual tuning the stopwords and lemmatization dictionaries before, so this work is important and cannot be eliminated.\nNew Approach: Use Aligner from Machine Translation Studies <a name=\"newApproach\"/>\nIn contrast to what I thought previously, there is a couple of tools for automatically aligning parallel texts after all. After some investigation of the literature, the most promising candidate seems to be HunAlign. However, as this is a commandline tool written in C++ (there is LF Aligner, a GUI, available), it is not possible to run it from within this notebook.\nFirst results were problematic, due to the different literary conventions that our editions follow: Punctuation was used inconsistently (but sentence length is one of the most relevant factors for aligning), as were abbreviations and notes.\nMy current idea is to use this notebook to preprocess the texts and to feed a cleaned up version of them to hunalign...\nComing back to this after a first couple of rounds with Hunalign, I have the feeling that the fact that literary conventions are so divergent probably means that Aligning via sentence lengths is a bad idea in our from the outset. Probably better to approach this with GMA or similar methods. Anyway, here are the first attempts with Hunalign:", "from nltk import sent_tokenize\n\n## First, train the sentence tokenizer:\nfrom pprint import pprint\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars, PunktTrainer\n \nclass BulletPointLangVars(PunktLanguageVars):\n sent_end_chars = ('.', '?', ':', '!', 'ยถ')\n\ntrainer = PunktTrainer()\ntrainer.INCLUDE_ALL_COLLOCS = True\ntokenizer = PunktSentenceTokenizer(trainer.get_params(), lang_vars = BulletPointLangVars())\nfor tok in abbreviations : tokenizer._params.abbrev_types.add(tok)\n\n## Now we sentence-segmentize all our editions, printing results and saving them to files:\n\n# folder for the several segment files:\noutputBase = 'Azpilcueta/sentences'\ndest = None\n\n# Then, sentence-tokenize our segments:\nfor i in range(numOfEds):\n dest = open(outputBase + '_' + str(year[i]) + '.txt',\n encoding='utf-8',\n mode='w')\n print(\"Sentence-split of ed. \" + str(i) + \":\")\n print(\"------\")\n for s in range(0, len(Editions[i])):\n for a in tokenizer.tokenize(Editions[i][s]):\n dest.write(a.strip() + '\\n')\n print(a)\n dest.write('<p>\\n')\n print('<p>')\n dest.close()\n", "... lemmatize/stopwordize it---", "# folder for the several segment files:\noutputBase = 'Azpilcueta/sentences-lemmatized'\ndest = None\n\n# Then, sentence-tokenize our segments:\nfor i in range(numOfEds):\n dest = open(outputBase + '_' + str(year[i]) + '.txt',\n encoding='utf-8',\n mode='w')\n stp = set(stopwords[i])\n print(\"Cleaned/lemmatized ed. \" + str(i) + \" [\" + language[i] + \"]:\")\n print(\"------\")\n for s in range(len(Editions[i])):\n for a in tokenizer.tokenize(Editions[i][s]):\n dest.write(\" \".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]) + '\\n')\n print(\" \".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]))\n dest.write('<p>\\n')\n print('<p>')\n dest.close()\n", "With these preparations made, Hunaligning 1552 and 1556 reports \"Quality 0.63417\" for unlemmatized and \"Quality 0.51392\" for lemmatized versions of the texts for its findings which still contain many errors. Removing \":\" from the sentence end marks gives \"Quality 0.517048/0.388377\", but from a first impression with fewer errors. 
Results can be output in different formats, xls files are here and here.\nSimilarity <a name=\"DocumentSimilarity\"/>\nIt seems we could now create another matrix replacing lemmata with concepts and retaining the tf/idf values (so as to keep a weight coefficient to the concepts). Then we should be able to calculate similarity measures across the same concepts...\nThe approach to choose would probably be the \"cosine similarity\" of concept vector spaces. Again, there is a library ready for us to use (but you can find some documentation here, here and here.)\nHowever, this is where I have to take a break now. I will return to here soon...", "from sklearn.metrics.pairwise import cosine_similarity\n\nsimilarities = pd.DataFrame(cosine_similarity(tfidf_matrix))\nsimilarities[round(similarities, 0) == 1] = 0 # Suppress a document's similarity to itself\nprint(\"Pairwise similarities:\")\nprint(similarities)\n\nprint(\"The two most similar segments in the corpus are\")\nprint(\"segments\", \\\n similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1), \\\n \"and\", \\\n similarities[similarities == similarities.values.max()].idxmax(axis=0)[ similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1) ].astype(int), \\\n \".\")\nprint(\"They have a similarity score of\")\nprint(similarities.values.max())", "<div class=\"alert alertbox alert-success\">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different puproses. Or the other way round: When the scholar knows that two passages are similar, but they have a low \"similarity score\", shouldn't that say something about the texts's rhetorics?</div>\n\nWord Clouds <a name=\"WordClouds\"/>\nWe can use a library that takes word frequencies like above, calculates corresponding relative sizes of words and creates nice wordcloud images for our sections (again, taking the fourth segment as an example) like this:", "from wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\n# We make tuples of (lemma, tf/idf score) for one of our segments\n# But we have to convert our tf/idf weights to pseudo-frequencies (i.e. 
integer numbers)\nfrq = [ int(round(x * 100000, 0)) for x in Editions[1][3]]\nfreq = dict(zip(fn, frq))\n\nwc = WordCloud(background_color=None, mode=\"RGBA\", max_font_size=40, relative_scaling=1).fit_words(freq)\n\n# Now show/plot the wordcloud\nplt.figure()\nplt.imshow(wc, interpolation=\"bilinear\")\nplt.axis(\"off\")\nplt.show()", "In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far...", "outputDir = \"Azpilcueta\"\nhtmlfile = open(outputDir + '/Overview.html', encoding='utf-8', mode='w')\n\n# Write the html header and the opening of a layout table\nhtmlfile.write(\"\"\"<!DOCTYPE html>\n<html>\n <head>\n <title>Section Characteristics</title>\n <meta charset=\"utf-8\"/>\n </head>\n <body>\n <table>\n\"\"\")\n\na = [[]]\na.clear()\ndicts = []\nw = []\n\n# For each segment, create a wordcloud and write it along with label and\n# other information into a new row of the html table\nfor i in range(len(mx_array)):\n # this is like above in the single-segment example...\n a.append([ int(round(x * 100000, 0)) for x in mx_array[i]])\n dicts.append(dict(zip(fn, a[i])))\n w.append(WordCloud(background_color=None, mode=\"RGBA\", \\\n max_font_size=40, min_font_size=10, \\\n max_words=60, relative_scaling=0.8).fit_words(dicts[i]))\n # We write the wordcloud image to a file\n w[i].to_file(outputDir + '/wc_' + str(i) + '.png')\n # Finally we write the column row\n htmlfile.write(\"\"\"\n <tr>\n <td>\n <head>Section {a}: <b>{b}</b></head><br/>\n <img src=\"./wc_{a}.png\"/><br/>\n <small><i>length: {c} words</i></small>\n </td>\n </tr>\n <tr><td>&nbsp;</td></tr>\n\"\"\".format(a = str(i), b = label[i], c = len(tokenised[i])))\n\n# And then we write the end of the html file.\nhtmlfile.write(\"\"\"\n </table>\n </body>\n</html>\n\"\"\")\nhtmlfile.close()", "This should have created a nice html file which we can open here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
natronics/rust-fc
analysis/results.ipynb
gpl-3.0
[ "Comparing rust-fc To Simulation Output\nThe simulator proccessing code adds realisic noise to the IMU input before sending it to rust-fc.\nWe'll compare the clean \"ideal\" simulator numbers to what was actually received by rust-fc", "import psas_packet\nfrom psas_packet.io import BinFile\nimport csv\nimport matplotlib.pyplot as plt\nfrom matplotlib import gridspec\n%matplotlib inline\n\nFPS2M = 0.3048\nLBF2N = 4.44822\nLBS2KG = 0.453592\n\n# Extend PSAS Packet to include our state message\npsas_packet.messages.MESSAGES[\"STAT\"] = psas_packet.messages.Message({\n 'name': \"State Vector\",\n 'fourcc': b'STAT',\n 'size': \"Fixed\",\n 'endianness': '!',\n 'members': [\n {'key': \"time\", 'stype': \"Q\"},\n {'key': \"accel\", 'stype': \"d\"},\n {'key': \"vel\", 'stype': \"d\"},\n {'key': \"alt\", 'stype': \"d\"},\n {'key': \"roll_rate\", 'stype': \"d\"},\n {'key': \"roll_angle\", 'stype': \"d\"},\n ]\n})\n\n\n# Read data from rust-fc\nlogfile = BinFile('../logfile-000')\nmax_acc = 0\nrust_time = []\nrust_accel_x = []\nrust_accel_y = []\nrust_accel_z = []\nrust_state_time = []\nrust_vel = []\nrust_alt = []\nfor fourcc, data in logfile.read():\n if fourcc == 'ADIS':\n if data['Acc_X'] > max_acc:\n max_acc = data['Acc_X']\n rust_t = data['timestamp']/1.0e9\n rust_time.append(data['timestamp']/1.0e9)\n rust_accel_x.append(data['Acc_X'])\n rust_accel_y.append(data['Acc_Y'])\n rust_accel_z.append(data['Acc_Z'])\n if fourcc == 'STAT':\n rust_state_time.append(data['timestamp']/1.0e9)\n rust_vel.append(data['vel'])\n rust_alt.append(data['alt'])\n\n# Read data from JSBSim\nmax_accel = 0\nsim_time = []\nmeasured_accel_x = []\nsim_vel_up = []\nsim_alt = []\nwith open('../simulation/data.csv') as datafile:\n reader = csv.reader(datafile, delimiter=',')\n for row in reader:\n # ignore first line\n if row[0][0] == 'T':\n continue\n sim_time.append(float(row[0]))\n force_x = float(row[18]) * LBF2N\n weight = float(row[6]) * LBS2KG\n measured_accel_x.append(force_x/weight)\n if (force_x/weight) > max_accel:\n max_accel = force_x/weight\n sim_t = sim_time[-1]\n sim_vel_up.append(-float(row[10]) * FPS2M)\n sim_alt.append(float(row[2]))\n\n# line up time\nsim_offset = rust_t - sim_t\nsim_time = [t + sim_offset for t in sim_time]", "Message Receive Time\nIn JSBSim the IMU messages are requested to be sent at the real IMU rate of 819.2 Hz:\n&lt;output name=\"localhost\" type=\"SOCKET\" protocol=\"UDP\" port=\"5123\" rate=\"819.2\"&gt;\n\nBut there they are then processed in python for noise and binary packing. Then it's sent as UDP packets which may get lost. 
Let's see how they appear in the flight comptuer.", "# Get the time difference between each ADIS message\ndiff = [(rust_time[i+1] - t)*1000 for i, t in enumerate(rust_time[:-1])]\n\nfig, ax1 = plt.subplots(figsize=(18,7))\nplt.title(r\"rust-fc ADIS Message Interval\")\nplt.ylabel(r\"Time Since Last Sample [ms]\")\nplt.xlabel(r\"Sample Number [#]\")\n\nplt.plot(range(len(diff)), diff, 'r.', alpha=1.0, ms=0.3, label=\"rust-fc Sample Interval\")\nplt.plot((0, len(diff)), (1.2207, 1.2207), 'k-', lw=0.6, alpha=0.7, label=\"Expected Sample Interval\")\n\nax1.set_yscale(\"log\", nonposy='clip')\nplt.ylim([0.1,100])\n#plt.xlim()\nax1.legend(loc=1)\nplt.show()\n\nfig, ax1 = plt.subplots(figsize=(18,7))\nplt.title(r\"rust-fc ADIS Message Interval\")\nplt.ylabel(r\"Number of Samples [#]\")\nplt.xlabel(r\"Time Since Last Sample [ms]\")\n\nn, bins, patches = plt.hist(diff, 1000, histtype='step', normed=1, alpha=0.8, linewidth=1, fill=True)\nplt.plot((1.2207, 1.2207), (0, 1000), 'k-', lw=0.6, alpha=0.7, label=\"Expected Sample Interval\")\n\nplt.ylim([0, 35])\n#plt.xlim()\nax1.legend(loc=1)\nplt.show()", "IMU Noisy Acceleration\nHere we see the noise put into the IMU data and the true acceleration.", "fig, ax1 = plt.subplots(figsize=(18,7))\nplt.title(r\"rust-fc Recorded IMU Acceleration\")\nplt.ylabel(r\"Acceleration [m/s${}^2$]\")\nplt.xlabel(r\"Run Time [s]\")\n\nplt.plot(rust_time, rust_accel_x, alpha=0.8, lw=0.5, label=\"rust-fc IMU 'Up'\")\nplt.plot(rust_time, rust_accel_y, alpha=0.8, lw=0.5, label=\"rust-fc IMU 'Y'\")\nplt.plot(rust_time, rust_accel_z, alpha=0.6, lw=0.5, label=\"rust-fc IMU 'Z'\")\n\nplt.plot(sim_time, measured_accel_x, 'k-', lw=1.3, alpha=0.6, label=\"JSBSim True Acceleration\")\n\n#plt.ylim()\n#plt.xlim()\nax1.legend(loc=1)\nplt.show()", "State Tracking\nThe flight comptuer only knows the Inertial state (acceleration). It keeps track of velocity and altitude by integrating this signal. 
Here we compare rust-fc internal state to the exact numbers from the simulator.", "# Computer difference from FC State and simulation \"real\" numbers\n\nsim_idx = 0\nvel = 0\nalt = 0\ni_count = 0\nsim_matched_vel = []\nvel_diff = []\nalt_diff = []\nfor i, t in enumerate(rust_state_time):\n vel += rust_vel[i]\n alt += rust_alt[i]\n i_count += 1\n if sim_time[sim_idx] < t:\n sim_matched_vel.append(vel/float(i_count))\n vel_diff.append(sim_vel_up[sim_idx] - (vel/float(i_count)))\n alt_diff.append(sim_alt[sim_idx] - (alt/float(i_count)))\n vel = 0\n alt = 0\n i_count = 0\n sim_idx += 1\n if sim_idx > len(sim_time)-1:\n break\n\n\nfig = plt.figure(figsize=(18,9))\nplt.subplots_adjust(hspace=0.001) # no space between vertical charts\ngs = gridspec.GridSpec(2, 1, height_ratios=[2, 1]) # stretch main chart to be most of the width\n\nax1 = plt.subplot(gs[0])\nplt.title(r\"rust-fc State Tracking: Velocity And Velocity Integration Error\")\nplt.ylabel(r\"Velocity [m/s]\")\n\nplt.plot(rust_state_time, rust_vel, alpha=0.8, lw=1.5, label=\"rust-fc State Vector Velocity\")\nplt.plot(sim_time, sim_vel_up, 'k-', lw=1.3, alpha=0.6, label=\"JSBSim True Velocity\")\n\nplt.ylim([-60,400])\nticklabels = ax1.get_xticklabels()\nplt.setp(ticklabels, visible=False)\n\nax2 = plt.subplot(gs[1])\nplt.xlabel(r\"Run Time [s]\")\nplt.ylabel(r\"Integration Drift Error [m/s]\")\n\nplt.plot(sim_time, vel_diff)\n\nax1.legend(loc=1)\nplt.show()\n\nfig = plt.figure(figsize=(18,9))\nplt.subplots_adjust(hspace=0.001) # no space between vertical charts\ngs = gridspec.GridSpec(2, 1, height_ratios=[2, 1]) # stretch main chart to be most of the width\n\nax1 = plt.subplot(gs[0])\nplt.title(r\"rust-fc State Tracking: Altitude And ALtitude Integration Error\")\nplt.ylabel(r\"Altitude MSL [m]\")\n\nplt.plot(rust_state_time, rust_alt, alpha=0.8, lw=1.5, label=\"rust-fc State Vector Altitude\")\nplt.plot(sim_time, sim_alt, 'k-', lw=1.3, alpha=0.6, label=\"JSBSim True Velocity\")\n\nplt.ylim([1390, 7500])\nticklabels = ax1.get_xticklabels()\nplt.setp(ticklabels, visible=False)\n\nax2 = plt.subplot(gs[1])\nplt.xlabel(r\"Run Time [s]\")\nplt.ylabel(r\"Integration Drift Error [m]\")\n\nplt.plot(sim_time, alt_diff)\n\n#plt.xlim()\nax1.legend(loc=1)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gaufung/Data_Analytics_Learning_Note
DesignPattern/BridgePattern.ipynb
mit
[ "ๆกฅๆขๆจกๅผ(Bridge Pattern) \n1 ไปฃ็ \nๅœจไธ€ไธช็”ปๅ›พ็จ‹ๅบไธญ๏ผŒๅธธไผš่งๅˆฐ่ฟ™ๆ ท็š„ๆƒ…ๅ†ต๏ผšๆœ‰ไธ€ไบ›้ข„่ฎพ็š„ๅ›พๅฝข๏ผŒๅฆ‚็Ÿฉๅฝขใ€ๅœ†ๅฝข็ญ‰๏ผŒ่ฟ˜ๆœ‰ไธ€ไธชๅฏน่ฑก-็”ป็ฌ”๏ผŒ่ฐƒ่Š‚็”ป็ฌ”็š„็ฑปๅž‹๏ผˆๅฆ‚็”ป็ฌ”่ฟ˜ๆ˜ฏ็”ปๅˆท๏ผŒ่ฟ˜ๆ˜ฏๆฏ›็ฌ”ๆ•ˆๆžœ็ญ‰๏ผ‰ๅนถ่ฎพๅฎšๅ‚ๆ•ฐ๏ผˆๅฆ‚้ขœ่‰ฒใ€็บฟๅฎฝ็ญ‰๏ผ‰๏ผŒ้€‰ๅฎšๅ›พๅฝข๏ผŒๅฐฑๅฏไปฅๅœจ็”ปๅธƒไธŠ็”ปๅ‡บๆƒณ่ฆ็š„ๅ›พๅฝขไบ†ใ€‚่ฆๅฎž็ŽฐไปฅไธŠ้œ€ๆฑ‚๏ผŒๅ…ˆไปŽๆœ€ๆŠฝ่ฑก็š„ๅ…ƒ็ด ๅผ€ๅง‹่ฎพ่ฎก๏ผŒๅณๅฝข็Šถๅ’Œ็”ป็ฌ”๏ผˆๆš‚ๆ—ถๅฟฝ็•ฅ็”ปๅธƒ๏ผŒๅŒๆ—ถๅฟฝ็•ฅ็”ป็ฌ”ๅ‚ๆ•ฐ๏ผŒๅช่€ƒ่™‘็”ป็ฌ”็ฑปๅž‹๏ผ‰ใ€‚", "class Shape(object):\n name=\"\"\n param=\"\"\n def __init__(self,*param):\n pass\n def getName(self):\n return self.name\n def getParam(self):\n return self.name,self.param\n\nclass Pen(object):\n shape=\"\"\n type=\"\"\n def __init__(self,shape):\n self.shape=shape\n def draw(self):\n pass", "ๅฝข็Šถๅฏน่ฑกๅ’Œ็”ป็ฌ”ๅฏน่ฑกๆ˜ฏๆœ€ไธบๆŠฝ่ฑก็š„ๅฝขๅผใ€‚ๆŽฅไธ‹ๆฅ๏ผŒๆž„้€ ๅคšไธชๅฝข็Šถ๏ผŒๅฆ‚็Ÿฉๅฝขๅ’Œๅœ†ๅฝข๏ผš", "class Rectangle(Shape):\n def __init__(self,long,width):\n self.name=\"Rectangle\"\n self.param=\"Long:%s Width:%s\"%(long,width)\n print (\"Create a rectangle:%s\"%self.param)\nclass Circle(Shape):\n def __init__(self,radius):\n self.name=\"Circle\"\n self.param=\"Radius:%s\"%radius\n print (\"Create a circle:%s\"%self.param)", "็ดงๆŽฅ็€ๆ˜ฏๆž„้€ ๅคš็ง็”ป็ฌ”๏ผŒๅฆ‚ๆ™ฎ้€š็”ป็ฌ”ๅ’Œ็”ปๅˆท๏ผš", "class NormalPen(Pen):\n def __init__(self,shape):\n Pen.__init__(self,shape)\n self.type=\"Normal Line\"\n def draw(self):\n print (\"DRAWING %s:%s----PARAMS:%s\"%(self.type,self.shape.getName(),self.shape.getParam()))\nclass BrushPen(Pen):\n def __init__(self,shape):\n Pen.__init__(self,shape)\n self.type=\"Brush Line\"\n def draw(self):\n print (\"DRAWING %s:%s----PARAMS:%s\" % (self.type,self.shape.getName(), self.shape.getParam()))\n\nnormal_pen = NormalPen(Rectangle('20cm','10cm'))\nbrush_pen = BrushPen(Circle('15cm'))\nnormal_pen.draw()\nbrush_pen.draw()", "2 Discriptions\nๆกฅๆขๆจกๅผๅˆๅซๆกฅๆŽฅๆจกๅผ๏ผŒๅฎšไน‰ๅฆ‚ไธ‹๏ผšๅฐ†ๆŠฝ่ฑกไธŽๅฎž็Žฐ่งฃ่€ฆ๏ผˆๆณจๆ„ๆญคๅค„็š„ๆŠฝ่ฑกๅ’Œๅฎž็Žฐ๏ผŒๅนถ้žๆŠฝ่ฑก็ฑปๅ’Œๅฎž็Žฐ็ฑป็š„้‚ฃ็งๅ…ณ็ณป๏ผŒ่€Œๆ˜ฏไธ€็ง่ง’่‰ฒ็š„ๅ…ณ็ณป๏ผŒ่ฟ™้‡Œ้œ€่ฆๅฅฝๅฅฝๅŒบๅˆ†ไธ€ไธ‹๏ผ‰๏ผŒๅฏไปฅไฝฟๅ…ถ็‹ฌ็ซ‹ๅ˜ๅŒ–ใ€‚ๅœจๅฝขๅฆ‚ไธŠไพ‹ไธญ๏ผŒPenๅช่ดŸ่ดฃ็”ป๏ผŒไฝ†ๆฒกๆœ‰ๅฝข็Šถ๏ผŒๅฎƒ็ปˆ็ฉถๆ˜ฏไธ็Ÿฅ้“่ฆ็”ปไป€ไนˆ็š„๏ผŒๆ‰€ไปฅๆˆ‘ไปฌๆŠŠๅฎƒๅซๅšๆŠฝ่ฑกๅŒ–่ง’่‰ฒ๏ผ›่€ŒShapeๆ˜ฏๅ…ทไฝ“็š„ๅฝข็Šถ๏ผŒๆˆ‘ไปฌๆŠŠๅฎƒๅซๅšๅฎž็ŽฐๅŒ–่ง’่‰ฒใ€‚ๆŠฝ่ฑกๅŒ–่ง’่‰ฒๅ’Œๅฎž็ŽฐๅŒ–่ง’่‰ฒๆ˜ฏ่งฃ่€ฆ็š„๏ผŒ่ฟ™ไนŸๅฐฑๆ„ๅ‘ณ็€๏ผŒๆ‰€่ฐ“็š„ๆกฅ๏ผŒๅฐฑๆ˜ฏๆŠฝ่ฑกๅŒ–่ง’่‰ฒ็š„ๆŠฝ่ฑก็ฑปๅ’Œๅฎž็ŽฐๅŒ–่ง’่‰ฒ็š„ๆŠฝ่ฑก็ฑปไน‹้—ด็š„ๅผ•็”จๅ…ณ็ณปใ€‚\n3 Advantages\n\nๆŠฝ่ฑก่ง’่‰ฒไธŽๅฎž็Žฐ่ง’่‰ฒ็›ธๅˆ†็ฆป๏ผŒไบŒ่€…ๅฏไปฅ็‹ฌ็ซ‹่ฎพ่ฎก๏ผŒไธๅ—็บฆๆŸ๏ผ›\nๆ‰ฉๅฑ•ๆ€งๅผบ๏ผšๆŠฝ่ฑก่ง’่‰ฒๅ’Œๅฎž็Žฐ่ง’่‰ฒๅฏไปฅ้žๅธธ็ตๆดปๅœฐๆ‰ฉๅฑ•ใ€‚\n\n4 Usages\n\nไธ้€‚็”จ็ปงๆ‰ฟๆˆ–่€…ๅŽŸ็ปงๆ‰ฟๅ…ณ็ณปไธญๆŠฝ่ฑก็ฑปๅฏ่ƒฝ้ข‘็นๅ˜ๅŠจ็š„ๆƒ…ๅ†ต๏ผŒๅฏไปฅๅฐ†ๅŽŸ็ฑป่ฟ›่กŒๆ‹†ๅˆ†๏ผŒๆ‹†ๆˆๅฎž็ŽฐๅŒ–่ง’่‰ฒๅ’ŒๆŠฝ่ฑกๅŒ–่ง’่‰ฒใ€‚\n้‡็”จๆ€งๆฏ”่พƒๅคง็š„ๅœบๆ™ฏใ€‚ๆฏ”ๅฆ‚ๅผ€ๅ…ณๆŽงๅˆถ้€ป่พ‘็š„็จ‹ๅบ๏ผŒๅผ€ๅ…ณๅฐฑๆ˜ฏๆŠฝ่ฑกๅŒ–่ง’่‰ฒ๏ผŒๅผ€ๅ…ณ็š„ๅฝขๅผๆœ‰ๅพˆๅคš็ง๏ผŒ\n\n5 Disadvantages\n\nๅขžๅŠ ๅฏน็ณป็ปŸ็†่งฃ็š„้šพๅบฆ" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joelowj/Udacity-Projects
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
apache-2.0
[ "Language Translation\nIn this project, youโ€™re going to take a peek into the realm of neural network machine translation. Youโ€™ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n \n source_id_text = [[source_vocab_to_int.get(letter, source_vocab_to_int['<UNK>'])\n for letter in line.split(' ')] for line in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int.get(letter, target_vocab_to_int['<UNK>'])\n for letter in line.split(' ')] + [target_vocab_to_int['<EOS>']]\n for line in target_text.split('\\n')]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. 
If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n \n inputs = tf.placeholder(tf.int32, [None, None], name = 'input')\n targets = tf.placeholder(tf.int32, [None, None], name = 'targets')\n learning_rate = tf.placeholder(tf.float32, shape = None, name = 'learning_rate')\n keep_prob = tf.placeholder(tf.float32, shape = None, name = 'keep_prob')\n return inputs, targets, learning_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.", "def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for dencoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n \n GO_ID = target_vocab_to_int['<GO>']\n target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n concat_data = tf.fill([batch_size, 1], GO_ID)\n target_data = tf.concat([concat_data, target_data], 1)\n return target_data\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().", "def 
encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n \n encoding_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)]\n * num_layers)\n encoding_cell = tf.contrib.rnn.DropoutWrapper(encoding_cell, keep_prob)\n _, rnn_state = tf.nn.dynamic_rnn(encoding_cell, rnn_inputs, dtype = tf.float32) \n return rnn_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.", "def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n \n dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,\n train_decoder_fn,\n dec_embed_input,\n sequence_length,\n scope = decoding_scope)\n train_logits = output_fn(train_pred)\n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n \n dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, encoder_state, dec_embeddings,\n start_of_sequence_id, end_of_sequence_id,\n maximum_length - 1, vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, infer_decoder_fn, scope = decoding_scope)\n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform 
it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n \n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)\n \n with tf.variable_scope(\"decoding\") as decoding_scope:\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size,\n None, scope = decoding_scope)\n train_logits = decoding_layer_train(encoder_state, dec_cell,\n dec_embed_input, sequence_length,\n decoding_scope, output_fn, keep_prob)\n decoding_scope.reuse_variables()\n inference_logits = decoding_layer_infer(encoder_state, dec_cell,\n dec_embeddings, target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'], sequence_length - 1,\n vocab_size, decoding_scope, output_fn, keep_prob)\n return train_logits, inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).", "def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n \n 
enc_embed_input = tf.contrib.layers.embed_sequence(input_data,\n source_vocab_size,\n enc_embedding_size)\n encoder_state = encoding_layer(enc_embed_input, rnn_size,\n num_layers, keep_prob)\n dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n return decoding_layer(dec_embed_input, dec_embeddings, encoder_state,\n target_vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability", "# Number of Epochs\nepochs = 4\n# Batch Size\nbatch_size = 512\n# RNN Size\nrnn_size = 100\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 50\ndecoding_embedding_size = 50\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.9", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n \n sentence = sentence.lower()\n word_list = [vocab_to_int.get(word, vocab_to_int['<UNK>'])\n for word in sentence.split(' ')]\n return word_list\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vallis/libstempo
demo/libstempo-demo.ipynb
mit
[ "libstempo tutorial: basic functionality\nMichele Vallisneri, [email protected]; latest revision: 2016/10/12 for v2.3 revision", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nfrom __future__ import print_function\nimport sys, math, numpy as N, matplotlib.pyplot as P", "Load the libstempo Python extension. It requires a source installation of tempo2, as well as current Python and compiler, and the numpy and Cython packages.\n(Both Python 2.7 and 3.4 are supported; this means that in Python 2.7 all returned strings will be unicode strings, while in Python 3 all function arguments should be default unicode strings rather than bytes. This should work transparently, although there are limitations to what characters can be passed to tempo2; you should probably restrain yourself to ASCII.", "from libstempo.libstempo import *\n\nimport libstempo\n\nlibstempo.__path__\n\nimport libstempo as T\n\nT.data = T.__path__[0] + '/data/' # example files\n\nprint(\"Python version :\",sys.version.split()[0])\nprint(\"libstempo version:\",T.__version__)\nprint(\"Tempo2 version :\",T.libstempo.tempo2version())", "We load a single-pulsar object. Doing this will automatically run the tempo2 fit routine once.", "psr = T.tempopulsar(\n parfile=T.data + \"/J1909-3744_NANOGrav_dfg+12.par\", \n timfile=T.data + \"/J1909-3744_NANOGrav_dfg+12.tim\"\n)", "Let's start simple: what is the name of this pulsar? (You can change it, by the way.)", "psr.name", "Next, let's look at observations: there are psr.nobs of them; we can get numpy arrays of the site TOAs [in MJDs] with psr.stoas, of the TOA measurement errors [in microseconds] with psr.toaerrs, and of the measurement frequencies with psr.freqs. These arrays are views of the tempo2 data, so you can write to them (but you cannot currently change the number of observations).", "psr.nobs\n\npsr.stoas\n\npsr.toaerrs.min()\n\npsr.toaerrs\n\npsr.freqs", "By contrast, barycentric TOAs and frequencies are computed on the basis of current pulsar parameters, so you get them by calling psr methods (with parentheses), and you get a copy of the current values. Writing to it has no effect on the tempo2 data.", "psr.toas()\n\npsr.ssbfreqs()", "Residuals (in seconds) are returned by residuals(). The method takes a few options... I'll let its docstring help describe them. libstempo is fully documented in this way (try help(T.tempopulsar)).", "help(psr.residuals)\n\npsr.residuals().min()\n\npsr.residuals()", "We can plot TOAs vs. residuals, but we should first sort the arrays; otherwise the array follow the order in the tim file, which may not be chronological.", "# get sorted array of indices\ni = N.argsort(psr.toas())\n# use numpy fancy indexing to order residuals \nP.errorbar(psr.toas()[i],psr.residuals()[i],yerr=1e-6*psr.toaerrs[i],fmt='.',alpha=0.2);", "We can also see what flags have been set on the observations, and what their values are. The latter returns a numpy vector of strings. Flags are not currently writable.", "psr.flags()\n\npsr.flagvals('chanid')", "In fact, there's a commodity routine in libstempo.plot to plot residuals, taking flags into account.", "import libstempo.plot as LP\n\nLP.plotres(psr,group='pta',alpha=0.2)", "Timing-model parameters can be accessed by using psr as a Python dictionary. 
Each parameter is a special object with properties val, err (as well as fit, which is true is the parameter is currently being fitted, and set, which is true if the parameter was assigned a value).", "psr['RAJ'].val, psr['RAJ'].err, psr['RAJ'].fit, psr['RAJ'].set", "The names of all fitted parameters, of all set parameters, and of all parameters are returned by psr.pars(which='fit'). We show only the first few.", "fitpars = psr.pars() # defaults to fitted parameters\nsetpars = psr.pars(which='set')\nallpars = psr.pars(which='all')\n\nprint(len(fitpars),len(setpars),len(allpars))\nprint(fitpars[:10])", "The number of fitting parameters is psr.ndim.", "psr.ndim", "Changing the parameter values results in different residuals.", "# look +/- 3 sigmas around the current value\nx0, dx = psr['RAJ'].val, psr['RAJ'].err\nxs = x0 + dx * N.linspace(-3,3,20) \n\nres = []\nfor x in xs:\n psr['RAJ'].val = x\n res.append(psr.rms()/1e-6)\npsr['RAJ'].val = x0 # restore the original value\n\nP.plot(xs,res)", "We can also call a least-squares fitting routine, which will fit around the current parameter values, replacing them with their new best values. Individual parameters can be included or excluded in the fitting by setting their 'fit' field. (Note: as of version 2.3.0, libstempo provides its own fit, although it does call tempo2 to compute the design matrix.)", "psr['DM'].fit\n\npsr['DM'].fit = True\nprint(psr['DM'].val)\n\nret = psr.fit()\n\nprint(psr['DM'].val,psr['DM'].err)", "The fit returns a tuple consisting of best-fit vector, standard errors, covariance matrix, and linearized chisq. Note that these vectors and matrix are (ndim+1)- or (ndim+1)x(ndim+1)-dimensional, with the first row/column corresponding to a constant phase offset referenced to the first TOA (even if that point is not used).\nThe exact chisq can be recomputed by psr.chisq() (which evaluates N.sum(psr.residuals()**2 / (1e-12 * psr.toaerrs**2))). \nThe pulsar parameters can be read in bulk by calling psr.vals(which='fit'), which will default to fitted parameters, but can also be given 'all', 'set', or even a list of parameter names.", "fitvals = psr.vals()\nprint(fitvals)\n\npsr.vals(which=['RAJ','DECJ','PMRA'])", "To set parameter values in bulk, you give a first argument to vals. Or call it with a dictionary.", "psr.vals([5.1,-0.6],which=['RAJ','DECJ','PMRA'])\npsr.vals({'PMRA': -9.5})\n\nprint(psr.vals(which=['RAJ','DECJ','PMRA']))\n\n# restore original values\npsr.vals(fitvals)", "Be careful about loss of precision; tempopar.val is a numpy longdouble, so you should be careful about assigning it a regular Python double. By contrast, doing arithmetics with numpy longdoubles will preserve their nature and precision.\nYou can access errors in a similar way with psr.errs(...).\nIt's also possible to obtain the design matrix computed at the current parameter values, which has shape psr.nobs * (len(psr.pars) + 1), since a constant offset is always included among the fitting parameters.", "d = psr.designmatrix()", "These, for instance, are the derivatives with respect to RAJ and DECJ, evaluated at the TOAs.", "# we need the sorted-index array compute above\nP.plot(psr.toas()[i]/365.25,d[i,1],'-x'); \nP.plot(psr.toas()[i]/365.25,d[i,2],'-x')", "It's easy to save the current timing-model to a new par file. 
Omitting the argument will overwrite the original parfile.", "psr.savepar('./foo.par')\n\n!head foo.par", "Same for writing tim files.", "psr.savetim('./foo.tim')\n\n!head foo.tim", "With libstempo, it's easy to replicate some of the \"toasim\" plugin functionality. By subtracting the residuals from the site TOAs (psr.stoas, vs. the barycentered psr.toas) and refitting, we can create a \"perfect\" timing solution. (Note that 1 ns is roughly tempo2's claimed accuracy.)", "print(math.sqrt(N.mean(psr.residuals()**2)) / 1e-6)\n\npsr.stoas[:] -= psr.residuals() / 86400.0\nret = psr.fit(iters = 4)\n\nprint(math.sqrt(N.mean(psr.residuals()**2)) / 1e-6)", "Then we can add, e.g., homoskedastic white measurement noise at 100 ns (remember the tempo units: days for TOAs, us for errors, s for residuals).", "psr.stoas[:] += 0.1e-6 * N.random.randn(psr.nobs) / 86400.0\npsr.toaerrs[:] = 0.1\nret = psr.fit()\n\ni = N.argsort(psr.toas())\nP.errorbar(psr.toas()[i],psr.residuals()[i],yerr=1e-6*psr.toaerrs[i],fmt='.')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_read_and_write_raw_data.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading and writing raw files\nIn this example, we read a raw file. Plot a segment of MEG data\nrestricted to MEG channels. And save these data in a new\nraw file.", "# Author: Alexandre Gramfort <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfname = data_path + '/MEG/sample/sample_audvis_raw.fif'\n\nraw = mne.io.read_raw_fif(fname)\n\n# Set up pick list: MEG + STI 014 - bad channels\nwant_meg = True\nwant_eeg = False\nwant_stim = False\ninclude = ['STI 014']\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more\n\npicks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,\n include=include, exclude='bads')\n\nsome_picks = picks[:5] # take 5 first\nstart, stop = raw.time_as_index([0, 15]) # read the first 15s of data\ndata, times = raw[some_picks, start:(stop + 1)]\n\n# save 150s of MEG data in FIF file\nraw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,\n overwrite=True)", "Show MEG data", "raw.plot()" ]
[ "code", "markdown", "code", "markdown", "code" ]
empet/Plotly-plots
Chord-diagram.ipynb
gpl-3.0
[ "Plotly plot of chord diagrams\nCircular layout or Chord diagram is a method of visualizing data that describe relationships. It was intensively promoted through Circos, a software package in Perl that was initially designed for displaying genomic data.\nThis 2013 stackoverflow question, whether there is a Python library for plotting chord diagrams, was closed as being off topic.\nTwo years later I presented in the initial version of this Jupyter Notebook a method to generate a chord diagram via Python Plotly.\nThis Jupyter Notebook is an updated ersion to be run using Python 3.7+, and Plotly 4.+.\nWe illustrate the method of generating a chord diagram from data recorded in a square matrix. The rows and columns represent the same entities.\nSuppose that for a community of 5 friends on Facebook we record the number of comments posted by each member on other friends wall. The data table is given in the next cell:", "from IPython.display import Image\nImage(filename='Data/Data-table.png')", "The aim of our visualization is to illustrate the total number of posts by each community member, and the \nflows of posts between pairs of friends.", "import platform, plotly\nimport numpy as np\nfrom numpy import pi\n\nprint(f'Python version: {platform.python_version()}')\nprint(f'Plotly version: {plotly.__version__}')", "Define the array of data:", "matrix = np.array([[16, 3, 28, 0, 18],\n [18, 0, 12, 5, 29],\n [ 9, 11, 17, 27, 0], \n [19, 0, 31, 11, 12],\n [23, 17, 10, 0, 34]], dtype=int)\n\ndef check_data(data_matrix):\n L, M = data_matrix.shape\n if L != M:\n raise ValueError('Data array must have a (n,n) shape')\n return L\n\nL = check_data(matrix)", "A chord diagram encodes information in two graphical objects:\n - ideograms, represented by distinctly colored arcs of circles;\n - ribbons, that are planar shapes bounded by two quadratic Bezier curves and two arcs of circle, that can degenerate to a point;\nIdeograms\nSumming up the entries on each matrix row, one gets a value (in our example this value is equal to the number of posts by a community member).\nLet us denote by total_comments the total number of posts recorded in this community.\nTheoretically the interval [0, total_comments) is mapped linearly onto the unit circle, identified with the interval $[0,2\\pi)$. \nFor a better looking plot one proceeds as follows: starting from the angular position $0$, in counter-clockwise direction, one draws succesively, around the unit circle, two parallel arcs of length equal to a mapped row sum value, minus a fixed gap. Click the image below:", "from IPython.display import IFrame\n\nIFrame('https://plot.ly/~empet/12234/', width=377, height=420)", "Now we are defining functions that process data in order to get the ideogram ends.\nAs we pointed out, the unit circle is oriented counter-clockwise.\nIn order to get an arc of circle of end angular\ncoordinates $\\theta_0<\\theta_1$, we define the function, moduloAB, that resolves the case when an arc contains\nthe point of angular coordinate $0$ (for example $\\theta_0=2\\pi-\\pi/12$, $\\theta_1=\\pi/9$). The function corresponding to $a=-\\pi, b=\\pi$ allows to map the interval $[0,2\\pi)$ onto $[-\\pi, \\pi)$. 
Via this transformation we have:\n$\theta_0\mapsto \theta'_0=-\pi/12$, and \n$\theta_1\mapsto \theta'_1=\pi/9$,\nand now $\theta'_0<\theta'_1$.", "def moduloAB(x, a, b): #maps a real number onto the unit circle identified with \n #the interval [a,b), b-a=2*PI\n if a>= b:\n raise ValueError('Incorrect interval ends')\n y = (x-a) % (b-a)\n return y+b if y < 0 else y+a\n\ndef test_2PI(x):\n return 0 <= x < 2*pi", "Compute the row sums and the lengths of the corresponding ideograms:", "row_sum = [matrix[k,:].sum() for k in range(L)]\n\n#set the gap between two consecutive ideograms\ngap = 2*pi*0.005\nideogram_length = 2*pi * np.asarray(row_sum) / sum(row_sum) - gap*np.ones(L)", "The next function returns the list of end angular coordinates for each ideogram arc:", "def get_ideogram_ends(ideogram_len, gap):\n ideo_ends = []\n left = 0\n for k in range(len(ideogram_len)):\n right = left + ideogram_len[k]\n ideo_ends.append([left, right]) \n left = right + gap\n return ideo_ends \n\nideo_ends = get_ideogram_ends(ideogram_length, gap)", "The function make_ideogram_arc returns equally spaced points on an ideogram arc, expressed as complex\nnumbers in polar form:", "def make_ideogram_arc(R, phi, a=50):\n # R is the circle radius\n # phi is the list of angle coordinates of the arc ends\n # a is a parameter that controls the number of points to be evaluated on an arc\n if not test_2PI(phi[0]) or not test_2PI(phi[1]):\n phi = [moduloAB(t, 0, 2*pi) for t in phi]\n length = (phi[1]-phi[0]) % (2*pi) \n nr = 5 if length <= pi/4 else int(a*length/pi)\n\n if phi[0] < phi[1]: \n theta = np.linspace(phi[0], phi[1], nr)\n else:\n phi = [moduloAB(t, -pi, pi) for t in phi]\n theta = np.linspace(phi[0], phi[1], nr)\n return R * np.exp(1j*theta) ", "The real and imaginary parts of these complex numbers will be used to define the ideogram as a Plotly\nshape bounded by an SVG path.", "make_ideogram_arc(1.3, [11*pi/6, pi/17])", "Set the ideogram labels and colors:", "labels=['Emma', 'Isabella', 'Ava', 'Olivia', 'Sophia']\nideo_colors=['rgba(244, 109, 67, 0.75)',\n 'rgba(253, 174, 97, 0.75)',\n 'rgba(254, 224, 139, 0.75)',\n 'rgba(217, 239, 139, 0.75)',\n 'rgba(166, 217, 106, 0.75)']#brewer colors with alpha set on 0.75", "Ribbons in a chord diagram\nWhile ideograms illustrate how many comments each member of the Facebook community posted, ribbons\ngive comparative information on the flows of comments from one friend to another.\nTo illustrate this flow we map data onto the unit circle. More precisely, for each matrix row, $k$, the application:\nt$\mapsto$ t*ideogram_length[k]/row_sum[k]\nmaps the interval [0, row_sum[k]] onto\nthe interval [0, ideogram_length[k]]. 
Hence each entry matrix[k][j] in the $k^{th}$ row is mapped to matrix[k][j] * ideogram_length[k] / row_sum[k].\nThe function map_data maps all matrix entries to the corresponding values in the intervals associated to ideograms:", "def map_data(data_matrix, row_value, ideogram_length):\n mapped = np.zeros(data_matrix.shape)\n for j in range(L):\n mapped[:, j] = ideogram_length * data_matrix[:,j] / row_value\n return mapped \n\nmapped_data = map_data(matrix, row_sum, ideogram_length)\nmapped_data", "To each pair of values (mapped_data[k][j], mapped_data[j][k]), $k<=j$, one associates a ribbon, that is a curvilinear filled rectangle (that can be degenerate), having as opposite sides two subarcs of the $k^{th}$ ideogram, respectively $j^{th}$ ideogram, and two arcs of quadratic B&eacute;zier curves.\n\nHere we illustrate the ribbons associated to the pairs (mapped_data[0][j], mapped_data[j][0]), $j=\overline{0,4}$,\nwhich illustrate the flow of comments between Emma and all other friends, and herself:", "IFrame('https://plot.ly/~empet/12519/', width=420, height=420)", "For a better-looking chord diagram, \nthe Circos documentation recommends sorting each row of the mapped_data in increasing order. \n\nThe array idx_sort, defined below, has on each row the indices that sort the corresponding row in mapped_data:", "idx_sort = np.argsort(mapped_data, axis=1)\nidx_sort", "In the following we call ribbon ends the lists l=[l[0], l[1]], r=[r[0], r[1]] having as elements the angular coordinates\nof the ends of arcs that are opposite sides in a ribbon. These arcs are sub-arcs in the internal boundaries of\nthe ideograms, connected by the ribbon\n(see the image above).\n\nCompute the ribbon ends and store them as tuples \nin a list of lists ($L\times L$):", "def make_ribbon_ends(mapped_data, ideo_ends, idx_sort):\n L = mapped_data.shape[0]\n ribbon_boundary = np.zeros((L,L+1))\n for k in range(L):\n start = ideo_ends[k][0]\n ribbon_boundary[k][0] = start\n for j in range(1,L+1):\n J = idx_sort[k][j-1]\n ribbon_boundary[k][j] = start + mapped_data[k][J]\n start = ribbon_boundary[k][j]\n return [[(ribbon_boundary[k][j], ribbon_boundary[k][j+1] ) for j in range(L)] for k in range(L)] \n\nribbon_ends = make_ribbon_ends(mapped_data, ideo_ends, idx_sort)\nprint ('ribbon ends starting from the ideogram[2]\\n', ribbon_ends[2])", "We note that ribbon_ends[k][j] corresponds to mapped_data[k][idx_sort[k][j]], i.e. the length of the arc with ends in ribbon_ends[k][j] is equal to mapped_data[k][idx_sort[k][j]].\nNow we define a few functions that compute the control points for B&eacute;zier ribbon sides.\nThe function control_pts returns the Cartesian coordinates of the control points, $b_0, b_1, b_2$, supposed as being initially located on the unit circle, and thus defined only by their angular coordinate. The angular coordinate\nof the point $b_1$ is the mean of the angular coordinates of the points $b_0, b_2$.\nSince for a B&eacute;zier ribbon side only $b_0, b_2$ are placed on the unit circle, one gives radius as a parameter that controls the position of $b_1$. 
radius is the distance from $b_1$ to the circle center.", "def control_pts(angle, radius):\n #angle is a 3-list containing angular coordinates of the control points b0, b1, b2\n #radius is the distance from b1 to the origin O(0,0) \n\n if len(angle) != 3:\n raise ValueError('angle must have len = 3')\n b_cplx = np.array([np.exp(1j*angle[k]) for k in range(3)])\n b_cplx[1] = radius * b_cplx[1]\n return list(zip(b_cplx.real, b_cplx.imag))\n\ndef ctrl_rib_chords(l, r, radius):\n # this function returns a 2-list containing the control polygons of the two quadratic Bezier\n #curves that are opposite sides in a ribbon\n #l (r) is the list of angular variables of the ribbon arc ends defining \n #the ribbon starting (ending) arc \n # radius is a common parameter for both control polygons\n if len(l) != 2 or len(r) != 2:\n raise ValueError('the arc ends must be elements in a list of len 2')\n return [control_pts([l[j], (l[j]+r[j])/2, r[j]], radius) for j in range(2)]", "Each ribbon is colored with the color of one of the two ideograms it connects. \nWe define an L-list of L-lists of colors for ribbons. Denote it by ribbon_color.\nribbon_color[k][j] is the Plotly color string for the ribbon associated to mapped_data[k][j] and mapped_data[j][k], i.e. the ribbon connecting two subarcs in the $k^{th}$, respectively, $j^{th}$ ideogram. Hence this structure is symmetric.\nInitially we define:", "ribbon_color = [L * [ideo_colors[k]] for k in range(L)]", "and then we change the color in a few positions.\nFor our example we are performing the following color change:", "ribbon_color[0][4]=ideo_colors[4]\nribbon_color[1][2]=ideo_colors[2]\nribbon_color[2][3]=ideo_colors[3]\nribbon_color[2][4]=ideo_colors[4]", "The symmetric locations are not modified, because we do not access \nribbon_color[k][j], $k>j$, when drawing the ribbons.\nFunctions that return the Plotly SVG paths that are ribbon boundaries:", "def make_q_bezier(b):# defines the Plotly SVG path for a quadratic Bezier curve defined by the \n #list of its control points\n if len(b) != 3:\n raise ValueError('control polygon must have 3 points')\n A, B, C = b \n return f'M {A[0]}, {A[1]} Q {B[0]}, {B[1]} {C[0]}, {C[1]}'\n\nb=[(1,4), (-0.5, 2.35), (3.745, 1.47)]\n\nmake_q_bezier(b)", "make_ribbon_arc returns the Plotly SVG path corresponding to an arc represented by its end angular coordinates, theta0, theta1.", "def make_ribbon_arc(theta0, theta1):\n\n if test_2PI(theta0) and test_2PI(theta1):\n if theta0 < theta1:\n theta0 = moduloAB(theta0, -pi, pi)\n theta1 = moduloAB(theta1, -pi, pi)\n if theta0 *theta1 > 0:\n raise ValueError('incorrect angle coordinates for ribbon')\n \n nr = int(40 * (theta0 - theta1) / pi)\n if nr <= 2: nr = 3\n theta = np.linspace(theta0, theta1, nr)\n pts=np.exp(1j*theta)# points in polar complex form, on the given arc\n \n string_arc = ''\n for k in range(len(theta)):\n string_arc += f'L {pts.real[k]}, {pts.imag[k]} '\n return string_arc \n else:\n raise ValueError('the angle coordinates for an arc side of a ribbon must be in [0, 2*pi]')\n\nmake_ribbon_arc(np.pi/3, np.pi/6)", "Finally we are ready to define the data and layout for the Plotly plot of the chord diagram.", "import plotly.graph_objects as go\n\ndef plot_layout(title, plot_size):\n\n return dict(title=title,\n xaxis=dict(visible=False),\n yaxis=dict(visible=False),\n showlegend=False,\n width=plot_size,\n height=plot_size,\n margin=dict(t=25, b=25, l=25, r=25),\n hovermode='closest',\n ) ", "Function that returns the Plotly shape of an ideogram:", "def 
make_ideo_shape(path, line_color, fill_color):\n #line_color is the color of the shape boundary\n #fill_color is the color assigned to an ideogram\n \n return dict(line=dict(color=line_color, \n width=0.45),\n path=path,\n layer='below',\n type='path',\n fillcolor=fill_color) ", "We generate two types of ribbons: a ribbon connecting subarcs in two distinct ideograms, and\na ribbon from one ideogram to itself (it corresponds to mapped_data[k][k], i.e. it gives the flow of comments\nfrom a community member to herself).", "def make_ribbon(l, r, line_color, fill_color, radius=0.2):\n #l=[l[0], l[1]], r=[r[0], r[1]] represent the opposite arcs in the ribbon \n #line_color is the color of the shape boundary\n #fill_color is the fill color for the ribbon shape\n \n poligon = ctrl_rib_chords(l,r, radius)\n b, c = poligon \n \n return dict(line=dict(color=line_color, \n width=0.5),\n path=make_q_bezier(b) + make_ribbon_arc(r[0], r[1])+\n make_q_bezier(c[::-1]) + make_ribbon_arc(l[1], l[0]),\n type='path',\n layer='below',\n fillcolor = fill_color, \n )\n\ndef make_self_rel(l, line_color, fill_color, radius):\n #radius is the radius of Bezier control point b_1\n \n b = control_pts([l[0], (l[0]+l[1])/2, l[1]], radius) \n \n return dict(line = dict(color=line_color, \n width=0.5),\n path = make_q_bezier(b)+make_ribbon_arc(l[1], l[0]),\n type = 'path',\n layer = 'below',\n fillcolor = fill_color \n )\n\ndef invPerm(perm):\n # function that returns the inverse of a permutation, perm\n inv = [0] * len(perm)\n for i, s in enumerate(perm):\n inv[s] = i\n return inv\n\nlayout=plot_layout('Chord diagram', 400) ", "Now let us explain the key point of associating ribbons to the right data:\nFrom the definition of ribbon_ends we notice that ribbon_ends[k][j] corresponds to the data stored in\nmatrix[k][sigma[j]], where sigma is the permutation of the indices $0, 1, \ldots L-1$ that sorts row k in mapped_data.\nIf sigma_inv is the inverse permutation of sigma, we get that matrix[k][j] corresponds to\nribbon_ends[k][sigma_inv[j]].\nribbon_info is a list of Plotly scatter traces that set the information displayed when hovering the mouse over the ribbon ends.\nSet the radius of the B&eacute;zier control point, $b_1$, for each ribbon associated to a diagonal data entry:", "radii_sribb = [0.4, 0.30, 0.35, 0.39, 0.12]# these values are set after a few trials \n\nribbon_info = []\nshapes = []\nfor k in range(L):\n \n sigma = idx_sort[k]\n sigma_inv = invPerm(sigma)\n for j in range(k, L):\n if matrix[k][j] == 0 and matrix[j][k]==0: continue\n eta = idx_sort[j]\n eta_inv = invPerm(eta)\n l = ribbon_ends[k][sigma_inv[j]] \n \n if j == k:\n shapes.append(make_self_rel(l, 'rgb(175,175,175)' ,\n ideo_colors[k], radius=radii_sribb[k])) \n z = 0.9*np.exp(1j*(l[0]+l[1])/2)\n \n #the text below will be displayed when hovering the mouse over the ribbon\n text = f'{labels[k]} commented on {int(matrix[k][k])} of her own Fb posts'\n\n ribbon_info.append(go.Scatter(x=[z.real],\n y=[z.imag],\n mode='markers',\n marker=dict(size=0.5, color=ideo_colors[k]),\n text=text,\n hoverinfo='text'\n )\n )\n else:\n r = ribbon_ends[j][eta_inv[k]]\n zi = 0.9 * np.exp(1j*(l[0]+l[1])/2)\n zf = 0.9 * np.exp(1j*(r[0]+r[1])/2)\n #texti and textf are the strings that will be displayed when hovering the mouse \n #over the two ribbon ends\n texti = f'{labels[k]} commented on {int(matrix[k][j])} of {labels[j]} Fb posts'\n textf = f'{labels[j]} commented on {int(matrix[j][k])} of {labels[k]} Fb posts'\n \n ribbon_info.append(go.Scatter(x=[zi.real],\n 
y=[zi.imag],\n mode='markers',\n marker=dict(size=0.5, color=ribbon_color[k][j]),\n text=texti,\n hoverinfo='text'\n )\n ),\n ribbon_info.append(go.Scatter(x=[zf.real],\n y=[zf.imag],\n mode='markers',\n marker=dict(size=0.5, color=ribbon_color[k][j]),\n text=textf,\n hoverinfo='text'\n )\n )\n r = (r[1], r[0]) # IMPORTANT!!! Reverse these arc ends because otherwise you get\n # a twisted ribbon\n #append the ribbon shape\n shapes.append(make_ribbon(l, r, 'rgb(175,175,175)' , ribbon_color[k][j]))\n \n \n ", "ideograms is a list of dicts that set the position, and color of ideograms, as well as the information associated to each ideogram.", "ideograms = []\nfor k in range(len(ideo_ends)):\n z = make_ideogram_arc(1.1, ideo_ends[k])\n zi = make_ideogram_arc(1.0, ideo_ends[k])\n m = len(z)\n n = len(zi)\n ideograms.append(go.Scatter(x=z.real,\n y=z.imag,\n mode='lines',\n line=dict(color=ideo_colors[k], shape='spline', width=0.25),\n text=f'{labels[k]} <br>{int(row_sum[k])} comments', \n hoverinfo='text'\n )\n )\n \n \n path = 'M '\n for s in range(m):\n path += f'{z.real[s]}, {z.imag[s]} L '\n \n Zi = np.array(zi.tolist()[::-1]) \n\n for s in range(m):\n path += f'{Zi.real[s]}, {Zi.imag[s]} L '\n path += f'{z.real[0]} ,{z.imag[0]}' \n \n shapes.append(make_ideo_shape(path,'rgb(150,150,150)' , ideo_colors[k]))\n \n\ndata = ideograms + ribbon_info\nlayout['shapes'] = shapes\nfig = go.Figure(data=data, layout=layout)\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, iplot, plot\ninit_notebook_mode(connected=True)\niplot(fig)", "Here is a chord diagram associated to a community of 8 Facebook friends:", "IFrame('https://plot.ly/~empet/12148/chord-diagram-of-facebook-comments-in-a-community/',\n width=500, height=500)\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
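The chord-diagram construction above relies on a few invariants: each row of mapped_data must exactly fill its ideogram arc, and the ideogram arcs plus the gaps must cover the whole circle. A hedged sketch (an addition, not a cell of the notebook) of checking these invariants, assuming the notebook's variables (row_sum, ideogram_length, gap, L, ideo_ends, mapped_data) are in scope:

```python
import numpy as np

# Each mapped row should sum to the length of its ideogram arc.
assert np.allclose(mapped_data.sum(axis=1), ideogram_length)

# The L ideogram arcs plus the L gaps should cover the unit circle.
assert np.isclose(sum(ideogram_length) + L * gap, 2 * np.pi)

# Each pair of ideogram ends should span exactly its ideogram length.
spans = [end - start for start, end in ideo_ends]
assert np.allclose(spans, ideogram_length)
```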
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
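The notebook above only emits quadratic Bézier curves as SVG path strings via make_q_bezier; the following is a hedged sketch (the function name and sampling count are assumptions, not part of the notebook) of evaluating such a curve numerically from its three control points, e.g. to inspect a ribbon boundary:

```python
import numpy as np

def eval_quadratic_bezier(b, n=30):
    """Sample B(t) = (1-t)^2*b0 + 2*t*(1-t)*b1 + t^2*b2 at n parameter values.

    `b` is a list of three (x, y) control points, e.g. the output of the
    notebook's control_pts(); returns an (n, 2) array of curve points.
    """
    b = np.asarray(b, dtype=float)       # shape (3, 2)
    t = np.linspace(0, 1, n)[:, None]    # parameter values as a column
    return (1 - t) ** 2 * b[0] + 2 * t * (1 - t) * b[1] + t ** 2 * b[2]

# The control polygon used earlier with make_q_bezier:
pts = eval_quadratic_bezier([(1, 4), (-0.5, 2.35), (3.745, 1.47)])
print(pts[0], pts[-1])  # the curve's endpoints coincide with b0 and b2
```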
Kaggle/learntools
notebooks/ml_intermediate/raw/tut3.ipynb
apache-2.0
[ "In this tutorial, you will learn what a categorical variable is, along with three approaches for handling this type of data.\nIntroduction\nA categorical variable takes only a limited number of values. \n\nConsider a survey that asks how often you eat breakfast and provides four options: \"Never\", \"Rarely\", \"Most days\", or \"Every day\". In this case, the data is categorical, because responses fall into a fixed set of categories.\nIf people responded to a survey about what brand of car they owned, the responses would fall into categories like \"Honda\", \"Toyota\", and \"Ford\". In this case, the data is also categorical.\n\nYou will get an error if you try to plug these variables into most machine learning models in Python without preprocessing them first. In this tutorial, we'll compare three approaches that you can use to prepare your categorical data.\nThree Approaches\n1) Drop Categorical Variables\nThe easiest approach to dealing with categorical variables is to simply remove them from the dataset. This approach will only work well if the columns did not contain useful information.\n2) Ordinal Encoding\nOrdinal encoding assigns each unique value to a different integer.\n\nThis approach assumes an ordering of the categories: \"Never\" (0) < \"Rarely\" (1) < \"Most days\" (2) < \"Every day\" (3).\nThis assumption makes sense in this example, because there is an indisputable ranking to the categories. Not all categorical variables have a clear ordering in the values, but we refer to those that do as ordinal variables. For tree-based models (like decision trees and random forests), you can expect ordinal encoding to work well with ordinal variables. \n3) One-Hot Encoding\nOne-hot encoding creates new columns indicating the presence (or absence) of each possible value in the original data. To understand this, we'll work through an example.\n\nIn the original dataset, \"Color\" is a categorical variable with three categories: \"Red\", \"Yellow\", and \"Green\". The corresponding one-hot encoding contains one column for each possible value, and one row for each row in the original dataset. Wherever the original value was \"Red\", we put a 1 in the \"Red\" column; if the original value was \"Yellow\", we put a 1 in the \"Yellow\" column, and so on. \nIn contrast to ordinal encoding, one-hot encoding does not assume an ordering of the categories. Thus, you can expect this approach to work particularly well if there is no clear ordering in the categorical data (e.g., \"Red\" is neither more nor less than \"Yellow\"). We refer to categorical variables without an intrinsic ranking as nominal variables.\nOne-hot encoding generally does not perform well if the categorical variable takes on a large number of values (i.e., you generally won't use it for variables taking more than 15 different values). \nExample\nAs in the previous tutorial, we will work with the Melbourne Housing dataset. \nWe won't focus on the data loading step. 
Instead, you can imagine you are at a point where you already have the training and validation data in X_train, X_valid, y_train, and y_valid.", "#$HIDE$\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Read the data\ndata = pd.read_csv('../input/melbourne-housing-snapshot/melb_data.csv')\n\n# Separate target from predictors\ny = data.Price\nX = data.drop(['Price'], axis=1)\n\n# Divide data into training and validation subsets\nX_train_full, X_valid_full, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,\n random_state=0)\n\n# Drop columns with missing values (simplest approach)\ncols_with_missing = [col for col in X_train_full.columns if X_train_full[col].isnull().any()] \nX_train_full.drop(cols_with_missing, axis=1, inplace=True)\nX_valid_full.drop(cols_with_missing, axis=1, inplace=True)\n\n# \"Cardinality\" means the number of unique values in a column\n# Select categorical columns with relatively low cardinality (convenient but arbitrary)\nlow_cardinality_cols = [cname for cname in X_train_full.columns if X_train_full[cname].nunique() < 10 and \n X_train_full[cname].dtype == \"object\"]\n\n# Select numerical columns\nnumerical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].dtype in ['int64', 'float64']]\n\n# Keep selected columns only\nmy_cols = low_cardinality_cols + numerical_cols\nX_train = X_train_full[my_cols].copy()\nX_valid = X_valid_full[my_cols].copy()", "We take a peek at the training data with the head() method below.", "X_train.head()", "Next, we obtain a list of all of the categorical variables in the training data.\nWe do this by checking the data type (or dtype) of each column. The object dtype indicates a column has text (there are other things it could theoretically be, but that's unimportant for our purposes). For this dataset, the columns with text indicate categorical variables.", "# Get list of categorical variables\ns = (X_train.dtypes == 'object')\nobject_cols = list(s[s].index)\n\nprint(\"Categorical variables:\")\nprint(object_cols)", "Define Function to Measure Quality of Each Approach\nWe define a function score_dataset() to compare the three different approaches to dealing with categorical variables. This function reports the mean absolute error (MAE) from a random forest model. In general, we want the MAE to be as low as possible!", "#$HIDE$\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import mean_absolute_error\n\n# Function for comparing different approaches\ndef score_dataset(X_train, X_valid, y_train, y_valid):\n model = RandomForestRegressor(n_estimators=100, random_state=0)\n model.fit(X_train, y_train)\n preds = model.predict(X_valid)\n return mean_absolute_error(y_valid, preds)", "Score from Approach 1 (Drop Categorical Variables)\nWe drop the object columns with the select_dtypes() method.", "drop_X_train = X_train.select_dtypes(exclude=['object'])\ndrop_X_valid = X_valid.select_dtypes(exclude=['object'])\n\nprint(\"MAE from Approach 1 (Drop categorical variables):\")\nprint(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))", "Score from Approach 2 (Ordinal Encoding)\nScikit-learn has a OrdinalEncoder class that can be used to get ordinal encodings. 
We fit the encoder on the training data and use it to transform the categorical columns in both the training and validation data.", "from sklearn.preprocessing import OrdinalEncoder\n\n# Make copy to avoid changing original data \nlabel_X_train = X_train.copy()\nlabel_X_valid = X_valid.copy()\n\n# Apply ordinal encoder to each column with categorical data\nordinal_encoder = OrdinalEncoder()\nlabel_X_train[object_cols] = ordinal_encoder.fit_transform(X_train[object_cols])\nlabel_X_valid[object_cols] = ordinal_encoder.transform(X_valid[object_cols])\n\nprint(\"MAE from Approach 2 (Ordinal Encoding):\") \nprint(score_dataset(label_X_train, label_X_valid, y_train, y_valid))", "In the code cell above, for each column, each unique value is arbitrarily assigned to a different integer. This is a common approach that is simpler than providing custom labels; however, we can expect an additional boost in performance if we provide better-informed labels for all ordinal variables.\nScore from Approach 3 (One-Hot Encoding)\nWe use the OneHotEncoder class from scikit-learn to get one-hot encodings. There are a number of parameters that can be used to customize its behavior.\n- We set handle_unknown='ignore' to avoid errors when the validation data contains classes that aren't represented in the training data, and\n- setting sparse=False ensures that the encoded columns are returned as a numpy array (instead of a sparse matrix).\nTo use the encoder, we supply only the categorical columns that we want to be one-hot encoded. For instance, to encode the training data, we supply X_train[object_cols]. (object_cols in the code cell below is a list of the column names with categorical data, and so X_train[object_cols] contains all of the categorical data in the training set.)", "from sklearn.preprocessing import OneHotEncoder\n\n# Apply one-hot encoder to each column with categorical data\nOH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)\nOH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))\nOH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))\n\n# One-hot encoding removed index; put it back\nOH_cols_train.index = X_train.index\nOH_cols_valid.index = X_valid.index\n\n# Remove categorical columns (will replace with one-hot encoding)\nnum_X_train = X_train.drop(object_cols, axis=1)\nnum_X_valid = X_valid.drop(object_cols, axis=1)\n\n# Add one-hot encoded columns to numerical features\nOH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)\nOH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)\n\nprint(\"MAE from Approach 3 (One-Hot Encoding):\") \nprint(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))", "Which approach is best?\nIn this case, dropping the categorical columns (Approach 1) performed worst, since it had the highest MAE score. As for the other two approaches, since the returned MAE scores are so close in value, there doesn't appear to be any meaningful benefit to one over the other.\nIn general, one-hot encoding (Approach 3) will typically perform best, and dropping the categorical columns (Approach 1) typically performs worst, but it varies on a case-by-case basis. \nConclusion\nThe world is filled with categorical data. You will be a much more effective data scientist if you know how to use this common data type!\nYour Turn\nPut your new skills to work in the next exercise!" ]
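The tutorial above uses scikit-learn's OneHotEncoder. As a hedged aside (not part of the lesson), pandas' get_dummies gives a shortcut for the same idea, provided the training and validation frames are re-aligned afterwards so that they end up with identical columns:

```python
import pandas as pd

# Assumes X_train and X_valid as prepared in the tutorial above.
# Quick cardinality check of the categorical columns:
print(X_train.select_dtypes(include='object').nunique())

# One-hot encode with pandas; align() pads categories missing from one split with 0s.
OH_X_train = pd.get_dummies(X_train)
OH_X_valid = pd.get_dummies(X_valid)
OH_X_train, OH_X_valid = OH_X_train.align(OH_X_valid, join='left', axis=1, fill_value=0)
```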
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
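Another way to organize the encoding step from the tutorial above, sketched here under the assumption that X_train, X_valid, y_train, y_valid and object_cols are defined as in the notebook, is to bundle the one-hot encoder and the model into a single scikit-learn pipeline so that preprocessing is applied automatically during fitting and prediction:

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# One-hot encode the categorical columns, pass the numerical columns through unchanged.
preprocessor = ColumnTransformer(
    transformers=[('cat', OneHotEncoder(handle_unknown='ignore'), object_cols)],
    remainder='passthrough')

pipeline = Pipeline(steps=[
    ('preprocess', preprocessor),
    ('model', RandomForestRegressor(n_estimators=100, random_state=0)),
])
pipeline.fit(X_train, y_train)
print(mean_absolute_error(y_valid, pipeline.predict(X_valid)))
```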
radu941208/DeepLearning
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
mit
[ "Building your Deep Neural Network: Step by Step\nWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!\n\nIn this notebook, you will implement all the functions required to build a deep neural network.\nIn the next assignment, you will use these functions to build a deep neural network for image classification.\n\nAfter this assignment you will be able to:\n- Use non-linear units like ReLU to improve your model\n- Build a deeper neural network (with more than 1 hidden layer)\n- Implement an easy-to-use neural network class\nNotation:\n- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example.\n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).\nLet's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the main package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- dnn_utils provides some necessary functions for this notebook.\n- testCases provides some test cases to assess the correctness of your functions\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.", "import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom testCases_v3 import *\nfrom dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)", "2 - Outline of the Assignment\nTo build your neural network, you will be implementing several \"helper functions\". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:\n\nInitialize the parameters for a two-layer network and for an $L$-layer neural network.\nImplement the forward propagation module (shown in purple in the figure below).\nComplete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\nWe give you the ACTIVATION function (relu/sigmoid).\nCombine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\nStack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). 
This gives you a new L_model_forward function.\n\n\nCompute the loss.\nImplement the backward propagation module (denoted in red in the figure below).\nComplete the LINEAR part of a layer's backward propagation step.\nWe give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) \nCombine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\nStack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\n\n\nFinally update the parameters.\n\n<img src=\"images/final outline.png\" style=\"width:800px;height:500px;\">\n<caption><center> Figure 1</center></caption><br>\nNote that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. \n3 - Initialization\nYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.\n3.1 - 2-layer Neural Network\nExercise: Create and initialize the parameters of the 2-layer neural network.\nInstructions:\n- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID. \n- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.\n- Use zero initialization for the biases. Use np.zeros(shape).", "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n parameters -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (โ‰ˆ 4 lines of code)\n W1 = np.random.randn(n_h, n_x)*0.01\n b1 = np.zeros(shape=(n_h, 1))\n W2 = np.random.randn(n_y, n_h)*0.01\n b2 = np.zeros(shape=(n_y, 1))\n ### END CODE HERE ###\n \n assert(W1.shape == (n_h, n_x))\n assert(b1.shape == (n_h, 1))\n assert(W2.shape == (n_y, n_h))\n assert(b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters \n\nparameters = initialize_parameters(3,2,1)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td> [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969 0.00865408 -0.02301539]] </td> \n </tr>\n\n <tr>\n <td> **b1**</td>\n <td>[[ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[ 0.01744812 -0.00761207]]</td>\n </tr>\n\n <tr>\n <td> **b2** </td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n3.2 - L-layer Neural Network\nThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. 
Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n<table style=\"width:100%\">\n\n\n <tr>\n <td> </td> \n <td> **Shape of W** </td> \n <td> **Shape of b** </td> \n <td> **Activation** </td>\n <td> **Shape of Activation** </td> \n <tr>\n\n <tr>\n <td> **Layer 1** </td> \n <td> $(n^{[1]},12288)$ </td> \n <td> $(n^{[1]},1)$ </td> \n <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> \n\n <td> $(n^{[1]},209)$ </td> \n <tr>\n\n <tr>\n <td> **Layer 2** </td> \n <td> $(n^{[2]}, n^{[1]})$ </td> \n <td> $(n^{[2]},1)$ </td> \n <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> \n <td> $(n^{[2]}, 209)$ </td> \n <tr>\n\n <tr>\n <td> $\vdots$ </td> \n <td> $\vdots$ </td> \n <td> $\vdots$ </td> \n <td> $\vdots$</td> \n <td> $\vdots$ </td> \n <tr>\n\n <tr>\n <td> **Layer L-1** </td> \n <td> $(n^{[L-1]}, n^{[L-2]})$ </td> \n <td> $(n^{[L-1]}, 1)$ </td> \n <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> \n <td> $(n^{[L-1]}, 209)$ </td> \n <tr>\n\n\n <tr>\n <td> **Layer L** </td> \n <td> $(n^{[L]}, n^{[L-1]})$ </td> \n <td> $(n^{[L]}, 1)$ </td>\n <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>\n <td> $(n^{[L]}, 209)$ </td> \n <tr>\n\n</table>\n\nRemember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: \n$$ W = \begin{bmatrix}\n j & k & l\\\n m & n & o \\\n p & q & r \n\end{bmatrix}\;\;\; X = \begin{bmatrix}\n a & b & c\\\n d & e & f \\\n g & h & i \n\end{bmatrix} \;\;\; b =\begin{bmatrix}\n s \\\n t \\\n u\n\end{bmatrix}\tag{2}$$\nThen $WX + b$ will be:\n$$ WX + b = \begin{bmatrix}\n (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\\n (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\\n (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\n\end{bmatrix}\tag{3} $$\nExercise: Implement initialization for an L-layer Neural Network. \nInstructions:\n- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.\n- Use zeros initialization for the biases. Use np.zeros(shape).\n- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the \"Planar Data classification model\" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers! \n- Here is the implementation for $L=1$ (one layer neural network). 
It should inspire you to implement the general case (L-layer neural network).\npython\n if L == 1:\n parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))", "# GRADED FUNCTION: initialize_parameters_deep\n\ndef initialize_parameters_deep(layer_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the dimensions of each layer in our network\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n bl -- bias vector of shape (layer_dims[l], 1)\n \"\"\"\n \n np.random.seed(3)\n parameters = {}\n L = len(layer_dims) # number of layers in the network\n\n for l in range(1, L):\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01\n parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))\n ### END CODE HERE ###\n \n assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))\n assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n\n \n return parameters\n\nparameters = initialize_parameters_deep([5,4,3])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> \n </tr>\n\n <tr>\n <td>**b1** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2** </td>\n <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]\n [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> \n </tr>\n\n <tr>\n <td>**b2** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n</table>\n\n4 - Forward propagation module\n4.1 - Linear Forward\nNow that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:\n\nLINEAR\nLINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n\nThe linear forward module (vectorized over all the examples) computes the following equations:\n$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\nwhere $A^{[0]} = X$. \nExercise: Build the linear part of forward propagation.\nReminder:\nThe mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. 
If your dimensions don't match, printing W.shape may help.", "# GRADED FUNCTION: linear_forward\n\ndef linear_forward(A, W, b):\n \"\"\"\n Implement the linear part of a layer's forward propagation.\n\n Arguments:\n A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n\n Returns:\n Z -- the input of the activation function, also called pre-activation parameter \n cache -- a python dictionary containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n \"\"\"\n \n ### START CODE HERE ### (โ‰ˆ 1 line of code)\n Z = np.dot(W,A)+b\n ### END CODE HERE ###\n \n assert(Z.shape == (W.shape[0], A.shape[1]))\n cache = (A, W, b)\n \n return Z, cache\n\nA, W, b = linear_forward_test_case()\n\nZ, linear_cache = linear_forward(A, W, b)\nprint(\"Z = \" + str(Z))", "Expected output:\n<table style=\"width:35%\">\n\n <tr>\n <td> **Z** </td>\n <td> [[ 3.26295337 -1.23429987]] </td> \n </tr>\n\n</table>\n\n4.2 - Linear-Activation Forward\nIn this notebook, you will use two activation functions:\n\n\nSigmoid: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value \"a\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \npython\nA, activation_cache = sigmoid(Z)\n\n\nReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value \"A\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). To use it you could just call:\npython\nA, activation_cache = relu(Z)\n\n\nFor more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.\nExercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.", "# GRADED FUNCTION: linear_activation_forward\n\ndef linear_activation_forward(A_prev, W, b, activation):\n \"\"\"\n Implement the forward propagation for the LINEAR->ACTIVATION layer\n\n Arguments:\n A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n\n Returns:\n A -- the output of the activation function, also called the post-activation value \n cache -- a python dictionary containing \"linear_cache\" and \"activation_cache\";\n stored for computing the backward pass efficiently\n \"\"\"\n \n if activation == \"sigmoid\":\n # Inputs: \"A_prev, W, b\". 
Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n Z, linear_cache = linear_forward(A_prev, W, b)\n A, activation_cache = sigmoid(Z)\n ### END CODE HERE ###\n \n elif activation == \"relu\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n Z, linear_cache = linear_forward(A_prev, W, b)\n A, activation_cache = relu(Z)\n ### END CODE HERE ###\n \n assert (A.shape == (W.shape[0], A_prev.shape[1]))\n cache = (linear_cache, activation_cache)\n\n return A, cache\n\nA_prev, W, b = linear_activation_forward_test_case()\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"sigmoid\")\nprint(\"With sigmoid: A = \" + str(A))\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"relu\")\nprint(\"With ReLU: A = \" + str(A))", "Expected output:\n<table style=\"width:35%\">\n <tr>\n <td> **With sigmoid: A ** </td>\n <td > [[ 0.96890023 0.11013289]]</td> \n </tr>\n <tr>\n <td> **With ReLU: A ** </td>\n <td > [[ 3.43896131 0. ]]</td> \n </tr>\n</table>\n\nNote: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. \nd) L-Layer Model\nFor even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.\n<img src=\"images/model_architecture_kiank.png\" style=\"width:600px;height:300px;\">\n<caption><center> Figure 2 : [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>\nExercise: Implement the forward propagation of the above model.\nInstruction: In the code below, the variable AL will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\\hat{Y}$.) \nTips:\n- Use the functions you had previously written \n- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n- Don't forget to keep track of the caches in the \"caches\" list. To add a new value c to a list, you can use list.append(c).", "# GRADED FUNCTION: L_model_forward\n\ndef L_model_forward(X, parameters):\n \"\"\"\n Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n \n Arguments:\n X -- data, numpy array of shape (input size, number of examples)\n parameters -- output of initialize_parameters_deep()\n \n Returns:\n AL -- last post-activation value\n caches -- list of caches containing:\n every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)\n the cache of linear_sigmoid_forward() (there is one, indexed L-1)\n \"\"\"\n\n caches = []\n A = X\n L = len(parameters) // 2 # number of layers in the neural network\n \n # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n for l in range(1, L):\n A_prev = A \n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], activation='relu')\n caches.append(cache)\n ### END CODE HERE ###\n \n # Implement LINEAR -> SIGMOID. 
Add \"cache\" to the \"caches\" list.\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], activation='sigmoid')\n caches.append(cache)\n ### END CODE HERE ###\n \n assert(AL.shape == (1,X.shape[1]))\n \n return AL, caches\n\nX, parameters = L_model_forward_test_case_2hidden()\nAL, caches = L_model_forward(X, parameters)\nprint(\"AL = \" + str(AL))\nprint(\"Length of caches list = \" + str(len(caches)))", "<table style=\"width:50%\">\n <tr>\n <td> **AL** </td>\n <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> \n </tr>\n <tr>\n <td> **Length of caches list ** </td>\n <td > 3 </td> \n </tr>\n</table>\n\nGreat! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". Using $A^{[L]}$, you can compute the cost of your predictions.\n5 - Cost function\nNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.\nExercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} \left(y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)\right) \tag{7}$$", "# GRADED FUNCTION: compute_cost\n\ndef compute_cost(AL, Y):\n \"\"\"\n Implement the cost function defined by equation (7).\n\n Arguments:\n AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n\n Returns:\n cost -- cross-entropy cost\n \"\"\"\n \n m = Y.shape[1]\n\n # Compute loss from aL and y.\n ### START CODE HERE ### (โ‰ˆ 1 lines of code)\n cost = (-1/m)*(np.dot(np.log(AL), Y.T)+np.dot(np.log(1-AL), (1-Y).T))\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n assert(cost.shape == ())\n \n return cost\n\nY, AL = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(AL, Y)))", "Expected Output:\n<table>\n\n <tr>\n <td>**cost** </td>\n <td> 0.41493159961539694</td> \n </tr>\n</table>\n\n6 - Backward propagation module\nJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. \nReminder: \n<img src=\"images/backprop_kiank.png\" style=\"width:650px;height:250px;\">\n<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>\n<!-- \nFor those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:\n\n$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$\n\nIn order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. 
During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.\n\nEquivalently, in order to calculate the gradient $db^{[1]} = \\frac{\\partial L}{\\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial b^{[1]}}$.\n\nThis is why we talk about **backpropagation**.\n!-->\n\nNow, similar to forward propagation, you are going to build the backward propagation in three steps:\n- LINEAR backward\n- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)\n6.1 - Linear backward\nFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\nSuppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.\n<img src=\"images/linearback_kiank.png\" style=\"width:250px;height:300px;\">\n<caption><center> Figure 4 </center></caption>\nThe three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:\n$$ dW^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n$$ db^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{l}\\tag{9}$$\n$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\nExercise: Use the 3 formulas above to implement linear_backward().", "# GRADED FUNCTION: linear_backward\n\ndef linear_backward(dZ, cache):\n \"\"\"\n Implement the linear portion of backward propagation for a single layer (layer l)\n\n Arguments:\n dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n\n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n A_prev, W, b = cache\n m = A_prev.shape[1]\n\n ### START CODE HERE ### (โ‰ˆ 3 lines of code)\n dW = np.dot(dZ, cache[0].T)/m\n db = ((np.sum(dZ, axis=1, keepdims=True))/m)\n dA_prev = np.dot(cache[1].T, dZ)\n ### END CODE HERE ###\n \n assert (dA_prev.shape == A_prev.shape)\n assert (dW.shape == W.shape)\n assert (db.shape == b.shape)\n # print (b.shape, db.shape)\n return dA_prev, dW, db\n\n# Set up some test inputs\ndZ, linear_cache = linear_backward_test_case()\n\ndA_prev, dW, db = linear_backward(dZ, linear_cache)\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "Expected Output: \n<table style=\"width:90%\">\n <tr>\n <td> **dA_prev** </td>\n <td > [[ 0.51822968 -0.19517421]\n [-0.40506361 0.15255393]\n [ 2.37496825 -0.89445391]] </td> \n </tr> \n\n <tr>\n <td> **dW** </td>\n <td > [[-0.10076895 1.40685096 1.64992505]] </td> \n </tr> \n\n <tr>\n <td> **db** </td>\n <td> [[ 0.50629448]] </td> \n </tr> \n\n</table>\n\n6.2 - Linear-Activation backward\nNext, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward. 
\nTo help you implement linear_activation_backward, we provided two backward functions:\n- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:\npython\ndZ = sigmoid_backward(dA, activation_cache)\n\nrelu_backward: Implements the backward propagation for RELU unit. You can call it as follows:\n\npython\ndZ = relu_backward(dA, activation_cache)\nIf $g(.)$ is the activation function, \nsigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \\tag{11}$$. \nExercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.", "# GRADED FUNCTION: linear_activation_backward\n\ndef linear_activation_backward(dA, cache, activation):\n \"\"\"\n Implement the backward propagation for the LINEAR->ACTIVATION layer.\n \n Arguments:\n dA -- post-activation gradient for current layer l \n cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n \n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n linear_cache, activation_cache = cache\n \n if activation == \"relu\":\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n dZ = relu_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n elif activation == \"sigmoid\":\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n dZ = sigmoid_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n return dA_prev, dW, db\n\n\nAL, linear_activation_cache = linear_activation_backward_test_case()\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"sigmoid\")\nprint (\"sigmoid:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db) + \"\\n\")\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"relu\")\nprint (\"relu:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "Expected output with sigmoid:\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td >[[ 0.11017994 0.01105339]\n [ 0.09466817 0.00949723]\n [-0.05743092 -0.00576154]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.05729622]] </td> \n </tr> \n</table>\n\nExpected output with relu:\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td > [[ 0.44090989 0. ]\n [ 0.37883606 0. ]\n [-0.2298228 0. ]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.20837892]] </td> \n </tr> \n</table>\n\n6.3 - L-Model Backward\nNow you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. 
On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n<img src=\"images/mn_backward.png\" style=\"width:450px;height:300px;\">\n<caption><center> Figure 5 : Backward pass </center></caption>\n Initializing backpropagation:\nTo backpropagate through this network, we know that the output is, \n$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\nTo do so, use this formula (derived using calculus which you don't need in-depth knowledge of):\npython\ndAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\nYou can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : \n$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\nFor example, for $l=3$ this would store $dW^{[l]}$ in grads[\"dW3\"].\nExercise: Implement backpropagation for the [LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model.", "# GRADED FUNCTION: L_model_backward\n\ndef L_model_backward(AL, Y, caches):\n \"\"\"\n Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n \n Arguments:\n AL -- probability vector, output of the forward propagation (L_model_forward())\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n caches -- list of caches containing:\n every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n \n Returns:\n grads -- A dictionary with the gradients\n grads[\"dA\" + str(l)] = ... \n grads[\"dW\" + str(l)] = ...\n grads[\"db\" + str(l)] = ... \n \"\"\"\n grads = {}\n L = len(caches) # the number of layers\n m = AL.shape[1]\n Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n \n # Initializing the backpropagation\n ### START CODE HERE ### (1 line of code)\n dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))\n ### END CODE HERE ###\n \n # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"AL, Y, caches\". Outputs: \"grads[\"dAL\"], grads[\"dWL\"], grads[\"dbL\"]\n ### START CODE HERE ### (approx. 2 lines)\n current_cache = caches[-1]\n grads[\"dA\" + str(L)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = linear_activation_backward(dAL, current_cache, activation='sigmoid')\n ### END CODE HERE ###\n \n for l in reversed(range(L-1)):\n # lth layer: (RELU -> LINEAR) gradients.\n # Inputs: \"grads[\"dA\" + str(l + 2)], caches\". Outputs: \"grads[\"dA\" + str(l + 1)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n ### START CODE HERE ### (approx. 
5 lines)\n current_cache = caches[l]\n dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads[\"dA\"+str(l+2)], current_cache, activation='relu')\n grads[\"dA\" + str(l + 1)] = dA_prev_temp\n grads[\"dW\" + str(l + 1)] = dW_temp\n grads[\"db\" + str(l + 1)] = db_temp\n ### END CODE HERE ###\n\n return grads\n\nAL, Y_assess, caches = L_model_backward_test_case()\ngrads = L_model_backward(AL, Y_assess, caches)\nprint_grads(grads)", "Expected Output\n<table style=\"width:60%\">\n\n <tr>\n <td > dW1 </td> \n <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> \n </tr> \n\n <tr>\n <td > db1 </td> \n <td > [[-0.22007063]\n [ 0. ]\n [-0.02835349]] </td> \n </tr> \n\n <tr>\n <td > dA1 </td> \n <td > [[ 0.12913162 -0.44014127]\n [-0.14175655 0.48317296]\n [ 0.01663708 -0.05670698]] </td> \n\n </tr> \n</table>\n\n6.4 - Update Parameters\nIn this section you will update the parameters of the model, using gradient descent: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\nwhere $\\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. \nExercise: Implement update_parameters() to update your parameters using gradient descent.\nInstructions:\nUpdate parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.", "# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients, output of L_model_backward\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n parameters[\"W\" + str(l)] = ... \n parameters[\"b\" + str(l)] = ...\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural network\n\n # Update rule for each parameter. Use a for loop.\n ### START CODE HERE ### (โ‰ˆ 3 lines of code)\n for l in range(L):\n parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * grads[\"dW\" + str(l + 1)]\n parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * grads[\"db\" + str(l + 1)]\n ### END CODE HERE ###\n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads, 0.1)\n\nprint (\"W1 = \"+ str(parameters[\"W1\"]))\nprint (\"b1 = \"+ str(parameters[\"b1\"]))\nprint (\"W2 = \"+ str(parameters[\"W2\"]))\nprint (\"b2 = \"+ str(parameters[\"b2\"]))", "Expected Output:\n<table style=\"width:100%\"> \n <tr>\n <td > W1 </td> \n <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> \n </tr> \n\n <tr>\n <td > b1 </td> \n <td > [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]] </td> \n </tr> \n <tr>\n <td > W2 </td> \n <td > [[-0.55569196 0.0354055 1.32964895]]</td> \n </tr> \n\n <tr>\n <td > b2 </td> \n <td > [[-0.84610769]] </td> \n </tr> \n</table>\n\n7 - Conclusion\nCongrats on implementing all the functions required for building a deep neural network! \nWe know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. 
\nIn the next assignment you will put all these together to build two models:\n- A two-layer neural network\n- An L-layer neural network\nYou will in fact use these models to classify cat vs non-cat images!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cliburn/sta-663-2017
homework/05_Making_Python_Faster_Solutions.ipynb
mit
[ "%matplotlib inline\n%load_ext Cython\n\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.datasets import make_blobs \nfrom numba import jit, vectorize, float64, int64\n\nsns.set_context('notebook', font_scale=1.5)\n\n! conda update --yes numba", "Making Python faster\nThis homework provides practice in making Python code faster. Note that we start with functions that already use idiomatic numpy (which are about two orders of magnitude faster than the pure Python versions).\nFunctions to optimize", "def logistic(x):\n \"\"\"Logistic function.\"\"\"\n return np.exp(x)/(1 + np.exp(x))\n\ndef gd(X, y, beta, alpha, niter):\n \"\"\"Gradient descent algorihtm.\"\"\"\n n, p = X.shape\n Xt = X.T\n for i in range(niter):\n y_pred = logistic(X @ beta)\n epsilon = y - y_pred\n grad = Xt @ epsilon / n\n beta += alpha * grad\n return beta\n\nx = np.linspace(-6, 6, 100)\nplt.plot(x, logistic(x))\npass", "Data set for classification", "n = 10000\np = 2\nX, y = make_blobs(n_samples=n, n_features=p, centers=2, cluster_std=1.05, random_state=23)\nX = np.c_[np.ones(len(X)), X]\ny = y.astype('float')", "Using gradient descent for classification by logistic regression", "# initial parameters\nniter = 1000\nฮฑ = 0.01\nฮฒ = np.zeros(p+1)\n\n# call gradient descent\nฮฒ = gd(X, y, ฮฒ, ฮฑ, niter)\n\n# assign labels to points based on prediction\ny_pred = logistic(X @ ฮฒ)\nlabels = y_pred > 0.5\n\n# calculate separating plane\nsep = (-ฮฒ[0] - ฮฒ[1] * X)/ฮฒ[2]\n\nplt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')\nplt.plot(X, sep, 'r-')\npass", "1. Rewrite the logistic function so it only makes one np.exp call. Compare the time of both versions with the input x given below using the @timeit magic. (10 points)", "np.random.seed(123)\nn = int(1e7)\nx = np.random.normal(0, 1, n)\n\ndef logistic2(x):\n \"\"\"Logistic function.\"\"\"\n return 1/(1 + np.exp(-x))\n\n%timeit logistic(x)\n\n%timeit logistic2(x)", "2. (20 points) Use numba to compile the gradient descent function. \n\nUse the @vectorize decorator to create a ufunc version of the logistic function and call this logistic_numba_cpu with function signatures of float64(float64). Create another function called logistic_numba_parallel by giving an extra argument to the decorator of target=parallel (5 points)\nFor each function, check that the answers are the same as with the original logistic function using np.testing.assert_array_almost_equal. Use %timeit to compare the three logistic functions (5 points)\nNow use @jit to create a JIT_compiled version of the logistic and gd functions, calling them logistic_numba and gd_numba. Provide appropriate function signatures to the decorator in each case. (5 points)\nCompare the two gradient descent functions gd and gd_numba for correctness and performance. 
(5 points)", "@vectorize([float64(float64)], target='cpu')\ndef logistic_numba_cpu(x):\n \"\"\"Logistic function.\"\"\"\n return 1/(1 + math.exp(-x))\n\n@vectorize([float64(float64)], target='parallel')\ndef logistic_numba_parallel(x):\n \"\"\"Logistic function.\"\"\"\n return 1/(1 + math.exp(-x))\n\nnp.testing.assert_array_almost_equal(logistic(x), logistic_numba_cpu(x))\nnp.testing.assert_array_almost_equal(logistic(x), logistic_numba_parallel(x))\n\n%timeit logistic(x)\n\n%timeit logistic_numba_cpu(x)\n\n%timeit logistic_numba_parallel(x)\n\n@jit(float64[:](float64[:]), nopython=True)\ndef logistic_numba(x):\n return 1/(1 + np.exp(-x))\n\n@jit(float64[:](float64[:,:], float64[:], float64[:], float64, int64), nopython=True)\ndef gd_numba(X, y, beta, alpha, niter):\n \"\"\"Gradient descent algorihtm.\"\"\"\n n, p = X.shape\n Xt = X.T\n for i in range(niter):\n y_pred = logistic_numba(X @ beta)\n epsilon = y - y_pred\n grad = Xt @ epsilon / n\n beta += alpha * grad\n return beta\n\nbeta1 = gd(X, y, ฮฒ, ฮฑ, niter)\nbeta2 = gd_numba(X, y, ฮฒ, ฮฑ, niter)\nnp.testing.assert_almost_equal(beta1, beta2)\n\n%timeit gd(X, y, ฮฒ, ฮฑ, niter)\n\n%timeit gd_numba(X, y, ฮฒ, ฮฑ, niter)\n\n# initial parameters\nniter = 1000\nฮฑ = 0.01\nฮฒ = np.zeros(p+1)\n\n# call gradient descent\nฮฒ = gd_numba(X, y, ฮฒ, ฮฑ, niter)\n\n# assign labels to points based on prediction\ny_pred = logistic(X @ ฮฒ)\nlabels = y_pred > 0.5\n\n# calculate separating plane\nsep = (-ฮฒ[0] - ฮฒ[1] * X)/ฮฒ[2]\n\nplt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')\nplt.plot(X, sep, 'r-')\npass", "3. (30 points) Use cython to compile the gradient descent function. \n\nCythonize the logistic function as logistic_cython. Use the --annotate argument to the cython magic function to find slow regions. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (10 points)\nNow cythonize the gd function as gd_cython. This function should use of the cythonized logistic_cython as a C function call. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. 
(20 points)\n\nHints: \n\nGive static types to all variables\nKnow how to use def, cdef and cpdef\nUse Typed MemoryViews\nFind out how to transpose a Typed MemoryView to store the transpose of X\nTyped MemoryVeiws are not numpy arrays - you often have to write explicit loops to operate on them\nUse the cython boundscheck, wraparound, and cdivision operators", "%%cython --annotate\n\nimport cython\nimport numpy as np\ncimport numpy as np\nfrom libc.math cimport exp\n\[email protected](False)\[email protected](False)\[email protected](True)\ndef logistic_cython(double[:] x):\n \"\"\"Logistic function.\"\"\"\n cdef int i\n cdef int n = x.shape[0]\n cdef double [:] s = np.empty(n)\n \n for i in range(n):\n s[i] = 1.0/(1.0 + exp(-x[i]))\n return s\n\nnp.testing.assert_array_almost_equal(logistic(x), logistic_cython(x))\n\n%timeit logistic2(x)\n\n%timeit logistic_cython(x)\n\n%%cython --annotate\n\nimport cython\nimport numpy as np\ncimport numpy as np\nfrom libc.math cimport exp\n\[email protected](False)\[email protected](False)\[email protected](True)\ncdef double[:] logistic_(double[:] x):\n \"\"\"Logistic function.\"\"\"\n cdef int i\n cdef int n = x.shape[0]\n cdef double [:] s = np.empty(n)\n \n for i in range(n):\n s[i] = 1.0/(1.0 + exp(-x[i]))\n return s\n\[email protected](False)\[email protected](False)\[email protected](True)\ndef gd_cython(double[:, ::1] X, double[:] y, double[:] beta, double alpha, int niter):\n \"\"\"Gradient descent algorihtm.\"\"\" \n cdef int n = X.shape[0]\n cdef int p = X.shape[1]\n cdef double[:] eps = np.empty(n)\n cdef double[:] y_pred = np.empty(n)\n cdef double[:] grad = np.empty(p) \n cdef int i, j, k\n cdef double[:, :] Xt = X.T\n \n for i in range(niter):\n y_pred = logistic_(np.dot(X, beta))\n for j in range(n):\n eps[j] = y[j] - y_pred[j]\n grad = np.dot(Xt, eps) / n\n for k in range(p):\n beta[k] += alpha * grad[k]\n return beta\n\nniter = 1000\nalpha = 0.01\nbeta = np.random.random(X.shape[1])\n\nbeta1 = gd(X, y, ฮฒ, ฮฑ, niter)\nbeta2 = gd_cython(X, y, ฮฒ, ฮฑ, niter)\nnp.testing.assert_almost_equal(beta1, beta2)\n\n%timeit gd(X, y, beta, alpha, niter)\n\n%timeit gd_cython(X, y, beta, alpha, niter)", "4. (40 points) Wrapping modules in C++.\nRewrite the logistic and gd functions in C++, using pybind11 to create Python wrappers. Compare accuracy and performance as usual. Replicate the plotted example using the C++ wrapped functions for logistic and gd\n\nWriting a vectorized logistic function callable from both C++ and Python (10 points)\nWriting the gd function callable from Python (25 points)\nChecking accuracy, benchmarking and creating diagnostic plots (5 points)\n\nHints:\n\nUse the C++ Eigen library to do vector and matrix operations\nWhen calling the exponential function, you have to use exp(m.array()) instead of exp(m) if you use an Eigen dynamic template.\nUse cppimport to simplify the wrapping for Python\nSee pybind11 docs\nSee my examples for help", "import os\nif not os.path.exists('./eigen'):\n ! 
git clone https://github.com/RLovelett/eigen.git\n\n%%file wrap.cpp\n<%\ncfg['compiler_args'] = ['-std=c++11']\ncfg['include_dirs'] = ['./eigen']\nsetup_pybind11(cfg)\n%>\n\n#include <pybind11/pybind11.h>\n#include <pybind11/numpy.h>\n#include <pybind11/eigen.h>\n\nnamespace py = pybind11;\n\nEigen::VectorXd logistic(Eigen::VectorXd x) {\n return 1.0/(1.0 + exp((-x).array()));\n}\n\nEigen::VectorXd gd(Eigen::MatrixXd X, Eigen::VectorXd y, Eigen::VectorXd beta, double alpha, int niter) {\n int n = X.rows();\n \n Eigen::VectorXd y_pred;\n Eigen::VectorXd resid;\n Eigen::VectorXd grad;\n Eigen::MatrixXd Xt = X.transpose();\n \n for (int i=0; i<niter; i++) {\n y_pred = logistic(X * beta);\n resid = y - y_pred;\n grad = Xt * resid / n;\n beta = beta + alpha * grad;\n }\n return beta;\n}\n\nPYBIND11_PLUGIN(wrap) {\n py::module m(\"wrap\", \"pybind11 example plugin\");\n m.def(\"gd\", &gd, \"The gradient descent fucntion.\");\n m.def(\"logistic\", &logistic, \"The logistic fucntion.\");\n\n return m.ptr();\n}\n\nimport cppimport\ncppimport.force_rebuild() \nfuncs = cppimport.imp(\"wrap\")\n\nnp.testing.assert_array_almost_equal(logistic(x), funcs.logistic(x))\n\n%timeit logistic(x)\n\n%timeit funcs.logistic(x)\n\nฮฒ = np.array([0.0, 0.0, 0.0])\ngd(X, y, ฮฒ, ฮฑ, niter)\n\nฮฒ = np.array([0.0, 0.0, 0.0])\nfuncs.gd(X, y, ฮฒ, ฮฑ, niter)\n\n%timeit gd(X, y, ฮฒ, ฮฑ, niter)\n\n%timeit funcs.gd(X, y, ฮฒ, ฮฑ, niter)\n\n# initial parameters\nniter = 1000\nฮฑ = 0.01\nฮฒ = np.zeros(p+1)\n\n# call gradient descent\nฮฒ = funcs.gd(X, y, ฮฒ, ฮฑ, niter)\n\n# assign labels to points based on prediction\ny_pred = funcs.logistic(X @ ฮฒ)\nlabels = y_pred > 0.5\n\n# calculate separating plane\nsep = (-ฮฒ[0] - ฮฒ[1] * X)/ฮฒ[2]\n\nplt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')\nplt.plot(X, sep, 'r-')\npass" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Guide to Building End-to-End Reinforcement Learning Application Pipelines using Vertex AI\n<table align=\"left\">\n\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nThis demo showcases the use of TF-Agents, Kubeflow Pipelines (KFP) and Vertex AI, particularly Vertex Pipelines, in building an end-to-end reinforcement learning (RL) pipeline of a movie recommendation system. The demo is intended for developers who want to create RL applications using TensorFlow, TF-Agents and Vertex AI services, and those who want to build end-to-end production pipelines using KFP and Vertex Pipelines. It is recommended for developers to have familiarity with RL and the contextual bandits formulation, and the TF-Agents interface.\nDataset\nThis demo uses the MovieLens 100K dataset to simulate an environment with users and their respective preferences. It is available at gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data.\nObjective\nIn this notebook, you will learn how to build an end-to-end RL pipeline for a TF-Agents (particularly the bandits module) based movie recommendation system, using KFP, Vertex AI and particularly Vertex Pipelines which is fully managed and highly scalable.\nThis Vertex Pipeline includes the following components:\n1. Generator to generate MovieLens simulation data\n2. Ingester to ingest data\n3. Trainer to train the RL policy\n4. 
Deployer to deploy the trained policy to a Vertex AI endpoint\nAfter pipeline construction, you (1) create the Simulator (which utilizes Cloud Functions, Cloud Scheduler and Pub/Sub) to send simulated MovieLens prediction requests, (2) create the Logger to asynchronously log prediction inputs and results (which utilizes Cloud Functions, Pub/Sub and a hook in the prediction code), and (3) create the Trigger to trigger recurrent re-training.\nA more general ML pipeline is demonstrated in MLOps on Vertex AI.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nBigQuery\nCloud Build\nCloud Functions\nCloud Scheduler\nCloud Storage\nPub/Sub\n\nLearn about Vertex AI\npricing, BigQuery pricing, Cloud Build, Cloud Functions, Cloud Scheduler, Cloud Storage\npricing, and Pub/Sub pricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets\nall the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the\ncommand-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstall additional packages\nInstall additional package dependencies not installed in your notebook environment, such as the Kubeflow Pipelines (KFP) SDK.", "import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n! pip3 install {USER_FLAG} google-cloud-aiplatform\n! pip3 install {USER_FLAG} google-cloud-pipeline-components\n! pip3 install {USER_FLAG} --upgrade kfp\n! pip3 install {USER_FLAG} numpy\n! pip3 install {USER_FLAG} --upgrade tensorflow\n! pip3 install {USER_FLAG} --upgrade pillow\n! pip3 install {USER_FLAG} --upgrade tf-agents\n! pip3 install {USER_FLAG} --upgrade fastapi", "Restart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.", "# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nSelect a GPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime --> Change runtime type > GPU\"\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. 
When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI API, BigQuery API, Cloud Build, Cloud Functions, Cloud Scheduler, Cloud Storage, and Pub/Sub API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not IS_GOOGLE_CLOUD_NOTEBOOK:\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nIn this tutorial, a Cloud Storage bucket holds the MovieLens dataset files to be used for model training. Vertex AI also saves the trained model that results from your training job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI. Also note that Vertex\nPipelines is currently only supported in select regions such as \"us-central1\" (reference).", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Import libraries and define constants", "import os\nimport sys\n\nfrom google.cloud import aiplatform\nfrom google_cloud_pipeline_components import aiplatform as gcc_aip\nfrom kfp.v2 import compiler, dsl\nfrom kfp.v2.google.client import AIPlatformClient", "Fill out the following configurations", "# BigQuery parameters (used for the Generator, Ingester, Logger)\nBIGQUERY_DATASET_ID = f\"{PROJECT_ID}.movielens_dataset\" # @param {type:\"string\"} BigQuery dataset ID as `project_id.dataset_id`.\nBIGQUERY_LOCATION = \"us\" # @param {type:\"string\"} BigQuery dataset region.\nBIGQUERY_TABLE_ID = f\"{BIGQUERY_DATASET_ID}.training_dataset\" # @param {type:\"string\"} BigQuery table ID as `project_id.dataset_id.table_id`.", "Set additional configurations\nYou may use the default values below as is.", "# Dataset parameters\nRAW_DATA_PATH = \"gs://[your-bucket-name]/raw_data/u.data\" # @param {type:\"string\"}\n\n# Download the sample data into your RAW_DATA_PATH\n! 
gsutil cp \"gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data\" $RAW_DATA_PATH\n\n# Pipeline parameters\nPIPELINE_NAME = \"movielens-pipeline\" # Pipeline display name.\nENABLE_CACHING = False # Whether to enable execution caching for the pipeline.\nPIPELINE_ROOT = f\"{BUCKET_NAME}/pipeline\" # Root directory for pipeline artifacts.\nPIPELINE_SPEC_PATH = \"metadata_pipeline.json\" # Path to pipeline specification file.\nOUTPUT_COMPONENT_SPEC = \"output-component.yaml\" # Output component specification file.\n\n# BigQuery parameters (used for the Generator, Ingester, Logger)\nBIGQUERY_TMP_FILE = (\n \"tmp.json\" # Temporary file for storing data to be loaded into BigQuery.\n)\nBIGQUERY_MAX_ROWS = 5 # Maximum number of rows of data in BigQuery to ingest.\n\n# Dataset parameters\nTFRECORD_FILE = (\n f\"{BUCKET_NAME}/trainer_input_path/*\" # TFRecord file to be used for training.\n)\n\n# Logger parameters (also used for the Logger hook in the prediction container)\nLOGGER_PUBSUB_TOPIC = \"logger-pubsub-topic\" # Pub/Sub topic name for the Logger.\nLOGGER_CLOUD_FUNCTION = \"logger-cloud-function\" # Cloud Functions name for the Logger.", "Create the RL pipeline components\nThis section consists of the following steps:\n1. Create the Generator to generate MovieLens simulation data\n2. Create the Ingester to ingest data\n3. Create the Trainer to train the RL policy\n4. Create the Deployer to deploy the trained policy to a Vertex AI endpoint\nAfter pipeline construction, create the Simulator to send simulated MovieLens prediction requests, create the Logger to asynchronously log prediction inputs and results, and create the Trigger to trigger re-training.\nHere's the entire workflow:\n1. The startup pipeline has the following components: Generator --> Ingester --> Trainer --> Deployer. This pipeline only runs once.\n2. Then, the Simulator generates prediction requests (e.g. every 5 mins), and the Logger gets invoked immediately at each prediction request and logs each prediction request asynchronously into BigQuery. The Trigger runs the re-training pipeline (e.g. every 30 mins) with the following components: Ingester --> Trainer --> Deploy.\nYou can find the KFP SDK documentation here.\nCreate the Generator to generate MovieLens simulation data\nCreate the Generator component to generate the initial set of training data using a MovieLens simulation environment and a random data-collecting policy. Store the generated data in BigQuery.\nThe Generator source code is src/generator/generator_component.py.\nRun unit tests on the Generator component\nBefore running the command, you should update the RAW_DATA_PATH in src/generator/test_generator_component.py.", "! python3 -m unittest src.generator.test_generator_component", "Create the Ingester to ingest data\nCreate the Ingester component to ingest data from BigQuery, package them as tf.train.Example objects, and output TFRecord files.\nRead more about tf.train.Example and TFRecord here.\nThe Ingester component source code is in src/ingester/ingester_component.py.\nRun unit tests on the Ingester component", "! python3 -m unittest src.ingester.test_ingester_component", "Create the Trainer to train the RL policy\nCreate the Trainer component to train a RL policy on the training dataset, and then submit a remote custom training job to Vertex AI. 
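To give a feel for what that training code does -- the actual source lives in src/trainer/trainer_component.py and is not reproduced in this notebook -- here is a simplified, assumed sketch of a typical TF-Agents LinUCB training loop (names such as environment and experience are placeholders, not the component's real variables):\npython\n# Assumed sketch only; simplified from a typical TF-Agents bandits setup.\nfrom tf_agents.bandits.agents import lin_ucb_agent\nfrom tf_agents.policies import policy_saver\n\nagent = lin_ucb_agent.LinearUCBAgent(\n time_step_spec=environment.time_step_spec(), # specs come from the MovieLens simulation environment\n action_spec=environment.action_spec(),\n alpha=agent_alpha, # exploration strength: multiplies the confidence intervals\n tikhonov_weight=tikhonov_weight) # ridge-style regularization\nagent.initialize()\n\nfor _ in range(num_epochs):\n agent.train(experience) # experience = trajectories read from the ingested TFRecord files\n\npolicy_saver.PolicySaver(agent.policy).save(training_artifacts_dir) # export the trained policy as a SavedModel\n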
This component trains a policy using the TF-Agents LinUCB agent on the MovieLens simulation dataset, and saves the trained policy as a SavedModel.\nThe Trainer component source code is in src/trainer/trainer_component.py. You use additional Vertex AI platform code in pipeline construction to submit the training code defined in Trainer as a custom training job to Vertex AI. (The additional code is similar to what kfp.v2.google.experimental.run_as_aiplatform_custom_job does. You can find an example notebook here for how to use that first-party Trainer component.)\nThe Trainer performs off-policy training, where you train a policy on a static set of pre-collected data records containing information including observation, action and reward. For a data record, the policy in training might not output the same action given the observation in that data record.\nIf you're interested in pipeline metrics, read about KFP Pipeline Metrics here.", "# Trainer parameters\nTRAINING_ARTIFACTS_DIR = (\n f\"{BUCKET_NAME}/artifacts\" # Root directory for training artifacts.\n)\nTRAINING_REPLICA_COUNT = 1 # Number of replica to run the custom training job.\nTRAINING_MACHINE_TYPE = (\n \"n1-standard-4\" # Type of machine to run the custom training job.\n)\nTRAINING_ACCELERATOR_TYPE = \"ACCELERATOR_TYPE_UNSPECIFIED\" # Type of accelerators to run the custom training job.\nTRAINING_ACCELERATOR_COUNT = 0 # Number of accelerators for the custom training job.", "Run unit tests on the Trainer component", "! python3 -m unittest src.trainer.test_trainer_component", "Create the Deployer to deploy the trained policy to a Vertex AI endpoint\nUse google_cloud_pipeline_components.aiplatform components during pipeline construction to:\n1. Upload the trained policy\n2. Create a Vertex AI endpoint\n3. Deploy the uploaded trained policy to the endpoint\nThese 3 components formulate the Deployer. They support flexible configurations; for instance, if you want to set up traffic splitting for the endpoint to run A/B testing, you may pass in your configurations to google_cloud_pipeline_components.aiplatform.ModelDeployOp.", "# Deployer parameters\nTRAINED_POLICY_DISPLAY_NAME = (\n \"movielens-trained-policy\" # Display name of the uploaded and deployed policy.\n)\nTRAFFIC_SPLIT = {\"0\": 100}\nENDPOINT_DISPLAY_NAME = \"movielens-endpoint\" # Display name of the prediction endpoint.\nENDPOINT_MACHINE_TYPE = \"n1-standard-4\" # Type of machine of the prediction endpoint.\nENDPOINT_REPLICA_COUNT = 1 # Number of replicas of the prediction endpoint.\nENDPOINT_ACCELERATOR_TYPE = \"ACCELERATOR_TYPE_UNSPECIFIED\" # Type of accelerators to run the custom training job.\nENDPOINT_ACCELERATOR_COUNT = 0 # Number of accelerators for the custom training job.", "Create a custom prediction container using Cloud Build\nBefore setting up the Deployer, define and build a custom prediction container that serves predictions using the trained policy. The source code, Cloud Build YAML configuration file and Dockerfile are in src/prediction_container.\nThis prediction container is the serving container for the deployed, trained policy. 
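Beyond serving predictions, the code in this container also carries the Logger hook described later: at each prediction request it publishes the prediction inputs and outputs to the Logger's Pub/Sub topic, which is why LOGGER_PUBSUB_TOPIC is passed into the image build below. A minimal, assumed sketch of such a hook -- not the actual src/prediction_container code -- could look like:\npython\n# Assumed sketch of an asynchronous Pub/Sub logging hook inside the prediction server.\nimport json\nimport os\n\nfrom google.cloud import pubsub_v1\n\npublisher = pubsub_v1.PublisherClient()\ntopic_path = publisher.topic_path(os.environ[\"PROJECT_ID\"], os.environ[\"LOGGER_PUBSUB_TOPIC\"])\n\ndef log_prediction(observation, predicted_action):\n payload = json.dumps({\"observation\": observation, \"predicted_action\": predicted_action})\n publisher.publish(topic_path, payload.encode(\"utf-8\")) # publish() is asynchronous, so serving latency stays low\n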
See a more detailed guide on building prediction custom containers here.", "# Prediction container parameters\nPREDICTION_CONTAINER = \"prediction-container\" # Name of the container image.\nPREDICTION_CONTAINER_DIR = \"src/prediction_container\"", "Create a Cloud Build YAML file using Kaniko build\nNote: For this application, you are recommended to use E2_HIGHCPU_8 or other high resouce machine configurations instead of the standard machine type listed here to prevent out-of-memory errors.", "cloudbuild_yaml = \"\"\"steps:\n- name: \"gcr.io/kaniko-project/executor:latest\"\n args: [\"--destination=gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest\",\n \"--cache=true\",\n \"--cache-ttl=99h\"]\n env: [\"AIP_STORAGE_URI={ARTIFACTS_DIR}\",\n \"PROJECT_ID={PROJECT_ID}\",\n \"LOGGER_PUBSUB_TOPIC={LOGGER_PUBSUB_TOPIC}\"]\noptions:\n machineType: \"E2_HIGHCPU_8\"\n\"\"\".format(\n PROJECT_ID=PROJECT_ID,\n PREDICTION_CONTAINER=PREDICTION_CONTAINER,\n ARTIFACTS_DIR=TRAINING_ARTIFACTS_DIR,\n LOGGER_PUBSUB_TOPIC=LOGGER_PUBSUB_TOPIC,\n)\n\nwith open(f\"{PREDICTION_CONTAINER_DIR}/cloudbuild.yaml\", \"w\") as fp:\n fp.write(cloudbuild_yaml)", "Run unit tests on the prediction code", "! python3 -m unittest src.prediction_container.test_main", "Build custom prediction container", "! gcloud builds submit --config $PREDICTION_CONTAINER_DIR/cloudbuild.yaml $PREDICTION_CONTAINER_DIR", "Author and run the RL pipeline\nYou author the pipeline using custom KFP components built from the previous section, and create a pipeline run using Vertex Pipelines. You can read more about whether to enable execution caching here. You can also specifically configure the worker pool spec for training if for instance you want to train at scale and/or at a higher speed; you can adjust the replica count, machine type, accelerator type and count, and many other specifications.\nHere, you build a \"startup\" pipeline that generates randomly sampled training data (with the Generator) as the first step. 
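As a purely illustrative example of those worker pool knobs mentioned above (the values here are assumptions, not recommendations for this workload), scaling up the Trainer's custom job could look like:\npython\n# Illustrative only: a larger worker pool configuration for the custom training job.\nTRAINING_REPLICA_COUNT = 1\nTRAINING_MACHINE_TYPE = \"n1-standard-8\"\nTRAINING_ACCELERATOR_TYPE = \"NVIDIA_TESLA_T4\" # must be an accelerator type available in your region\nTRAINING_ACCELERATOR_COUNT = 1\nThe startup pipeline itself is defined and submitted below.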
This pipeline runs only once.", "from google_cloud_pipeline_components.experimental.custom_job import utils\nfrom kfp.components import load_component_from_url\n\ngenerate_op = load_component_from_url(\n \"https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/generator/component.yaml\"\n)\ningest_op = load_component_from_url(\n \"https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/ingester/component.yaml\"\n)\ntrain_op = load_component_from_url(\n \"https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/trainer/component.yaml\"\n)\n\n\[email protected](pipeline_root=PIPELINE_ROOT, name=f\"{PIPELINE_NAME}-startup\")\ndef pipeline(\n # Pipeline configs\n project_id: str,\n raw_data_path: str,\n training_artifacts_dir: str,\n # BigQuery configs\n bigquery_dataset_id: str,\n bigquery_location: str,\n bigquery_table_id: str,\n bigquery_max_rows: int = 10000,\n # TF-Agents RL configs\n batch_size: int = 8,\n rank_k: int = 20,\n num_actions: int = 20,\n driver_steps: int = 3,\n num_epochs: int = 5,\n tikhonov_weight: float = 0.01,\n agent_alpha: float = 10,\n) -> None:\n \"\"\"Authors a RL pipeline for MovieLens movie recommendation system.\n\n Integrates the Generator, Ingester, Trainer and Deployer components. This\n pipeline generates initial training data with a random policy and runs once\n as the initiation of the system.\n\n Args:\n project_id: GCP project ID. 
This is required because otherwise the BigQuery\n client will use the ID of the tenant GCP project created as a result of\n KFP, which doesn't have proper access to BigQuery.\n raw_data_path: Path to MovieLens 100K's \"u.data\" file.\n training_artifacts_dir: Path to store the Trainer artifacts (trained policy).\n\n bigquery_dataset: A string of the BigQuery dataset ID in the format of\n \"project.dataset\".\n bigquery_location: A string of the BigQuery dataset location.\n bigquery_table_id: A string of the BigQuery table ID in the format of\n \"project.dataset.table\".\n bigquery_max_rows: Optional; maximum number of rows to ingest.\n\n batch_size: Optional; batch size of environment generated quantities eg.\n rewards.\n rank_k: Optional; rank for matrix factorization in the MovieLens environment;\n also the observation dimension.\n num_actions: Optional; number of actions (movie items) to choose from.\n driver_steps: Optional; number of steps to run per batch.\n num_epochs: Optional; number of training epochs.\n tikhonov_weight: Optional; LinUCB Tikhonov regularization weight of the\n Trainer.\n agent_alpha: Optional; LinUCB exploration parameter that multiplies the\n confidence intervals of the Trainer.\n \"\"\"\n # Run the Generator component.\n generate_task = generate_op(\n project_id=project_id,\n raw_data_path=raw_data_path,\n batch_size=batch_size,\n rank_k=rank_k,\n num_actions=num_actions,\n driver_steps=driver_steps,\n bigquery_tmp_file=BIGQUERY_TMP_FILE,\n bigquery_dataset_id=bigquery_dataset_id,\n bigquery_location=bigquery_location,\n bigquery_table_id=bigquery_table_id,\n )\n \n # Run the Ingester component.\n ingest_task = ingest_op(\n project_id=project_id,\n bigquery_table_id=generate_task.outputs[\"bigquery_table_id\"],\n bigquery_max_rows=bigquery_max_rows,\n tfrecord_file=TFRECORD_FILE,\n )\n\n # Run the Trainer component and submit custom job to Vertex AI.\n # Convert the train_op component into a Vertex AI Custom Job pre-built component\n custom_job_training_op = utils.create_custom_training_job_op_from_component(\n component_spec=train_op,\n replica_count=TRAINING_REPLICA_COUNT,\n machine_type=TRAINING_MACHINE_TYPE,\n accelerator_type=TRAINING_ACCELERATOR_TYPE,\n accelerator_count=TRAINING_ACCELERATOR_COUNT,\n )\n\n train_task = custom_job_training_op(\n training_artifacts_dir=training_artifacts_dir,\n tfrecord_file=ingest_task.outputs[\"tfrecord_file\"],\n num_epochs=num_epochs,\n rank_k=rank_k,\n num_actions=num_actions,\n tikhonov_weight=tikhonov_weight,\n agent_alpha=agent_alpha,\n project=PROJECT_ID,\n location=REGION,\n )\n\n # Run the Deployer components.\n # Upload the trained policy as a model.\n model_upload_op = gcc_aip.ModelUploadOp(\n project=project_id,\n display_name=TRAINED_POLICY_DISPLAY_NAME,\n artifact_uri=train_task.outputs[\"training_artifacts_dir\"],\n serving_container_image_uri=f\"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest\",\n )\n # Create a Vertex AI endpoint. (This operation can occur in parallel with\n # the Generator, Ingester, Trainer components.)\n endpoint_create_op = gcc_aip.EndpointCreateOp(\n project=project_id, display_name=ENDPOINT_DISPLAY_NAME\n )\n # Deploy the uploaded, trained policy to the created endpoint. 
(This operation\n # has to occur after both model uploading and endpoint creation complete.)\n gcc_aip.ModelDeployOp(\n endpoint=endpoint_create_op.outputs[\"endpoint\"],\n model=model_upload_op.outputs[\"model\"],\n deployed_model_display_name=TRAINED_POLICY_DISPLAY_NAME,\n traffic_split=TRAFFIC_SPLIT,\n dedicated_resources_machine_type=ENDPOINT_MACHINE_TYPE,\n dedicated_resources_accelerator_type=ENDPOINT_ACCELERATOR_TYPE,\n dedicated_resources_accelerator_count=ENDPOINT_ACCELERATOR_COUNT,\n dedicated_resources_min_replica_count=ENDPOINT_REPLICA_COUNT,\n )\n\n# Compile the authored pipeline.\ncompiler.Compiler().compile(pipeline_func=pipeline, package_path=PIPELINE_SPEC_PATH)\n\n# Create a pipeline run job.\njob = aiplatform.PipelineJob(\n display_name=f\"{PIPELINE_NAME}-startup\",\n template_path=PIPELINE_SPEC_PATH,\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\n # Pipeline configs\n \"project_id\": PROJECT_ID,\n \"raw_data_path\": RAW_DATA_PATH,\n \"training_artifacts_dir\": TRAINING_ARTIFACTS_DIR,\n # BigQuery configs\n \"bigquery_dataset_id\": BIGQUERY_DATASET_ID,\n \"bigquery_location\": BIGQUERY_LOCATION,\n \"bigquery_table_id\": BIGQUERY_TABLE_ID,\n },\n enable_caching=ENABLE_CACHING,\n)\n\njob.run()", "Create the Simulator to send simulated MovieLens prediction requests\nCreate the Simulator to obtain observations from the MovieLens simulation environment, formats them, and sends prediction requests to the Vertex AI endpoint.\nThe workflow is: Cloud Scheduler --> Pub/Sub --> Cloud Functions --> Endpoint\nIn production, this Simulator logic can be modified to that of gathering real-world input features as observations, getting prediction results from the endpoint and communicating those results to real-world users.\nThe Simulator source code is src/simulator/main.py.", "# Simulator parameters\nSIMULATOR_PUBSUB_TOPIC = (\n \"simulator-pubsub-topic\" # Pub/Sub topic name for the Simulator.\n)\nSIMULATOR_CLOUD_FUNCTION = (\n \"simulator-cloud-function\" # Cloud Functions name for the Simulator.\n)\nSIMULATOR_SCHEDULER_JOB = (\n \"simulator-scheduler-job\" # Cloud Scheduler cron job name for the Simulator.\n)\nSIMULATOR_SCHEDULE = \"*/5 * * * *\" # Cloud Scheduler cron job schedule for the Simulator. Eg. \"*/5 * * * *\" means every 5 mins.\nSIMULATOR_SCHEDULER_MESSAGE = (\n \"simulator-message\" # Cloud Scheduler message for the Simulator.\n)\n# TF-Agents RL configs\nBATCH_SIZE = 8\nRANK_K = 20\nNUM_ACTIONS = 20", "Run unit tests on the Simulator", "! python3 -m unittest src.simulator.test_main", "Create a Pub/Sub topic\n\nRead more about creating Pub/Sub topics here", "! gcloud pubsub topics create $SIMULATOR_PUBSUB_TOPIC", "Set up a recurrent Cloud Scheduler job for the Pub/Sub topic\n\nRead more about possible ways to create cron jobs here.\nRead about the cron job schedule format here.", "scheduler_job_args = \" \".join(\n [\n SIMULATOR_SCHEDULER_JOB,\n f\"--schedule='{SIMULATOR_SCHEDULE}'\",\n f\"--topic={SIMULATOR_PUBSUB_TOPIC}\",\n f\"--message-body={SIMULATOR_SCHEDULER_MESSAGE}\",\n ]\n)\n\n! echo $scheduler_job_args\n\n! gcloud scheduler jobs create pubsub $scheduler_job_args", "Define the Simulator logic in a Cloud Function to be triggered periodically, and deploy this Function\n\nSpecify dependencies of the Function in src/simulator/requirements.txt.\nRead more about the available configurable arguments for deploying a Function here. 
For instance, based on the complexity of your Function, you may want to adjust its memory and timeout.\nNote that the environment variables in ENV_VARS should be comma-separated; there should not be additional spaces, or other characters in between. Read more about setting/updating/deleting environment variables here.\nRead more about sending predictions to Vertex endpoints here.", "endpoints = ! gcloud ai endpoints list \\\n --region=$REGION \\\n --filter=display_name=$ENDPOINT_DISPLAY_NAME\nprint(\"\\n\".join(endpoints), \"\\n\")\n\nENDPOINT_ID = endpoints[2].split(\" \")[0]\nprint(f\"ENDPOINT_ID={ENDPOINT_ID}\")\n\nENV_VARS = \",\".join(\n [\n f\"PROJECT_ID={PROJECT_ID}\",\n f\"REGION={REGION}\",\n f\"ENDPOINT_ID={ENDPOINT_ID}\",\n f\"RAW_DATA_PATH={RAW_DATA_PATH}\",\n f\"BATCH_SIZE={BATCH_SIZE}\",\n f\"RANK_K={RANK_K}\",\n f\"NUM_ACTIONS={NUM_ACTIONS}\",\n ]\n)\n\n! echo $ENV_VARS\n\n! gcloud functions deploy $SIMULATOR_CLOUD_FUNCTION \\\n --region=$REGION \\\n --trigger-topic=$SIMULATOR_PUBSUB_TOPIC \\\n --runtime=python37 \\\n --memory=512MB \\\n --timeout=200s \\\n --source=src/simulator \\\n --entry-point=simulate \\\n --stage-bucket=$BUCKET_NAME \\\n --update-env-vars=$ENV_VARS", "Create the Logger to asynchronously log prediction inputs and results\nCreate the Logger to get environment feedback as rewards from the MovieLens simulation environment based on prediction observations and predicted actions, formulate trajectory data, and store said data back to BigQuery. The Logger closes the RL feedback loop from prediction to training data, and allows re-training of the policy on new training data.\nThe Logger is triggered by a hook in the prediction code. At each prediction request, the prediction code messages a Pub/Sub topic, which triggers the Logger code.\nThe workflow is: prediction container code (at prediction request) --> Pub/Sub --> Cloud Functions (logging predictions back to BigQuery)\nIn production, this Logger logic can be modified to that of gathering real-world feedback (rewards) based on observations and predicted actions.\nThe Logger source code is src/logger/main.py.\nRun unit tests on the Logger", "! python3 -m unittest src.logger.test_main", "Create a Pub/Sub topic\n\nRead more about creating Pub/Sub topics here", "! gcloud pubsub topics create $LOGGER_PUBSUB_TOPIC", "Define the Logger logic in a Cloud Function to be triggered by a Pub/Sub topic, which is triggered by the prediction code at each prediction request.\n\nSpecify dependencies of the Function in src/logger/requirements.txt.\nRead more about the available configurable arguments for deploying a Function here. For instance, based on the complexity of your Function, you may want to adjust its memory and timeout.\nNote that the environment variables in ENV_VARS should be comma-separated; there should not be additional spaces, or other characters in between. Read more about setting/updating/deleting environment variables here.", "ENV_VARS = \",\".join(\n [\n f\"PROJECT_ID={PROJECT_ID}\",\n f\"RAW_DATA_PATH={RAW_DATA_PATH}\",\n f\"BATCH_SIZE={BATCH_SIZE}\",\n f\"RANK_K={RANK_K}\",\n f\"NUM_ACTIONS={NUM_ACTIONS}\",\n f\"BIGQUERY_TMP_FILE={BIGQUERY_TMP_FILE}\",\n f\"BIGQUERY_DATASET_ID={BIGQUERY_DATASET_ID}\",\n f\"BIGQUERY_LOCATION={BIGQUERY_LOCATION}\",\n f\"BIGQUERY_TABLE_ID={BIGQUERY_TABLE_ID}\",\n ]\n)\n\n! echo $ENV_VARS\n\n! 
gcloud functions deploy $LOGGER_CLOUD_FUNCTION \\\n --region=$REGION \\\n --trigger-topic=$LOGGER_PUBSUB_TOPIC \\\n --runtime=python37 \\\n --memory=512MB \\\n --timeout=200s \\\n --source=src/logger \\\n --entry-point=log \\\n --stage-bucket=$BUCKET_NAME \\\n --update-env-vars=$ENV_VARS", "Create the Trigger to trigger re-training\nCreate the Trigger to recurrently re-run the pipeline to re-train the policy on new training data, using kfp.v2.google.client.AIPlatformClient.create_schedule_from_job_spec. You create a pipeline for orchestration on Vertex Pipelines, and a Cloud Scheduler job that recurrently triggers the pipeline. The method also automatically creates a Cloud Function that acts as an intermediary between the Scheduler and Pipelines. You can find the source code here.\nWhen the Simulator sends prediction requests to the endpoint, the Logger is triggered by the hook in the prediction code to log prediction results to BigQuery, as new training data. As this pipeline has a recurrent schedule, it utlizes the new training data in training a new policy, therefore closing the feedback loop. Theoretically speaking, if you set the pipeline scheduler to be infinitely frequent, then you would be approaching real-time, continuous training.", "TRIGGER_SCHEDULE = \"*/30 * * * *\" # Schedule to trigger the pipeline. Eg. \"*/30 * * * *\" means every 30 mins.\n\ningest_op = load_component_from_url(\n \"https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/ingester/component.yaml\"\n)\ntrain_op = load_component_from_url(\n \"https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/trainer/component.yaml\"\n)\n\n\[email protected](pipeline_root=PIPELINE_ROOT, name=f\"{PIPELINE_NAME}-retraining\")\ndef pipeline(\n # Pipeline configs\n project_id: str,\n training_artifacts_dir: str,\n # BigQuery configs\n bigquery_table_id: str,\n bigquery_max_rows: int = 10000,\n # TF-Agents RL configs\n rank_k: int = 20,\n num_actions: int = 20,\n num_epochs: int = 5,\n tikhonov_weight: float = 0.01,\n agent_alpha: float = 10,\n) -> None:\n \"\"\"Authors a re-training pipeline for MovieLens movie recommendation system.\n\n Integrates the Ingester, Trainer and Deployer components.\n\n Args:\n project_id: GCP project ID. 
This is required because otherwise the BigQuery\n client will use the ID of the tenant GCP project created as a result of\n KFP, which doesn't have proper access to BigQuery.\n training_artifacts_dir: Path to store the Trainer artifacts (trained policy).\n\n bigquery_table_id: A string of the BigQuery table ID in the format of\n \"project.dataset.table\".\n bigquery_max_rows: Optional; maximum number of rows to ingest.\n\n rank_k: Optional; rank for matrix factorization in the MovieLens environment;\n also the observation dimension.\n num_actions: Optional; number of actions (movie items) to choose from.\n num_epochs: Optional; number of training epochs.\n tikhonov_weight: Optional; LinUCB Tikhonov regularization weight of the\n Trainer.\n agent_alpha: Optional; LinUCB exploration parameter that multiplies the\n confidence intervals of the Trainer.\n \"\"\"\n # Run the Ingester component.\n ingest_task = ingest_op(\n project_id=project_id,\n bigquery_table_id=bigquery_table_id,\n bigquery_max_rows=bigquery_max_rows,\n tfrecord_file=TFRECORD_FILE,\n )\n\n # Run the Trainer component and submit custom job to Vertex AI.\n # Convert the train_op component into a Vertex AI Custom Job pre-built component\n custom_job_training_op = utils.create_custom_training_job_op_from_component(\n component_spec=train_op,\n replica_count=TRAINING_REPLICA_COUNT,\n machine_type=TRAINING_MACHINE_TYPE,\n accelerator_type=TRAINING_ACCELERATOR_TYPE,\n accelerator_count=TRAINING_ACCELERATOR_COUNT,\n )\n\n train_task = custom_job_training_op(\n training_artifacts_dir=training_artifacts_dir,\n tfrecord_file=ingest_task.outputs[\"tfrecord_file\"],\n num_epochs=num_epochs,\n rank_k=rank_k,\n num_actions=num_actions,\n tikhonov_weight=tikhonov_weight,\n agent_alpha=agent_alpha,\n project=PROJECT_ID,\n location=REGION,\n )\n\n # Run the Deployer components.\n # Upload the trained policy as a model.\n model_upload_op = gcc_aip.ModelUploadOp(\n project=project_id,\n display_name=TRAINED_POLICY_DISPLAY_NAME,\n artifact_uri=train_task.outputs[\"training_artifacts_dir\"],\n serving_container_image_uri=f\"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest\",\n )\n # Create a Vertex AI endpoint. (This operation can occur in parallel with\n # the Generator, Ingester, Trainer components.)\n endpoint_create_op = gcc_aip.EndpointCreateOp(\n project=project_id, display_name=ENDPOINT_DISPLAY_NAME\n )\n # Deploy the uploaded, trained policy to the created endpoint. 
(This operation\n # has to occur after both model uploading and endpoint creation complete.)\n gcc_aip.ModelDeployOp(\n endpoint=endpoint_create_op.outputs[\"endpoint\"],\n model=model_upload_op.outputs[\"model\"],\n deployed_model_display_name=TRAINED_POLICY_DISPLAY_NAME,\n dedicated_resources_machine_type=ENDPOINT_MACHINE_TYPE,\n dedicated_resources_accelerator_type=ENDPOINT_ACCELERATOR_TYPE,\n dedicated_resources_accelerator_count=ENDPOINT_ACCELERATOR_COUNT,\n dedicated_resources_min_replica_count=ENDPOINT_REPLICA_COUNT,\n )\n\n# Compile the authored pipeline.\ncompiler.Compiler().compile(pipeline_func=pipeline, package_path=PIPELINE_SPEC_PATH)\n\n# Create a Vertex AI client.\napi_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)\n\n# Schedule a recurring pipeline.\nresponse = api_client.create_schedule_from_job_spec(\n job_spec_path=PIPELINE_SPEC_PATH,\n schedule=TRIGGER_SCHEDULE,\n parameter_values={\n # Pipeline configs\n \"project_id\": PROJECT_ID,\n \"training_artifacts_dir\": TRAINING_ARTIFACTS_DIR,\n # BigQuery config\n \"bigquery_table_id\": BIGQUERY_TABLE_ID,\n },\n)\nresponse[\"name\"]", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial (you also need to clean up other resources that are difficult to delete here, such as all or part of the data in BigQuery, the recurring pipeline and its Scheduler job, the uploaded policy/model, etc.):", "# Delete endpoint resource.\n! gcloud ai endpoints delete $ENDPOINT_ID --quiet --region $REGION\n\n# Delete Pub/Sub topics.\n! gcloud pubsub topics delete $SIMULATOR_PUBSUB_TOPIC --quiet\n! gcloud pubsub topics delete $LOGGER_PUBSUB_TOPIC --quiet\n\n# Delete Cloud Functions.\n! gcloud functions delete $SIMULATOR_CLOUD_FUNCTION --quiet\n! gcloud functions delete $LOGGER_CLOUD_FUNCTION --quiet\n\n# Delete Scheduler job.\n! gcloud scheduler jobs delete $SIMULATOR_SCHEDULER_JOB --quiet\n\n# Delete Cloud Storage objects that were created.\n! gsutil -m rm -r $PIPELINE_ROOT\n! gsutil -m rm -r $TRAINING_ARTIFACTS_DIR" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Unidata/unidata-python-workshop
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
mit
[ "<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Working with Surface Observations in Siphon and MetPy</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://weather-geek.net/images/metar_what.png\" alt=\"METAR\" style=\"height: 200px;\"></div>\n\nOverview:\n\nTeaching: 20 minutes\nExercises: 20 minutes\n\nQuestions\n\nWhat's the best way to get surface station data from a THREDDS data server?\nWhat's the best way to make a station plot of data?\nHow can I request a time series of data for a single station?\n\nObjectives\n\n<a href=\"#ncss\">Use the netCDF Subset Service (NCSS) to request a portion of the data</a>\n<a href=\"#stationplot\">Download data for a single time across stations and create a station plot</a>\n<a href=\"#timeseries\">Request a time series of data and plot</a>\n\n<a name=\"ncss\"></a>\n1. Using NCSS to get point data", "from siphon.catalog import TDSCatalog\n\n# copied from the browser url box\nmetar_cat_url = ('http://thredds.ucar.edu/thredds/catalog/'\n 'irma/metar/catalog.xml?dataset=irma/metar/Metar_Station_Data_-_Irma_fc.cdmr')\n\n# Parse the xml\ncatalog = TDSCatalog(metar_cat_url)\n\n# what datasets are here?\nprint(list(catalog.datasets))\n\nmetar_dataset = catalog.datasets['Feature Collection']", "Once we've grabbed the \"Feature Collection\" dataset, we can request a subset of the data:", "# Can safely ignore the warnings\nncss = metar_dataset.subset()", "What variables do we have available?", "ncss.variables", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"stationplot\"></a>\n2. 
Making a station plot\n\nMake new NCSS query\nRequest data closest to a time", "from datetime import datetime\n\nquery = ncss.query()\nquery.lonlat_box(north=34, south=24, east=-80, west=-90)\nquery.time(datetime(2017, 9, 10, 12))\nquery.variables('temperature', 'dewpoint', 'altimeter_setting',\n 'wind_speed', 'wind_direction', 'sky_coverage')\nquery.accept('csv')\n\n# Get the data\ndata = ncss.get_data(query)\ndata", "Now we need to pull apart the data and perform some modifications, like converting winds to components and convert sky coverage percent to codes (octets) suitable for plotting.", "import numpy as np\n\nimport metpy.calc as mpcalc\nfrom metpy.units import units\n\n# Since we used the CSV data, this is just a dictionary of arrays\nlats = data['latitude']\nlons = data['longitude']\ntair = data['temperature']\ndewp = data['dewpoint']\nalt = data['altimeter_setting']\n\n# Convert wind to components\nu, v = mpcalc.wind_components(data['wind_speed'] * units.knots, data['wind_direction'] * units.degree)\n\n# Need to handle missing (NaN) and convert to proper code\ncloud_cover = 8 * data['sky_coverage'] / 100.\ncloud_cover[np.isnan(cloud_cover)] = 10\ncloud_cover = cloud_cover.astype(np.int)\n\n# For some reason these come back as bytes instead of strings\nstid = np.array([s.tostring().decode() for s in data['station']])", "Create the map using cartopy and MetPy!\nOne way to create station plots with MetPy is to create an instance of StationPlot and call various plot methods, like plot_parameter, to plot arrays of data at locations relative to the center point.\nIn addition to plotting values, StationPlot has support for plotting text strings, symbols, and plotting values using custom formatting.\nPlotting symbols involves mapping integer values to various custom font glyphs in our custom weather symbols font. MetPy provides mappings for converting WMO codes to their appropriate symbol. The sky_cover function below is one such mapping.", "%matplotlib inline\n\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\n\nfrom metpy.plots import StationPlot, sky_cover\n\n# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nproj = ccrs.Stereographic(central_longitude=-95, central_latitude=35)\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\nstationplot = StationPlot(ax, lons, lats, transform=ccrs.PlateCarree(),\n fontsize=12)\nstationplot.plot_parameter('NW', tair, color='red')\n\n# Add wind barbs\nstationplot.plot_barb(u, v)\n\n# Plot the sky cover symbols in the center. We give it the integer code values that\n# should be plotted, as well as a mapping class that can convert the integer values\n# to the appropriate font glyph.\nstationplot.plot_symbol('C', cloud_cover, sky_cover)", "Notice how there are so many overlapping stations? There's a utility in MetPy to help with that: reduce_point_density. 
This returns a mask we can apply to data to filter the points.", "# Project points so that we're filtering based on the way the stations are laid out on the map\nproj = ccrs.Stereographic(central_longitude=-95, central_latitude=35)\nxy = proj.transform_points(ccrs.PlateCarree(), lons, lats)\n\n# Reduce point density so that there's only one point within a 200km circle\nmask = mpcalc.reduce_point_density(xy, 200000)", "Now we just plot with arr[mask] for every arr of data we use in plotting.", "# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\nstationplot = StationPlot(ax, lons[mask], lats[mask], transform=ccrs.PlateCarree(),\n fontsize=12)\nstationplot.plot_parameter('NW', tair[mask], color='red')\nstationplot.plot_barb(u[mask], v[mask])\nstationplot.plot_symbol('C', cloud_cover[mask], sky_cover)", "More examples for MetPy Station Plots:\n- MetPy Examples\n- MetPy Symbol list\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Modify the station plot (reproduced below) to include dewpoint, altimeter setting, as well as the station id. The station id can be added using the `plot_text` method on `StationPlot`.</li>\n <li>Re-mask the data to be a bit more finely spaced, say: 75km</li>\n <li>Bonus Points: Use the `formatter` argument to `plot_parameter` to only plot the 3 significant digits of altimeter setting. (Tens, ones, tenths)</li>\n </ul>\n</div>", "# Use reduce_point_density\n\n# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\n\n# Plot dewpoint\n\n# Plot altimeter setting--formatter can take a function that formats values\n\n# Plot station id\n\n# %load solutions/reduce_density.py\n", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"timeseries\"></a>\n3. Time Series request and plot\n\nLet's say we want the past days worth of data...\n...for Boulder (i.e. the lat/lon)\n...for the variables mean sea level pressure, air temperature, wind direction, and wind_speed", "from datetime import timedelta\n\n# define the time range we are interested in\nend_time = datetime(2017, 9, 12, 0)\nstart_time = end_time - timedelta(days=2)\n\n# build the query\nquery = ncss.query()\nquery.lonlat_point(-80.25, 25.8)\nquery.time_range(start_time, end_time)\nquery.variables('altimeter_setting', 'temperature', 'dewpoint',\n 'wind_direction', 'wind_speed')\nquery.accept('csv')", "Let's get the data!", "data = ncss.get_data(query)\n\nprint(list(data.keys()))", "What station did we get?", "station_id = data['station'][0].tostring()\nprint(station_id)", "That indicates that we have a Python bytes object, containing the 0-255 values corresponding to 'K', 'M', 'I', 'A'. We can decode those bytes into a string:", "station_id = station_id.decode('ascii')\nprint(station_id)", "Let's get the time into datetime objects. 
We see we have an array with byte strings in it, like station id above.", "data['time']", "So we can use a list comprehension to turn this into a list of date time objects:", "time = [datetime.strptime(s.decode('ascii'), '%Y-%m-%dT%H:%M:%SZ') for s in data['time']]", "Now for the obligatory time series plot...", "from matplotlib.dates import DateFormatter, AutoDateLocator\n\nfig, ax = plt.subplots(figsize=(10, 6))\nax.plot(time, data['wind_speed'], color='tab:blue')\n\nax.set_title(f'Site: {station_id} Date: {time[0]:%Y/%m/%d}')\nax.set_xlabel('Hour of day')\nax.set_ylabel('Wind Speed')\nax.grid(True)\n\n# Improve on the default ticking\nlocator = AutoDateLocator()\nhoursFmt = DateFormatter('%H')\nax.xaxis.set_major_locator(locator)\nax.xaxis.set_major_formatter(hoursFmt)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Pick a different location</li>\n <li>Plot temperature and dewpoint together on the same plot</li>\n </ul>\n</div>", "# Your code goes here\n\n\n# %load solutions/time_series.py", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
calico/basenji
manuscripts/akita/tutorial.ipynb
apache-2.0
[ "Required inputs for Akita are:\n* binned Hi-C or Micro-C data stored in cooler format (https://github.com/mirnylab/cooler)\n* Genome FASTA file\nFirst, make sure you have a FASTA file available consistent with genome used for the coolers. Either add a symlink for a the data directory or download the machine learning friendly simplified version in the next cell.", "import json\nimport os\nimport shutil\nimport subprocess\n\nif not os.path.isfile('./data/hg38.ml.fa'):\n print('downloading hg38.ml.fa')\n subprocess.call('curl -o ./data/hg38.ml.fa.gz https://storage.googleapis.com/basenji_barnyard/hg38.ml.fa.gz', shell=True)\n subprocess.call('gunzip ./data/hg38.ml.fa.gz', shell=True)", "Download a few Micro-C datasets, processed using distiller (https://github.com/mirnylab/distiller-nf), binned to 2048bp, and iteratively corrected.", "if not os.path.exists('./data/coolers'):\n os.mkdir('./data/coolers')\nif not os.path.isfile('./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'):\n subprocess.call('curl -o ./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'+\n ' https://storage.googleapis.com/basenji_hic/tutorials/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', shell=True)\n subprocess.call('curl -o ./data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool'+\n ' https://storage.googleapis.com/basenji_hic/tutorials/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', shell=True)\n\nls ./data/coolers/", "Write out these cooler files and labels to a samples table.", "lines = [['index','identifier','file','clip','sum_stat','description']]\nlines.append(['0', 'HFF', './data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', '2', 'sum', 'HFF'])\nlines.append(['1', 'H1hESC', './data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', '2', 'sum', 'H1hESC'])\n\nsamples_out = open('data/microc_cools.txt', 'w')\nfor line in lines:\n print('\\t'.join(line), file=samples_out)\nsamples_out.close()", "Next, we want to choose genomic sequences to form batches for stochastic gradient descent, divide them into training/validation/test sets, and construct TFRecords to provide to downstream programs.\nThe script akita_data.py implements this procedure.\nThe most relevant options here are:\n| Option/Argument | Value | Note |\n|:---|:---|:---|\n| --sample | 0.1 | Down-sample the genome to 10% to speed things up here. |\n| -g | data/hg38_gaps_binsize2048_numconseq10.bed | Dodge large-scale unmappable regions determined from filtered cooler bins. |\n| -l | 1048576 | Sequence length. |\n| --crop | 65536 | Crop edges of matrix so loss is only computed over the central region. |\n| --local | True | Run locally, as opposed to on a SLURM scheduler. |\n| -o | data/1m | Output directory |\n| -p | 8 | Uses multiple concourrent processes to read/write. |\n| -t | .1 | Hold out 10% sequences for testing. |\n| -v | .1 | Hold out 10% sequences for validation. |\n| -w | 2048 | Pool the nucleotide-resolution values to 2048 bp bins. |\n| fasta_file| data/hg38.ml.fa | FASTA file to extract sequences from. |\n| targets_file | data/microc_cools.txt | Target table with cooler paths. |\nNote: make sure to export BASENJIDIR as outlined in the basenji installation tips \n(https://github.com/calico/basenji/tree/master/#installation).", "if os.path.isdir('data/1m'):\n shutil.rmtree('data/1m')\n\n! 
akita_data.py --sample 0.05 -g ./data/hg38_gaps_binsize2048_numconseq10.bed -l 1048576 --crop 65536 --local -o ./data/1m --as_obsexp -p 8 -t .1 -v .1 -w 2048 --snap 2048 --stride_train 262144 --stride_test 32768 ./data/hg38.ml.fa ./data/microc_cools.txt", "The data for training is now saved in data/1m as tfrecords (for training, validation, and testing), where contigs.bed contains the original large contiguous regions from which training sequences were taken, and sequences.bed contains the train/valid/test sequences.", "! cut -f4 data/1m/sequences.bed | sort | uniq -c\n\n! head -n3 data/1m/sequences.bed", "Now train a model!\n(Note: for training production-level models, please remove the --sample option when generating tfrecords)", "# specify model parameters json to have only two targets\nparams_file = './params.json'\nwith open(params_file) as params_file:\n params_tutorial = json.load(params_file) \nparams_tutorial['model']['head_hic'][-1]['units'] =2\nwith open('./data/1m/params_tutorial.json','w') as params_tutorial_file:\n json.dump(params_tutorial,params_tutorial_file) \n \n### note that training with default parameters requires GPU with >12Gb RAM ###\n\n! akita_train.py -k -o ./data/1m/train_out/ ./data/1m/params_tutorial.json ./data/1m/", "See explore_model.ipynb for tips on investigating the output of a trained model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
balmandhunter/jupyter-tips-and-tricks
notebooks/04-More_basics.ipynb
mit
[ "Jupyter Notebook Basics", "names = ['alice', 'jonathan', 'bobby']\nages = [24, 32, 45]\nranks = ['kinda cool', 'really cool', 'insanely cool']\n\nfor (name, age, rank) in zip(names, ages, ranks):\n print name, age, rank\n\nfor index, (name, age, rank) in enumerate(zip(names, ages, ranks)):\n print index, name, age, rank\n\n# return, esc, shift+enter, ctrl+enter\n# text keyboard shortcuts -- cmd > (right), < left,\n# option delete (deletes words)\n# type \"h\" for help\n# tab\n# shift-tab\n# keyboard shortcuts\n# - a, b, y, m, dd, h, ctrl+shift+-\n\n%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nimport matplotlib.pyplot as plt\n# no pylab\nimport seaborn as sns\nsns.set_context('talk')\nsns.set_style('darkgrid') \nplt.rcParams['figure.figsize'] = 12, 8 # plotsize \n\nimport numpy as np\n# don't do `from numpy import *`\nimport pandas as pd\n\n# If you have a specific function that you'd like to import\nfrom numpy.random import randn\n\nx = np.arange(100)\ny = np.sin(x)\nplt.plot(x, y)#;\n\n%matplotlib notebook\n\nx = np.arange(10)\ny = np.sin(x)\nplt.plot(x, y)#;", "Magics!\n\n% and %% magics\ninteract\nembed image\nembed links, youtube\nlink notebooks\n\nCheck out http://matplotlib.org/gallery.html select your favorite.", "%%bash\nfor num in {1..5}\ndo\n for infile in *;\n do\n echo $num $infile\n done\n wc $infile\ndone\n\nprint \"hi\"\n!pwd\n\n!ping google.com\n\nthis_is_magic = \"Can you believe you can pass variables and strings like this?\"\n\nhey = !echo $this_is_magic\n\nhey", "Numpy\nIf you have arrays of numbers, use numpy or pandas (built on numpy) to represent the data. Tons of very fast underlying code.", "x = np.arange(10000)\n\nprint x # smart printing\n\nprint x[0] # first element \nprint x[-1] # last element\nprint x[0:5] # first 5 elements (also x[:5])\nprint x[:] # \"Everything\"\n\nprint x[-5:] # last five elements\n\nprint x[-5:-2]\n\nprint x[-5:-1] # not final value -- not inclusive on right\n\nx = np.random.randint(5, 5000, (3, 5))\n\nx\n\nnp.sum(x)\n\nx.sum()\n\nnp.sum(x)\n\nnp.sum(x, axis=0)\n\nnp.sum(x, axis=1)\n\nx.sum(axis=1)\n\n# Multi dimension array slice with a comma\nx[:, 2]\n\ny = np.linspace(10, 20, 11)\ny\n\nnp.linspace?\n\nnp.linspace()\n# shift-tab; shift-tab-tab\nnp.\n\ndef does_it(first=x, second=y):\n \"\"\"This is my doc\"\"\"\n pass\n\ny[[3, 5, 7]]\n\ndoes_it()\n\nnum = 3000\nx = np.linspace(1.0, 300.0, num)\ny = np.random.rand(num)\nz = np.sin(x)\nnp.savetxt(\"example.txt\", np.transpose((x, y, z)))\n\n%less example.txt\n\n!wc example.txt\n\n!head example.txt\n\n#Not a good idea\na = []\nb = []\nfor line in open(\"example.txt\", 'r'):\n a.append(line[0])\n b.append(line[2])\n \na[:10] # Whoops! \n\na = []\nb = []\nfor line in open(\"example.txt\", 'r'):\n line = line.split()\n a.append(line[0])\n b.append(line[2])\n \na[:10] # Strings! \n\na = []\nb = []\nfor line in open(\"example.txt\", 'r'):\n line = line.split()\n a.append(float(line[0]))\n b.append(float(line[2]))\n \na[:10] # Lists!\n\n# Do this!\na, b = np.loadtxt(\"example.txt\", unpack=True, usecols=(0,2))\n\na", "Matplotlib and Numpy", "from numpy.random import randn\n\nnum = 50\nx = np.linspace(2.5, 300, num)\ny = randn(num)\nplt.scatter(x, y)\n\ny > 1\n\ny[y > 1]\n\ny[(y < 1) & (y > -1)]\n\nplt.scatter(x, y, c='b', s=50)\nplt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r', s=50)\n\ny[~((y < 1) & (y > -1))] = 1.0\nplt.scatter(x, y, c='b')\nplt.scatter(x, np.clip(y, -0.5, 0.5), color='red')\n\nnum = 350\nslope = 0.3\nx = randn(num) * 50. 
+ 150.0 \ny = randn(num) * 5 + x * slope\nplt.scatter(x, y, c='b')\n\n# plt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r')\n# np.argsort, np.sort, complicated index slicing\ndframe = pd.DataFrame({'x': x, 'y': y})\ng = sns.jointplot('x', 'y', data=dframe, kind=\"reg\")", "Grab the Python version of ggplot from http://ggplot.yhathq.com/", "from ggplot import ggplot, aes, geom_line, stat_smooth, geom_dotplot, geom_point\n\nggplot(aes(x='x', y='y'), data=dframe) + geom_point() + stat_smooth(colour='blue', span=0.2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs
site/en/tutorials/images/cnn.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Convolutional Neural Network (CNN)\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/images/cnn\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/cnn.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images. Because this tutorial uses the Keras Sequential API, creating and training your model will take just a few lines of code.\nImport TensorFlow", "import tensorflow as tf\n\nfrom tensorflow.keras import datasets, layers, models\nimport matplotlib.pyplot as plt", "Download and prepare the CIFAR10 dataset\nThe CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.", "(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()\n\n# Normalize pixel values to be between 0 and 1\ntrain_images, test_images = train_images / 255.0, test_images / 255.0", "Verify the data\nTo verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image:", "class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',\n 'dog', 'frog', 'horse', 'ship', 'truck']\n\nplt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i])\n # The CIFAR labels happen to be arrays, \n # which is why you need the extra index\n plt.xlabel(class_names[train_labels[i][0]])\nplt.show()", "Create the convolutional base\nThe 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.\nAs input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure your CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. 
You can do this by passing the argument input_shape to your first layer.", "model = models.Sequential()\nmodel.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))", "Let's display the architecture of your model so far:", "model.summary()", "Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.\nAdd Dense layers on top\nTo complete the model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) into one or more Dense layers to perform classification. Dense layers take vectors as input (which are 1D), while the current output is a 3D tensor. First, you will flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs.", "model.add(layers.Flatten())\nmodel.add(layers.Dense(64, activation='relu'))\nmodel.add(layers.Dense(10))", "Here's the complete architecture of your model:", "model.summary()", "The network summary shows that (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.\nCompile and train the model", "model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nhistory = model.fit(train_images, train_labels, epochs=10, \n validation_data=(test_images, test_labels))", "Evaluate the model", "plt.plot(history.history['accuracy'], label='accuracy')\nplt.plot(history.history['val_accuracy'], label = 'val_accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.ylim([0.5, 1])\nplt.legend(loc='lower right')\n\ntest_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint(test_acc)", "Your simple CNN has achieved a test accuracy of over 70%. Not bad for a few lines of code! For another CNN style, check out the TensorFlow 2 quickstart for experts example that uses the Keras subclassing API and tf.GradientTape." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dsteurer/cs4814fa15
turing/turing.ipynb
mit
[ "%run -i turing.py\ninit()", "Turing machine computation\nTape\nWe will represent the tape as a list of tape symbols and we will represent tape symbols as Python strings.\nThe string ' ' represents the blank symbol.\nThe string '|&gt;' represents the start symbol, which indicates the beginning of the tape.\nStates\nWe will also encode states as Python strings. \nThe string 'start' represents that start state.\nThe strings 'accept', 'reject', and 'halt' represent final states of the machine, that indicate acceptance, rejection, and halting, respectively. \nSimulation\nThe following function simulates a given Turing machine for a given number of steps on a given input", "def run(transitions, input, steps):\n \"\"\"simulate Turing machine for the given number of steps and the given input\"\"\"\n\n # convert input from string to list of symbols\n # we use '|>' as a symbol to indicate the beginning of the tape\n input = ['|>'] + list(input) + [' ']\n\n # sanitize transitions for 'accept' and 'reject' states and for symbol '|>'\n transitions = sanitize_transitions(transitions)\n\n # create initial configuration\n c = Configuration(state='start', head=1, tape=input)\n\n for i in range(0, steps):\n # read tape content under head\n current = c.state\n read = c.tape[c.head]\n\n # lookup transition based on state and read symbol\n next, write, move = transitions(current, read)\n\n # update configuration\n c.state = next\n c.tape[c.head] = write\n c.head += move\n if c.head >= len(c.tape):\n c.tape += [' ']\n\n # return final configuration\n return c\n", "The following function checks that the transition functions satisfies some simple syntactic requirements (don't move to the left of the start symbol, don't remove or add start symbols, don't change state after accepting, rejecting, or halting.)", "def check_transitions(transitions, states, alphabet):\n\n transitions = sanitize_transitions(transitions)\n\n for current in states:\n for read in alphabet:\n next, write, move = transitions(current, read)\n\n # we either stay in place or move one position\n # to the left or right\n assert(move in [-1,0,1])\n\n # if we read the begin symbol,\n if read == '|>':\n # we need to write it back\n assert(write == '|>')\n # we need to move to the right\n assert(move == 1)\n else:\n # we cannot write the begin symbol\n assert(write != '|>')\n\n # if we are in one of the final states\n if current in ['accept', 'reject', 'halt']:\n # we cannot change to a different state\n assert(next == current)\n\n print(\"transition checks passed\")\n", "Examples\nCopy machine\nThe following Turing machine copies its input, i.e., it computes the function $f(x)=xx$. 
\nThe actual implementation uses different versions of the '0' and '1' symbol (called '0-read', '0-write' and '1-read', '1-write') in the two copies of the string $x$.\nWe could replace those by regular '0' and '1' symbols by sweeping once more over the tape before the end of the computation.", "def transitions_copy(current, read):\n if read == '|>':\n return 'start', read, 1\n elif current == 'start':\n if 'write' not in read:\n return read + '-write', read + '-read', 1\n else:\n return 'accept', read, 1\n elif 'write' in current:\n if read != ' ':\n return current, read, 1\n else:\n return 'rewind', current, -1\n elif current == 'rewind':\n if 'read' not in read:\n return current, read, -1\n else:\n return 'start', read, 1", "Here is the full transitions function table of the machine:", "transitions_table(transitions_copy, \n ['start', '0-write', '1-write', 'rewind'],\n ['0', '1', '0-read', '1-read', '0-write', '1-write'])", "Here is an interactive simulation of the copy Turing machine (requires that ipython notebook is run locally).\nYou can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step. (If you click on the current step slider, you can also change it using the arrow keys.)", "simulate(transitions_copy, input='10011', unary=False)", "Power-of-2 machine\nThe following Turing machine determines if the input is the unary encoding of a power of 2.\nFurthermore, given any string $1^n$, it outputs a string of the form ${0,1}^n2^i$, where $i$ is the largest number such that $2^i$ divides $n$.", "def transitions_power(current,read):\n if read == '|>':\n return 'start', read, 1;\n elif current == 'rewind':\n return current, read, -1\n elif read == 'x':\n return current, read, 1 \n elif current == 'start':\n if read != '1':\n return 'reject', read, 1\n else: \n return 'start-even', read, 1\n elif 'even' in current and read == '1':\n return 'odd', 'x', 1\n elif current == 'odd' and read == '1':\n return 'even', read, 1\n elif current == 'odd':\n if read == ' ':\n return 'rewind', '2', -1\n else:\n return current, read, 1\n elif current == 'start-even' and read != '1':\n return 'accept', read, -1\n elif current == 'even' and read != '1':\n return 'reject', read, -1", "Here is the full transition function table of the Turing machine:", "transitions_table(transitions_power, \n ['start', 'start-even', 'even', 'odd', 'rewind'], \n ['0', '1', 'x', ' ', '|>'])", "Here is an interactive simulation of the power Turing machine (requires that ipython notebook is run locally).\nYou can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step.\n(If you click on the current step slider, you can also change it using the arrow keys.)", "simulate(transitions_power, input_unary=16, step_to=200, unary=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
bsd-3-clause
[ "%matplotlib inline", "Whitening evoked data with a noise covariance\nEvoked data are loaded and then whitened using a given noise covariance\nmatrix. It's an excellent quality check to see if baseline signals match\nthe assumption of Gaussian white noise during the baseline period.\nCovariance estimation and diagnostic plots are based on\n:footcite:EngemannGramfort2015.\nReferences\n.. footbibliography::", "# Authors: Alexandre Gramfort <[email protected]>\n# Denis A. Engemann <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport mne\n\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.cov import compute_covariance\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'\nevent_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'\n\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 40, n_jobs=1, fir_design='firwin')\nraw.info['bads'] += ['MEG 2443'] # bads + 1 more\nevents = mne.read_events(event_fname)\n\n# let's look at rare events, button presses\nevent_id, tmin, tmax = 2, -0.2, 0.5\nreject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'),\n baseline=None, reject=reject, preload=True)\n\n# Uncomment next line to use fewer samples and study regularization effects\n# epochs = epochs[:20] # For your data, use as many samples as you can!", "Compute covariance using automated regularization", "method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))\nnoise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',\n return_estimators=True, verbose=True, n_jobs=1,\n projs=None, rank=None,\n method_params=method_params)\n\n# With \"return_estimator=True\" all estimated covariances sorted\n# by log-likelihood are returned.\n\nprint('Covariance estimates sorted from best to worst')\nfor c in noise_covs:\n print(\"%s : %s\" % (c['method'], c['loglik']))", "Show the evoked data:", "evoked = epochs.average()\n\nevoked.plot(time_unit='s') # plot evoked response", "We can then show whitening for our various noise covariance estimates.\nHere we should look to see if baseline signals match the\nassumption of Gaussian white noise. we expect values centered at\n0 within 2 standard deviations for 95% of the time points.\nFor the Global field power we expect a value of 1.", "evoked.plot_white(noise_covs, time_unit='s')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gutouyu/cs231n
cs231n/assignment/assignment2/TensorFlow.ipynb
mit
[ "What's this TensorFlow business?\nYou've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.\nFor the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook)\nWhat is it?\nTensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray.\nWhy?\n\nOur code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.\nWe want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. \nWe want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) \nWe want you to be exposed to the sort of deep learning code you might run into in academia or industry. \n\nHow will I learn TensorFlow?\nTensorFlow has many excellent tutorials available, including those from Google themselves.\nOtherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.\nLoad Datasets", "import tensorflow as tf\nimport numpy as np\nimport math\nimport timeit\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom cs231n.data_utils import load_CIFAR10\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):\n \"\"\"\n Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n it for the two-layer neural net classifier. These are the same steps as\n we used for the SVM, but condensed to a single function. \n \"\"\"\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n # Normalize the data: subtract the mean image\n mean_image = np.mean(X_train, axis=0)\n X_train -= mean_image\n X_val -= mean_image\n X_test -= mean_image\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\n\n# Invoke the above function to get our data.\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()\nprint('Train data shape: ', X_train.shape)\nprint('Train labels shape: ', y_train.shape)\nprint('Validation data shape: ', X_val.shape)\nprint('Validation labels shape: ', y_val.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)", "Example Model\nSome useful utilities\n. 
Remember that our image data is initially N x H x W x C, where:\n* N is the number of datapoints\n* H is the height of each image in pixels\n* W is the height of each image in pixels\n* C is the number of channels (usually 3: R, G, B)\nThis is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.\nThe example model itself\nThe first step to training your own model is defining its architecture.\nHere's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. \nIn that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used. \nMake sure you understand why the parameters of the Linear layer are 5408 and 10.\nTensorFlow Details\nIn TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.", "# clear old variables\ntf.reset_default_graph()\n\n# setup input (e.g. the data that changes every batch)\n# The first dim is None, and gets sets automatically based on batch size fed in\nX = tf.placeholder(tf.float32, [None, 32, 32, 3])\ny = tf.placeholder(tf.int64, [None])\nis_training = tf.placeholder(tf.bool)\n\ndef simple_model(X,y):\n # define our weights (e.g. init_two_layer_convnet)\n \n # setup variables\n Wconv1 = tf.get_variable(\"Wconv1\", shape=[7, 7, 3, 32])\n bconv1 = tf.get_variable(\"bconv1\", shape=[32])\n W1 = tf.get_variable(\"W1\", shape=[5408, 10])\n b1 = tf.get_variable(\"b1\", shape=[10])\n\n # define our graph (e.g. two_layer_convnet)\n a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1\n h1 = tf.nn.relu(a1)\n h1_flat = tf.reshape(h1,[-1,5408])\n y_out = tf.matmul(h1_flat,W1) + b1\n return y_out\n\ny_out = simple_model(X,y)\n\n# define our loss\ntotal_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)\nmean_loss = tf.reduce_mean(total_loss)\n\n# define our optimizer\noptimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate\ntrain_step = optimizer.minimize(mean_loss)", "TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). \n\nLayers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn\nOptimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers\nBatchNorm: https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization\n\nTraining the model on one epoch\nWhile we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. 
For more information, see the TensorFlow Getting started guide.\nOptionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide\nYou should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below", "def run_model(session, predict, loss_val, Xd, yd,\n epochs=1, batch_size=64, print_every=100,\n training=None, plot_losses=False):\n # have tensorflow compute accuracy\n correct_prediction = tf.equal(tf.argmax(predict,1), y)\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # shuffle indicies\n train_indicies = np.arange(Xd.shape[0])\n np.random.shuffle(train_indicies)\n\n training_now = training is not None\n \n # setting up variables we want to compute (and optimizing)\n # if we have a training function, add that to things we compute\n variables = [mean_loss,correct_prediction,accuracy]\n if training_now:\n variables[-1] = training\n \n # counter \n iter_cnt = 0\n for e in range(epochs):\n # keep track of losses and accuracy\n correct = 0\n losses = []\n # make sure we iterate over the dataset once\n for i in range(int(math.ceil(Xd.shape[0]/batch_size))):\n # generate indicies for the batch\n start_idx = (i*batch_size)%Xd.shape[0]\n idx = train_indicies[start_idx:start_idx+batch_size]\n \n # create a feed dictionary for this batch\n feed_dict = {X: Xd[idx,:],\n y: yd[idx],\n is_training: training_now }\n # get batch size\n actual_batch_size = yd[idx].shape[0]\n \n # have tensorflow compute loss and correct predictions\n # and (if given) perform a training step\n loss, corr, _ = session.run(variables,feed_dict=feed_dict)\n \n # aggregate performance stats\n losses.append(loss*actual_batch_size)\n correct += np.sum(corr)\n \n # print every now and then\n if training_now and (iter_cnt % print_every) == 0:\n print(\"Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}\"\\\n .format(iter_cnt,loss,np.sum(corr)/actual_batch_size))\n iter_cnt += 1\n total_correct = correct/Xd.shape[0]\n total_loss = np.sum(losses)/Xd.shape[0]\n print(\"Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}\"\\\n .format(total_loss,total_correct,e+1))\n if plot_losses:\n plt.plot(losses)\n plt.grid(True)\n plt.title('Epoch {} Loss'.format(e+1))\n plt.xlabel('minibatch number')\n plt.ylabel('minibatch loss')\n plt.show()\n return total_loss,total_correct\n\nwith tf.Session() as sess:\n with tf.device(\"/cpu:0\"): #\"/cpu:0\" or \"/gpu:0\" \n sess.run(tf.global_variables_initializer())\n print('Training')\n run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)\n print('Validation')\n run_model(sess,y_out,mean_loss,X_val,y_val,1,64)", "Training a specific model\nIn this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model. \nUsing the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:\n\n7x7 Convolutional Layer with 32 filters and stride of 1\nReLU Activation Layer\nSpatial Batch Normalization Layer (trainable parameters, with scale and centering)\n2x2 Max Pooling layer with a stride of 2\nAffine layer with 1024 output units\nReLU Activation Layer\nAffine layer from 1024 input units to 10 outputs", "# clear old variables\ntf.reset_default_graph()\n\n# define our input (e.g. 
the data that changes every batch)\n# The first dim is None, and gets sets automatically based on batch size fed in\nX = tf.placeholder(tf.float32, [None, 32, 32, 3])\ny = tf.placeholder(tf.int64, [None])\nis_training = tf.placeholder(tf.bool)\n\n# define model\ndef complex_model(X,y,is_training):\n pass\n\ny_out = complex_model(X,y,is_training)", "To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):", "# Now we're going to feed a random batch into the model \n# and make sure the output is the right size\nx = np.random.randn(64, 32, 32,3)\nwith tf.Session() as sess:\n with tf.device(\"/cpu:0\"): #\"/cpu:0\" or \"/gpu:0\"\n tf.global_variables_initializer().run()\n\n ans = sess.run(y_out,feed_dict={X:x,is_training:True})\n %timeit sess.run(y_out,feed_dict={X:x,is_training:True})\n print(ans.shape)\n print(np.array_equal(ans.shape, np.array([64, 10])))", "You should see the following from the run above \n(64, 10)\nTrue\nGPU!\nNow, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.", "try:\n with tf.Session() as sess:\n with tf.device(\"/gpu:0\") as dev: #\"/cpu:0\" or \"/gpu:0\"\n tf.global_variables_initializer().run()\n\n ans = sess.run(y_out,feed_dict={X:x,is_training:True})\n %timeit sess.run(y_out,feed_dict={X:x,is_training:True})\nexcept tf.errors.InvalidArgumentError:\n print(\"no gpu found, please use Google Cloud if you want GPU acceleration\") \n # rebuild the graph\n # trying to start a GPU throws an exception \n # and also trashes the original graph\n tf.reset_default_graph()\n X = tf.placeholder(tf.float32, [None, 32, 32, 3])\n y = tf.placeholder(tf.int64, [None])\n is_training = tf.placeholder(tf.bool)\n y_out = complex_model(X,y,is_training)", "You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.\nTrain the model.\nNow that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).\nMake sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.\nFirst, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. 
See the TensorFlow documentation for more information\n* Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn\n* Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers", "# Inputs\n# y_out: is what your model computes\n# y: is your TensorFlow variable with label information\n# Outputs\n# mean_loss: a TensorFlow variable (scalar) with numerical loss\n# optimizer: a TensorFlow optimizer\n# This should be ~3 lines of code!\nmean_loss = None\noptimizer = None\npass\n\n\n# batch normalization in tensorflow requires this extra dependency\nextra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\nwith tf.control_dependencies(extra_update_ops):\n train_step = optimizer.minimize(mean_loss)", "Train the model\nBelow we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization", "sess = tf.Session()\n\nsess.run(tf.global_variables_initializer())\nprint('Training')\nrun_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)", "Check the accuracy of the model.\nLet's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.", "print('Validation')\nrun_model(sess,y_out,mean_loss,X_val,y_val,1,64)", "Train a great model on CIFAR-10!\nNow it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.\nThings you should try:\n\nFilter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient\nNumber of filters: Above we used 32 filters. Do more or fewer do better?\nPooling vs Strided Convolution: Do you use max pooling or just stride convolutions?\nBatch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?\nNetwork architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:\n[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]\n\n\nUse TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for how to use tf.layers. \nUse Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay.\nGlobal Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).\nRegularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial\n\nTips for training\nFor each network architecture that you try, you should tune the learning rate and regularization strength. 
When doing this there are a couple important things to keep in mind:\n\nIf the parameters are working well, you should see improvement within a few hundred iterations\nRemember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.\nOnce you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.\nYou should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.\n\nGoing above and beyond\nIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.\n\nAlternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.\nAlternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.\nModel ensembles\nData augmentation\nNew Architectures\nResNets where the input from the previous layer is added to the output.\nDenseNets where inputs into previous layers are concatenated together.\nThis blog has an in-depth overview\n\nIf you do decide to implement something extra, clearly describe it in the \"Extra Credit Description\" cell below.\nWhat we expect\nAt the very least, you should be able to train a ConvNet that gets at >= 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.\nYou should use the space below to experiment and train your network. 
The final cell in this notebook should contain the training and validation set accuracies for your final trained network.\nHave fun and happy training!", "# Feel free to play with this cell\n\ndef my_model(X,y,is_training):\n pass\n\ntf.reset_default_graph()\n\nX = tf.placeholder(tf.float32, [None, 32, 32, 3])\ny = tf.placeholder(tf.int64, [None])\nis_training = tf.placeholder(tf.bool)\n\ny_out = my_model(X,y,is_training)\nmean_loss = None\noptimizer = None\n\n\npass\n\n# batch normalization in tensorflow requires this extra dependency\nextra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\nwith tf.control_dependencies(extra_update_ops):\n train_step = optimizer.minimize(mean_loss)\n\n# Feel free to play with this cell\n# This default code creates a session\n# and trains your model for 10 epochs\n# then prints the validation set accuracy\nsess = tf.Session()\n\nsess.run(tf.global_variables_initializer())\nprint('Training')\nrun_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)\nprint('Validation')\nrun_model(sess,y_out,mean_loss,X_val,y_val,1,64)\n\n# Test your model here, and make sure \n# the output of this cell is the accuracy\n# of your best model on the training and val sets\n# We're looking for >= 70% accuracy on Validation\nprint('Training')\nrun_model(sess,y_out,mean_loss,X_train,y_train,1,64)\nprint('Validation')\nrun_model(sess,y_out,mean_loss,X_val,y_val,1,64)", "Describe what you did here\nIn this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network\nTell us here\nTest Set - Do this only once\nNow that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.", "print('Test')\nrun_model(sess,y_out,mean_loss,X_test,y_test,1,64)", "Going further with TensorFlow\nThe next assignment will make heavy use of TensorFlow. You might also find it useful for your projects. \nExtra Credit Description\nIf you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pfschus/fission_bicorrelation
methods/build_det_df_angles_pairs.ipynb
mit
[ "<h1 id=\"tocheading\">Table of Contents</h1>\n<div id=\"toc\"></div>", "%%javascript\n$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')", "Chi-Nu Array Detector Angles\nAuthor: Patricia Schuster\nDate: Fall 2016/Winter 2017\nInstitution: University of Michigan NERS\nEmail: [email protected]\nWhat are we doing today?\nGoal: Import and analyze the angles between all of the detector pairs in the Chi-Nu array.\nAs a reminder, this is what the Chi-Nu array looks like:", "%%html\n<img src=\"fig/setup.png\",width=80%,height=80%>", "There are 45 detectors in this array, making for 990 detector pairs:", "45*44/2", "In order to characterize the angular distribution of the neutrons and gamma-rays emitted in a fission interaction, we are going to analyze the data from pairs of detectors at different angles from one another. \nIn this notebook I am going to import the detector angle data that Matthew provided me and explore the data. \n1) Import the angular data to a dictionary\n2) Visualize the angular data\n3) Find detector pairs in a given angular range\n4) Generate pairs vs. angle ranges", "# Import packages\nimport os.path\nimport time\nimport numpy as np\nnp.set_printoptions(threshold=np.nan) # print entire matrices\nimport sys\nimport inspect\nimport matplotlib.pyplot as plt\nimport scipy.io as sio\nfrom tqdm import *\nimport pandas as pd\n\nimport seaborn as sns\nsns.set_palette('spectral')\nsns.set_style(style='white')\n\nsys.path.append('../scripts/')\nimport bicorr as bicorr\n\n%load_ext autoreload\n%autoreload 2", "Step 1: Initialize pandas DataFrame with detector pairs\nThe detector pair angles are stored in a file lanl_detector_angles.mat. Write a function to load it as an array and then generate a pandas DataFrame\nThis was done before in bicorr.build_dict_det_pair(). Replace with a pandas dataFrame.\nColumns will be:\n\nDetector 1\nDetector 2\nIndex in bicorr_hist_master\nAngle between detectors\n\nWe can add more columns later very easily.\nLoad channel lists\nUse the function bicorr.built_ch_lists() to generate numpy arrays with all of the channel numbers:", "help(bicorr.build_ch_lists)\n\nchList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag = True)", "Initialize dataFrame with detector channel numbers", "det_df = pd.DataFrame(columns=('d1', 'd2', 'd1d2', 'angle'))", "The pandas dataFrame should have 990 entries, one for each detector pair. Generate this.", "# Fill pandas dataFrame with d1, d2, and d1d2 \ncount = 0\ndet_pair_chs = np.zeros(num_det_pairs,dtype=np.int)\n\n# Loop through all detector pairs\nfor i1 in np.arange(0,num_dets):\n det1ch = detList[i1]\n for i2 in np.arange(i1+1,num_dets):\n det2ch = detList[i2]\n det_df.loc[count,'d1' ] = det1ch\n det_df.loc[count,'d2' ] = det2ch\n det_df.loc[count,'d1d2'] = 100*det1ch+det2ch\n count = count+1\n\ndet_df.head()\n\nplt.plot(det_df['d1d2'],det_df.index,'.k')\nplt.xlabel('Detector pair (100*det1ch+det2ch)')\nplt.ylabel('Index in det_df')\nplt.title('Mapping between detector pair and index')\nplt.show()", "Visualize the dataFrame so far\nTry using the built-in pandas.DataFrame.plot method.", "ax = det_df.plot('d1','d2',kind='scatter', marker = 's',edgecolor='none',s=13, c='d1d2')\nplt.xlim([0,50])\nplt.ylim([0,50])\nax.set_aspect('equal')\nplt.xlabel('Detector 1 channel')\nplt.ylabel('Detector 2 channel')\n\nplt.show()", "There are some problems with displaying the labels, so instead I will use matplotlib directly. 
I am writing a function to generate this plot since I will likely want to view it a lot.", "bicorr.plot_det_df(det_df, which=['index'])", "Step 2: Fill angles column\nThe lanl_detector_angles.mat file is located in my measurements folder:", "os.listdir('../meas_info/')", "What does this file look like? Import the .mat file and take a look.", "det2detAngle = sio.loadmat('../meas_info/lanl_detector_angles.mat')['det2detAngle']\ndet2detAngle.shape\n\nplt.pcolormesh(det2detAngle, cmap = \"viridis\")\ncbar = plt.colorbar()\ncbar.set_label('Angle (degrees)')\nplt.xlabel('Detector 1')\nplt.ylabel('Detector 2')\nplt.show()", "The array currently is ndets x ndets with an angle at every index. This is twice as many entries as we need because pairs are repeated at (d1,d2) and (d2,d1). Loop through the pairs and store the angle to the dataframe.\nFill the 'angle' column of the DataFrame:", "for pair in det_df.index:\n det_df.loc[pair,'angle'] = det2detAngle[int(det_df.loc[pair,'d1'])][int(det_df.loc[pair,'d2'])]\n\ndet_df.head()", "Visualize the angular data", "bicorr.plot_det_df(det_df,which=['angle'])", "Verify accuracy of pandas method\nMake use of git to checkout old versions.\nPreviously, I generated a dictionary that mapped the detector pair d1d2 index to the angle. Verify that the new method using pandas is producing the same array of angles. \n Old version using channel lists, dictionary", "dict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair()\n\ndict_pair_to_angle = bicorr.build_dict_pair_to_angle(dict_pair_to_index,foldername='../../measurements/')\n\ndet1ch_old, det2ch_old, angle_old = bicorr.unpack_dict_pair_to_angle(dict_pair_to_angle)", "New method using pandas det_df", "det_df = bicorr.load_det_df()\n\ndet1ch_new = det_df['d1'].values\ndet2ch_new = det_df['d2'].values\nangle_new = det_df['angle'].values", "Compare the two", "plt.plot([0,180],[0,180],'r')\nplt.plot(angle_old, angle_new, '.k')\nplt.xlabel('Angle old (degrees)')\nplt.ylabel('Angle new (degrees)')\nplt.title('Compare angles from new and old method')\nplt.show()", "Are the angle vectors within 0.001 degrees of each other? If so, then consider the two equal.", "np.sum((angle_old - angle_new) < 0.001)", "Yes, consider them the same. \nStep 3: Extract information from det_df\nI need to exact information from det_df using the pandas methods. What are a few things I want to do?", "det_df.head()", "Return rows that meet a given condition\nThere are two primary methods for accessing rows in the dataFrame that meet certain conditions. In our case, the conditions may be which detector pairs or which angle ranges we want to access.\n\nReturn a True/False mask indicating which entries meet the conditions\nReturn a pandas Index structure containing the indices of those entries\n\nAs an example, I will look for rows in which d2=8. As a note, this will not be all entries in which channel 8 was involved because there are other pairs in which d1=8 that will not be included.\n Return the rows \nStart with the mask method, which can be used to store our conditions.", "d = 8\nind_mask = (det_df['d2'] == d)\n\n# Get a glimpse of the mask's first five elements\nind_mask.head()\n\n# View the mask entries that are equal to true\nind_mask[ind_mask]", "The other method is to use the .index method to return a pandas index structure. 
Pull the indices from det_df using the mask.", "ind = det_df.index[ind_mask]\nprint(ind)", "Count the number of rows \nUsing the mask", "np.sum(ind_mask)", "Using the index structure", "len(ind)", "Extract information for a single detector\n Find indices for that detector", "# A single detector, may be d1 or d2\nd = 8\nind_mask = (det_df['d1']==d) | (det_df['d2']==d)\nind = det_df.index[ind_mask]", "These lines can be accessed in det_df directly.", "det_df[ind_mask].head()", "Return a list of the other detector pair \nSince the detector may be d1 or d2, I may need to return a list of the other pair, regardless of the order. How can I generate an array of the other detector in the pair?", "det_df_this_det = det_df.loc[ind,['d1','d2']]\n\ndet_df_this_det.head()", "This is a really stupid method, but I can multiply the two detectors together and then divide by 8 to divide out that channel.", "det_df_this_det['dN'] = det_df_this_det.d1 * det_df_this_det.d2 / d\n\ndet_df_this_det.head()\n\nplt.plot(det_df_this_det['dN'],'.k')\nplt.xlabel('Array in dataFrame')\nplt.ylabel('dN (other channel)')\nplt.title('Other channel for pairs including ch '+str(d))\nplt.show()", "Return the angles", "plt.plot(det_df.loc[ind,'angle'],'.k')\nplt.xlabel('Index')\nplt.ylabel('Angle between pairs')\nplt.title('Angle for pairs including ch '+ str(d))\nplt.show()\n\nplt.plot(det_df_this_det['dN'],det_df.loc[ind,'angle'],'.k')\nplt.axvline(d,color='r')\nplt.xlabel('dN (other channel)')\nplt.ylabel('Angle between pairs')\nplt.title('Angle for pairs including ch '+ str(d))\nplt.show()", "Extract information for a given pair\n Find indices for that pair", "d1 = 1\nd2 = 4\n\nif d2 < d1:\n print('Warning: d2 < d1. Channels inverted')\n\nind_mask = (det_df['d1']==d1) & (det_df['d2']==d2)\nind = det_df.index[ind_mask]\n\ndet_df[ind_mask]\n\ndet_df[ind_mask]['angle']", "I will write a function that returns the index.", "bicorr.d1d2_index(det_df,4,1)", "Compare to speed of dictionary\nFor a large number of detector pairs, which is faster for retrieving the indices?", "bicorr_data = bicorr.load_bicorr(bicorr_path = '../2017_01_09_pfs_build_bicorr_hist_master/1/bicorr1')\nbicorr_data.shape\n\ndet_df = bicorr.load_det_df()\ndict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair()\n\nd1 = 4\nd2 = 8\nprint(dict_pair_to_index[100*d1+d2])\nprint(bicorr.d1d2_index(det_df,d1,d2))", "Loop through bicorr_data and generate the index for all pairs.\n Using the dictionary method", "start_time = time.time()\nfor i in tqdm(np.arange(bicorr_data.size),ascii=True):\n d1 = bicorr_data[i]['det1ch']\n d2 = bicorr_data[i]['det2ch']\n index = dict_pair_to_index[100*d1+d2]\nprint(time.time()-start_time)", "Using the pandas dataFrame method", "start_time = time.time()\nfor i in tqdm(np.arange(bicorr_data.size),ascii=True):\n d1 = bicorr_data[i]['det1ch']\n d2 = bicorr_data[i]['det2ch']\n index = bicorr.d1d2_index(det_df,d1,d2)\nprint(time.time()-start_time)", "I'm not going to run this because tqdm says it will take approximately 24 minutes. So instead I should go with the dict method. But I would like to produce the dictionary from the pandas array directly. 
\n Produce dictionaries from det_df \nInstead of relying on dict_pair_to_index all the time, I will generate it on the fly when filling bicorr_hist_master in build_bicorr_hist_master since that function requires generating the index so many times.\nThe three dictionaries that I need are:\n\ndict_pair_to_index\ndict_index_to_pair\ndict_pair_to_angle", "det_df.index\n\ndet_df.head()\n\ndet_df[['d1d2','d2']].head()\n\ndict_index_to_pair = det_df['d1d2'].to_dict()\ndict_pair_to_index = {v: k for k, v in dict_index_to_pair.items()}\n\ndict_pair_to_angle = pd.Series(det_df['angle'].values,index=det_df['d1d2']).to_dict()", "Functionalize these dictionaries so I can produce them on the fly.", "help(bicorr.build_dict_det_pair)\n\ndict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)", "Instructions: Save, load det_df file\nI'm going to store the dataFrame using to_pickle. At this point, it only contains information on the pairs and angles. No bin column has been added.", "det_df.to_pickle('../meas_info/det_df_pairs_angles.pkl')\ndet_df.to_csv('../meas_info/det_df_pairs_angles.csv',index = False)", "Revive the dataFrame from the .pkl file. Write a function to do this automatically. Option to display plots.", "help(bicorr.load_det_df)\n\ndet_df = bicorr.load_det_df()\ndet_df.head()\n\ndet_df = bicorr.load_det_df()\nbicorr.plot_det_df(det_df, show_flag = True, which = ['index'])\nbicorr.plot_det_df(det_df, show_flag = True, which = ['angle'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
InsightSoftwareConsortium/SimpleITK-Notebooks
Python/02_Pythonic_Image.ipynb
apache-2.0
[ "Pythonic Syntactic Sugar <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F02_Pythonic_Image.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nThe Image Basics Notebook was straight forward and closely follows ITK's C++ interface.\nSugar is great it gives your energy to get things done faster! SimpleITK has applied a generous about of syntactic sugar to help get things done faster too.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\nmpl.rc(\"image\", aspect=\"equal\")\nimport SimpleITK as sitk\n\n# Download data to work on\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata", "Let us begin by developing a convenient method for displaying images in our notebooks.", "img = sitk.GaussianSource(size=[64] * 2)\nplt.imshow(sitk.GetArrayViewFromImage(img))\n\nimg = sitk.GaborSource(size=[64] * 2, frequency=0.03)\nplt.imshow(sitk.GetArrayViewFromImage(img))\n\ndef myshow(img):\n nda = sitk.GetArrayViewFromImage(img)\n plt.imshow(nda)\n\n\nmyshow(img)", "Multi-dimension slice indexing\nIf you are familiar with numpy, sliced index then this should be cake for the SimpleITK image. The Python standard slice interface for 1-D object:\n<table>\n <tr><td>Operation</td> <td>Result</td></tr>\n <tr><td>d[i]</td> <td>i-th item of d, starting index 0</td></tr>\n <tr><td>d[i:j]</td> <td>slice of d from i to j</td></tr>\n <tr><td>d[i:j:k]</td> <td>slice of d from i to j with step k</td></tr>\n</table>\n\nWith this convenient syntax many basic tasks can be easily done.", "img[24, 24]", "Cropping", "myshow(img[16:48, :])\n\nmyshow(img[:, 16:-16])\n\nmyshow(img[:32, :32])", "Flipping", "img_corner = img[:32, :32]\nmyshow(img_corner)\n\nmyshow(img_corner[::-1, :])\n\nmyshow(\n sitk.Tile(\n img_corner,\n img_corner[::-1, ::],\n img_corner[::, ::-1],\n img_corner[::-1, ::-1],\n [2, 2],\n )\n)", "Slice Extraction\nA 2D image can be extracted from a 3D one.", "img = sitk.GaborSource(size=[64] * 3, frequency=0.05)\n\n# Why does this produce an error?\nmyshow(img)\n\nmyshow(img[:, :, 32])\n\nmyshow(img[16, :, :])", "Subsampling", "myshow(img[:, ::3, 32])", "Mathematical Operators\nMost python mathematical operators are overloaded to call the SimpleITK filter which does that same operation on a per-pixel basis. They can operate on a two images or an image and a scalar.\nIf two images are used then both must have the same pixel type. 
The output image type is usually the same.\nAs these operators basically call ITK filters, which just use raw C++ operators, care must be taken to prevent overflow and division by zero, etc.\n<table>\n    <tr><td>Operators</td></tr>\n    <tr><td>+</td></tr>\n    <tr><td>-</td></tr>\n    <tr><td>&#42;</td></tr>\n    <tr><td>/</td></tr>\n    <tr><td>//</td></tr>\n    <tr><td>**</td></tr>\n</table>", "img = sitk.ReadImage(fdata(\"cthead1.png\"))\nimg = sitk.Cast(img, sitk.sitkFloat32)\nmyshow(img)\nimg[150, 150]\n\ntimg = img**2\nmyshow(timg)\ntimg[150, 150]", "Division Operators\nAll three Python division operators are implemented: __floordiv__, __truediv__, and __div__.\nThe true division's output is a double pixel type.\nSee PEP 238 to see why Python changed the division operator in Python 3.\nBitwise Logic Operators\n<table>\n    <tr><td>Operators</td></tr>\n    <tr><td>&</td></tr>\n    <tr><td>|</td></tr>\n    <tr><td>^</td></tr>\n    <tr><td>~</td></tr>\n</table>", "img = sitk.ReadImage(fdata(\"cthead1.png\"))\nmyshow(img)", "Comparative Operators\n<table>\n    <tr><td>Operators</td></tr>\n    <tr><td>&gt;</td></tr>\n    <tr><td>&gt;=</td></tr>\n    <tr><td>&lt;</td></tr>\n    <tr><td>&lt;=</td></tr>\n    <tr><td>==</td></tr>\n</table>\n\nThese comparative operators follow the same convention as the rest of SimpleITK for binary images. They have the pixel type of sitkUInt8 with values of 0 and 1.", "img = sitk.ReadImage(fdata(\"cthead1.png\"))\nmyshow(img)", "Amazingly, this makes common trivial tasks really trivial", "myshow(img > 90)\n\nmyshow(img > 150)\n\nmyshow((img > 90) + (img > 150))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WMD-group/SMACT
examples/Inverse_perovskites/Inverse_formate_perovskites.ipynb
mit
[ "Formate perovskites\nPerovskites are an important class of inorganic materials, particularly for solar energy applications.\nUsually, the perovskite structure contains an A cation, a B cation and an C anion in the ratio 1:1:3 (black, blue and red in the figure below, respectively, see wikipeidia for more information.\n<img src=\"Perovskite.jpg\">\nHere we search for charge inverted perovskites, i.e. with an anion on the A site. This class of material is closely related to perovskites, and may represent another fruitful search space for new photovoltaic materials. \nIn this example we assume a simple formate moelcule as the C-site and uses Goldschmidt ratio rules as part of the screening.\nThese rules allow us to estimate whether or not a perovskite structure is likely to form based on data about ionic size alone. The tolerance factor is defined as a ratio of the radii of the A, B and C species\n\\begin{equation}\nt = \\frac{r^A + r^C}{\\sqrt{2}(r^B + r^C)} ,\n\\end{equation}\nValues of t > 1 imply a relatively large A site favoring a hexagonal structure, 0.9 < t < 1 predicts a cubic structure, and 0.7 < t < 0.9 means that the A site is small, preferring an orthorhombic structure. For t < 0.7, other (non-perovskite) structures are predicted.\nWe also apply the standard charge neutrality and electronegativity tests as described in the docs.\nWe outline 2 approaches for achieveing the same result:\n\n\nThe methodology is written out explicitly for transparency, including accessing the smact data directory directly and storing element and species information as simple lists of strings, ints and floats.\n\n\nFewer lines of code are used, making use of inbuilt smact functions. Element and species information is stored directly as Element and Species objects.", "### IMPORTS ###\nget_ipython().magic(u'matplotlib inline') # plots appear in the notebook\nimport smact\nfrom smact import Species, Element\nimport smact.lattice as lattice\nimport smact.screening as screening\n\nfrom itertools import product\nimport copy\nimport os\nimport csv\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom smact import data_directory", "First we define the positions of the 3 sites in the perovskite structure and specify the allowed oxidation states at each site. Note that the A site is defined as an anion (i.e. with a -1 oxidation state).", "site_A = lattice.Site([0,0,0],[-1])\nsite_B = lattice.Site([0.5,0.5,0.5],[+5,+4])\nsite_C = lattice.Site([0.5,0.5,0.5],[-2,-1])\nperovskite = lattice.Lattice([site_A,site_B,site_C],space_group=221)", "Approach 1\nWe now search through the elements of interest (Li-Fr) and find those that are allowed on each site. In this example, we use the F- anion with an increased Shannon radius to simulate the formate anion. 
We access the Shannon radii data directly from the smact data directory and are interested in the octahedral (6_n) Shannon radius.", "search = smact.ordered_elements(3,87) # Li - Fr\n\nA_list = [] # will be populated with anions\nB_list = [] # will be populated with cations\nC_list = [['F',-1,4.47]] # is always the \"formate anion\"\nfor element in search:\n with open(os.path.join(data_directory, 'shannon_radii.csv'),'r') as f:\n reader = csv.reader(f)\n r_shannon=False\n for row in reader:\n if row[2] ==\"6_n\" and row[0]==element and int(row[1]) in site_A.oxidation_states:\n A_list.append([row[0],int(row[1]),float(row[4])])\n if row[2]==\"6_n\" and row[0]==element and int(row[1]) in site_B.oxidation_states:\n B_list.append([row[0],int(row[1]),float(row[4])])", "NB: We access the data directly from the data directory file here for transparency. However, reading the file multiple times would slow down the code if we were looping over many (perhaps millions to billions) of compositions. As such, reading all the data in once into a dictionary, then accessing that dictionary from within a loop, could be preferable, e.g.:\npython\nfor element in search:\n ...\n r_shannon = shannon_radii[element][coordination]\n ...\nWe go through and apply the electronegativity order test (pauling_test) to each combo. Then, we use Goldschmidt tolernace factor to group into crystal structure types.", "# We define the different categories of list we will populate\ncharge_balanced = []\ngoldschmidt_cubic = []\ngoldschmidt_ortho = []\na_too_large = []\nA_B_similar = []\npauling_perov = []\nanion_stats = []\n\n# We recursively search all ABC combinations using nested for loops\nfor C in C_list:\n anion_hex = 0\n anion_cub = 0\n anion_ort = 0\n for B in B_list:\n for A in A_list:\n # We check that we have 3 different elements\n if B[0] != A[0]: \n # Check for charge neutrality\n if int(A[1])+int(B[1])+3*int(C[1]) == 0:\n charge_balanced.append([A[0],B[0],C[0]])\n # We apply the pauling electronegativity test\n paul_a = smact.Element(A[0]).pauling_eneg\n paul_b = smact.Element(B[0]).pauling_eneg\n paul_c = smact.Element(C[0]).pauling_eneg\n electroneg_makes_sense = screening.pauling_test([A[1],B[1],C[1]], [paul_a,paul_b,paul_c])\n if electroneg_makes_sense:\n pauling_perov.append([A[0],B[0],C[0]])\n # We calculate the Goldschmidt tolerance factor\n tol = (float(A[2]) + C[2])/(np.sqrt(2)*(float(B[2])+C[2]))\n if tol > 1.0:\n a_too_large.append([A[0],B[0],C[0]])\n anion_hex = anion_hex+1\n if tol > 0.9 and tol <= 1.0:\n goldschmidt_cubic.append([A[0],B[0],C[0]])\n anion_cub = anion_cub + 1\n if tol >= 0.71 and tol < 0.9:\n goldschmidt_ortho.append([A[0],B[0],C[0]])\n anion_ort = anion_ort + 1\n if tol < 0.71:\n A_B_similar.append([A[0],B[0],C[0]])\nanion_stats.append([anion_hex,anion_cub,anion_ort])\n \n\nprint (anion_stats)\ncolours=['#991D1D','#8D6608','#857070']\nmatplotlib.rcParams.update({'font.size': 22})\nplt.pie(anion_stats[0],labels=['Hex','Cubic','Ortho']\n ,startangle=90,autopct='%1.1f%%',colors=colours)\nplt.axis('equal')\nplt.savefig('Form-perovskites.png')\n\nprint ('Number of possible charge neutral perovskites from', search[0], 'to', search[len(search)-1], '=', len(charge_balanced))\nprint ('Number of Pauling sensible perovskites from', search[0], 'to', search[len(search)-1], '=', len(pauling_perov))\nprint ('Number of possible cubic perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_cubic))\nprint ('Number of possible ortho perovskites from', search[0], 'to', 
search[len(search)-1], '=', len(goldschmidt_ortho))\nprint ('Number of possible hexagonal perovskites from', search[0], 'to', search[len(search)-1], '=', len(a_too_large))\nprint ('Number of possible non-perovskites from', search[0], 'to', search[len(search)-1], '=', len(A_B_similar))\n\n\n#print goldschmidt_cubic\nprint( \"----------------------------------------------------------------\")\nprint( \"Structures identified with cubic tolerance factor 0.9 < t < 1.0 \")\nprint( \"----------------------------------------------------------------\")\nfor structure in goldschmidt_cubic:\n print( structure[0],structure[1],'(HCOO)3')\n", "Approach 2", "# Get list of Element objects\nsearch = [el for el in smact.ordered_elements(3,87) if \n Element(el).oxidation_states]\n\n# Covert to list of Species objects\nall_species = []\nfor el in search:\n for oxi_state in Element(el).oxidation_states:\n all_species.append(Species(el,oxi_state,\"6_n\"))\n \n# Define lists of interest\nA_list = [sp for sp in all_species if \n (sp.oxidation == -1) and (sp.ionic_radius)]\nB_list = [sp for sp in all_species if \n (4 <= sp.oxidation <= 5) and (sp.ionic_radius)]\nC_list = [Species('F',-1,4.47)]\n\n# We define the different categories of list we will populate\ncharge_balanced = []\ngoldschmidt_cubic = []\ngoldschmidt_ortho = []\na_too_large = []\nA_B_similar = []\npauling_perov = []\nanion_stats = []\n\nfor combo in product(A_list,B_list,C_list):\n A, B, C = combo[0], combo[1], combo[2]\n # Check for charge neutrality in 1:1:3 ratio\n if (1,1,3) in screening.neutral_ratios(\n [A.oxidation, B.oxidation, C.oxidation])[1]:\n charge_balanced.append(combo)\n # Check for pauling test\n if screening.pauling_test([A.oxidation, B.oxidation, C.oxidation],\n [A.pauling_eneg, B.pauling_eneg, C.pauling_eneg]):\n pauling_perov.append(combo)\n # Calculate tolerance factor\n tol = (float(A.ionic_radius) + 4.47)/(np.sqrt(2)*(float(B.ionic_radius)+4.47))\n if tol > 1.0:\n a_too_large.append(combo)\n if tol > 0.9 and tol <= 1.0:\n goldschmidt_cubic.append([combo])\n if tol >= 0.71 and tol < 0.9:\n goldschmidt_ortho.append(combo)\n if tol < 0.71:\n A_B_similar.append(combo)\n\nprint ('Number of possible charge neutral perovskites from', search[0], 'to', search[len(search)-1], '=', len(charge_balanced))\nprint ('Number of Pauling sensible perovskites from', search[0], 'to', search[len(search)-1], '=', len(pauling_perov))\nprint ('Number of possible cubic perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_cubic))\nprint ('Number of possible ortho perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_ortho))\nprint ('Number of possible hexagonal perovskites from', search[0], 'to', search[len(search)-1], '=', len(a_too_large))\nprint ('Number of possible non-perovskites from', search[0], 'to', search[len(search)-1], '=', len(A_B_similar))\n\n\n#print goldschmidt_cubic\nprint( \"----------------------------------------------------------------\")\nprint( \"Structures identified with cubic tolerance factor 0.9 < t < 1.0 \")\nprint( \"----------------------------------------------------------------\")\nfor structure in goldschmidt_cubic:\n print( structure[0][0].symbol,structure[0][1].symbol,'(HCOO)3')\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
StingraySoftware/notebooks
Simulator/Lag Analysis.ipynb
mit
[ "Contents\nThis notebook analyses lag-frequency spectrums of the light curves simulated through impulse response approach. First, a simple case with delta impulse response is covered. Subsequently, an energy-dependent impulse response scenario is analysed.\nSetup\nImport some useful libraries.", "import numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "Import relevant stingray libraries.", "from stingray import Lightcurve, Crossspectrum, sampledata\nfrom stingray.simulator import simulator, models", "Initializing\nInstantiate a simulator object and define a variability signal.", "var = sampledata.sample_data()\n\n# Beware: set tstart here, or nothing will work!\nsim = simulator.Simulator(N=1024, mean=0.5, dt=0.125, rms=0.4, tstart=var.tstart)\n", "For ease of analysis, define a simple delta impulse response with width 1. Here, start parameter refers to the lag delay, which we will soon see.", "delay = 10\ns_ir = sim.simple_ir(start=delay, width=1)", "Finally, simulate a filtered light curve. Here, filtered means that the initial lag delay portion is cut.", "lc = sim.simulate(var.counts, s_ir)\n\nplt.plot(lc.time, lc.counts)\nplt.plot(var.time, var.counts)", "Analysis\nCompute crossspectrum.", "cross = Crossspectrum(var, lc)", "Rebin the crosss-spectrum for ease of visualization.", "cross = cross.rebin(0.0050)", "Calculate time lag.", "lag = cross.time_lag()", "Plot lag.", "plt.figure()\n\n# Plot lag-frequency spectrum.\nplt.plot(cross.freq, lag, 'r')\n\n# Find cutoff points\nv_cutoff = 1.0/(2*delay)\nh_cutoff = lag[int((v_cutoff-0.0050)*1/0.0050)]\n\nplt.axvline(v_cutoff, color='g',linestyle='--')\nplt.axhline(h_cutoff, color='g', linestyle='-.')\n\n# Define axis\nplt.axis([0,0.2,-20,20])\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Lag')\nplt.title('Lag-frequency Spectrum')\nplt.show()", "According to Uttley et al (2014), the lag-frequency spectrum shows a constant delay until the frequency (1/2*time_delay) which is represented by the green vertical line in the above figure. After this point, the phase wraps and the lag becomes negative. \nEnergy Dependent Impulse Responses\nIn practical situations, different channels may have different impulse responses and hence, would react differently to incoming light curves. To account for this, stingray an option to simulate light curves and add them to corresponding energy channels.\nBelow, we analyse the lag-frequency spectrum in such cases. 
\nWe define two delta impulse responses with the same intensity but varying positions, each applicable on different energy channels (say '3.5-4.5 keV' and '4.5-5.5 keV' energy ranges).", "delays = [10,20]\nh1 = sim.simple_ir(start=delays[0], width=1)\nh2 = sim.simple_ir(start=delays[1], width=1)", "Now, we create two energy channels to simulate light curves for these two impulse responses.", "sim.simulate_channel('3.5-4.5', var, h1)\nsim.simulate_channel('4.5-5.5', var, h2)", "Compute cross-spectrum for each channel.", "cross = [Crossspectrum(var, lc).rebin(0.005) for lc in sim.get_channels(['3.5-4.5', '4.5-5.5'])]", "Calculate lags.", "lags = [c.time_lag() for c in cross]", "Get cut-off points.", "v_cuts = [1.0/(2*d) for d in delays]\nh_cuts = [lag[int((v_cut-0.005)*1/0.005)] for lag, v_cut in zip(lags, v_cuts)]", "Plot lag-frequency spectrums.", "plt.figure()\nplots = []\ncolors = ['r','g']\nenergies = ['3.5-4.5 keV', '4.5-5.5 keV']\n\n# Plot lag-frequency spectrum\nfor i in range(0,len(lags)):\n    plots += plt.plot(cross[i].freq, lags[i], colors[i], label=energies[i])\n    plt.axvline(v_cuts[i],color=colors[i],linestyle='--')\n    plt.axhline(h_cuts[i], color=colors[i], linestyle='-.')\n\n# Define axes and add labels\nplt.axis([0,0.2,-20,20])\nplt.legend()\nplt.xlabel('Frequencies (Hz)')\nplt.ylabel('Lags')\nplt.title('Energy Dependent Frequency-lag Spectrum')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dereneaton/ipyrad
newdocs/API-analysis/cookbook-vcf2hdf5.ipynb
gpl-3.0
[ "<span style=\"color:gray\">ipyrad-analysis toolkit:</span> vcf_to_hdf5\nView as notebook\nMany genome assembly tools will write variant SNP calls to the VCF format (variant call format). This is a plain text file that stores variant calls relative to a reference genome in tabular format. It includes a lot of additional information about the quality of SNP calls, etc., but is not very easy to read or efficient to parse. To make analyses run a bit faster ipyrad uses a simplified format to store this information in the form of an HDF5 database. You can easily convert any VCF file to this HDF5 format using the ipa.vcf_to_hdf5() tool. \nThis tool includes an added benefit of allowing you to enter an (optional) ld_block_size argument when creating the file which will store information that can be used downstream by many other tools to subsample SNPs and perform bootstrap resampling in a way that reduces the effects of linkage among SNPs. If your data are assembled RAD data then the ld_block_size is not required, since we can simply use RAD loci as the linkage blocks. But if you want to combine reference-mapped RAD loci located nearby in the genome as being on the same linkage block then you can enter a value such as 50,000 to create 50Kb linkage block that will join many RAD loci together and sample only 1 SNP per block in each bootstrap replicate. If your data are not RAD data, e.g., whole genome data, then the ld_block_size argument will be required in order to encode linkage information as discrete blocks into your database. \nRequired software\nIf you are converting a VCF file assembled from some other tool (e.g., GATK, freebayes, etc.) then you will need to install the htslib and bcftools software and use them as described below.", "# conda install ipyrad -c bioconda \n# conda install htslib -c bioconda\n# conda install bcftools -c bioconda\n\nimport ipyrad.analysis as ipa\nimport pandas as pd", "Pre-filter data from other programs (e.g., FreeBayes, GATK)\nYou can use the program bcftools to pre-filter your data to exclude indels and low quality SNPs. If you ran the conda install commands above then you will have all of the required tools installed. To achieve the format that ipyrad expects you will need to exclude indel containing SNPs (this may change in the future). Further quality filtering is optional. \nThe example below reduced the size of a VCF data file from 29Gb to 80Mb! VCF contains a lot of information that you do not need to retain through all of your analyses. We will keep only the final genotype calls. \nNote that the code below is bash script. 
You can run this from a terminal, or in a Jupyter notebook by appending the (%%bash) header like below.", "%%bash\n\n# compress the VCF file if not already done (creates .vcf.gz)\nbgzip data.vcf\n\n# tabix index the compressed VCF (creates .vcf.gz.tbi)\ntabix data.vcf.gz\n\n# remove multi-allelic SNPs and INDELs and PIPE to next command\nbcftools view -m2 -M2 -i'CIGAR=\"1X\" & QUAL>30' data.vcf.gz -Ou | \n\n    # remove extra annotations/formatting info and save to new .vcf\n    bcftools annotate -x FORMAT,INFO > data.cleaned.vcf\n    \n# recompress the final file (create .vcf.gz)\nbgzip data.cleaned.vcf", "A peek at the cleaned VCF file", "# load the VCF as a dataframe\ndfchunks = pd.read_csv(\n    \"/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz\",\n    sep=\"\\t\", \n    skiprows=1000, \n    chunksize=1000,\n)\n\n# show first few rows of first dataframe chunk\nnext(dfchunks).head()", "Converting clean VCF to HDF5\nHere I am using a VCF file from whole genome data for 20 monkeys from an unpublished study (in progress). It contains >6M SNPs all from chromosome 1. Because many SNPs are close together and thus tightly linked we will likely wish to take linkage into account in our downstream analyses.\nThe ipyrad analysis tools can do this by encoding linkage block information into the HDF5 file. Here we encode an ld_block_size of 20K bp. This breaks the single scaffold (chromosome) into about 10K linkage blocks. See the example below of this information being used in an ipyrad PCA analysis.", "# init a conversion tool\nconverter = ipa.vcf_to_hdf5(\n    name=\"Macaque_LD20K\",\n    data=\"/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz\",\n    ld_block_size=20000,\n)\n\n# run the converter\nconverter.run()", "Downstream analyses\nThe data file now contains 6M SNPs across 20 samples and N linkage blocks. By default the PCA tool subsamples a single SNP per linkage block. To explore variation over multiple random subsamplings we can use the nreplicates argument.", "# init a PCA tool and filter to allow no missing data\npca = ipa.pca(\n    data=\"./analysis-vcf2hdf5/Macaque_LD20K.snps.hdf5\",\n    mincov=1.0, \n)", "Run a single PCA analysis from subsampled unlinked SNPs", "pca.run_and_plot_2D(0, 1, seed=123);", "Run multiple PCAs over replicates of subsampled SNPs\nHere you can see the results for a different 10K SNPs that are sampled in each replicate iteration. If the signal in the data is robust then we should expect to see the points clustering at a similar place across replicates. Internally ipyrad will rotate axes to ensure the replicate plots align despite axes swapping (which is arbitrary in PCA space). You can see this provides a better view of uncertainty in our estimates than the plot above (and it looks cool!)", "pca.run_and_plot_2D(0, 1, seed=123, nreplicates=25);", "More details on running PCAs, toggling options, and styling plots can be found in our ipyrad.analysis PCA tutorial" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Unidata/MetPy
talks/MetPy Exercise.ipynb
bsd-3-clause
[ "from datetime import datetime, timedelta\nfrom siphon.catalog import TDSCatalog\nfrom siphon.ncss import NCSS\nimport numpy as np\nimport metpy.calc as mpcalc\nfrom metpy.units import units, concatenate\nfrom metpy.plots import SkewT", "Just some helper code to make things easier. metpy_units_handler plugins into siphon to automatically add units to variables. post_process_data is used to clean up some oddities from the NCSS point feature collection.", "units.define('degrees_north = 1 degree')\nunits.define('degrees_east = 1 degree')\nunit_remap = dict(inches='inHg', Celsius='celsius')\ndef metpy_units_handler(vals, unit):\n arr = np.array(vals)\n if unit:\n unit = unit_remap.get(unit, unit)\n arr = arr * units(unit)\n return arr\n\n# Fix dates and sorting\ndef sort_list(list1, list2):\n return [l1 for (l1, l2) in sorted(zip(list1, list2), key=lambda i: i[1])]\n\ndef post_process_data(data):\n data['time'] = [datetime.strptime(d.decode('ascii'), '%Y-%m-%d %H:%M:%SZ') for d in data['time']]\n ret = dict()\n for key,val in data.items():\n try:\n val = units.Quantity(sort_list(val.magnitude.tolist(), data['time']), val.units)\n except AttributeError:\n val = sort_list(val, data['time'])\n ret[key] = val\n return ret", "METAR Meteogram\nFirst we need to grab the catalog for the METAR feature collection data from http://thredds.ucar.edu/thredds/catalog.html", "cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/nws/metar/ncdecoded/catalog.xml?dataset=nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr')", "Set up NCSS access to the dataset", "ds = list(cat.datasets.values())[0]\n\nncss = NCSS(ds.access_urls['NetcdfSubset'])\nncss.unit_handler = metpy_units_handler", "What variables do we have available?", "ncss.variables", "Create a query for the last 7 days of data for a specific lon/lat point. We should ask for: air temperature, dewpoint temperature, wind speed, and wind direction.", "now = datetime.utcnow()\nquery = ncss.query().accept('csv')\nquery.lonlat_point(-97, 35.25).time_range(now - timedelta(days=7), now)\nquery.variables('air_temperature', 'dew_point_temperature', 'wind_speed', 'wind_from_direction')", "Get the data", "data = ncss.get_data(query)\ndata = post_process_data(data) # Fixes for NCSS point", "Heat Index\nFirst, we need relative humidity:\n $$RH = e / e_s$$", "e = mpcalc.saturation_vapor_pressure(data['dew_point_temperature'])\ne_s = mpcalc.saturation_vapor_pressure(data['air_temperature'])\nrh = e / e_s", "Calculate heat index:", "# RH should be [0, 100]\nhi = mpcalc.heat_index(data['air_temperature'], rh * 100)", "Plot the temperature, dewpoint, and heat index. 
Bonus points to also plot wind speed and direction.", "import matplotlib.pyplot as plt\ntimes = data['time']\nfig, axes = plt.subplots(2, 1, figsize=(9, 9))\naxes[0].plot(times, data['air_temperature'].to('degF'), 'r', linewidth=2)\naxes[0].plot(times, data['dew_point_temperature'].to('degF'), 'g', linewidth=2)\naxes[0].plot(times, hi, color='darkred', linestyle='--', linewidth=2)\naxes[0].grid(True)\naxes[1].plot(times, data['wind_speed'].to('mph'), 'b')\ntwin = plt.twinx(axes[1])\ntwin.plot(times, data['wind_from_direction'], 'kx')", "Sounding\nFirst grab the catalog for the Best dataset from the GSD HRRR from http://thredds.ucar.edu/thredds/catalog.html", "cat = TDSCatalog('http://thredds-jumbo.unidata.ucar.edu/thredds/catalog/grib/HRRR/CONUS_3km/wrfprs/catalog.xml?dataset=grib/HRRR/CONUS_3km/wrfprs/Best')", "Set up NCSS access to the dataset", "best_ds = list(cat.datasets.values())[0]\n\nncss = NCSS(best_ds.access_urls['NetcdfSubset'])\nncss.unit_handler = metpy_units_handler", "What variables do we have?", "ncss.variables", "Set up a query for the most recent set of data from a point. We should request temperature, dewpoint, and U and V.", "query = ncss.query().accept('csv')\nquery.lonlat_point(-105, 40).time(datetime.utcnow())\nquery.variables('Temperature_isobaric', 'Dewpoint_temperature_isobaric',\n 'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')", "Get the data", "data = ncss.get_data(query)\n\nT = data['Temperature_isobaric'].to('degC')\nTd = data['Dewpoint_temperature_isobaric'].to('degC')\np = data['vertCoord'].to('mbar')", "Plot a sounding of the data", "fig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig=fig)\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.ax.set_ylim(1050, 100)\nskew.plot_mixing_lines()\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()", "Also calculate the parcel profile and add that to the plot", "prof = mpcalc.parcel_profile(p[::-1], T[-1], Td[-1])\nskew.plot(p[::-1], prof.to('degC'), 'k', linewidth=2)\nfig", "Let's also plot the location of the LCL and the 0 isotherm as well:", "lcl = mpcalc.lcl(p[-1], T[-1], Td[-1])\nlcl_temp = mpcalc.dry_lapse(concatenate((p[-1], lcl)), T[-1])[-1].to('degC')\nskew.plot(lcl, lcl_temp, 'bo')\nskew.ax.axvline(0, color='blue', linestyle='--', linewidth=2)\nfig" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
james-prior/cohpy
20160826-dojo-regex-travis.ipynb
mit
[ "from __future__ import print_function\nimport re", "Let's start with the original regular expression and\nstring to search from Travis'\nregex problem.", "pattern = re.compile(r\"\"\"\n (?P<any>any4?) # \"any\"\n # association\n | # or\n (?P<object_eq>object ([\\w-]+) eq (\\d+)) # object\n alone\n # association\n | # or\n (?P<object_range>object ([a-z0-9A-Z-]+) range (\\d+) (\\d+)) # object range\n # association\n | # or\n (?P<object_group>object-group ([a-z0-9A-Z-]+)) # object group\n # association\n | # or\n (?P<object_alone>object ([[a-z0-9A-Z-]+)) # object alone\n # association\n\"\"\", re.VERBOSE)\n\ns = ''' object-group jfi-ip-ranges object DA-TD-WEB01 eq 8850\n'''", "The regex had two bugs.\n- Two [[ near the end of the pattern string.\n- The significant spaces in the pattern (such as after object-group) were being ignored because of re.VERBOSE.\nSo those bugs are fixed in the pattern below.", "pattern = re.compile(r\"\"\"\n (?P<any>any4?) # \"any\"\n # association\n | # or\n (?P<object_eq>object\\ ([\\w-]+)\\ eq\\ (\\d+)) # object\n alone\n # association\n | # or\n (?P<object_range>object\\ ([a-z0-9A-Z-]+)\\ range\\ (\\d+)\\ (\\d+)) # object range\n # association\n | # or\n (?P<object_group>object-group\\ ([a-z0-9A-Z-]+)) # object group\n # association\n | # or\n (?P<object_alone>object\\ ([a-z0-9A-Z-]+)) # object alone\n # association\n\"\"\", re.VERBOSE)\n\nre.findall(pattern, s)\n\nfor m in re.finditer(pattern, s):\n print(repr(m))\n print('groups', m.groups())\n print('groupdict', m.groupdict())", "The above works, but keeping track of the indexes of the unnamed groups drives me crazy. So I add names for all groups.", "pattern = re.compile(r\"\"\"\n (?P<any>any4?) # \"any\"\n # association\n | # or\n (?P<object_eq>object\\ (?P<oe_name>[\\w-]+)\\ eq\\ (?P<oe_i>\\d+)) # object\n alone\n # association\n | # or\n (?P<object_range>object\\ (?P<or_name>[a-z0-9A-Z-]+)\n \\ range\\ (?P<oe_r_start>\\d+)\\ (?P<oe_r_end>\\d+)) # object range\n # association\n | # or\n (?P<object_group>object-group\\ (?P<og_name>[a-z0-9A-Z-]+)) # object group\n # association\n | # or\n (?P<object_alone>object\\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone\n # association\n\"\"\", re.VERBOSE)\n\nfor m in re.finditer(pattern, s):\n print(repr(m))\n print('groups', m.groups())\n print('groupdict', m.groupdict())", "The following shows me just the groups that matched.", "for m in re.finditer(pattern, s):\n for key, value in m.groupdict().items():\n if value is not None:\n print(key, repr(value))\n print()", "Looking at the above,\nI see that I probably don't care about the big groups,\njust the parameters,\nso I remove the big groups (except for \"any\")\nfrom the regular expression.", "pattern = re.compile(r\"\"\"\n (?P<any>any4?) # \"any\"\n # association\n | # or\n (object\\ (?P<oe_name>[\\w-]+)\\ eq\\ (?P<oe_i>\\d+)) # object\n alone\n # association\n | # or\n (object\\ (?P<or_name>[a-z0-9A-Z-]+)\n \\ range\\ (?P<oe_r_start>\\d+)\\ (?P<oe_r_end>\\d+)) # object range\n # association\n | # or\n (object-group\\ (?P<og_name>[a-z0-9A-Z-]+)) # object group\n # association\n | # or\n (object\\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone\n # association\n\"\"\", re.VERBOSE)", "Now it tells me just the meat of what I want to know.", "for m in re.finditer(pattern, s):\n for key, value in m.groupdict().items():\n if value is not None:\n print(key, repr(value))\n print()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ShantanuKamath/PythonWorkshop
2. Python Advanced.ipynb
mit
[ "Python Advanced\nDisclaimer - This document is only meant to serve as a reference for the attendees of the workshop. It does not cover all the concepts or implementation details discussed during the actual workshop.\n\nVariable Scope\nWhen you program, you'll often find that similar ideas come up again and again. You'll use variables for things like counting, iterating and accumulating values to return. In order to write readable code, you'll find yourself wanting to use similar names for similar ideas. As soon as you put multiple piece of code together (for instance, multiple functions or function calls in a single script) you might find that you want to use the same name for two separate concepts. \nFortunately, you don't need to come up with new names endlessly. Reusing names for objects is OK as long as you keep them in separate scope. \"Scope\" refers to which parts of a program a variable can be referenced from. \nIf a variable is created inside a function, it can only be used within that function. \nConsider these two functions, word_count and nearest_square. Both functions include a answer variable, but they are distinct variables that only exist within their respective functions:", "def word_count(document, search_term):\n \"\"\" Count how many times search_term appears in document. \"\"\"\n words = document.split() \n answer = 0\n for word in words:\n if word == search_term:\n answer += 1\n return answer\n\ndef nearest_square(limit):\n \"\"\" Find the largest square number smaller than limit. \"\"\"\n answer = 0\n while (answer+1)**2 < limit:\n answer += 1\n return answer**2", "Since the variable answer here is defined within each function seperately, you can reuse the same name of the variable, as the scope of the variables itself is different. \nNote : Functions, however, can access variables that are defined outside of its scope or in the larger scope, but can only read the value of the variable, not modify it. This is shown by the UnboundLocalError in the example below.", "egg_count = 0\n\ndef buy_eggs():\n egg_count += 12 # purchase a dozen eggs\n\n# buy_eggs()", "In such situations its better to redefine the functions as below.", "egg_count = 0\n\ndef buy_eggs():\n return egg_count + 12\n\negg_count = buy_eggs()\nprint(egg_count)\negg_count = buy_eggs()\nprint(egg_count)", "List Basics\nIn Python, it is possible to create a list of values. Each item in the list is called an element and can be accessed individually using a zero-based index. Hence avoiding the need to create multiple variables to store individual values.\nNote: negative indexes help access elements from the end of the array. 
-1 refers to the last element and -2 refers to the second last element and so on.", "# list of numbers of type Integer\nnumbers = [1, 2, 3, 4, 5]\nprint(\"List :\", numbers)\nprint(\"Second element :\", numbers[1]) ## 2\nprint(\"Length of list :\",len(numbers)) ## 5\nprint() # Empty line\n\n# list of strings\ncolors = ['red', 'blue', 'green']\nprint(\"List :\", colors)\nprint (\"First color :\", colors[0]) ## red\nprint (\"Third color :\", colors[2]) ## green\nprint (\"Last color :\", colors[-1]) ## green\nprint (\"Second last color :\", colors[-2]) ## blue\nprint (\"Length of list :\",len(colors)) ## 3\nprint() # Empty line\n\n# list with multiple variable types\nme = ['Shantanu Kamath', 'Computer Science', 20, 1000000]\nprint(\"List :\", me)\nprint(\"Fourth element :\", me[3]) ## 1000000\nprint(\"Length of list :\", len(me)) ## 4", "Since lists are considered to be sequentially ordered, they support a number of operations that can be applied to any Python sequence. \n|Operation Name|Operator|Explanation|\n|:-------------|:-------|:----------|\n|Indexing|[ ]|Access an element of a sequence|\n|Concatenation|+|Combine sequences together|\n|Repetition|*|Concatenate a repeated number of times|\n|Membership|in|Ask whether an item is in a sequence|\n|Length|len|Ask the number of items in the sequence|\n|Slicing|[ : ]|Extract a part of a sequence|", "myList = [1,2,3,4]\n\n# Indexing\nA = myList[2]\nprint(A)\n\n# Repititoin\nA = [A]*3\nprint(A)\n\n# Concatenation\nprint(myList + A)\n\n# Membership\nprint(1 in myList)\n\n# Length\nprint(len(myList))\n\n# Slicing [inclusive : exclusive]\nprint(myList[1:3])\n\n# Leaving the exclusive parameter empty\nprint(myList[-3:])", "Mutability\nStrings are immutable and list are mutable. \nFor example :", "# Creating sentence and list form of sentence\nname = \"Welcome to coding with Python v3.6\"\nwords = [\"Welcome\", \"to\", \"coding\", \"with\", \"Python\", \"v3.6\"]\n\nprint(name[4])\nprint(words[4])\n\n# This is okay\nwords[5] = \"v2.7\"\nprint(words)\n\n# This is not\n# name[5] = \"d\"\n# print(name)", "Passed by reference\nThe list is stored at a memory locations and only a reference of this memory location is what the variable holds. So changes applied to one variable reflect in other variables as well.", "langs = [\"Python\", \"Java\", \"C++\", \"C\"]\nlanguages = langs\nlangs.append(\"C#\")\nprint(langs)\nprint(languages)", "List Methods\nBesides simple accessing of values, lists have a large variety of methods that are used to performed different useful manipulations on them.\nSome of them are: \n\nlist.append(element): adds a single element to the end of the list. Common error: does not return the new list, just modifies the original.", "# list.append example\nnames = ['Hermione Granger', 'Ronald Weasley']\nnames.append('Harry Potter')\nprint(\"New list :\", names) ## ['Hermione Granger', 'Ronald Weasley', 'Harry Potter']", "list.insert(index, element): inserts the element at the given index, shifting elements to the right.", "# list.insert example\nnames = ['Ronald Weasley', 'Hermione Granger']\nnames.insert(1, 'Harry Potter')\nprint(\"New list :\", names) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']", "list.extend(list2): adds the elements in list2 to the end of the list. 
Using + or += on a list is similar to using extend().", "# list.extend example\nMainChar = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']\nSupChar = ['Neville Longbottom', 'Luna Lovegood']\nMainChar.extend(SupChar)\nprint(\"Full list :\", MainChar) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger', 'Neville Longbottom', 'Luna Lovegood']", "list.index(element): searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use 'in' to check without a ValueError).", "# list.index example\nnames = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']\nindex = names.index('Harry Potter') \nprint(\"Index of Harry Potter in list :\",index) ## 1\n\n# Throws a ValueError (Uncomment to see error.)\n# index = names.index('Albus Dumbledore')", "list.remove(element): searches for the first instance of the given element and removes it (throws ValueError if not present)", "names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']\nindex = names.remove('Harry Potter') ## ['Ronald Weasley', 'Hermione Granger']\nprint(\"Modified list :\", names)\n", "list.pop(index): removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).", "names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']\nindex = names.pop(1)\nprint(\"Modified list :\", names) ## ['Ronald Weasley', 'Hermione Granger']", "list.sort(): sorts the list in place (does not return it). (The sorted() function shown below is preferred.)", "alphabets = ['a', 'f','c', 'e','b', 'd']\nalphabets.sort();\nprint (\"Sorted list :\", alphabets) ## ['a', 'b', 'c', 'd', 'e', 'f']\n", "list.reverse(): reverses the list in place (does not return it).", "alphabets = ['a', 'b', 'c', 'd', 'e', 'f']\nalphabets.reverse()\nprint(\"Reversed list :\", alphabets) ## ['f', 'e', 'd', 'c', 'b', 'a']", "Others methods include :\n- Count : list.count()\n- Delete : del list[index]\n- Join : \"[Seperator string]\".join(list)\n\nList Comprehensions\nIn Python, List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence, or to create a subsequence of those elements that satisfy a certain condition.\nIt can be used to construct lists in a very natural, easy way, like a mathematician is used to do.\nThis is how we can explain sets in maths:\n- Squares = {xยฒ : x in {0 ... 9}}\n- Exponents = (1, 2, 4, 8, ..., 2ยนยฒ)\n- EvenSquares = {x | x in S and x even}\nLets try to do this in Python using normal loops and list methods:", "# Using loops and list methods\n\nsquares = []\nfor x in range(10):\n squares.append(x**2)\nprint(\"Squares :\", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n \nexponents = []\nfor i in range(13):\n exponents.append(2**i)\nprint(\"Exponents :\", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]\n\nevenSquares = []\nfor x in squares:\n if x % 2 == 0:\n evenSquares.append(x)\nprint(\"Even Squares :\", evenSquares) ## [0, 4, 16, 36, 64]\n", "These extend to more than one line. 
But by using list comprehensions you can bring it down to just one line.", "# Using list comprehensions\n\nsquares = [x**2 for x in range(10)]\nexponents = [2**i for i in range(13)]\nevenSquares = [x for x in squares if x % 2 == 0]\nprint(\"Squares :\", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nprint(\"Exponents :\", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]\nprint(\"Even Squares :\", evenSquares) ## [0, 4, 16, 36, 64]\n\n", "Searching\nSearching is the process of finding a particular item in a collections of items. It is one of the most common problems that arise in computer programming. A search typically answers either True or False as to whether the item is present.\nIn Python, there is a very easy way to ask whether an item is in a list of items. We use the in operator.", "# Using in to check if number is present in the list.\n\nprint(15 in [3,5,2,4,1])\nprint('Work' in 'Python Advanced Workshop')\n", "Sometimes it can be important to get the position of the searched value. In that case, we can use index method for lists and the find method for strings.", "# Using index to get position of the number if present in list.\n# In case of lists, its important to remember that the index function will throw an error if the value isn't present in the list.\n\nvalues = [3,5,2,4,1]\nif 5 in values:\n print(\"Value present at\",values.index(5)) ## 1\nelse:\n print(\"Value not present in list\")\n\n# Using find to get the index of the first occurrence of the word in a sentence.\n\nsentence = \"This be a string\"\nindex = sentence.find(\"is\")\nif index == -1:\n print(\"There is no 'is' here!\")\nelse:\n print(\"Found 'is' in the sentence at position \"+str(index))\n\n# Using index to find words in a list of words\nsentence = \"This be a string\"\nwords = sentence.split(' ')\nif 'is' in words:\n print(\"Found 'is' in the list at position \"+str(words.index('is')))\nelse:\n print(\"There is no 'is' here!\")", "For more efficient Search Algorithms, look through the Algorithm Implementation section of this repository\n\nSorting\nSorting is the process of placing elements from a collection in some kind of order. \nFor example, a list of words could be sorted alphabetically or by length. \nA list of cities could be sorted by population, by area, or by zip code.\nPython lists have a built-in sort() method that modifies the list in-place and a sorted() built-in function that builds a new sorted list from an iterable.\n\nlist.sort(): Modifies existing list and can be used only with lists.\nsorted(list): Creates a new list when called and can be used with other iterables.\n\nBasic sorting functions\nThe most basic use of the sorted function can be seen below :", "# Using sort() with a list.\n\nvalues = [7, 4, 3, 6, 1, 2, 5]\nprint(\"Unsorted list :\", values) ## [7, 4, 3, 6, 1, 2, 5]\nnewValues = values.sort()\nprint(\"New list :\", newValues) ## None\nprint(\"Old list :\", values) ## [1, 2, 3, 4, 5, 6, 7]\nprint()\n# Using sorted() with a list.\n\nvalues = [7, 4, 3, 6, 1, 2, 5]\nprint(\"Unsorted list :\", values) ## [7, 4, 3, 6, 1, 2, 5]\nnewValues = sorted(values)\nprint(\"New list :\", newValues) ## [1, 2, 3, 4, 5, 6, 7]\nprint(\"Old list :\", values) ## [7, 4, 3, 6, 1, 2, 5]\n", "Sorting using additional key\nFor more complex custom sorting, sorted() takes an optional \"key=\" specifying a \"key\" function that transforms each element before comparison. 
\nThe key function takes in 1 value and returns 1 value, and the returned \"proxy\" value is used for the comparisons within the sort.", "# Using key in sorted\n\nvalues = ['ccc', 'aaaa', 'd', 'bb']\nprint (sorted(values, key=len)) ## ['d', 'bb', 'ccc', 'aaaa']\n\n# Remember case sensitivity : All upper case characters come before lower case character in an ascending sequence.\nsentence = \"This is a test string from Andrew\"\nprint(sorted(sentence.split(), key=str.lower)) ## ['a', 'Andrew', 'from', 'is', 'string', 'test', 'This']\n\n# Using reverse for ascending and descending\nstrs = ['aa', 'BB', 'zz', 'CC']\nprint (sorted(strs)) ## ['BB', 'CC', 'aa', 'zz'] (case sensitive)\nprint (sorted(strs, reverse=True)) ## ['zz', 'aa', 'CC', 'BB']\n", "Basics on Class and OOP\nThis section is built around the fundamental of Object Oriented Programming (OOP).\nIt aims strengthening basics but doesn't justify the broad topic itself. As OOP is a very important programming concept you should read further to better get a grip on python as well as go deep in understanding how it is useful and essential to programming.\nBelow are some essential resources :\n- Improve Your Python: Python Classes and Object Oriented Programming\n- Learn Python The Hard Way\n- Python For Beginners\n- A Byte Of Python\nOOP\nIn all the code we wrote till now, we have designed our program around functions i.e. blocks of statements which manipulate data. This is called the procedure-oriented way of programming.\nThere is another way of organizing your program which is to combine data and functionality and wrap it inside something called an object. This is called the object oriented programming paradigm. \nMost of the time you can use procedural programming, but when writing large programs or have a problem that is better suited to this method, you can use object oriented programming techniques.\nClasses and Objects\nClasses and objects are the two main aspects of object oriented programming. A class creates a new type where objects are instances of the class. \nObjects can store data using ordinary variables that belong to the object. Variables that belong to an object or class are referred to as fields or attributes.\nObjects can also have functionality by using functions that belong to a class. Such functions are called methods of the class.\nThe simplest class possible is shown in the following example :", "class Person:\n pass # An empty block\n\np = Person()\nprint(p)\n", "Methods\nClass methods have only one specific difference from ordinary functions - they must have an extra first name that has to be added to the beginning of the parameter list, but you do not give a value for this parameter when you call the method, Python will provide it. This particular variable refers to the object itself, and by convention, it is given the name self.", "class Person:\n def say_hi(self):\n print('Hello, how are you?')\n\np = Person()\np.say_hi()\n", "The init\nThere are many method names which have special significance in Python classes. We will see the significance of the init method now. \nThe init method is run as soon as an object of a class is instantiated. The method is useful to do any initialization you want to do with your object. 
Notice the double underscores both at the beginning and at the end of the name.", "class Person:\n def __init__(self, name):\n self.name = name\n\n def say_hi(self):\n print('Hello, my name is', self.name)\n\np = Person('Shantanu')\np.say_hi()", "Object variables\nNow let us learn about the data part. The data part, i.e. fields, are nothing but ordinary variables that are bound to the namespaces of the classes and objects. This means that these names are valid within the context of these classes and objects only. That's why they are called name spaces. \nThere are two types of fields - class variables and object variables which are classified depending on whether the class or the object owns the variables respectively. \nClass variables are shared - they can be accessed by all instances of that class. There is only one copy of the class variable and when any one object makes a change to a class variable, that change will be seen by all the other instances. \nObject variables are owned by each individual object/instance of the class. In this case, each object has its own copy of the field i.e. they are not shared and are not related in any way to the field by the same name in a different instance.\nAn example will make this easy to understand.", "class Robot:\n ## Represents a robot, with a name.\n\n # A class variable, counting the number of robots\n population = 0\n\n def __init__(self, name):\n ## Initializes the data.\n self.name = name\n print(\"(Initializing {})\".format(self.name))\n\n # When this person is created, the robot\n # adds to the population\n Robot.population += 1\n\n def die(self):\n ## I am dying.\n print(\"{} is being destroyed!\".format(self.name))\n\n Robot.population -= 1\n\n if Robot.population == 0:\n print(\"{} was the last one.\".format(self.name))\n else:\n print(\"There are still {:d} robots working.\".format(\n Robot.population))\n\n def say_hi(self):\n ## Greeting by the robot. Yeah, they can do that.\n print(\"Greetings, my masters call me {}.\".format(self.name))\n\n @classmethod\n def how_many(cls):\n ## Prints the current population.\n print(\"We have {:d} robots.\".format(cls.population))\n\n\ndroid1 = Robot(\"R2-D2\")\ndroid1.say_hi()\nRobot.how_many()\n\ndroid2 = Robot(\"C-3PO\")\ndroid2.say_hi()\nRobot.how_many()\n\nprint(\"\\nRobots can do some work here.\\n\")\n\nprint(\"Robots have finished their work. So let's destroy them.\")\ndroid1.die()\ndroid2.die()\n\nRobot.how_many()\n", "How It Works\nThis is a long example but helps demonstrate the nature of class and object variables. Here, population belongs to the Robot class and hence is a class variable. The name variable belongs to the object (it is assigned using self) and hence is an object variable. \nThus, we refer to the population class variable as Robot.population and not as self.population. We refer to the object variable name using self.name notation in the methods of that object. Remember this simple difference between class and object variables. Also note that an object variable with the same name as a class variable will hide the class variable! \nInstead of Robot.population, we could have also used self.class.population because every object refers to its class via the self.class attribute. \nThe how_many is actually a method that belongs to the class and not to the object. This means we can define it as either a classmethod or a staticmethod depending on whether we need to know which class we are part of. 
Since we refer to a class variable, let's use classmethod.\nWe have marked the how_many method as a class method using a decorator.\nDecorators can be imagined to be a shortcut to calling a wrapper function, so applying the @classmethod decorator is same as calling:\nhow_many = classmethod(how_many)\nObserve that the init method is used to initialize the Robot instance with a name. In this method, we increase the population count by 1 since we have one more robot being added. Also observe that the values of self.name is specific to each object which indicates the nature of object variables.\nRemember, that you must refer to the variables and methods of the same object using the self only. This is called an attribute reference.\nAll class members are public. One exception: If you use data members with names using the double underscore prefix such as __privatevar , Python uses name-mangling to effectively make it a private variable.\nThus, the convention followed is that any variable that is to be used only within the class or object should begin with an underscore and all other names are public and can be used by other classes/objects. Remember that this is only a convention and is not enforced by Python (except for the double underscore prefix).\nThere are more concepts in OOP such as Inheritance, Abstraction and Polymorphism, which would require a lot more time to cover. You may refer to reference material for explanation on these topics.\n\nFile I/O\nFile handling is super simplified in Python compared to other programming languages.\nThe first thing youโ€™ll need to know is Pythonโ€™s built-in open function to get a file object. \nThe open function opens a file. When you use the open function, it returns something called a file object. File objects contain methods and attributes that can be used to collect information about the file you opened. They can also be used to manipulate said file.\nFor example, the mode attribute of a file object tells you which mode a file was opened in. And the name attribute tells you the name of the file that the file object has opened.\nFile Types\nIn Python, a file is categorized as either text or binary, and the difference between the two file types is important.\nText files are structured as a sequence of lines, where each line includes a sequence of characters. This is what you know as code or syntax.\nEach line is terminated with a special character, called the EOL or End of Line character. There are several types, but the most common is the comma {,} or newline character. It ends the current line and tells the interpreter a new one has begun.\nA backslash character can also be used, and it tells the interpreter that the next character โ€“ following the slash โ€“ should be treated as a new line. This character is useful when you donโ€™t want to start a new line in the text itself but in the code.\nA binary file is any type of file that is not a text file. Because of their nature, binary files can only be processed by an application that know or understand the fileโ€™s structure. 
In other words, they must be applications that can read and interpret binary.\nOpen ( ) Function\nThe syntax to open a file object in Python is:\npython\nfile_object = open(\"filename\", \"mode\") ## where file_object is the variable to add the file object.\nThe second argument you see โ€“ mode โ€“ tells the interpreter and developer which way the file will be used.\nMode\nIncluding a mode argument is optional because a default value of r will be assumed if it is omitted.\nThe modes are:\n\nr โ€“ Read mode which is used when the file is only being read\nw โ€“ Write mode which is used to edit and write new information to the file (any existing files with the same name will be erased when this mode is activated)\na โ€“ Appending mode, which is used to add new data to the end of the file; that is new information is automatically amended to the end\nr+ โ€“ Special read and write mode, which is used to handle both actions when working with a file\n\nCreate a text file\nUsing a simple text editor, letโ€™s create a file. You can name it anything you like, and itโ€™s better to use something youโ€™ll identify with.\nFor the purpose of this workshop, however, we are going to call it \"testfile.txt\".\nJust create the file and leave it blank.\nTo manipulate the file :\n```python\nfile = open(\"testfile.txt\",\"w\")\nfile.write(\"Hello World\")\nfile.write(\"This is our new text file\")\nfile.write(\"and this is another line.\")\nfile.write(\"Why? Because we can.\")\nfile.close()\n```\nReading a text file\nFollowing methods allow reading a file :\n- file.read(): extract a string that contains all characters in the file.\npython\nfile = open(\"testfile.text\", \"r\")\nprint(file.read())\n- file.read(numberOfCharacters): extract only a certain number of characters.\npython\nfile = open(\"testfile.txt\", \"r\")\nprint(file.read(5))\n- file.readline(): read a file line by line โ€“ as opposed to pulling the content of the entire file at once.\npython\nfile = open(\"testfile.txt\", \"r\")\nprint(file.readline())\n- file.readline(lineNumber): return a specific line\npython\nfile = open(\"testfile.txt\", \"r\")\nprint(file.readline(3))\n- file.readlines(): return every line in the file, properly separated in a list\npython\nfile = open(\"testfile.txt\", \"r\")\nprint (file.readlines())\nLooping over file\nWhen you want to read โ€“ or return โ€“ all the lines from a file in a more memory efficient, and fast manner, you can use the loop over method. The advantage to using this method is that the related code is both simple and easy to read.\npython\nfile = open(\"testfile.txt\", \"r\")\nfor line in file:\nprint line\nUsing the File Write Method\nThis method is used to add information or content to an existing file. To start a new line after you write data to the file, you can add an EOL (\"\\n\")) character.\n```python\nfile = open(\"testfile.txt\", \"w\")\nfile.write(\"This is a test\")\nfile.write(\"To add more lines.\")\nfile.close()\n```\nClosing a file\nWhen youโ€™re done working, you can use the fh.close() command to end things. What this does is close the file completely, terminating resources in use, in turn freeing them up for the system to deploy elsewhere.\nItโ€™s important to understand that when you use the fh.close() method, any further attempts to use the file object will fail.\npython\nfile.close()\n\nException Handling\nThere are two types of errors that typically occur when writing programs. 
The first, known as a syntax error, simply means that the programmer has made a mistake in the structure of a statement or expression. For example, it is incorrect to write a for statement and forget the colon.", "# ( Uncomment to see Syntax error. )\n# for i in range(10)", "The other type of error, known as a logic error, denotes a situation where the program executes but gives the wrong result. This can be due to an error in the underlying algorithm or an error in your translation of that algorithm. In some cases, logic errors lead to very bad situations such as trying to dividing by zero or trying to access an item in a list where the index of the item is outside the bounds of the list. In this case, the logic error leads to a runtime error that causes the program to terminate. These types of runtime errors are typically called exceptions. \nWhen an exception occurs, we say that it has been raised. You can handle the exception that has been raised by using a try statement. For example, consider the following session that asks the user for an integer and then calls the square root function from the math library. If the user enters a value that is greater than or equal to 0, the print will show the square root. However, if the user enters a negative value, the square root function will report a ValueError exception.", "import math\nanumber = int(input(\"Please enter an integer \"))\n# Give input as negative number and also see output from next code snippet\nprint(math.sqrt(anumber))", "We can handle this exception by calling the print function from within a try block. A corresponding except block catches the exception and prints a message back to the user in the event that an exception occurs. For example:", "try:\n print(math.sqrt(anumber))\nexcept:\n print(\"Bad Value for square root\")\n print(\"Using absolute value instead\")\n print(math.sqrt(abs(anumber)))", "It is also possible for a programmer to cause a runtime exception by using the raise statement. For example, instead of calling the square root function with a negative number, we could have checked the value first and then raised our own exception. The code fragment below shows the result of creating a new RuntimeError exception. Note that the program would still terminate but now the exception that caused the termination is something explicitly created by the programmer.", "if anumber < 0:\n raise RuntimeError(\"You can't use a negative number\")\nelse:\n print(math.sqrt(anumber))", "There are many kinds of exceptions that can be raised in addition to the RuntimeError shown above. See the Python reference manual for a list of all the available exception types and for how to create your own.\n\nImporting Modules\nSo far we haven't explained what a Python module is. To put it in a nutshell: every file, which has the file extension .py and consists of proper Python code, can be seen or is a module. \nThere is no special syntax required to make such a file a module. A module can contain arbitrary objects, for example files, classes or attributes. \nAll those objects can be accessed after an import. There are different ways to import a modules. 
\nImport one module:\n```python\n\n\n\nimport math\nmath.pi\n3.141592653589793\nmath.sin(math.pi/2)\n1.0\nmath.cos(math.pi/2)\n6.123031769111886e-17\nmath.cos(math.pi)\n-1.0\n```\n\n\n\nImport more than one module in one import statement:\npython\nimport math, random\nIf only certain objects of a module are needed, we can import only those:\npython\nfrom math import sin, pi\nInstead of explicitly importing certain objects from a module, it's also possible to import everything in the namespace of the importing module:\n```python\n\n\n\nfrom math import *\nsin(3.01) + tan(cos(2.1)) + e\n2.2968833711382604\ne\n2.718281828459045\n```\n\n\n\n\nScripting\nThis topic is a little more complex than you would have expected it to be.\nProbably practice more python before you venture into this topic.\nBut just to get you to understand what this topic consists of and also appreciate the powerful nature of python,\nHere are a few video clips to showing the result of various python scripts.\nArticles showcasing python scripts \nMovie subtitle downloader \nPython Script to download course content from Blackboard learn" ]
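As a small addition to the tutorial notebook above, the file-handling and exception-handling sections can be combined. This is a hedged, minimal sketch, not part of the original material; the filename "notes.txt" is purely hypothetical.

```python
# Minimal sketch: opening a file defensively, using the exception-handling
# ideas from the tutorial above. "notes.txt" is a hypothetical filename.
try:
    file = open("notes.txt", "r")
except (FileNotFoundError, PermissionError) as err:
    print("Could not open the file:", err)
else:
    # The file opened successfully, so read it line by line and close it.
    for line in file:
        print(line.strip())
    file.close()
```

The else clause runs only when no exception was raised, which keeps the reading code out of the try block and makes the error handling easier to follow.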
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fcollonval/coursera_data_visualization
Making_Data_Management.ipynb
mit
[ "Assignment: Making Data Management Decisions - Python\nFollowing is the Python program I wrote to fulfill the third assignment of the Data Management and Visualization online course.\nI decided to use Jupyter Notebook as it is a pretty way to write code and present results.\nResearch question\nUsing the Gapminder database, I would like to see if an increasing Internet usage results in an increasing suicide rate. A study shows that other factors like unemployment could have a great impact.\nSo for this third assignment, the three following variables will be analyzed:\n\nInternet Usage Rate (per 100 people)\nSuicide Rate (per 100 000 people)\nUnemployment Rate (% of the population of age 15+)\n\nData management\nFor the question, I'm interested in the countries for which data are missing will be discarded. As missing data in Gapminder database are replace directly by NaN no special data treatment is needed.", "# Load a useful Python libraries for handling data\nimport pandas as pd\nimport numpy as np\nfrom IPython.display import Markdown, display\n\n# Read the data\ndata_filename = r'gapminder.csv'\ndata = pd.read_csv(data_filename, low_memory=False)\ndata = data.set_index('country')", "General information on the Gapminder data", "display(Markdown(\"Number of countries: {}\".format(len(data))))\ndisplay(Markdown(\"Number of variables: {}\".format(len(data.columns))))\n\n# Convert interesting variables in numeric format\nfor variable in ('internetuserate', 'suicideper100th', 'employrate'):\n data[variable] = pd.to_numeric(data[variable], errors='coerce')", "But the unemployment rate is not provided directly. In the database, the employment rate (% of the popluation) is available. So the unemployement rate will be computed as 100 - employment rate:", "data['unemployrate'] = 100. - data['employrate']", "The first records of the data restricted to the three analyzed variables are:", "subdata = data[['internetuserate', 'suicideper100th', 'unemployrate']]\nsubdata.head(10)", "Data analysis\nWe will now have a look at the frequencies of the variables after grouping them as all three are continuous variables. 
I will group the data in intervals using the cut function.\nInternet use rate frequencies", "display(Markdown(\"Internet Use Rate (min, max) = ({0:.2f}, {1:.2f})\".format(subdata['internetuserate'].min(), subdata['internetuserate'].max())))\n\ninternetuserate_bins = pd.cut(subdata['internetuserate'], \n bins=np.linspace(0, 100., num=21))\n\ncounts1 = internetuserate_bins.value_counts(sort=False, dropna=False)\npercentage1 = internetuserate_bins.value_counts(sort=False, normalize=True, dropna=False)\ndata_struct = {\n 'Counts' : counts1,\n 'Cumulative counts' : counts1.cumsum(),\n 'Percentages' : percentage1,\n 'Cumulative percentages' : percentage1.cumsum()\n}\n\ninternetrate_summary = pd.DataFrame(data_struct)\ninternetrate_summary.index.name = 'Internet use rate (per 100 people)'\n(internetrate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]\n .style.set_precision(3)\n .set_properties(**{'text-align':'right'}))", "Suicide per 100,000 people frequencies", "display(Markdown(\"Suicide per 100,000 people (min, max) = ({:.2f}, {:.2f})\".format(subdata['suicideper100th'].min(), subdata['suicideper100th'].max())))\n\nsuiciderate_bins = pd.cut(subdata['suicideper100th'], \n bins=np.linspace(0, 40., num=21))\n\ncounts2 = suiciderate_bins.value_counts(sort=False, dropna=False)\npercentage2 = suiciderate_bins.value_counts(sort=False, normalize=True, dropna=False)\ndata_struct = {\n 'Counts' : counts2,\n 'Cumulative counts' : counts2.cumsum(),\n 'Percentages' : percentage2,\n 'Cumulative percentages' : percentage2.cumsum()\n}\n\nsuiciderate_summary = pd.DataFrame(data_struct)\nsuiciderate_summary.index.name = 'Suicide (per 100 000 people)'\n(suiciderate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]\n .style.set_precision(3)\n .set_properties(**{'text-align':'right'}))", "Unemployment rate frequencies", "display(Markdown(\"Unemployment rate (min, max) = ({0:.2f}, {1:.2f})\".format(subdata['unemployrate'].min(), subdata['unemployrate'].max())))\n\nunemployment_bins = pd.cut(subdata['unemployrate'], \n bins=np.linspace(0, 100., num=21))\n\n\ncounts3 = unemployment_bins.value_counts(sort=False, dropna=False)\npercentage3 = unemployment_bins.value_counts(sort=False, normalize=True, dropna=False)\ndata_struct = {\n 'Counts' : counts3,\n 'Cumulative counts' : counts3.cumsum(),\n 'Percentages' : percentage3,\n 'Cumulative percentages' : percentage3.cumsum()\n}\n\nunemployment_summary = pd.DataFrame(data_struct)\nunemployment_summary.index.name = 'Unemployement rate (% population age 15+)'\n(unemployment_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]\n .style.set_precision(3)\n .set_properties(**{'text-align':'right'}))", "Summary\nThe Gapminder data based provides information for 213 countries. 
\nAs the unemployment rate is not provided directly in the database, it was computed as 100 - employment rate.\nThe distributions of the variables are as follows:\n\nInternet Use Rate per 100 people\nData missing for 21 countries\nRate ranges from 0.21 to 95.64\nThe majority of the countries (64%) have a rate below 50\n\n\nSuicide Rate per 100 000\nData missing for 22 countries\nRate ranges from 0.2 to 35.75\nThe rate is most often between 4 and 12\n\n\nUnemployment Rate for age 15+\nData missing for 35 countries\nRate ranges from 16.8 to 68\nFor the majority of the countries the rate lies below 45\n\n\n\nFrom those data, I was surprised that so few people have access to the internet, especially now that smartphones are cheap.\nAnother astonishing fact is the high unemployment rate; I expected much less, especially in so-called developed countries. But I presume that long school time and retirement can explain those high values, as people of age 15+ are considered here.\n\nIf you are interested in the subject, follow me on Tumblr." ]
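A possible refinement of the notebook above (a hedged sketch, not part of the original analysis): the counts/percentages block is repeated three times, so it could be factored into a helper. The function name frequency_table is invented for illustration; it assumes pandas and numpy are imported as in the notebook.

```python
# Hypothetical helper reproducing the frequency-table pattern used above.
def frequency_table(series, bin_edges):
    binned = pd.cut(series, bins=bin_edges)
    counts = binned.value_counts(sort=False, dropna=False)
    percentages = binned.value_counts(sort=False, normalize=True, dropna=False)
    return pd.DataFrame({
        'Counts': counts,
        'Cumulative counts': counts.cumsum(),
        'Percentages': percentages,
        'Cumulative percentages': percentages.cumsum()
    })

# Example usage with the variables prepared in the notebook:
# frequency_table(subdata['internetuserate'], np.linspace(0, 100., num=21))
# frequency_table(subdata['suicideper100th'], np.linspace(0, 40., num=21))
```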
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amueller/advanced_training
05.1 Trees and Forests.ipynb
bsd-2-clause
[ "Trees and Forests", "%matplotlib notebook\nfrom preamble import *", "Decision Tree Classification", "from plots import plot_tree_interactive\nplot_tree_interactive()", "Random Forests", "from plots import plot_forest_interactive\nplot_forest_interactive()\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_moons\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_moons(n_samples=100, noise=0.25, random_state=3)\nX_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)\n\nforest = RandomForestClassifier(n_estimators=5, random_state=2)\nforest.fit(X_train, y_train)\n\nfig, axes = plt.subplots(2, 3, figsize=(20, 10))\nfor i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):\n ax.set_title(\"tree %d\" % i)\n mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)\nmglearn.plots.plot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)\naxes[-1, -1].set_title(\"random forest\")\nplt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=60, cmap=mglearn.cm2)", "Selecting the Optimal Estimator via Cross-Validation", "from sklearn.model_selection import GridSearchCV\nfrom sklearn.datasets import load_boston\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestRegressor\n\nboston = load_boston()\nX, y = boston.data, boston.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\nrf = RandomForestRegressor(n_estimators=200, n_jobs=-1)\nparameters = {'max_features':['sqrt', 'log2'],\n 'max_depth':[5, 7, 9]}\n\ngrid = GridSearchCV(rf, parameters, cv=5)\ngrid.fit(X_train, y_train)\n\ngrid.score(X_train, y_train)\n\ngrid.score(X_test, y_test)\n\ngrid.best_estimator_.feature_importances_", "Exercises\nCompare training and test performance of the decision tree (sklearn.tree.DecisionTreeRegressor) and random forest on the bike dataset.\nWhat is the effect of changing max_depth to training and test set score?\nHow do the feature importances of trees and forest differ?\nUse mglearn.tools.get_tree to visualize a decision tree." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Jwuthri/Mozinor
mozinor/example/Mozinor example Class.ipynb
mit
[ "Use mozinor for classification\nImport the main module", "from mozinor.baboulinet import Baboulinet", "Prepare the pipeline\n(str) filepath: Give the csv file\n(str) y_col: The column to predict\n(bool) regression: Regression or Classification ?\n(bool) process: (WARNING) apply some preprocessing on your data (tune this preprocess with params below)\n(char) sep: delimiter\n(list) col_to_drop: which columns you don't want to use in your prediction\n(bool) derivate: for all features combination apply, n1 * n2, n1 / n2 ...\n(bool) transform: for all features apply, log(n), sqrt(n), square(n)\n(bool) scaled: scale the data ?\n(bool) infer_datetime: for all columns check the type and build new columns from them (day, month, year, time) if they are date type\n(str) encoding: data encoding\n(bool) dummify: apply dummies on your categoric variables\n\nThe data files have been generated by sklearn.dataset.make_classification", "cls = Baboulinet(filepath=\"toto.csv\", y_col=\"predict\", regression=False)", "Now run the pipeline\nMay take some times", "res = cls.babouline()", "The class instance, now contains 2 objects, the model for this data, and the best stacking for this data\nTo make auto generate the code of the model\nGenerate the code for the best model", "cls.bestModelScript()", "Generate the code for the best stacking", "cls.bestStackModelScript()", "To check which model is the best\nBest model", "res.best_model\n\nshow = \"\"\"\n Model: {},\n Score: {}\n\"\"\"\nprint(show.format(res.best_model[\"Estimator\"], res.best_model[\"Score\"]))", "Best stacking", "res.best_stack_models\n\nshow = \"\"\"\n FirstModel: {},\n SecondModel: {},\n Score: {}\n\"\"\"\nprint(show.format(res.best_stack_models[\"Fit1stLevelEstimator\"], res.best_stack_models[\"Fit2ndLevelEstimator\"], res.best_stack_models[\"Score\"]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
danresende/deep-learning
autoencoder/Convolutional_Autoencoder.ipynb
mit
[ "Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called \"transpose convolution\" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. \nHowever, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. 
Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.", "learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='input')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='input')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 14x14x8\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits)\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Training\nAs before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.", "sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. 
Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.", "learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')\n# Now 14x14x32\nconv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')\n# Now 7x7x32\nconv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='SAME', activation=tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits)\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n\nsess = tf.Session()\n\nepochs = 100\nbatch_size = 200\n# Set's how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. 
It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)" ]
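One practical follow-up to the denoising notebook above (a hedged sketch, not in the original): persisting the trained weights so the long training loop does not have to be rerun. The checkpoint filename is arbitrary, this assumes the TensorFlow 1.x session sess from the notebook is still open, and some_images in the commented restore call is a placeholder.

```python
# Sketch: save the trained denoiser's variables with a TF 1.x checkpoint.
saver = tf.train.Saver()
save_path = saver.save(sess, "./denoiser.ckpt")  # arbitrary example path
print("Model saved to", save_path)

# Later, the weights could be restored into a fresh session, e.g.:
# with tf.Session() as new_sess:
#     saver.restore(new_sess, "./denoiser.ckpt")
#     output = new_sess.run(decoded, feed_dict={inputs_: some_images})
```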
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
avtlearns/automatic_text_summarization
TextRank_Automatic_Summarization_for_Medical_Articles.ipynb
gpl-3.0
[ "Automatic Summarization of Medical Articles\nAuthor: Abhijit V Thatte\nSolution: We will use TextRank for automatic summarization of medical articles. NIH's (National Institues for Health) PubMed repository consists of links to hundreds of thousands of medical articles. We will use articles relevant to various types of cancer. We will use the abstract of each article as the \"ground truth\". We will apply the TextRank algorithm to only the body of the PubMed article without the abstract to generate an extractive summary. We will use a Java based implementation of ROUGE software to evaluate the precision, recall and F1 score of extractive summary with respect to the ground truth. \nStep 1: Import required modules", "from nltk.tokenize.punkt import PunktSentenceTokenizer\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\nimport networkx as nx\nimport re\nimport urllib2\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n# -*- coding: utf-8 -*-", "Step 2: Generate a list of documents", "urls = []\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=1994795')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC314300/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=314300')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4383356/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4383356')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4596899/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4596899')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4303126/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4303126')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4637461/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4637461')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4690355/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4690355')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505152/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=3505152')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3976810/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=3976810')\n\n#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4061037/\nurls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4061037')", "Step 3: Preprocess the documents", "documents = []\nabstracts = []\ntexts = []\n\nprint 'Preprocessing documents. This may take few minutes ...'\nfor i, url in enumerate(urls):\n print 'Preprocessing document %d ...' 
% (i+1)\n # Download the document\n my_url = urllib2.urlopen(url)\n raw_doc = BeautifulSoup(my_url.read(), 'xml')\n documents.append(raw_doc)\n\n # Extract the cleaned abstract\n raw_abstract = raw_doc.abstract\n my_abstract = re.sub(r'<\\/?\\w+>', r' ', str(raw_abstract)) # remove xml tags\n abstracts.append(my_abstract)\n\n # Extract the cleaned text\n text = raw_doc.body\n text = re.sub(r'\\\\n', r' ', str(text)) # remove newline characters\n text = re.sub(r'<[^>]+>', r' ', str(text)) # remove xml tags\n text = re.sub(r'\\[[^\\[^\\]]+\\]', r' ', str(text)) # remove references\n text = re.sub(r'\\[', r' ', str(text)) # remove any remaining [\n text = re.sub(r'\\]', r' ', str(text)) # remove any remaining ]\n text = re.sub(r'[\\s]{2,}', r' ', str(text)) # remove more than a single blank space\n text = re.sub(r'\\.\\s+,\\s+\\S', r' ', str(text)) # remove , after a period\n\n text = text.decode('utf-8')\n texts.append(text)\n\nprint 'All documents preprocessed successfully.'\nprint 'We have %d documents with %d abstracts and %d texts.' % (len(documents), len(abstracts), len(texts))\nassert len(documents) == len(abstracts)\nassert len(documents) == len(texts)\n ", "Step 4: Split the documents into sentences", "punkttokenizer = PunktSentenceTokenizer()\ntext_sentences = []\n\nfor text in texts:\n sentences = []\n seen = set()\n for sentence in punkttokenizer.tokenize(text):\n sentences.append(sentence)\n text_sentences.append(sentences)", "Step 5: Count the term frequency for sentences", "tf_matrices = []\ntfidf_matrices = []\ncosine_similarity_matrices = []\n\nprint 'Calculating sentence simiarities. This may take few minutes ...'\nfor i, sentences in enumerate(text_sentences):\n print 'Calculating sentence simiarities of document %d ...' % (i+1)\n tf_matrix = CountVectorizer().fit_transform(sentences)\n tf_matrices.append(tf_matrix)\n \n tfidf_matrix = TfidfTransformer().fit_transform(tf_matrix)\n tfidf_matrices.append(tfidf_matrix)\n \n cosine_similarity_matrix = tfidf_matrix * tfidf_matrix.T\n cosine_similarity_matrices.append(cosine_similarity_matrix)\n\nprint 'All documents processed successfully.'\nprint 'We have %d documents with %d tf_matrices %d tfidf_matrices and %d cosine_similarity_matrices.' \\\n % (len(documents), len(tf_matrices), len(tfidf_matrices), len(cosine_similarity_matrices))\nassert len(documents) == len(tf_matrices)\nassert len(documents) == len(tfidf_matrices)\nassert len(documents) == len(cosine_similarity_matrices)\n", "Step 6: Calculate TextRank", "similarity_graphs = []\ngraph_ranks = []\nhighest_ranks = []\nlowest_ranks = []\n\nprint 'Calculating TextRanks. This may take few minutes ...'\nfor i, cosine_similarity_matrix in enumerate(cosine_similarity_matrices):\n print 'Calculating TextRanks of document %d ...' % (i+1)\n similarity_graph = nx.from_scipy_sparse_matrix(cosine_similarity_matrix)\n similarity_graphs.append(similarity_graph)\n \n ranks = nx.pagerank(similarity_graph)\n graph_ranks.append(ranks)\n \n highest = sorted(((ranks[j],s) for j,s in enumerate(text_sentences[i])), reverse=True)\n highest_ranks.append(highest)\n \n lowest = sorted(((ranks[j],s) for j,s in enumerate(text_sentences[i])), reverse=False)\n lowest_ranks.append(lowest)\n \nprint 'All documents processed successfully.'\nprint 'We have %d documents with %d similarity_graphs %d graph_ranks and %d highest_ranks.' 
\\\n % (len(documents), len(similarity_graphs), len(graph_ranks), len(highest_ranks))\nassert len(documents) == len(similarity_graphs)\nassert len(documents) == len(graph_ranks)\nassert len(documents) == len(highest_ranks)\n", "Step 7: Save extractive summaries", "print 'Saving extractive summaries. This may take a few minutes ...'\nfor i, highest in enumerate(highest_ranks):\n print 'Writing extractive summary for document %d ...' % (i+1)\n out_file = '\\\\TextRank\\\\system\\\\article%d_system1.txt' % (i+1)\n with open(out_file, 'w') as f:\n for i in range(5):\n f.write((highest[i][1] + '\\n').encode('utf-8'))\nprint 'All documents processed successfully.'", "Step 8: Save ground truths.", "print 'Saving ground truths. This may take a few minutes ...'\nfor i, abstract in enumerate(abstracts):\n print 'Writing ground truth for document %d ...' % (i+1)\n out_file = '\\\\TextRank\\\\reference\\\\article%d_reference1.txt' % (i+1)\n with open(out_file, 'w') as f:\n f.write(abstract.strip() + '\\n')\nprint 'All documents processed successfully.'", "Step 9: Calculate ROUGE score", "%cd C:\\ROUGE\n!java -jar rouge2.0_0.2.jar\n\ndf = pd.read_csv('results.csv')\nprint df.sort_values('Avg_F-Score', ascending=False)" ]
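To make the pipeline in the notebook above easier to follow, here is a compact, self-contained sketch of the same TextRank idea on a toy document. The sentences are invented purely for illustration, TfidfVectorizer simply combines the CountVectorizer and TfidfTransformer steps used above, and from_scipy_sparse_matrix matches the networkx API already used in the notebook.

```python
# Toy TextRank: rank sentences by PageRank over their cosine-similarity graph.
from sklearn.feature_extraction.text import TfidfVectorizer
import networkx as nx

toy_sentences = [
    "Cancer screening can detect tumors early.",
    "Early detection of tumors improves treatment outcomes.",
    "Treatment outcomes also depend on the patient's overall health.",
    "Regular screening is therefore recommended for at-risk patients.",
]

tfidf = TfidfVectorizer().fit_transform(toy_sentences)   # rows are L2-normalized
similarity = tfidf * tfidf.T                             # cosine similarities
graph = nx.from_scipy_sparse_matrix(similarity)
ranks = nx.pagerank(graph)

# The highest-ranked sentence serves as a one-line extractive summary.
best = max(ranks, key=ranks.get)
print(toy_sentences[best])
```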
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.24/_downloads/b7659d33d6ffe8531d004e9d6051f16f/forward_sensitivity_maps.ipynb
bsd-3-clause
[ "%matplotlib inline", "Display sensitivity maps for EEG and MEG sensors\nSensitivity maps can be produced from forward operators that\nindicate how well different sensor types will be able to detect\nneural currents from different regions of the brain.\nTo get started with forward modeling see tut-forward.", "# Author: Eric Larson <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport mne\nfrom mne.datasets import sample\nfrom mne.source_space import compute_distance_to_sensors\nfrom mne.source_estimate import SourceEstimate\nimport matplotlib.pyplot as plt\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\n\nsubjects_dir = data_path + '/subjects'\n\n# Read the forward solutions with surface orientation\nfwd = mne.read_forward_solution(fwd_fname)\nmne.convert_forward_solution(fwd, surf_ori=True, copy=False)\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d x %d\" % leadfield.shape)", "Compute sensitivity maps", "grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')\nmag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')\neeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')", "Show gain matrix a.k.a. leadfield matrix with sensitivity map", "picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)\npicks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)\n\nfig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)\nfig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)\nfor ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):\n im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',\n cmap='RdBu_r')\n ax.set_title(ch_type.upper())\n ax.set_xlabel('sources')\n ax.set_ylabel('sensors')\n fig.colorbar(im, ax=ax)\n\nfig_2, ax = plt.subplots()\nax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],\n bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],\n color=['c', 'b', 'k'])\nfig_2.legend()\nax.set(title='Normal orientation sensitivity',\n xlabel='sensitivity', ylabel='count')\n\nbrain_sens = grad_map.plot(\n subjects_dir=subjects_dir, clim=dict(lims=[0, 50, 100]), figure=1)\nbrain_sens.add_text(0.1, 0.9, 'Gradiometer sensitivity', 'title', font_size=16)", "Compare sensitivity map with distribution of source depths", "# source space with vertices\nsrc = fwd['src']\n\n# Compute minimum Euclidean distances between vertices and MEG sensors\ndepths = compute_distance_to_sensors(src=src, info=fwd['info'],\n picks=picks_meg).min(axis=1)\nmaxdep = depths.max() # for scaling\n\nvertices = [src[0]['vertno'], src[1]['vertno']]\n\ndepths_map = SourceEstimate(data=depths, vertices=vertices, tmin=0.,\n tstep=1.)\n\nbrain_dep = depths_map.plot(\n subject='sample', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[0, maxdep / 2., maxdep]), figure=2)\nbrain_dep.add_text(0.1, 0.9, 'Source depth (m)', 'title', font_size=16)", "Sensitivity is likely to co-vary with the distance between sources to\nsensors. To determine the strength of this relationship, we can compute the\ncorrelation between source depth and sensitivity values.", "corr = np.corrcoef(depths, grad_map.data[:, 0])[0, 1]\nprint('Correlation between source depth and gradiomter sensitivity values: %f.'\n % corr)", "Gradiometer sensitiviy is highest close to the sensors, and decreases rapidly\nwith inreasing source depth. This is confirmed by the high negative\ncorrelation between the two." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phungkh/phys202-2015-work
assignments/assignment05/InteractEx03.ipynb
mit
[ "Interact Exercise 3\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Using interact for animation with data\nA soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Kortewegโ€“de Vries equation, which has the following analytical solution:\n$$\n\\phi(x,t) = \\frac{1}{2} c \\mathrm{sech}^2 \\left[ \\frac{\\sqrt{c}}{2} \\left(x - ct - a \\right) \\right]\n$$\nThe constant c is the velocity and the constant a is the initial location of the soliton.\nDefine soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the postion x or t are NumPy arrays, in which case it should return a NumPy array itself.", "def soliton(x, t, c, a):\n i=(((c**(1/2))/2)*(x-c*t-a))\n return ((1/2)*c*(np.cos(i)**(-2)))\n \n\nassert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))", "To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:", "tmin = 0.0\ntmax = 10.0\ntpoints = 100\nt = np.linspace(tmin, tmax, tpoints)\n\nxmin = 0.0\nxmax = 10.0\nxpoints = 200\nx = np.linspace(xmin, xmax, xpoints)\n\nc = 1.0\na = 0.0", "Compute a 2d NumPy array called phi:\n\nIt should have a dtype of float.\nIt should have a shape of (xpoints, tpoints).\nphi[i,j] should contain the value $\\phi(x[i],t[j])$.", "phi=np.ndarray((xpoints,tpoints), dtype=float)\n\nfor i in range(200):\n for j in range(100):\n phi[i,j]=soliton(x[i],t[j],c,a)\n\nassert phi.shape==(xpoints, tpoints)\nassert phi.ndim==2\nassert phi.dtype==np.dtype(float)\nassert phi[0,0]==soliton(x[0],t[0],c,a)", "Write a plot_soliton_data(i) function that plots the soliton wave $\\phi(x, t[i])$. Customize your plot to make it effective and beautiful.", "def plot_soliton_data(i=0):\n plt.figure(figsize=(9,6))\n plt.plot(x,soliton(x,t[i],c,a))\n plt.box(False)\n plt.ylim(0,6000)\n plt.grid(True)\n plt.ylabel('soliton wave')\n plt.xlabel('x')\n \n\nplot_soliton_data(0)\n\nassert True # leave this for grading the plot_soliton_data function", "Use interact to animate the plot_soliton_data function versus time.", "interact(plot_soliton_data, i=(0,100,5))\n\nassert True # leave this for grading the interact with plot_soliton_data cell" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CalPolyPat/Python-Workshop
Python Workshop/Containers.ipynb
mit
[ "Containers\nContainers are set of objects including lists, sets, tuples, and dictionaries. They hold other variables in various different ways following different rules. These are arguably the most used, and most useful, objects you will encounter in today's course.\nLists\nLists are objects that hold variables in a particular order that you choose. They can hold any variable you wish, in fact, one list can hold more than one type of variable. Recall that our \"basic\" types are int, float, str, and bool. Let's see some examples of lists.", "list1=[1,2,3,4]\nprint(list1)\nlist2=[\"a\",\"b\",\"c\",\"d\"]\nprint(list2)\nlist3=[True, False, True]\nprint(list3)\nlist3=[1,2,\"c\",\"d\"]\nprint(list3)\nlist4=[2>3, 2, 4, 3>2]\nprint(list4)", "Let's notice a few key things:\n1. To make a list we use []\n2. We separate individual elements with commas\n3. The elements of a list can be any type\n4. The elements need not be explicitly written\nOther Properties of Lists\n\nYou can add two lists to create a new list.\nYou can append an element to the end of a list by using the append function.\nYou can append a list to the end of another list by using the += operator.\nYou can access a single element with something called slicing", "la=[1,2,3]\nlb=[\"1\",\"2\",\"3\"]\nprint(la+lb)\nla.append(4)\nprint(la)\nlb+=la\nprint(lb)", "Slicing\nAt one point, you will want to access certain elements of a list, this is done by slicing. There are a couple of ways to do this.\n1. la[0] will give the first element of the list la. Note that lists are indexed starting at zero, not one.\n2. la[n] will give the (n+1)th element of the list la.\n3. la[-1] will give the last element of the list la.\n4. la[-n] will give the nth to last element of the list la.\n5. la[p:q:k] will give you every kth element starting at p and ending at q of the list la. Ex/la[1:7:2] will give you every second element starting at 1 and ending at 7.\n5b. If p is omitted, it is assumed to be 0. If k is omitted, it is assumed to be 1, if q is omitted, it is assumed to be the last index or -1, which is really the same thing as we see above.", "la=[1,2,3,4,5,6,7,8,9,10]\nprint(la[0])\nprint(la[3])\nprint(la[-1])\nprint(la[-3])\nprint(la[0:10:2])\nprint(la[3::2])\nprint(la[2:4:])\nprint(la[2:4])", "Exercises\n\n\nMake 2 lists and add them together.\n\n\nTake your list from 1., append the list [\"I\", \"Love\", \"Python\"] to it without using the append command, then print every second element.\n\n\nCan you make a list of lists? Try it.\nConsider the list x=[3,5,7,8,\"Pi\"].\n\n\n4a. Type out what you would expect python to print along with the type of object printed would be for the following slices:\nx[2]\n\nx[0]\n\nx[-2]\n\nx[1::2]\n\nx[1::]\n\nx[::-4]\n\n4b. Check your answer by creating that list and printing out the corresponding slices.\nSets\nSets are a special type of list that adhere to certain rules. If you have taken any higher level or proof-based math classes, you will recognize that sets in Python are exactly the same as those in mathematics. Instead of using [], {} are used to create a set. Sets have the following properties:\n\nSets will not contain duplicates. If you make a set with duplicates, it will only retain one of them.\nSets are not ordered. This means no slicing.\nSets have the familiar set operations from math. 
These are outlined below.\nYou can convert a list to a set in the following manner: Let t be a list, then set(t) is now a set containing the elements of t.\n\nSet Operations\nConsider sets s={1,2,3} and t={1,2,3,4,5};\n| Operation | Meaning | Example |\n|:----------:|:-------:|:-------:|\n| s&#124;t | Union | {1,2,3,4,5} |\n| s&t | Intersection | {1,2,3} |\n| s-t | Difference | {} |\n| s^t | Symmetric Difference | {4,5} |\n| s<t | Strict Subset | True |\n| s<=t | Subset | True |\n| s>t | Strict Superset | False |\n| s>= | Superset | False |", "t = {1,2,3,3,3,3,3,3,3,3,3,3,3,3,3,4,5}\nprint(t)\ns = {1,4,5,7,8}\nprint(t-s)\nprint(s-t)\nprint(t^s)\nprint(t-s|s-t)\nprint(t^s==t-s|s-t)", "Exercises\n\nWrite a set containing the letters of the word \"dog\".\nFind the difference between the set in 1. and the set {\"d\", 5, \"g\"}\nRemove all the duplicates in the list [1,2,4,3,3,3,5,2,3,4,5,6,3,5,7] and print the resulting object in two lines.\n\nTuples\nTuples are just like lists except that you cannot append elements to a tuple. You may, however, combine two tuples. To create a tuple, one uses ().", "a = (1,2,3,4,5)\nb = ('a', 'b', 'c')\nprint(a)\nprint(b)\nprint(a+b)", "Dictionaries\nDictionaries are quite different from any container we have seen so far. A dictionary is a bunch of unordered key/value pairs. That is, each element of a dictionary has a key and a value and the elements are in no particular order. It is good to keep this unorderedness in mind later on, for now, let's look at some examples. To create a dictionary we use the following syntax, { key:value}.", "#Let's say we have some students take a test and we want to store their scores\nscores = {'Sally':89, 'Lucy':75, 'Jeff':45, 'Jose':96}\nprint(scores)\n#We can, however, not combine two different dictionaries\nscores2 = {'Devin':64, 'John':23, 'Jake':75}\nprint(scores2)\nprint(scores+scores2)", "Unlike with lists, we cannot access elements of the dictionary with slicing. We must instead use the keys. Let's see how to do this.", "print(scores['Sally'])\nprint(scores2['John'])", "As we can see, the key returns us the value. This can be useful if you have a bunch of items that need to be paired together.\nAccessing Just the Keys or Values\nWant to get a list of the keys in a dictionary? How about the values? Fret not, there is a way!", "print(scores.keys())\nprint(scores.values())", "Exercises\n\nBuild a dictionary of some constants in physics and math. Print out at least two of these values.\nGive an example of something that would be best represented by a dictionary, think pairs.\n\nIn and Not In\nNo, this section isn't about fashion. It is about the in and not in operators. They return a boolean value based on whether or not a value is in or not in a container. Note that for dictionaries, this refers to the keys, no the values.", "print('Devin' in scores2)\nprint(2 in a)\nprint('Hello World' not in scores)", "Converting Between Containers\nBut what if I want my set to be a list or my tuple to be a set? To convert between types of containers, you can use any of the following functions:\nlist() : Converts any container type into a list, for dictionaries it will be the list of keys.\ntuple() : Converts any container type into a tuple, for dictionaries it will be the tuple of keys.\nset() : Converts any container type into a set, for dictionaries it will be the set of keys. 
Note that as above, this will remove all duplicates.", "a = [1,2,3]\nb = (1,2,3)\nc = {1,2,3}\nd = {1:2,3:4}\n\nprint(list(b))\nprint(list(c))\nprint(list(d))\n\nprint(tuple(a))\nprint(tuple(c))\nprint(tuple(d))\n\nprint(set(a))\nprint(set(b))\nprint(set(d))", "Exercises\n\nTake the tuple (1,1,1,1,3,4,3,5,57,6,4,4,4,6,5,6) and remove its duplicates. Then turn it into a list.\nEarlier we saw that you could retrieve the keys and values of a dictionary individually, but they weren't lists yet. Using the dictionary d = {'tall':624, 'short':234, 'Feynman':'diagrams', 'dead':'cat', 'alive':'cat'}, try converting the keys and values into lists and print them out.\nConsider a = [1,2,3,4,5] and b = ['a','b','c','p']. Is 'p' in a+b? Is 34 in a+b? Write a boolean statement that is true involving 34 and a+b." ]
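A hedged sketch of one way to work through the conversion and membership exercises just listed (these are example answers, not the only ones):

```python
# Exercise 1: remove duplicates from the tuple, then turn it into a list.
t = (1, 1, 1, 1, 3, 4, 3, 5, 57, 6, 4, 4, 4, 6, 5, 6)
print(list(set(t)))

# Exercise 2: keys and values of the dictionary as lists.
d = {'tall': 624, 'short': 234, 'Feynman': 'diagrams', 'dead': 'cat', 'alive': 'cat'}
print(list(d.keys()))
print(list(d.values()))

# Exercise 3: membership checks on the combined list.
a = [1, 2, 3, 4, 5]
b = ['a', 'b', 'c', 'p']
print('p' in a + b)        # True
print(34 in a + b)         # False
print(34 not in a + b)     # a true statement involving 34 and a+b
```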
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Jim00000/Numerical-Analysis
2_Systems_Of_Equations.ipynb
unlicense
[ "CHAPTER 2 - Systems Of Equations", "# Import modules\nimport sys\nimport numpy as np\nimport numpy.linalg\nimport scipy\nimport sympy\nimport sympy.abc\nfrom scipy import linalg\nfrom scipy.sparse import linalg as slinalg", "2.1 Gaussian Elimination", "def naive_gaussian_elimination(matrix):\n \"\"\"\n A simple gaussian elimination to solve equations\n \n Args:\n matrix : numpy 2d array\n \n Returns:\n mat : The matrix processed by gaussian elimination\n x : The roots of the equation\n \n Raises:\n ValueError:\n - matrix is null\n RuntimeError :\n - Zero pivot encountered\n \"\"\"\n if matrix is None :\n raise ValueError('args matrix is null')\n \n #Clone the matrix\n mat = matrix.copy().astype(np.float64)\n \n # Row Size\n m = mat.shape[0]\n \n # Column Size\n n = mat.shape[1]\n \n # Gaussian Elimaination\n for i in range(0, m):\n if np.abs(mat[i , i]) == 0 :\n raise RuntimeError('zero pivot encountered')\n for j in range(i + 1, m):\n mult = mat[j, i] / mat[i, i]\n for k in range(i, m):\n mat[j, k] -= mult * mat[i, k]\n mat[j, n - 1] -= mult * mat[i, n - 1]\n \n # Back Substitution\n x = np.zeros(m, dtype=np.float64)\n for i in range(m - 1,-1,-1):\n for j in range(i + 1, m):\n mat[i, n-1] = mat[i ,n-1] - mat[i,j] * x[j]\n mat[i, j] = 0.0\n x[i] = mat[i, n-1] / mat[i, i]\n \n return mat, x", "Example\nApply Gaussian elimination in tableau form for the system of three equations in three\nunknowns:\n$$\n\\large\n\\begin{matrix}\nx + 2y - z = 3 & \\ \n2x + y - 2z = 3 & \\ \n-3x + y + z = -6 \n\\end{matrix}\n$$", "\"\"\"\nInput:\n[[ 1 2 -1 3]\n [ 2 1 -2 3]\n [-3 1 1 -6]]\n\"\"\"\ninput_mat = np.array([1, 2, -1, 3, 2, 1, -2, 3, -3, 1, 1, -6])\ninput_mat = input_mat.reshape(3, 4)\noutput_mat, x = naive_gaussian_elimination(input_mat)\n\nprint(output_mat)\nprint('[x, y, z] = {}'.format(x))", "Additional Examples\n\nPut the system $x + 2y - z = 3,-3x + y + z = -6,2x + z = 8$ into tableau form and solve by Gaussian elimination.", "input_mat = np.array([\n [ 1, 2, -1, 3],\n [-3, 1, 1, -6],\n [ 2, 0, 1, 8]\n])\n\noutput_mat, x = naive_gaussian_elimination(input_mat)\n\nprint(output_mat)\nprint('[x, y, z] = {}'.format(x))", "2.1 Computer Problems\n\nPut together the code fragments in this section to create a MATLAB program for โ€œnaiveโ€ Gaussian elimination (meaning no row exchanges allowed). Use it to solve the systems of Exercise 2.\n\nSee my implementation naive_gaussian_elimination in python.\n\nLet $H$ denote the $n \\times n$ Hilbert matrix, whose $(i, j)$ entry is $1 / (i + j - 1)$. 
Use the MATLAB program from Computer Problem 1 to solve $Hx = b$, where $b$ is the vector of all ones, for (a) n = 2 (b) n = 5 (c) n = 10.", "def computer_problems2__2_1(n):\n # generate Hilbert matrix H\n H = scipy.linalg.hilbert(n)\n \n # generate b\n b = np.ones(n).reshape(n, 1)\n \n # combine H:b in tableau form\n mat = np.hstack((H, b))\n \n # gaussian elimination\n _, x = naive_gaussian_elimination(mat)\n \n return x\n \nwith np.printoptions(precision = 6, suppress = True):\n print('(a) n = 2 โ†’ x = {}'.format(computer_problems2__2_1( 2)))\n print('(b) n = 5 โ†’ x = {}'.format(computer_problems2__2_1( 5)))\n print('(c) n = 10 โ†’ x = {}'.format(computer_problems2__2_1(10)))", "2.2 The LU Factorization", "def LU_factorization(matrix):\n \"\"\"\n LU decomposition\n \n Arguments:\n matrix : numpy 2d array\n \n Return:\n L : lower triangular matrix\n U : upper triangular matrix\n \n Raises:\n ValueError:\n - matrix is null\n - matrix is not a 2d array\n RuntimeError :\n - zero pivot encountered\n \"\"\"\n if matrix is None :\n raise ValueError('args matrix is null')\n \n if matrix.ndim != 2 :\n raise ValueError('matrix is not a 2d-array')\n \n # dimension\n dim = matrix.shape[0]\n \n # Prepare LU matrixs\n L = np.identity(dim).astype(np.float64)\n U = matrix.copy().astype(np.float64)\n \n # Gaussian Elimaination\n for i in range(0, dim - 1):\n # Check pivot is not zero\n if np.abs(U[i , i]) == 0 :\n raise RuntimeError('zero pivot encountered')\n for j in range(i + 1, dim):\n mult = U[j, i] / U[i, i]\n for k in range(i, dim):\n U[j, k] -= mult * U[i, k]\n L[j, i] = mult\n \n return L, U", "DEFINITION 2.2\nAn $m \\times n$ matrix $L$ is lower triangular if its entries satisfy $l_{ij} = 0$ for $i < j$. An $m \\times n$ matrix $U$ is upper triangular if its entries satisfy $u_{ij} = 0$ for $i > j$.\nExample\nFind the LU factorization for the matrix $A$ in\n$$\n\\large\n\\begin{bmatrix}\n1 & 1 \\ \n3 & -4 \\ \n\\end{bmatrix}\n$$", "A = np.array([\n [1, 1],\n [3, -4]\n])\n\nL, U = LU_factorization(A)\n\nprint('L = ')\nprint(L)\n\nprint()\n\nprint('U = ')\nprint(U)", "Example\nFind the LU factorization of A =\n$$\n\\large\n\\begin{bmatrix}\n1 & 2 & -1 \\ \n2 & 1 & -2 \\ \n-3 & 1 & 1 \\\n\\end{bmatrix}\n$$", "A = np.array([\n [ 1, 2, -1],\n [ 2, 1, -2],\n [-3, 1, 1]\n])\n\nL, U = LU_factorization(A)\n\nprint('L = ')\nprint(L)\n\nprint()\n\nprint('U = ')\nprint(U)", "Example\nSolve system\n$$\n\\large\n\\begin{bmatrix}\n1 & 1 \\ \n3 & -4 \\ \n\\end{bmatrix}\n\\begin{bmatrix}\nx_1 \\ \nx_2 \\ \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n3 \\ \n2 \\ \n\\end{bmatrix}\n$$\n, using the LU factorization", "A = np.array([\n [ 1, 1],\n [ 3, -4]\n])\n\nb = np.array([3, 2]).reshape(2, 1)\n\nL, U = LU_factorization(A)\n\n# calculate Lc = b where Ux = c\nmat = np.hstack((L, b))\nc = naive_gaussian_elimination(mat)[1].reshape(2, 1)\n\n# calculate Ux = c\nmat = np.hstack((U, c))\nx = naive_gaussian_elimination(mat)[1].reshape(2, 1)\n\n# output the result\nprint('x1 = {}, x2 = {}'.format(x[0][0], x[1][0]))", "Example\nSolve system\n\\begin{matrix}\nx + 2y - z = 3 & \\ \n2x + y - 2z = 3 & \\ \n-3x + y + z = -6 \n\\end{matrix}\nusing the LU factorization", "A = np.array([\n [ 1, 2, -1],\n [ 2, 1, -2],\n [-3, 1, 1]\n])\n\nb = np.array([3, 3, -6]).reshape(3, 1)\n\nL, U = LU_factorization(A)\n\n# calculate Lc = b where Ux = c\nmat = np.hstack((L, b))\nc = naive_gaussian_elimination(mat)[1].reshape(3, 1)\n\n# calculate Ux = c\nmat = np.hstack((U, c))\nx = naive_gaussian_elimination(mat)[1].reshape(3, 1)\n\n# output the 
result\nprint('x1 = {}, x2 = {}, x3 = {}'.format(x[0][0], x[1][0], x[2][0]))", "Additional Examples\n\nSolve\n\n$$\n\\large\n\\begin{bmatrix}\n2 & 4 & -2 \\ \n1 & -2 & 1 \\ \n4 & -4 & 8 \\\n\\end{bmatrix}\n\\begin{bmatrix}\nx_1 \\ \nx_2 \\ \nx_3 \\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n6 \\ \n3 \\ \n0 \\\n\\end{bmatrix}\n$$\nusing the A = LU factorization", "A = np.array([\n [ 2, 4, -2],\n [ 1, -2, 1],\n [ 4, -4, 8]\n])\n\nb = np.array([6, 3, 0]).reshape(3, 1)\n\nL, U = LU_factorization(A)\n\n# calculate Lc = b where Ux = c\nmat = np.hstack((L, b))\nc = naive_gaussian_elimination(mat)[1].reshape(3, 1)\n\n# calculate Ux = c\nmat = np.hstack((U, c))\nx = naive_gaussian_elimination(mat)[1].reshape(3, 1)\n\n# output the result\nprint('x1 = {}, x2 = {}, x3 = {}'.format(x[0][0], x[1][0], x[2][0]))", "2.2 Computer Problems\n\nUse the code fragments for Gaussian elimination in the previous section to write a MATLAB script to take a matrix A as input and output L and U. No row exchanges are allowed - the program should be designed to shut down if it encounters a zero pivot. Check your program by factoring the matrices in Exercise 2.\n\nSee my implementation LU_factorization in python.", "# Exercise 2 - (a)\nA = np.array([\n [ 3, 1, 2],\n [ 6, 3, 4],\n [ 3, 1, 5]\n])\n\nL, U = LU_factorization(A)\n\nprint('L = ')\nprint(L)\n\nprint()\n\nprint('U = ')\nprint(U)\n\n# Exercise 2 - (b)\nA = np.array([\n [ 4, 2, 0],\n [ 4, 4, 2],\n [ 2, 2, 3]\n])\n\nL, U = LU_factorization(A)\n\nprint('L = ')\nprint(L)\n\nprint()\n\nprint('U = ')\nprint(U)\n\n# Exercise 2 - (c)\nA = np.array([\n [ 1, -1, 1, 2],\n [ 0, 2, 1, 0],\n [ 1, 3, 4, 4],\n [ 0, 2, 1, -1]\n])\n\nL, U = LU_factorization(A)\n\nprint('L = ')\nprint(L)\n\nprint()\n\nprint('U = ')\nprint(U)", "Add two-step back substitution to your script from Computer Problem 1, and use it to solve the systems in Exercise 4.", "def LU_factorization_with_back_substitution(A, b):\n \"\"\"\n LU decomposition with two-step back substitution\n where Ax = b\n \n Arguments:\n A : coefficient matrix\n b : constant vector\n \n Return:\n x : solution vector\n \"\"\"\n L, U = LU_factorization(A)\n\n # row size\n rowsz = b.size\n \n # calculate Lc = b where Ux = c\n matrix = np.hstack((L, b))\n c = naive_gaussian_elimination(matrix)[1].reshape(rowsz, 1)\n\n # calculate Ux = c\n matrix = np.hstack((U, c))\n x = naive_gaussian_elimination(matrix)[1].reshape(rowsz)\n \n return x\n\n# Exercise 4 - (a)\nA = np.array([\n [ 3, 1, 2],\n [ 6, 3, 4],\n [ 3, 1, 5]\n])\n\nb = np.array([0, 1, 3]).reshape(3, 1)\n\nx = LU_factorization_with_back_substitution(A, b)\n\nprint(x)\n\n# Exercise 4 - (b)\nA = np.array([\n [ 4, 2, 0],\n [ 4, 4, 2],\n [ 2, 2, 3]\n])\n\nb = np.array([2, 4, 6]).reshape(3, 1)\n\nx = LU_factorization_with_back_substitution(A, b)\n\nprint(x)", "2.3 Sources Of Error\nDEFINITION 2.3\nThe in๏ฌnity norm, or maximum norm, of the vector $x = (x_1, \\cdots, x_n)$ is $||x||_{\\infty} = \\text{max}|x_i|, i = 1,\\cdots,n$, that is, the maximum of the absolute values of the components of x.\nDEFINITION 2.4\nLet $x_a$ be an approximate solution of the linear system $Ax = b$. The residual is the vector $r = b - Ax_a$. 
The backward error is the norm of the residual $||b - Ax_a||_{\\infty}$,\nand the forward error is $||x - x_a||_{\\infty}$.\nExample\nFind the backward and forward errors for the approximate solution $x_a = [1, 1]$ of the system\n$$\n\\large\n\\begin{bmatrix}\n1 & 1 \\ \n3 & -4 \\ \n\\end{bmatrix}\n\\begin{bmatrix}\nx_1 \\ \nx_2 \\ \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n3 \\ \n2 \\ \n\\end{bmatrix}\n$$", "A = np.array([\n    [ 1, 1],\n    [ 3, -4]\n])\n\nb = np.array([3, 2])\n\nxa = np.array([1, 1])\n\n# Get correct solution\nsystem = sympy.Matrix(((1, 1, 3), (3, -4, 2)))\nsolver = sympy.solve_linear_system(system, sympy.abc.x, sympy.abc.y)\n\n# Packed as list\nx = np.array([solver[sympy.abc.x].evalf(), solver[sympy.abc.y].evalf()])\n\n# Output\nprint(x)\n\n# Get backward error (differences in the input)\nresidual = b - np.matmul(A, xa)\nbackward_error = np.max(np.abs(residual))\nprint('backward error is {:f}'.format(backward_error))\n\n# Get forward error (differences in the output)\nforward_error = np.max(np.abs(x - xa))\nprint('forward error is {:f}'.format(forward_error))", "Example\nFind the forward and backward errors for the approximate solution [-1, 3.0001] of the system\n$$\n\\large\n\\begin{align}\nx_1 + x_2 &= 2 \\ \n1.0001 x_1 + x_2 &= 2.0001 \\\n\\end{align}\n$$", "A = np.array([\n    [ 1, 1],\n    [ 1.0001, 1],\n])\n\nb = np.array([2, 2.0001])\n\n# approximated solution\nxa = np.array([-1, 3.0001])\n\n# correct solution\nx = LU_factorization_with_back_substitution(A, b.reshape(2, 1))\n\n# Get backward error \nresidual = b - np.matmul(A, xa)\nbackward_error = np.max(np.abs(residual))\nprint('backward error is {:f}'.format(backward_error))\n\n# Get forward error \nforward_error = np.max(np.abs(x - xa))\nprint('forward error is {:f}'.format(forward_error))", "The relative backward error of system $Ax = b$ is defined to be $\\large \\frac{||r||_{\\infty}}{||b||_{\\infty}}$.\nThe relative forward error is $\\large \\frac{||x - x_a||_{\\infty}}{||x||_{\\infty}}$.\nThe error magnification factor for $Ax = b$ is the ratio of the two, or $\\large \\text{error magnification factor} = \\frac{\\text{relative forward error}}{\\text{relative backward error}} = \\frac{\\frac{||x - x_a||_{\\infty}}{||x||_{\\infty}}}{\\frac{||r||_{\\infty}}{||b||_{\\infty}}}$\nDEFINITION 2.5\nThe condition number of a square matrix A, cond(A), is the maximum possible error magnification factor for solving Ax = b, over all right-hand sides b.\nThe matrix norm of an n x n matrix A is defined as \n$$\n\\large ||A||_{\\infty} = \\text{maximum absolute row sum}\n$$", "def matrix_norm(A):\n    rowsum = np.sum(np.abs(A), axis = 1)\n    return np.max(rowsum)", "THEOREM 2.6\nThe condition number of the n x n matrix A is\n$$\n\\large cond(A) = ||A|| \\cdot ||A^{-1}||\n$$", "def condition_number(A):\n    inv_A = np.linalg.inv(A)\n    cond = matrix_norm(A) * matrix_norm(inv_A)\n    return cond", "Additional Examples\n\nFind the determinant and the condition number (in the infinity norm) of the matrix\n\n$$\n\\large\n\\begin{bmatrix}\n811802 & 810901 \\ \n810901 & 810001 \\ \n\\end{bmatrix}\n$$", "A = np.array([\n    [ 811802, 810901],\n    [ 810901, 810001],\n])\n\nprint('determinant of A is {}'.format(scipy.linalg.det(A)))\nprint('condition number : {:.4e}'.format(condition_number(A)))", "The solution of the system\n\n$$\n\\large\n\\begin{bmatrix}\n2 & 4.01 \\ \n3 & 6 \\ \n\\end{bmatrix}\n\\begin{bmatrix}\nx_1 \\ \nx_2 \\ \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n6.01 \\ \n9 \\ \n\\end{bmatrix}\n$$\nis $[1, 1]$\n(a) Find the relative forward and backward errors and error 
magni๏ฌcation (in the in๏ฌnity norm) for the approximate solution [21,-9].", "A = np.array([\n [ 2, 4.01],\n [ 3, 6.00],\n])\n\nb = np.array([6.01, 9])\n\n# approximated solution\nxa = np.array([21, -9])\n\n# correct solution\nx = LU_factorization_with_back_substitution(A, b.reshape(2, 1))\n\n# forward error\nforward_error = np.max(np.abs(x - xa))\n\n# relative forward error\nrelative_forward_error = forward_error / np.max(np.abs(x))\n\n# backward error\nbackward_error = np.max(np.abs(b - np.matmul(A, xa)))\n\n# relative backward error\nrelative_backward_error = backward_error / np.max(np.abs(b))\n\n# error magnification factor\nerror_magnification_factor = relative_forward_error / relative_backward_error\n\nprint('relative forward error : {}'.format(relative_forward_error))\nprint('relative backward error : {}'.format(relative_backward_error))\nprint('error magnification factor : {}'.format(error_magnification_factor))", "(b) Find the condition number of the coef๏ฌcient matrix.", "A = np.array([\n [ 2, 4.01],\n [ 3, 6.00],\n])\n\nprint('condition number : {}'.format(condition_number(A)))", "2.3 Computer Problems\n\nFor the n x n matrix with entries $A_{ij} = 5 / (i + 2j - 1)$, set $x = [1,\\cdots,1]^T$ and $b = Ax$. Use the MATLAB program from Computer Problem 2.1.1 or MATLABโ€™s backslash command to compute $x_c$, the double precision computed solution. Find the in๏ฌnity norm of the forward error and the error magni๏ฌcation factor of the problem $Ax = b$, and compare it with the condition number of A: (a) n = 6 (b) n = 10.", "def system_provider(n, data_generator):\n A = np.zeros([n, n])\n x = np.ones(n)\n \n for i in range(n):\n for j in range(n):\n A[i, j] = data_generator(i + 1, j + 1)\n \n b = np.matmul(A, x)\n \n return A, x, b\n\ndef problem_2_3_1_generic_solver(n, data_generator):\n A, x, b = system_provider(n, data_generator)\n xc = np.linalg.solve(A, b)\n \n # forward error\n forward_error = np.max(np.abs(x - xc))\n \n # relative forward error\n relative_forward_error = forward_error / np.max(np.abs(x))\n \n # backward error\n backward_error = np.max(np.abs(b - np.matmul(A, xc)))\n \n # relative backward error\n relative_backward_error = backward_error / np.max(np.abs(b))\n \n # error magnification factor\n error_magnification_factor = relative_forward_error / relative_backward_error\n \n # condition number\n condA = condition_number(A)\n \n return forward_error, error_magnification_factor, condA\n\ndef problem_2_3_1_solver(n):\n return problem_2_3_1_generic_solver(n, lambda i, j : 5 / (i + 2 * j - 1))\n\n# (a) n = 6\nprint('(a) n = 6, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(6)))\n\n# (b) n = 10\nprint('(b) n = 10, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(10)))", "Carry out Computer Problem 1 for the matrix with entries $A_{ij} = 1/(|i - j| + 1)$.", "def problem_2_3_2_solver(n):\n return problem_2_3_1_generic_solver(n, lambda i, j : 1 / (np.abs(i - j) + 1))\n\n# (a) n = 6\nprint('(a) n = 6, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_2_solver(6)))\n\n# (b) n = 10\nprint('(b) n = 10, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_2_solver(10)))", "Let A be the n x n matrix with entries $A_{ij} = |i - j| + 1$. De๏ฌne $x = [1,\\cdots,1]^T$ and $b = Ax$. 
For n = 100,200,300,400, and 500, use the MATLAB program from Computer Problem 2.1.1 or MATLABโ€™s backslash command to compute $x_c$, the double precision computed solution. Calculate the in๏ฌnity norm of the forward error for each solution. Find the ๏ฌve error magni๏ฌcation factors of the problems $Ax = b$, and compare with the corresponding condition numbers.", "def problem_2_3_3_solver(n):\n return problem_2_3_1_generic_solver(n, lambda i, j : np.abs(i - j) + 1)\n\n# n = 100\nprint('n = 100, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(100)))\n# n = 200\nprint('n = 200, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(200)))\n# n = 300\nprint('n = 300, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(300)))\n# n = 400\nprint('n = 400, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(400)))\n# n = 500\nprint('n = 500, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(500)))", "Carry out the steps of Computer Problem 3 for the matrix with entries $A_{ij} = \\sqrt{(i - j)^2 + n / 10}$.", "def problem_2_3_4_solver(n):\n return problem_2_3_1_generic_solver(n, lambda i, j : np.sqrt(np.power(i - j, 2) + n / 10))\n\n# n = 100\nprint('n = 100, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(100)))\n# n = 200\nprint('n = 200, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(200)))\n# n = 300\nprint('n = 300, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(300)))\n# n = 400\nprint('n = 400, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(400)))\n# n = 500\nprint('n = 500, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(500)))", "For what values of n does the solution in Computer Problem 1 have no correct signi๏ฌcant digits?", "print('n = 11, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(11)))", "2.4 The PA=LU Factorization\nExample\nApply Gaussian elimination with partial pivoting to solve the system\n\\begin{matrix}\n x_1 - x_2 + 3x_3 = -3 & \\ \n-1x_1 - 2x_3 = 1 & \\ \n 2x_1 + 2x_2 + 4x_3 = 0 \n\\end{matrix}", "A = np.array([1, -1, 3, -1, 0, -2, 2, 2, 4]).reshape(3, 3)\nb = np.array([-3, 1, 0])\nlu, piv = linalg.lu_factor(A)\nx = linalg.lu_solve([lu, piv], b)\nprint(x)", "Example\nSolve the system $2x_1 + 3x_2 = 4$,$3x_1 + 2x_2 = 1$ using the PA = LU factorization with partial pivoting", "\"\"\"\n[[2, 3]\n [3, 2]]\n\"\"\"\nA = np.array([2, 3, 3, 2]).reshape(2, 2)\nb = np.array([4, 1])\nlu, piv = linalg.lu_factor(A)\nx = linalg.lu_solve([lu, piv], b)\nprint(x)", "2.5 Iterative Methods\nJacobi Method", "def jacobi_method(A, b, x0, k):\n \"\"\"\n Use jacobi method to solve equations\n \n Args:\n A (numpy 2d array): the matrix\n b (numpy 1d array): the right hand side vector\n x0 (numpy 1d array): initial guess\n k (real number): iterations\n \n Return:\n The approximate solution\n \n Exceptions:\n ValueError\n The size of matrix's column is not equal 
to the size of vector's size\n \"\"\"\n if A.shape[1] is not x0.shape[0] :\n raise ValueError('The size of the columns of matrix A must be equal to the size of the x0')\n \n D = np.diag(A.diagonal())\n inv_D = linalg.inv(D) \n LU = A - D\n xk = x0\n \n for _ in range(k):\n xk = np.matmul(b - np.matmul(LU, xk), inv_D)\n \n return xk", "Example\nApply the Jacobi Method to the system $3u + v = 5$, $u + 2v = 5$", "A = np.array([3, 1, 1, 2]).reshape(2, 2)\nb = np.array([5, 5])\nx = jacobi_method(A, b, np.array([0, 0]), 20)\nprint('x = %s' %x)", "Gauss-Seidel Method", "def gauss_seidel_method(A, b, x0, k):\n \"\"\"\n Use gauss seidel method to solve equations\n \n Args:\n A (numpy 2d array): the matrix\n b (numpy 1d array): the right hand side vector\n x0 (numpy 1d array): initial guess\n k (real number): iterations\n \n Return:\n The approximate solution\n \n Exceptions:\n ValueError\n The size of matrix's column is not equal to the size of vector's size\n \"\"\"\n if A.shape[1] is not x0.shape[0] :\n raise ValueError('The size of the columns of matrix A must be equal to the size of the x0')\n \n D = np.diag(A.diagonal())\n L = np.tril(A) - D\n U = np.triu(A) - D\n inv_LD = linalg.inv(L + D)\n xk = x0\n \n for _ in range(k):\n xk = np.matmul(inv_LD, -np.matmul(U, xk) + b)\n \n return xk", "Example\nApply the Gauss-Seidel Method to the system\n$$\n\\begin{bmatrix}\n3 & 1 & -1 \\ \n2 & 4 & 1 \\ \n-1 & 2 & 5\n\\end{bmatrix}\n\\begin{bmatrix}\nu \\ \nv \\ \nw\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n4 \\ \n1 \\ \n1\n\\end{bmatrix}\n$$", "A = np.array([3, 1, -1, 2, 4, 1, -1, 2, 5]).reshape(3, 3)\nb = np.array([4, 1, 1])\nx0 = np.array([0, 0, 0])\ngauss_seidel_method(A, b, x0, 24)", "Successive Over-Relaxation", "def gauss_seidel_sor_method(A, b, w, x0, k):\n \"\"\"\n Use gauss seidel method with sor to solve equations\n \n Args:\n A (numpy 2d array): the matrix\n b (numpy 1d array): the right hand side vector\n w (real number): weight\n x0 (numpy 1d array): initial guess\n k (real number): iterations\n \n Return:\n The approximate solution\n \n Exceptions:\n ValueError\n The size of matrix's column is not equal to the size of vector's size\n \"\"\"\n if A.shape[1] is not x0.shape[0] :\n raise ValueError('The size of the columns of matrix A must be equal to the size of the x0')\n \n D = np.diag(A.diagonal())\n L = np.tril(A) - D\n U = np.triu(A) - D\n inv_LD = linalg.inv(w * L + D)\n xk = x0\n \n for _ in range(k):\n xk = np.matmul(w * inv_LD, b) + np.matmul(inv_LD, (1 - w) * np.matmul(D, xk) - w * np.matmul(U, xk))\n \n return xk", "Example\nApply the Gauss-Seidel Method with sor to the system\n$$\n\\begin{bmatrix}\n3 & 1 & -1 \\ \n2 & 4 & 1 \\ \n-1 & 2 & 5\n\\end{bmatrix}\n\\begin{bmatrix}\nu \\ \nv \\ \nw\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n4 \\ \n1 \\ \n1\n\\end{bmatrix}\n$$", "A = np.array([3, 1, -1, 2, 4, 1, -1, 2, 5]).reshape(3, 3)\nb = np.array([4, 1, 1])\nx0 = np.array([0, 0, 0])\nw = 1.25\ngauss_seidel_sor_method(A, b, w, x0, 14)", "2.6 Methods for symmetric positive-definite matrices\nCholesky factorization\nExample\nFind the Cholesky factorization of \n$\\begin{bmatrix}\n4 & -2 & 2 \\ \n-2 & 2 & -4 \\ \n2 & -4 & 11\n\\end{bmatrix}$", "A = np.array([4, -2, 2, -2, 2, -4, 2, -4, 11]).reshape(3, 3)\nR = linalg.cholesky(A)\nprint(R)", "Conjugate Gradient Method", "def conjugate_gradient_method(A, b, x0, k):\n \"\"\"\n Use conjugate gradient to solve linear equations\n \n Args:\n A : input matrix\n b : input right hand side vector\n x0 : initial guess\n k : iteration\n \n Returns:\n approximate 
solution\n    \n    \n    \"\"\"\n    xk = x0\n    dk = rk = b - np.matmul(A, x0)\n    for _ in range(k):\n        if not np.any(rk) or all(abs(i) <= 1e-16 for i in rk):\n            break\n        ak = float(np.matmul(rk.T, rk)) / float(np.matmul(dk.T, np.matmul(A, dk)))\n        xk = xk + ak * dk\n        rk1 = rk - ak * np.matmul(A, dk)\n        bk = np.matmul(rk1.T, rk1) / np.matmul(rk.T, rk)\n        dk = rk1 + bk * dk\n        rk = rk1\n    return xk", "Example\nSolve \n$$\n\\begin{bmatrix}\n2 & 2 \\ \n2 & 5 \\\n\\end{bmatrix}\n\\begin{bmatrix}\nu \\ \nv\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n6 \\ \n3 \n\\end{bmatrix}\n$$\nusing the Conjugate Gradient Method", "A = np.array([2, 2, 2, 5]).reshape(2, 2)\nb = np.array([6, 3])\nx0 = np.array([0, 0])\nconjugate_gradient_method(A, b, x0, 2)", "Example\nSolve \n$$\n\\begin{bmatrix}\n1 & -1 & 0 \\\n-1 & 2 & 1 \\\n0 & 1 & 2 \\\n\\end{bmatrix}\n\\begin{bmatrix}\nu \\\nv \\\nw \\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0 \\\n2 \\\n3 \\\n\\end{bmatrix}\n$$", "A = np.array([1, -1, 0, -1, 2, 1, 0, 1, 2]).reshape(3, 3)\nb = np.array([0, 2, 3])\nx0 = np.array([0, 0, 0])\nconjugate_gradient_method(A, b, x0, 10)", "Example\nSolve \n$$\n\\begin{bmatrix}\n1 & -1 & 0 \\\n-1 & 2 & 1 \\\n0 & 1 & 5 \\\n\\end{bmatrix}\n\\begin{bmatrix}\nu \\\nv \\\nw \\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n3 \\\n-3 \\\n4 \\\n\\end{bmatrix}\n$$", "A = np.array([1, -1, 0, -1, 2, 1, 0, 1, 5]).reshape(3, 3)\nb = np.array([3, -3, 4])\nx0 = np.array([0, 0, 0])\nx = slinalg.cg(A, b, x0)[0]\nprint('x = %s' %x )", "Preconditioning\n2.7 Nonlinear Systems Of Equations\nMultivariate Newton's Method", "def multivariate_newton_method(fA, fDA, x0, k):\n    \"\"\"\n    Args:\n        fA (function handle) : vector-valued function F(x) whose root is sought\n        fDA (function handle) : Jacobian matrix of F evaluated at the given arguments\n        x0 (numpy 2d array) : initial guess\n        k (real number) : number of iterations\n    \n    Return:\n        Approximate solution xk after k iterations\n    \"\"\"\n    xk = x0\n    for _ in range(k):\n        lu, piv = linalg.lu_factor(fDA(*xk))\n        s = linalg.lu_solve([lu, piv], -fA(*xk))\n        xk = xk + s\n    return xk", "Example\nUse Newton's method with starting guess $(1,2)$ to find a solution of the system\n$$\nv - u^3 = 0 \\\nu^2 + v^2 - 1 = 0\n$$", "fA = lambda u,v : np.array([v - pow(u, 3), pow(u, 2) + pow(v, 2) - 1], dtype=np.float64)\nfDA = lambda u,v : np.array([-3 * pow(u, 2), 1, 2 * u, 2 * v], dtype=np.float64).reshape(2, 2)\nx0 = np.array([1, 2])\nmultivariate_newton_method(fA, fDA, x0, 10)", "Example\nUse Newton's method to find the solutions of the system\n$$\nf_1(u,v) = 6u^3 + uv - 3v^3 - 4 = 0 \\\nf_2(u,v) = u^2 - 18uv^2 + 16v^3 + 1 = 0\n$$", "fA = lambda u,v : np.array([6 * pow(u, 3) + u * v - 3 * pow(v, 3) - 4,\n                            pow(u, 2) - 18 * u * pow(v, 2) + 16 * pow(v, 3) + 1], \n                           dtype=np.float64)\n\nfDA = lambda u,v : np.array([18 * pow(u, 2) + v, \n                             u - 9 * pow(v, 2), \n                             2 * u - 18 * pow(v, 2), \n                             -36 * u * v + 48 * pow(v, 2)], \n                            dtype=np.float64).reshape(2, 2)\n\nx0 = np.array([2, 2], dtype=np.float64)\n\nmultivariate_newton_method(fA, fDA, x0, 5)", "MIT License\nCopyright (c) Jim00000\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all\ncopies or 
substantial portions of the Software." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jeicher/cobrapy
documentation_builder/solvers.ipynb
lgpl-2.1
[ "Solver Interface\nEach cobrapy solver must expose the following API. The solvers will all have their own distinct LP object types, but each can be manipulated by these functions. This API can be used directly when implementing algorithms efficiently on linear programs because it has 2 primary benefits:\n\n\nAvoid the overhead of creating and destroying LP's for each operation\n\n\nMany solver objects preserve the basis between subsequent LP's, making each subsequent LP solve faster\n\n\nWe will walk through the API with the cglpk solver, which links the cobrapy solver API with GLPK's C API.", "import cobra.test\n\nmodel = cobra.test.create_test_model(\"textbook\")\nsolver = cobra.solvers.cglpk", "Attributes and functions\nEach solver has some attributes:\nsolver_name\nThe name of the solver. This is the name which will be used to select the solver in cobrapy functions.", "solver.solver_name\n\nmodel.optimize(solver=\"cglpk\")", "_SUPPORTS_MILP\nThe presence of this attribute tells cobrapy that the solver supports mixed-integer linear programming", "solver._SUPPORTS_MILP", "solve\nModel.optimize is a wrapper for each solver's solve function. It takes in a cobra model and returns a solution", "solver.solve(model)", "create_problem\nThis creates the LP object for the solver.", "lp = solver.create_problem(model, objective_sense=\"maximize\")\nlp", "solve_problem\nSolve the LP object and return the solution status", "solver.solve_problem(lp)", "format_solution\nExtract a cobra.Solution object from a solved LP object", "solver.format_solution(lp, model)", "get_objective_value\nExtract the objective value from a solved LP object", "solver.get_objective_value(lp)", "get_status\nGet the solution status of a solved LP object", "solver.get_status(lp)", "change_variable_objective\nChange the objective coefficient of a reaction at a particular index. This does not change any of the other objectives which have already been set. This example will double and then revert the biomass coefficient.", "model.reactions.index(\"Biomass_Ecoli_core\")\n\nsolver.change_variable_objective(lp, 12, 2)\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)\n\nsolver.change_variable_objective(lp, 12, 1)\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)", "change_variable_bounds\nChange the lower and upper bounds of a reaction at a particular index. This example will set the lower bound of the biomass to an infeasible value, then revert it.", "solver.change_variable_bounds(lp, 12, 1000, 1000)\nsolver.solve_problem(lp)\n\nsolver.change_variable_bounds(lp, 12, 0, 1000)\nsolver.solve_problem(lp)", "change_coefficient\nChange a coefficient in the stoichiometric matrix. In this example, we will set the entry for ATP in the ATPM reaction to an infeasible value, then reset it.", "model.metabolites.index(\"atp_c\")\n\nmodel.reactions.index(\"ATPM\")\n\nsolver.change_coefficient(lp, 16, 10, -10)\nsolver.solve_problem(lp)\n\nsolver.change_coefficient(lp, 16, 10, -1)\nsolver.solve_problem(lp)", "set_parameter\nSet a solver parameter. Each solver will have its own particular set of unique parameters. However, some have unified names. 
For example, all solvers should accept \"tolerance_feasibility.\"", "solver.set_parameter(lp, \"tolerance_feasibility\", 1e-9)\n\nsolver.set_parameter(lp, \"objective_sense\", \"minimize\")\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)\n\nsolver.set_parameter(lp, \"objective_sense\", \"maximize\")\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)", "Example with FVA\nConsider flux variability analysis (FVA), which requires maximizing and minimizing every reaction with the original biomass value fixed at its optimal value. If we used the cobra Model API in a naive implementation, we would do the following:", "%%time\n# work on a copy of the model so the original is not changed\nm = model.copy()\n\n# set the lower bound on the objective to be the optimal value\nf = m.optimize().f\nfor objective_reaction, coefficient in m.objective.items():\n    objective_reaction.lower_bound = coefficient * f\n\n# now maximize and minimize every reaction to find its bounds\nfva_result = {}\nfor r in m.reactions:\n    m.change_objective(r)\n    fva_result[r.id] = {\n        \"maximum\": m.optimize(objective_sense=\"maximize\").f,\n        \"minimum\": m.optimize(objective_sense=\"minimize\").f\n    }", "Instead, we could use the solver API to do this more efficiently. This is roughly how cobrapy implements FVA. It keeps using the same LP object and repeatedly maximizes and minimizes it. This allows the solver to preserve the basis, and is much faster. The speed increase is even more noticeable the larger the model gets.", "%%time\n# create the LP object\nlp = solver.create_problem(model)\n\n# set the lower bound on the objective to be the optimal value\nsolver.solve_problem(lp)\nf = solver.get_objective_value(lp)\nfor objective_reaction, coefficient in model.objective.items():\n    objective_index = model.reactions.index(objective_reaction)\n    # old objective is no longer the objective\n    solver.change_variable_objective(lp, objective_index, 0.)\n    solver.change_variable_bounds(\n        lp, objective_index, f * coefficient,\n        objective_reaction.upper_bound)\n\n# now maximize and minimize every reaction to find its bounds\nfva_result = {}\nfor index, r in enumerate(model.reactions):\n    solver.change_variable_objective(lp, index, 1.)\n    result = {}\n    solver.solve_problem(lp, objective_sense=\"maximize\")\n    result[\"maximum\"] = solver.get_objective_value(lp)\n    solver.solve_problem(lp, objective_sense=\"minimize\")\n    result[\"minimum\"] = solver.get_objective_value(lp)\n    solver.change_variable_objective(lp, index, 0.)\n    fva_result[r.id] = result" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kfollette/ASTR200-Spring2017
Labs/Lab9/Lab 9.ipynb
mit
[ "Names: [Insert Your Names Here]\nLab 9 - Data Investigation 1 (Week 1) - Educational Research Data\nLab 9 Contents\n\nBackground Information\nIntro to the Second Half of the Class\nIntro to Dataset 1: The Quantitative Reasoning for College Science Assessment\nInvestigating Tabular Data with Pandas\nReading in and Cleaning Data\nThe describe() Method\nComputing Descriptive Statistics\nCreating Statistical Graphics\nSelecting a Subset of Data\nTesting Differences Between Datasets\nComputing Confidence Intervals\nVisualizing Differences with Overlapping Plots\nData Investigation 1 - Week 2 Instructions", "#various things that we will need\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport scipy.stats as st", "1. Background Information\n1.1 Introduction to the Second Half of the Class\nThe remainder of this course will be divided into three two-week modules, each dealing with a different dataset. During the first week of each module, you will complete a (two class) lab in which you are introduced to the dataset and various techniques that you need to use to explore it.\nAt the end of Week 1, you and your lab partner will write a brief (1 paragraph) proposal to Professor Follette detailing an investigation that you would like to complete using that dataset in Week 2. You and your partner will complete this investigation and write it up as your lab the following week. Detailed instructions for submitting your proposal are at the end of this lab. Detailed instructions for the lab writeups will be provided next week. \n1.2. Introduction to the QuaRCS Dataset\nThe Quantitative Reasoning for College Science (QuaRCS) assessment is an assessment instrument that Professor Follette has been administering in general education science classes across the country since 2012. It consists of 25 quantitative questions involving \"real world\" mathematical skills plus 24 attitudinal and demographic questions. It has been administered to more than 5000 students at eleven institutions. You will be reading the published results of this study for class on Thursday, and exploring the data in class this week and next. \nA description of all of the variables (pandas dataframe columns) in the QuaRCS dataset and what each numerical answer choice \"stands for\" is in the file QuaRCS_descriptions.pdf. \n2. Investigating Tabular Data with Pandas\n2.1 Reading In and Cleaning Data", "# these set the pandas defaults so that it will print ALL values, even for very long lists and large dataframes\npd.set_option('display.max_columns', None)\npd.set_option('display.max_rows', None)", "Read in the QuaRCS data as a pandas dataframe called \"data\".", "data=pd.read_csv('AST200_data_anonymized.csv', encoding=\"ISO-8859-1\")\nmask = np.where(data == 999)\ndata = data.replace(999,np.nan)", "Once a dataset has been read in as a pandas dataframe, several useful built-in pandas methods are made available to us. Recall that you call methods with data.method. Check out each of the following", "# the * is a trick to print without the ...s for an ordinary python object\nprint(*data.columns)\n\ndata.dtypes", "2.2 The describe() method\nThere are also a whole bunch of built-in functions that can operate on a pandas dataframe that become available once you've defined it. To see a full list type data. in an empty frame and then hit tab. \nAn especially useful one is the dataframe.describe() method, which creates a summary table with some common statistics for all of the columns in the dataframe. 
\nIn our case here there are a number of NaNs in our table (cases where an answer was left blank), and the describe method ignores them for mean, standard deviation (std), min and max. However, there is a known bug in the pandas module that causes NaNs to break the quartiles in the describe method, so these will always be NaN for any column that has a NaN anywhere in it, rendering them mostly useless here. Still, this is a nice quick way to get descriptive statistics for a table.", "data.describe()", "2.3. Computing Descriptive Statistics\nYou can also of course compute descriptive statistics for columns in a pandas dataframe individually. Examples of each one applied to a single column (student scores on the assessment, PRE_SCORE) are shown below.", "np.mean(data[\"PRE_SCORE\"])\n\n#or\ndata[\"PRE_SCORE\"].mean()\n\nnp.nanmedian(data[\"PRE_SCORE\"])\n\n#or\ndata[\"PRE_SCORE\"].median()\n\ndata[\"PRE_SCORE\"].max()\n\ndata[\"PRE_SCORE\"].min()\n\ndata[\"PRE_SCORE\"].mode() \n#where the first number is the index (should be zero unless the column has multiple dimensions)\n# and the second number is the mode\n#not super useful for continuous variables; for example, if you put in a continuous variable (like ZPR_1) it won't\n#return anything because there are no repeat values\n\n#perhaps equally useful is the value_counts method, which will tell you how many times each value appears in the column\ndata[\"PRE_SCORE\"].value_counts()\n\n#and to count all of the non-null values\ndata[\"PRE_SCORE\"].count()\n\n#different generally from len(dataframe[\"column name\"]) because len will count NaNs\n# but the Score column has no NaNs, so change this cell and the one before it to use \n#a column that does have NaNs to verify\nlen(data[\"PRE_SCORE\"])\n\n#standard deviation\ndata[\"PRE_SCORE\"].std()\n\n#variance\ndata[\"PRE_SCORE\"].var()\n\n#verify relationship between variance and standard deviation\nnp.sqrt(data[\"PRE_SCORE\"].var())\n\n#quantiles\ndata[\"PRE_SCORE\"].quantile(0.5) # should return the median!\n\ndata[\"PRE_SCORE\"].quantile(0.25)\n\ndata[\"PRE_SCORE\"].quantile(0.75)\n\n#interquartile range\ndata[\"PRE_SCORE\"].quantile(0.75)-data[\"PRE_SCORE\"].quantile(0.25)\n\ndata[\"PRE_SCORE\"].skew()\n\ndata[\"PRE_SCORE\"].kurtosis()", "<div class=hw>\n### Exercise 1\n------------------\n\nChoose one categorical (answer to any demographic or attitudinal question) and one continuous variable (e.g. PRE_TIME, ZPR_1) and compute all of the statistics from the list above ***in one code cell*** (use print statements) for each variable. Write a paragraph describing all of the statistics that are informative for that variable in words. An example is given below for PRE_SCORE. Because score is numerical ***and*** discrete, all of the statistics above are informative. In your two cases, fewer statistics will be informative, so your explanations may be shorter, though you should challenge yourselves to go beyond merely reporting the statistics, and should interpret them as well, as below. \n\n*QuaRCS score can take discrete integer values between 0 and 25. The minimum score for this dataset is 1 and the maximum is 25. There are 2,777 valid entries for score in this QuaRCS dataset, for which the mean is 13.9 and the median is 14 (both 56\\% of the maximum score). These are very close together, suggesting a reasonably centrally-concentrated score distribution, and the low skewness value of 0.1 supports this. 
The kurtosis of the distribution is negative (platykurtic), which tells us that the distribution of scores is flat rather than peaky. The most common score (\"mode\") is 10, with 197 (~7%) of participants getting this score, however all score values from 7-21 have counts of greater than 100, supporting the flat nature of the distribution suggested by the negative kurtosis. The interquartile range (25-75 percentiles) is 8 points, and the standard deviation is 5.3. These represent a large fraction (20 and 32\\%) of the entire available score range, respectively, making the distribution quite wide.\n\n*Your description of categorical distribution here*\n\n*Your description of continuous distribution here*", "#your code computing all descriptive statistics for your categorical variable here\n\n#your code computing all descriptive statistics for your categorical variable here", "2.4. Creating Statistical Graphics\n<div class=hw>\n### Exercise 2 - Summary plots for distributions\n\n*Warning: Although you will be using QuaRCS data to investigate and experiment with each type of plot below, when you write up your descriptions, they should refer to the **general properties** of the plots, and not to the QuaRCS data specifically. In other words, your descriptions should be general descriptions of the plot types that could be applied to any dataset.*\n\n### 2a - Histogram\nThe syntax for creating a histogram for a pandas dataframe column is: \n\ndataframe[\"Column Name\"].hist(bins=nbins)\n\nPlay around with the column name and bins and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful. \n\nPlay around with inputs (e.g. column name) until you find a case (dataframe column) where you think the histogram tells you something important and use it as an example to inform your answer. Inputs that do not produce informative histograms should also help to inform your answer. Save a couple of representative histograms (good and bad, use plt.savefig(\"figure name\")) and integrate them into your written (markdown) explanation to support your argument.", "#this cell is for playing around with histograms", "Your explanation here, with figures\n<div class=hw>\n### 2b - Box plot\n\nThe syntax for creating a box plot for a pair of pandas dataframe columns is: \n\ndataframe.boxplot(column=\"column name 1\", by=\"column name 2\")\n\nPlay around with the column and by variables and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful. \n\nPlay around with inputs (e.g. column names) until you find a case that you think is well-described by a box and whisker plot and use it as an example to inform your answer. Inputs that do not produce informative box plots should also help to inform your answer. 
Save a couple of representative box plots (good and bad) and integrate them into your written explanation.", "#your sample boxplot code here", "Your explanation here\n<div class=hw>\n### 2c - Pie Chart\n\nThe format for making the kind of pie chart that might be useful in this context is as follows: \nnewdataframe = dataframe[\"column name\"].value_counts() \nnewdataframe.plot.pie(figsize=(6,6))\n\nPlay around with the column and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful. In your explanation here, focus on how a pie chart compares to a histogram, and when you think one or the other might be useful.\n\nPlay around with inputs (e.g. column names) until you find a case that you think is well-described by a pie chart and use it as an example to inform your answer. Inputs that do not produce informative pie charts should also help to inform your answer. Save a couple of representative pie charts (good and bad) and integrate them into your written explanation.", "#your sample pie chart code here", "Your explanation here\n<div class=hw>\n### 2d - Scatter Plot\nThe syntax for creating a scatter plot is: \n\ndataframe.plot.scatter(x='column name',y='column name')\n\nPlay around with the column and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful.\n\nPlay around with inputs (e.g. column names) until you find a case that you think is well-described by a scatter plot and use it as an example to inform your answer. Inputs that do not produce informative scatter plots should also help to inform your answer. Save a couple of representative scatter plots (good and bad) and integrate them into your written explanation.", "#your sample scatter plot code here", "Your explanation here\n2.5. Selecting a Subset of Data\n<div class=hw>\n### Exercise 3\n--------------\n\nWrite a function called \"filter\" that takes a dataframe, column name, and value for that column as input and returns a new dataframe containing only those rows where column name = value. For example filter(data, \"PRE_GENDER\", 1) should return a dataframe about half the size of the original dataframe where all values in the PRE_GENDER column are 1.", "#your function here\n\n#your tests here", "If you get to this point during lab time on Tuesday, stop here\n3. Testing Differences Between Datasets\n3.1 Computing Confidence Intervals\nNow that we have a mechanism for filtering the dataset, we can test differences between groups with confidence intervals. The syntax for computing the confidence interval on a mean for a given variable is as follows. \nvariable1 = st.t.interval(conf_level,n,loc=np.nanmean(variable2), scale=st.sem(variable2))\nwhere conf_level is the confidence level you wish to calculate (e.g. 0.95 is 95% confidence, 0.98 is 98%, etc.)\nn is the number of samples and should generally be set to the number of valid entries in variable2 -1. 
\nAn example can be found below.", "## apply filter to select only men from data, and pull the scores from this group into a variable\ndf2=filter(data,'PRE_GENDER',1)\nmen_scores=df2['PRE_SCORE']\n\n#compute 95% confidence intervals on the mean (low and high)\nmen_conf=st.t.interval(0.95, len(men_scores)-1, loc=np.mean(men_scores), scale=st.sem(men_scores))\nmen_conf ", "<div class=hw>\n### Exercise 4\n------------------\n\nChoose a categorical variable (any demographic or attitudinal variable) that you find interesting and that has at least four possible values and calculate the confidence intervals on the mean score for each group. Then write a paragraph describing the results. Are the differences between the groups significant according to your data? Would they still be significant if you were to compute the 98% (3-sigma) confidence intervals?", "#code to filter data and compute confidence intervals for each answer choice", "explanatory text\n3.2 Visualizing Differences with Overlapping Plots\n<div class=hw>\n### Exercise 5 \n---------------\n\nMake another dataframe consisting only of students who \"devoted effort\" to the assessment, meaning their answer for PRE_EFFORT was EITHER a 4 or a 5 (you may have to modify your filter function to accept more than one value for \"value\").\n\nMake overlapping histograms showing (a) scores for the entire student population and (b) scores for this \"high effort\" subset. The \"alpha\" keyword inside the plot commands will set the transparency of your histogram so that you can see both. Play around with it until it looks good. Make sure your chart includes a legend, and describe what conclusions you can draw from the result in a paragraph below the final chart.", "#modified filter function here\n\n#define your new high effort dataframe using the filter\n\n#plot two overlapping histograms", "explanatory text here\n4. Data Investigation - Week 2 Instructions\nNow that you are familiar with the QuaRCS dataset, you and your partner must come up with an investigation that you would like to complete using this data. For the next two modules, this will be more open, but for this first investigation, I will suggest the following three options, of which each group will need to pick one (we will divide in class):\n\nDesign visualizations that compare student attitudes pre and post-semester\nDesign visualizations that compare student skills (by topical area) pre and post semester\nDesign visualizations that compare students' awareness of their own skills pre and post semester\n\nBefore 5pm next Monday evening (3/27), you must send Professor Follette a brief e-mail (that you write together, one e-mail per group) describing a plan for how you will approach the problem you've been assigned. What do you need to know that you don't know already? What kind of plots will you make and what kinds of statistics will you compute? What is your first thought for what your final data representations will look like (histograms? box and whisker plots? overlapping plots or side by side?).", "from IPython.core.display import HTML\ndef css_styling():\n    styles = open(\"../custom.css\", \"r\").read()\n    return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
weikang9009/pysal
notebooks/explore/spaghetti/Spaghetti_Pointpatterns_Empirical.ipynb
bsd-3-clause
[ "$SPA$tial $G$rap$H$s: n$ET$works, $T$opology, & $I$nference\nTutorial for pysal.spaghetti: Working with point patterns: empirical observations\nJames D. Gaboardi [&#106;&#103;&#97;&#98;&#111;&#97;&#114;&#100;&#105;&#64;&#102;&#115;&#117;&#46;&#101;&#100;&#117;]\n\nInstantiating a pysal.spaghetti.Network\nAllocating observations to a network\nsnapping\n\n\nVisualizing original and snapped locations\nvisualization with geopandas and matplotlib", "import os\nlast_modified = None\nif os.name == \"posix\":\n last_modified = !stat -f\\\n \"# This notebook was last updated: %Sm\"\\\n Spaghetti_Pointpatterns_Empirical.ipynb\nelif os.name == \"nt\":\n last_modified = !for %a in (Spaghetti_Pointpatterns_Empirical.ipynb)\\\n do echo # This notebook was last updated: %~ta\n \nif last_modified:\n get_ipython().set_next_input(last_modified[-1])\n\n# This notebook was last updated: Dec 9 14:23:58 2018", "", "from pysal.explore import spaghetti as spgh\nfrom pysal.lib import examples\nimport geopandas as gpd\nimport matplotlib.pyplot as plt\nimport matplotlib.lines as mlines\nfrom shapely.geometry import Point, LineString\n\n%matplotlib inline\n\n__author__ = \"James Gaboardi <[email protected]>\"", "1. Instantiating a pysal.spaghetti.Network\nInstantiate the network from .shp file", "ntw = spgh.Network(in_data=examples.get_path('streets.shp'))", "2. Allocating observations to a network\nSnap point patterns to the network", "# Crimes with attributes\nntw.snapobservations(examples.get_path('crimes.shp'),\n 'crimes',\n attribute=True)\n\n# Schools without attributes\nntw.snapobservations(examples.get_path('schools.shp'),\n 'schools',\n attribute=False)", "3. Visualizing original and snapped locations\nTrue and snapped school locations", "schools_df = spgh.element_as_gdf(ntw,\n pp_name='schools',\n snapped=False)\n\nsnapped_schools_df = spgh.element_as_gdf(ntw,\n pp_name='schools',\n snapped=True)", "True and snapped crime locations", "crimes_df = spgh.element_as_gdf(ntw,\n pp_name='crimes',\n snapped=False)\n\nsnapped_crimes_df = spgh.element_as_gdf(ntw,\n pp_name='crimes',\n snapped=True)", "Create geopandas.GeoDataFrame objects of the vertices and arcs", "# network nodes and edges\nvertices_df, arcs_df = spgh.element_as_gdf(ntw,\n vertices=True,\n arcs=True)", "Plotting geopandas.GeoDataFrame objects", "# legend patches\narcs = mlines.Line2D([], [], color='k', label='Network Arcs', alpha=.5)\nvtxs = mlines.Line2D([], [], color='k', linewidth=0, markersize=2.5,\n marker='o', label='Network Vertices', alpha=1)\nschl = mlines.Line2D([], [], color='k', linewidth=0, markersize=25,\n marker='X', label='School Locations', alpha=1)\nsnp_schl = mlines.Line2D([], [], color='k', linewidth=0, markersize=12,\n marker='o', label='Snapped Schools', alpha=1)\ncrme = mlines.Line2D([], [], color='r', linewidth=0, markersize=7,\n marker='x', label='Crime Locations', alpha=.75)\nsnp_crme = mlines.Line2D([], [], color='r', linewidth=0, markersize=3,\n marker='o', label='Snapped Crimes', alpha=.75)\n\npatches = [arcs, vtxs, schl, snp_schl, crme, snp_crme]\n\n# plot figure\nbase = arcs_df.plot(color='k', alpha=.25, figsize=(12,12), zorder=0)\nvertices_df.plot(ax=base, color='k', markersize=5, alpha=1)\n\ncrimes_df.plot(ax=base, color='r', marker='x',\n markersize=50, alpha=.5, zorder=1)\nsnapped_crimes_df.plot(ax=base, color='r',\n markersize=20, alpha=.5, zorder=1)\n\nschools_df.plot(ax=base, cmap='tab20', column='id', marker='X',\n markersize=500, alpha=.5, zorder=2)\nsnapped_schools_df.plot(ax=base,cmap='tab20', 
column='id',\n markersize=200, alpha=.5, zorder=2)\n\n# add legend\nplt.legend(handles=patches, fancybox=True, framealpha=0.8,\n scatterpoints=1, fontsize=\"xx-large\", bbox_to_anchor=(1.04, .6))", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jldinh/multicell
examples/06 - Growth and divisions.ipynb
mit
[ "Preparation", "%matplotlib notebook", "Imports", "import multicell\nimport numpy as np", "Problem definition\nSimulation and tissue structure", "sim = multicell.simulation_builder.generate_cell_grid_sim(20, 20, 1, 1e-3)", "Tissue growth\nWe first enable growth and specify the number of growth steps to be applied over the duration of the simulation. Growth steps are spaced evenly.\nNote: this should not be used in conjunction with set_time_steps, as the two settings would otherwise conflict.", "sim.enable_growth(n_steps=11)", "We then register the growth method we would like to apply. In this case, it is linear_growth, which requires a coefficient parameter specifying the scaling to be applied at each time step, along each axis.", "sim.register_growth_method(multicell.growth.linear_growth, {\"coefficient\": [1.1, 1.05, 1.]})", "Cell divisions\nWe first enable cell divisions and register the method we would like to use. In this case, we use a method called symmetrical_division, which divides a cell through its centroid, perpendicularly to its longest axis.", "sim.enable_division()\nsim.register_division_method(multicell.division.symmetrical_division)", "We also register the division trigger, which is used to check if a cell needs to be divided. Here, it is a volume-related trigger, which requires a threshold.", "sim.register_division_trigger(multicell.division.volume_trigger, {\"volume_threshold\": 2.})", "Rendering", "sim.register_renderer(multicell.rendering.MatplotlibRenderer, None, {\"view_size\": 60, \"view\": (90, -90), \"axes\": False})\n", "Visualization of the initial state", "sim.renderer.display()", "Simulation\nAs the tissue grows, it maintains its rectangular shape. Cells grow in a uniform manner (they all grow by the same amount) and all divide at the same time when they reach the volume threshold.", "sim.simulate()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
barakovic/COMMIT
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
gpl-3.0
[ "You can find the text version of this tutorial at this link.\nAdvanced solvers\nThis tutorial shows how to exploit the advanced features of the COMMIT framework from the side of the optimisation problem. The general formulation is the following:\n\\begin{equation}\nx^* = \\arg\\min_{x\\in R^n_+} \\frac12 \\|Ax-y\\|_2^2 + \\lambda_{IC}\\Omega_{IC}(x) + \\lambda_{EC}\\Omega_{EC}(x) + \\lambda_{ISO}\\Omega_{ISO}(x),\n\\end{equation}\nwhere $A$ is the COMMIT dictionary, $n$ is defined in such a way that the product $Ax$ makes sense and $y$ is the datum that we want to fit. The three regularisation terms allow us to exploit distinct penalties for each compartment.\nNote: before exploring this tutorial, you should follow the Getting Started tutorial.\nDownload and unpack the data\nDownload and extract the example dataset from the following ZIP archive, which contains the following files:\n\nDWI.nii: a diffusion MRI dataset with 100 measurements distributed on 2 shells, respectively at b=700 s/mm^2 and b=2000 s/mm^2;\nDWI.scheme: its corresponding acquisition scheme;\npeaks.nii.gz: main diffusion orientations estimated with CSD;\nfibers.trk: tractogram with about 280K fibers estimated using a streamline-based algorithm;\nWM.nii.gz: white-matter mask extracted from an anatomical T1w image.\n\n<span style=\"color:crimson\">Make sure that your working directory is the folder where you unzipped the downloaded archive.</span>", "path_to_the_directory_with_the_unzipped_archive = '.' # edit this\ncd path_to_the_directory_with_the_unzipped_archive", "Load the usual COMMIT structure", "from commit import trk2dictionary\n\ntrk2dictionary.run(\n    filename_trk = 'LausanneTwoShell/fibers.trk',\n    path_out = 'LausanneTwoShell/CommitOutput',\n    filename_peaks = 'LausanneTwoShell/peaks.nii.gz',\n    filename_mask = 'LausanneTwoShell/WM.nii.gz',\n    fiber_shift = 0.5,\n    peaks_use_affine = True\n)\n\nimport commit\nmit = commit.Evaluation( '.', 'LausanneTwoShell' )\nmit.load_data( 'DWI.nii', 'DWI.scheme' )\n\nmit.set_model( 'StickZeppelinBall' )\n\nd_par = 1.7E-3 # Parallel diffusivity [mm^2/s]\nICVFs = [ 0.7 ] # Intra-cellular volume fraction(s) [0..1]\nd_ISOs = [ 1.7E-3, 3.0E-3 ] # Isotropic diffusivitie(s) [mm^2/s]\n\nmit.model.set( d_par, ICVFs, d_ISOs )\nmit.generate_kernels( regenerate=True )\nmit.load_kernels()\n\nmit.load_dictionary( 'CommitOutput' )\nmit.set_threads()\nmit.build_operator()", "Perform clustering of the streamlines\nYou will need dipy, which is among the requirements of COMMIT, hence there should be no problem.\nThe threshold parameter has to be tuned for each brain. Do not consider our choice as a standard one.", "from nibabel import trackvis as tv\nfname='LausanneTwoShell/fibers.trk'\nstreams, hdr = tv.read(fname)\nstreamlines = [i[0] for i in streams]\n\nfrom dipy.segment.clustering import QuickBundles\nthreshold = 15.0\nqb = QuickBundles(threshold=threshold)\nclusters = qb.cluster(streamlines)\n\nimport numpy as np\nstructureIC = np.array([c.indices for c in clusters])\nweightsIC = np.array([1.0/np.sqrt(len(c)) for c in structureIC])", "Notice that we defined structureIC as a numpy.array that contains a list of lists containing the indices associated to each group. We know it sounds a little bit bizarre but it is computationally convenient.\nDefine the regularisation term\nEach compartment must be regularised separately. 
The user can choose among the following penalties:\n\n\n$\\sum_{g\\in G}w_g\\|x_g\\|_k$ : commit.solvers.group_sparsity with $k\\in {2, \\infty}$ (only for IC compartment)\n\n\n$\\|x\\|_1$ : commit.solvers.norm1\n\n\n$\\|x\\|_2$ : commit.solvers.norm2\n\n\n$\\iota_{\\ge 0}(x)$ : commit.solvers.non_negative (Default for all compartments)\n\n\nIf the chosen regularisation for the IC compartment is $\\sum_{g\\in G}\\|x_g\\|_k$, we can define $k$ via the group_norm field, which must be one between\n\n\n$\\|x\\|_2$ : commit.solvers.norm2 (Default)\n\n\n$\\|x\\|_\\infty$ : commit.solvers.norminf\n\n\nIn this example we consider the following penalties:\n\n\nIntracellular: group sparsity with 2-norm of each group\n\n\nExtracellular: 2-norm\n\n\nIsotropic: 1-norm", "regnorms = [commit.solvers.group_sparsity, commit.solvers.norm2, commit.solvers.norm1]\n\ngroup_norm = 2 # each group is penalised with its 2-norm", "The regularisation parameters are specified within the lambdas field. Again, do not consider our choice as a standard one.", "lambdas = [10.,10.,10.]", "Call the constructor of the data structure", "regterm = commit.solvers.init_regularisation(mit,\n regnorms = regnorms,\n structureIC = structureIC,\n weightsIC = weightsIC,\n group_norm = group_norm,\n lambdas = lambdas)", "Call the fit function to perform the optimisation", "mit.fit(regularisation=regterm, max_iter=1000)", "Save the results", "suffix = 'IC'+str(regterm[0])+'EC'+str(regterm[1])+'ISO'+str(regterm[2])\nmit.save_results(path_suffix=suffix)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
darkomen/TFG
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
cc0-1.0
[ "Anรกlisis de los datos obtenidos\nProducciรณn del dรญa 20 de Julio de 2015\nLos datos del experimento:\n* Hora de inicio: 16:08\n* Hora final : 16:35 \n* $T: 150ยบC$\n* $V_{min} tractora: 1 mm/s$\n* $V_{max} tractora: 3 mm/s$\nSe desea comprobar si el filamento que podemos llegar a extruir con el sistema de la tractora puede llegar a ser bueno como para regularlo.", "%pylab inline\n#Importamos las librerรญas utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerรญas\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('datos.csv')\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X','Diametro Y','VELOCIDAD']\n\n#Mostramos un resumen de los datos obtenidoss\ndatos[columns].describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]", "Representamos ambos diรกmetro y la velocidad de la tractora en la misma grรกfica", "#datos.ix[:, \"Diametro X\":\"Diametro Y\"].plot(secondary_y=['VELOCIDAD'],figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')\ndatos[columns].plot(secondary_y=['VELOCIDAD'],figsize=(10,5),title='Modelo matemรกtico del sistema').hlines([1.6 ,1.8],0,2000,colors='r')\n\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\ndatos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Con esta segunda aproximaciรณn se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como cuarta aproximaciรณn, vamos a modificar las velocidades de tracciรณn. El rango de velocidades propuesto es de 1.5 a 5.3, manteniendo los incrementos del sistema experto como en el actual ensayo.\nComparativa de Diametro X frente a Diametro Y para ver el ratio del filamento", "plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')", "Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.", "datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Representaciรณn de X/Y", "plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')", "Analizamos datos del ratio", "ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n#ย plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))", "Lรญmites de calidad\nCalculamos el nรบmero de veces que traspasamos unos lรญmites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$", "Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
barbagroup/JITcode-MechE
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
mit
[ "Version 0.1 -- February 2014\nJITcode 1, lesson 1\nThis is lesson 1 of the first Just-in-Time (JIT) module for teaching computing to engineers, in context. The first module lays the foundations for building computational skills. It is not meant to support a particular engineering course, so it can be used by freshman students. The context problems should be interesting to any science-minded student.\nLesson 1 builds competency on these basic skills:\n\nreading data from a file in comma-separated format (CSV)\nplotting data\nanalyzing data with statistics\nwriting an image of a plot to a file\n\nContext โ€” Earth temperature over time\nIs global temperature rising? How much? This is a question of burning importance in today's world!\nData about global temperatures are available from several sources: NASA, the National Climatic Data Center (NCDC) and the University of East Anglia in the UK. Check out the University Corporation for Atmospheric Research (UCAR) for an in-depth discussion.\nThe NASA Goddard Space Flight Center is one of our sources of global climate data. They produced this video showing a color map of the changing global surface temperature anomalies from 1880 to 2011.\nThe term global temperature anomaly means the difference in temperature with respect to a reference value or a long-term average. It is a very useful way of looking at the problem and in many ways better than absolute temperature. For example, a winter month may be colder than average in Washington DC, and also in Miami, but the absolute temperatures will be different in both places.", "from IPython.display import YouTubeVideo\nYouTubeVideo('lyb4gau3LyI')", "How would we go about understanding the trends from the data on global temperature?\nThe first step in analyzing unknown data is to generate some simple plots. We are going to look at the temperature-anomaly history, contained in a file, and make our first plot to explore this data. \nWe are going to smooth the data and then we'll fit a line to it to find a trend, plotting along the way to see how it all looks.\nLet's get started!\nThe first thing to do is to load our favorite library: the NumPy library for array operations.", "import numpy", "Make sure you have studied the introduction to JITcode in Python to know a bit about this library and why we need it.\nStep 1: Read a data file\nThe data is contained in the file:\nGlobalTemperatureAnomaly-1958-2008.csv\nwith the year on the first column and 12 monthly averages of temperature anomaly listed sequentially on the second column. We will read the file, then make an initial plot to see what it looks like.\nTo load the file, we use a function from the NumPy library called loadtxt(). To tell Python where to look for this function, we precede the function name with the library name, and use a dot between the two names. This is how it works:", "numpy.loadtxt(fname='./resources/GlobalTemperatureAnomaly-1958-2008.csv', delimiter=',')", "Note that we called the function with two parameters: the file name and path, and the delimiter that separates each value on a line (a comma). Both parameters are strings (made up of characters) and we put them in single quotes.\nAs the output of the function, we get an array. Because it's rather big, Python shows only a few rows and columns of the array. \nSo far, so good. Now, what if we want to manipulate this data? Or plot it? We need to refer to it with a name. We've only just read the file, but we did not assign the array any name! 
Let's try again.", "T=numpy.loadtxt(fname='./resources/GlobalTemperatureAnomaly-1958-2008.csv', delimiter=',')", "That's interesting. Now, we don't see any output from the function call. Why? It's simply that the output was stored into the variable T, so to see it, we can do:", "print(T)", "Ah, there it is! Let's find out how big the array is. For that, we use a cool NumPy function called shape():", "numpy.shape(T)", "Again, we've told Python where to find the function shape() by attaching it to the library name with a dot. However, NumPy arrays also happen to have a property shape that will return the same value, so we can get the same result another way:", "T.shape", "It's just shorter. The array T holding our temperature-anomaly data has two columns and 612 rows. Since we said we had monthly data, how many years is that?", "612/12", "That's right: from 1958 through 2008.\nStep 2: Plot the data\nWe will display the data in two ways: as a time series of the monthly temperature anomalies versus time, and as a histogram. To be fancy, we'll put both plots in one figure. \nLet's first load our plotting library, called matplotlib. To get the plots inside the notebook (rather than as popups), we use a special command, %matplotlib inline:", "from matplotlib import pyplot\n%matplotlib inline", "What's this from business about? matplotlib is a pretty big (and awesome!) library. All that we need is a subset of the library for creating 2D plots, so we ask for the pyplot module of the matplotlib library. \nPlotting the time series of temperature is as easy as calling the function plot() from the module pyplot. \nBut remember the shape of T? It has two columns and the temperature-anomaly values are in the second column. We extract the values of the second column by specifying 1 as the second index (the first column has index 0) and using the colon notation : to mean all rows. Check it out:", "pyplot.plot(T[:,1])", "You can add a semicolon at the end of the plotting command to avoid that stuff that appeared on top of the figure, that Out[x]: [&lt; ...&gt;] ugliness. Try it.\nDo you see a trend in the data?\nThe plot above is certainly useful, but wouldn't it be nicer if we could look at the data relative to the year, instead of the location of the data in the array?\nThe plot function can take another input; let's get the year displayed as well.", "pyplot.plot(T[:,0],T[:,1]);", "The temperature anomaly certainly seems to show an increasing trend. But we're not going to stop there, of course. It's not that easy to convince people that the planet is warming, as you know.\nPlotting a histogram is as easy as calling the function hist(). Why should it be any harder?", "pyplot.hist(T[:,1]);", "What does this plot tell you about the data? It's more interesting than just an increasing trend, that's for sure. You might want to look at more statistics now: mean, median, standard deviation ... NumPy makes that easy for you:", "meanT = numpy.mean(T[:,1])\nmedianT = numpy.median(T[:,1])\nprint( meanT, medianT)", "You can control several parameters of the hist() plot. Learn more by reading the manual page (yes, you have to read the manual sometimes!). The first option is the number of binsโ€”the default is 10โ€”but you can also change the appearance (color, transparency). Try some things out.", "pyplot.hist(T[:,1], 20, normed=1, facecolor='g', alpha=0.55);", "This is fun. Finally, we'll put both plots on the same figure using the subplot() function, which creates a grid of plots. 
The argument tells this function how many rows and columns of sub-plots we want, and where in the grid each plot will go.\nTo help you see what each plotting command is doing, we added comments, which in Python follow the # symbol.", "pyplot.figure(figsize=(12,4)) # the size of the figure area\npyplot.subplot(121) # creates a grid of 1 row, 2 columns and selects the first plot\npyplot.plot(T[:,0],T[:,1],'g') # our time series, but now green\npyplot.xlim(1958,2008) # set the x-axis limits\npyplot.subplot(122) # prepares for the second plot\npyplot.hist(T[:,1], 20, normed=1, facecolor='g', alpha=0.55);", "Step 3: Smooth the data and do regression\nYou see a lot of fluctuations on the time series, so you might be asking yourself \"How can I smooth it out?\" No? Let's do it anyway.\nOne possible approach to smooth the data (there are others) is using a moving average, also known as a sliding-window average. This is defined as:\n$$\\hat{x}{i,n} = \\frac{1}{n} \\sum{j=1}^{n} x_{i-j}$$\nThe only parameter to the moving average is the value $n$. As you can see, the moving average smooths the set of data points by creating a new data set consisting of local averages (of the $n$ previous data points) at each point in the new set.\nA moving average is technically a convolution, and luckily NumPy has a built-in function for that, convolve(). We use it like this:", "N = 12\nwindow = numpy.ones(N)/N\nsmooth = numpy.convolve(T[:,1], window, 'same')\npyplot.figure(figsize=(10, 4))\npyplot.plot(T[:,0], smooth, 'r')\npyplot.xlim(1958,2008);", "Did you notice the function ones()? It creates an array filled with ... you guessed it: ones!\nWe use a window of 12 data points, meaning that the plot shows the average temperature over the last 12 months. Looking at the plot, we can still see a trend, but the range of values is smaller. Let's plot the original time series together with the smoothed version:", "pyplot.figure(figsize=(10, 4))\npyplot.plot(T[:,0], T[:,1], 'g', linewidth=1) # we specify the line width here ...\npyplot.plot(T[:,0], smooth, 'r', linewidth=2) # making the smoothed data a thicker line\npyplot.xlim(1958, 2008);", "That is interesting! The smoothed data follows the trend nicely but has much less noise. Well, that is what filtering data is all about. \nLet's now fit a straight line through the temperature-anomaly data, to see the trends. We need to perform a least-squares linear regression to find the slope and intercept of a line \n$$y = mx+b$$\nthat fits our data. Thankfully, Python and NumPy are here to help with the polyfit() function. The function takes three arguments: the two array variables $x$ and $y$, and the order of the polynomial for the fit (in this case, 1 for linear regression).", "year = T[:,0] # it's time to use a more friendly name for column 1 of our data\nm,b = numpy.polyfit(year, T[:,1], 1)\npyplot.figure(figsize=(10, 4))\npyplot.plot(year, T[:,1], 'g', linewidth=1)\npyplot.plot(year, m * year + b, 'k--', linewidth=2)\npyplot.xlim(1958, 2008);", "There is more than one way to do this. Another of the favorite Python libraries is SciPy, and it has a linregress(x,y) function that will work as well. 
But let's not get carried away.\nStep 4: Checking for auto-correlation in the data\nWe won't go into details, but you will learn more about all this if you take a course on experimental methodsโ€”for example, at GW, the Mechanical and Aerospace Engineering department offers \"Methods of Engineering Experimentation\" (MAE-3120).\nThe fact is that in time series (like global temperature anomaly, stock values, etc.), the fluctuations in the data are not random: adjacent data points are not independent. We say that there is auto-correlation in the data.\nThe problem with auto-correlation is that various techniques in statistical analysis rely on the assumption that scatter (or error) is random. If you apply these techniques willy-nilly, you can get false trends, overestimate uncertainties or exaggerate the goodness of a fit. All bad things!\nFor the global temperature anomaly, this discussion is crucial: many critics claim that since there is auto-correlation in the data, no reliable trends can be obtained\nAs a well-educated engineering student who cares about the planet, you will appreciate this: we can estimate the trend for the global temperature anomalies taking into account that the data points are not independent. We just need to use more advanced techniques of data analysis.\nTo finish off this lesson, your first in data analysis with Python, we'll put all our nice plots in one figure frame, and add the residual. Because the residual is not random \"white\" noise, you can conclude that there is auto-correlation in this time series.\nFinally, we'll save the plot to an image file using the savefig() command of Pyplotโ€”this will be useful to you when you have to prepare reports for your engineering courses!", "pyplot.figure(figsize=(10, 8)) # the size of the figure area\npyplot.subplot(311) # creates a grid of 3 columns, 1 row and place the first plot\npyplot.plot(year, T[:,1], 'g', linewidth=1) # we specify the line width here ...\npyplot.plot(year, smooth, 'r', linewidth=2) # making the smoothed data a thicker line\npyplot.xlim(1958, 2008)\npyplot.subplot(312)\npyplot.plot(year, T[:,1], 'g', linewidth=1)\npyplot.plot(year, m * year + b, 'k--', linewidth=2)\npyplot.xlim(1958, 2008)\npyplot.subplot(313)\npyplot.plot(year, T[:,1] - m * year + b, 'o', linewidth=2)\npyplot.xlim(1958, 2008)\npyplot.savefig(\"TemperatureAnomaly.png\")", "Step 5: Generating useful output\nHere, we'll use our linear fit to project the temperature into the future. We'll also save some image files that we could later add to a document or report based on our findings. First, let's create an expectation of the temperature difference up to the year 2100.", "spacing = (2008 + 11 / 12 - 1958) / 612\nlength = (2100 - 1958) / spacing\nlength = int(length) #we'll need an integer for the length of our array\nyears = numpy.linspace(1958, 2100, num = length)\ntemp = m * years + b#use our linear regression to estimate future temperature change\npyplot.figure(figsize=(10, 4))\npyplot.plot(years, temp)\npyplot.xlim(1958, 2100)\nout=(years, temp) #create a tuple out of years and temperature we can output\nout = numpy.array(out).T #form an array and transpose it", "Ok, that estimation looks reasonable. Let's save the data that describes it back to a .csv file, like the one we originally imported.", "numpy.savetxt('./resources/GlobalTemperatureEstimate-1958-2100.csv', out, delimiter=\",\")", "Now, lets make a nicer picture that we can show to back up some of our information. 
We can plot the linear regression as well as the original data and then save the figure.", "pyplot.figure(figsize = (10, 4))\npyplot.plot(year, T[:,1], 'g')\npyplot.plot(years, temp, 'k--')\npyplot.xlim(1958, 2100)\npyplot.savefig('./resources/GlobalTempPlot.png')", "Nice! Now we've got some stuff that we could use in a report, or show to someone unfamiliar with coding. Remember to play with our settings; I'm sure you could get an even nicer-looking plot if you try!\nDig Deeper & Think\n\nHow is the global temperature anomaly calculated?\nWhat does it mean and why is it employed instead of the global mean temperature to quantify global warming?\nWhy is it important to check that the residuals are independent and random when performing linear regression?\nIn this particular case, is it possible to still estimate a trend with confidence?\nWhat is your best estimate of the global temperature by the end of the 22nd century?\n\nWhat did we learn?\nYou should have played around with the embedded code in this notebook, and also written your own version of all the code in a separate Python script to learn:\n\nhow to read data from a comma-separated file\nhow to plot the data\nhow to do some basic analysis on the data\nhow to write to a file", "from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amueller/nyu_ml_lectures
Unsupervised Transformers.ipynb
bsd-2-clause
[ "%matplotlib nbagg\nimport matplotlib.pyplot as plt\nimport numpy as np", "<img src=\"figures/unsupervised_workflow.svg\" width=100%>", "from sklearn.datasets import load_digits\nfrom sklearn.cross_validation import train_test_split\nimport numpy as np\nnp.set_printoptions(suppress=True)\n\ndigits = load_digits()\nX, y = digits.data, digits.target\nX_train, X_test, y_train, y_test = train_test_split(X, y)", "Removing mean and scaling variance", "from sklearn.preprocessing import StandardScaler", "1) Instantiate the model", "scaler = StandardScaler()", "2) Fit using only the data.", "scaler.fit(X_train)", "3) transform the data (not predict).", "X_train_scaled = scaler.transform(X_train)\n\nX_train.shape\n\nX_train_scaled.shape", "The transformed version of the data has the mean removed:", "X_train_scaled.mean(axis=0)\n\nX_train_scaled.std(axis=0)\n\nX_test_transformed = scaler.transform(X_test)", "Principal Component Analysis\n0) Import the model", "from sklearn.decomposition import PCA", "1) Instantiate the model", "pca = PCA(n_components=2)", "2) Fit to training data", "pca.fit(X)", "3) Transform to lower-dimensional representation", "print(X.shape)\nX_pca = pca.transform(X)\nX_pca.shape", "Visualize", "plt.figure()\nplt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)\n\npca.components_.shape\n\nplt.matshow(pca.components_[0].reshape(8, 8), cmap=\"gray\")\nplt.colorbar()\nplt.matshow(pca.components_[1].reshape(8, 8), cmap=\"gray\")\nplt.colorbar()", "Manifold Learning", "from sklearn.manifold import Isomap\nisomap = Isomap()\n\nX_isomap = isomap.fit_transform(X)\n\nplt.scatter(X_isomap[:, 0], X_isomap[:, 1], c=y)", "Exercises\n\nVisualize the digits dataset using the TSNE algorithm from the sklearn.manifold module (it runs for a couple of seconds).\nExtract non-negative components from the digits dataset using NMF. Visualize the resulting components. The interface of NMF is identical to the PCA one. What qualitative difference can you find compared to PCA?", "# %load solutions/digits_unsupervised.py\nfrom sklearn.manifold import TSNE\nfrom sklearn.decomposition import NMF\n\n# Compute TSNE embedding\ntsne = TSNE()\nX_tsne = tsne.fit_transform(X)\n\n# Visualize TSNE results\nplt.title(\"All classes\")\nplt.figure()\nplt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)\n\n# build an NMF factorization of the digits dataset\nnmf = NMF(n_components=16).fit(X)\n\n# visualize the components\nfig, axes = plt.subplots(4, 4)\nfor ax, component in zip(axes.ravel(), nmf.components_):\n ax.imshow(component.reshape(8, 8), cmap=\"gray\", interpolation=\"nearest\")\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pelson/python-stratify
INTRO.ipynb
bsd-3-clause
[ "Stratify\nVertical interpolation for numerical weather prediction (NWP) model data\nWhilst this is not the only use for the stratify package, NWP vertical interpolation was the motivating usecase for the package's creation.\nIn its simplest form, vertical interpolation ammounts to a 1d interpolation at each grid-point. Whilst some more sophistication exists for a number of interpolators, stratify can be seen as an optimisation to vectorize these interpolations beyond naรฏve nested for-loops.\nData setup\nIn order to setup the problem, let's manufacture some reasonably realistic NWP data.\nFirst, let's randomly generate some orography (or, if this were an ocean model, bathymetry) that we can use for our model:", "import numpy as np\n\nnx, ny = 6, 3\n\nnp.random.seed(0)\norography = np.random.normal(1000, 600, size=(ny, nx)) - 400\nsea_level_temp = np.random.normal(290, 5, size=(ny, nx))\n\n# Now visualise:\n\nimport matplotlib.pyplot as plt\n\nplt.set_cmap('viridis')\nfig = plt.figure(figsize=(8, 4))\n\nplt.subplot(1, 2, 1)\nplt.pcolormesh(orography)\ncbar = plt.colorbar(orientation='horizontal',\n label='Orography (m)')\n# Reduce the maximum number of ticks to 5.\ncbar.ax.xaxis.get_major_locator().nbins = 5\n\nplt.subplot(1, 2, 2)\nplt.pcolormesh(sea_level_temp)\ncbar = plt.colorbar(orientation='horizontal',\n label='Sea level temperature (K)')\n# Reduce the maximum number of ticks to 5.\ncbar.ax.xaxis.get_major_locator().nbins = 5\n\nplt.show()", "Next, let's define a vertical coordinate system that minimises missing data values, and gives good resolution at the (orographic) surface.\nTo achieve this we invent a scheme where the \"bottom\" of the model closely follows the orography/bathymetry, and as we reach the \"top\" of the model we get levels of approximately constant height.", "nz = 9\n\nmodel_levels = np.arange(nz)\n\nmodel_top = 5000 # m\n\n# The proportion of orographic influence on the model altitude. In this case,\n# we define this as a log progression from full influence to no influence.\nsigma = 1.1 - np.logspace(-1, np.log10(1.1), nz)\n\n# Broadcast sigma so that when we multiply the orography we get a 3D array of z, y, x.\nsigma = sigma[:, np.newaxis, np.newaxis]\n\n# Combine sigma with the orography and model top value to\n# produce 3d (z, y, x) altitude data for our \"model levels\".\naltitude = (orography * sigma) + (model_top * (1 - sigma))", "Our new 3d array now represents altitude (height above sea surface) at each of our \"model levels\".\nLet's look at a cross-section of the data to see how these levels:", "plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],\n color='green', linewidth=2, label='Orography')\n\nplt.plot(np.zeros(nx),\n color='blue', linewidth=1.2,\n label='Sea level')\n\nfor i in range(9):\n plt.plot(altitude[i, 1, :], color='gray', linestyle='--',\n label='Model levels' if i == 0 else None)\n\nplt.ylabel('altitude / m')\nplt.margins(0.1)\nplt.legend()\nplt.show()", "To recap, we now have a model vertical coordinate system that maximises the number grid-point locations close to the orography. In addition, we have a 3d array of \"altitudes\" so that we can relate any phenomenon measured on this grid to useful vertical coordinate information.\nLet's now define the temperature at each of our x, y, z points. 
We use the International Standard Atmosphere lapse rate of $ -6.5\\ ^{\\circ}C\\ /\\ km $ combined with our sea level standard temperature as an appoximate model for our temperature profile.", "lapse = -6.5 / 1000 # degC / m\ntemperature = sea_level_temp + lapse * altitude\n\nfrom matplotlib.colors import LogNorm\n\nfig = plt.figure(figsize=(6, 6))\nnorm = plt.Normalize(vmin=temperature.min(), vmax=temperature.max())\n\nfor i in range(nz):\n plt.subplot(3, 3, i + 1)\n qm = plt.pcolormesh(temperature[i], cmap='viridis', norm=norm)\n\nplt.subplots_adjust(right=0.84, wspace=0.3, hspace=0.3)\ncax = plt.axes([0.85, 0.1, 0.03, 0.8])\nplt.colorbar(cax=cax)\nplt.suptitle('Temperature (K) at each \"model level\"')\n\nplt.show()", "Restratification / vertical interpolation\nOur data is in the form:\n\n1d \"model level\" vertical coordinate (z axis)\n2 x 1d horizontal coordinates (x, y)\n3d \"altitude\" variable (x, y, z)\n3d \"temperature\" variable (x, y, z)\n\nSuppose we now want to change the vertical coordinate system of our variables so that they are on levels of constant altitude, not levels of constant \"model levels\":", "target_altitudes = np.linspace(700, 5500, 5) # m", "If we visualise this, we can see that we need to consider the behaviour for a number of situations, including what should happen when we are sampling below the orography, and when we are above the model top.", "plt.figure(figsize=(7, 5))\nplt.fill_between(np.arange(6), np.zeros(6), orography[1, :],\n color='green', linewidth=2, label='Orography')\n\nfor i in range(9):\n plt.plot(altitude[i, 1, :],\n color='gray', lw=1.2,\n label=None if i > 0 else 'Source levels \\n(model levels)')\nfor i, target in enumerate(target_altitudes):\n plt.plot(np.repeat(target, 6),\n color='gray', linestyle='--', lw=1.4, alpha=0.6,\n label=None if i > 0 else 'Target levels \\n(altitude)')\n\nplt.ylabel('height / m')\nplt.margins(top=0.1)\nplt.legend()\nplt.savefig('summary.png')\nplt.show()", "The default behaviour depends on the scheme, but for linear interpolation we recieve NaNs both below the orography and above the model top:", "import stratify\n\ntarget_nz = 20\ntarget_altitudes = np.linspace(400, 5200, target_nz) # m\n\nnew_temperature = stratify.interpolate(target_altitudes, altitude, temperature,\n axis=0)", "With some work, we can visualise this result to compare a cross-section before and after. In particular this will allow us to see precisely what the interpolator has done at the extremes of our target levels:", "ax1 = plt.subplot(1, 2, 1)\nplt.fill_between(np.arange(6), np.zeros(6), orography[1, :],\n color='green', linewidth=2, label='Orography')\ncs = plt.contourf(np.tile(np.arange(6), nz).reshape(nz, 6),\n altitude[:, 1],\n temperature[:, 1])\nplt.scatter(np.tile(np.arange(6), nz).reshape(nz, 6),\n altitude[:, 1],\n c=temperature[:, 1])\n\nplt.subplot(1, 2, 2, sharey=ax1)\nplt.fill_between(np.arange(6), np.zeros(6), orography[1, :],\n color='green', linewidth=2, label='Orography')\nplt.contourf(np.arange(6), target_altitudes,\n np.ma.masked_invalid(new_temperature[:, 1]),\n cmap=cs.cmap, norm=cs.norm)\nplt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),\n np.repeat(target_altitudes, nx).reshape(target_nz, nx),\n c=new_temperature[:, 1])\nplt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),\n np.repeat(target_altitudes, nx).reshape(target_nz, nx),\n s=np.isnan(new_temperature[:, 1]) * 15, marker='x')\n\nplt.suptitle('Temperature cross-section before and after restratification')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
John-Keating/ThinkStats2
code/chap04ex.ipynb
gpl-3.0
[ "Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>\nAllen Downey\nRead the pregnancy file.", "%matplotlib inline\n\nimport nsfg\npreg = nsfg.ReadFemPreg()", "Select live births, then make a CDF of <tt>totalwgt_lb</tt>.", "import thinkstats2 as ts\n\nlive = preg[preg.outcome == 1]\n\nwgt_cdf = ts.Cdf(live.totalwgt_lb, label = 'weight')", "Display the CDF.", "import thinkplot as tp\n\ntp.Cdf(wgt_cdf, label = 'weight')\ntp.Show()", "Find out how much you weighed at birth, if you can, and compute CDF(x). \nIf you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.\nCompute the percentile rank of your birthweight\nCompute the median birth weight by looking up the value associated with p=0.5.\nCompute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75. \nMake a random selection from <tt>cdf</tt>.\nDraw a random sample from <tt>cdf</tt>.\nDraw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.\nGenerate 1000 random values using <tt>random.random()</tt> and plot their PMF.", "import random\nrandom.random?\n\nimport random\n\nthousand = [random.random() for x in range(1000)]\nthousand_pmf = ts.Pmf(thousand, label = 'rando')\ntp.Pmf(thousand_pmf, linewidth=0.1)\ntp.Show()\n\nt_hist = ts.Hist(thousand)\ntp.Hist(t_hist, label = \"rando\")\ntp.Show()", "Assuming that the PMF doesn't work very well, try plotting the CDF instead.", "thousand_cdf = ts.Cdf(thousand, label='rando')\ntp.Cdf(thousand_cdf)\ntp.Show()\n\nimport scipy.stats\nscipy.stats?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
evgeniiegorov/evgeniiegorov.github.io
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
mit
[ "Seminar 5: Tree and Bootstrap Aggregation\nCourse: MA06018, Machine Learning by professor Evgeny Burnaev <br>\nAuthor: Evgenii Egorov\nTable of contents:\n\nTree\nSimple Stamp\nLimitations of trees\n\n\nBagging\nBootstrap\nRandom Forest", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline", "<a id='tree'></a>\nTree\nLet's play with a toy example and write our own decion-regression stamp. First, consider following toy dataset:", "X_train = np.linspace(0, 1, 100)\nX_test = np.linspace(0, 1, 1000)\n\[email protected]\ndef target(x):\n return x > 0.5\n\nY_train = target(X_train) + np.random.randn(*X_train.shape) * 0.1\nY_test = target(X_test) + np.random.randn(*X_test.shape) * 0.1\n\nplt.figure(figsize = (16, 9));\nplt.scatter(X_train, Y_train, s=50);\nplt.title('Train dataset');\nplt.xlabel('X');\nplt.ylabel('Y');", "<a id='stamp'></a>\nTask 1\nTo define tree (even that simple), we need to define following functions:\n Loss function \nFor regression it can be MSE, MAE .etc We will use MSE\n$$\n\\begin{aligned}\n& y \\in \\mathbb{R}^N \\\n& \\text{MSE}(\\hat{y}, y) = \\dfrac{1}{N}\\|\\hat{y} - y\\|_2^2\n\\end{aligned}\n$$\nNote, that for MSE optimal prediction will be just mean value of the target.\n Gain function \nWe need to select over different splitting by comparing them with their gain value. It is also reasonable to take into account number of points at the area of belongs to the split.\n$$\n\\begin{aligned}\n& R_i := \\text{region i; c = current, l = left, r = right} \\\n& Gain(R_c, R_l, R_r) = Loss(R_c) - \\left(\\frac{|R_l|}{|R_c|}Loss(R_l) + \\frac{|R_r|}{|R_c|}Loss(R_r)\\right)\n\\end{aligned}\n$$\nAlso for efficiency, we should not try all the x values, but just according to the histogram\n<img src=\"stamp.jpg\" alt=\"Stamp Algo\" style=\"height: 700px;\"/>\nAlso don't forget return left and right leaf predictions\nImplement algorithm and please, put your loss rounded to the 3 decimals at the form: https://goo.gl/forms/AshZ8gyirm0Zftz53", "def loss_mse(predict, true):\n return np.mean((predict - true) ** 2)\n\ndef stamp_fit(x, y):\n root_prediction = np.mean(y)\n root_loss = loss_mse(root_prediction, y)\n gain = []\n _, thresholds = np.histogram(x)\n thresholds = thresholds[1:-1]\n for i in thresholds:\n left_predict = np.mean(y[x < i])\n left_weight = np.sum(x < i) / x.shape[0]\n \n right_predict = np.mean(y[x >= i])\n right_weight = np.sum(x >= i) / x.shape[0]\n \n loss = left_weight * loss_mse(left_predict, y[x < i]) + right_weight * loss_mse(right_predict, y[x >= i])\n gain.append(root_loss - loss)\n \n threshold = thresholds[np.argmax(gain)]\n left_predict = np.mean(y[x < threshold])\n right_predict = np.mean(y[x >= threshold])\n \n return threshold, left_predict, right_predict\n\[email protected]\ndef stamp_predict(x, threshold, predict_l, predict_r):\n prediction = predict_l if x < threshold else predict_r\n return prediction\n\npredict_params = stamp_fit(X_train, Y_train)\n\nprediction = stamp_predict(X_test, *predict_params)\n\nloss_mse(prediction, Y_test)\n\nplt.figure(figsize = (16, 9));\nplt.scatter(X_test, Y_test, s=50);\nplt.plot(X_test, prediction, 'r');\nplt.title('Test dataset');\nplt.xlabel('X');\nplt.ylabel('Y');", "<a id='lim'></a>\nLimitations\nNow let's discuss some limitations of decision trees. Consider another toy example. 
Our target is distance between the origin $(0;0)$ and data point $(x_1, x_2)$", "from sklearn.tree import DecisionTreeRegressor\n\ndef get_grid(data):\n x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1\n y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1\n return np.meshgrid(np.arange(x_min, x_max, 0.01),\n np.arange(y_min, y_max, 0.01))\n\ndata_x = np.random.normal(size=(100, 2))\ndata_y = (data_x[:, 0] ** 2 + data_x[:, 1] ** 2) ** 0.5\nplt.figure(figsize=(8, 8));\nplt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring');", "Sensitivity with respect to the subsample\nLet's see how predictions and structure of tree change, if we fit them at the random $90\\%$ subset if the data.", "plt.figure(figsize=(20, 6))\nfor i in range(3):\n clf = DecisionTreeRegressor(random_state=42)\n\n indecies = np.random.randint(data_x.shape[0], size=int(data_x.shape[0] * 0.9))\n clf.fit(data_x[indecies], data_y[indecies])\n xx, yy = get_grid(data_x)\n predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n\n plt.subplot2grid((1, 3), (0, i))\n plt.pcolormesh(xx, yy, predicted, cmap='winter')\n plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='winter', edgecolor='k')\n", "Sensitivity with respect to the hyper parameters", "plt.figure(figsize=(14, 14))\nfor i, max_depth in enumerate([2, 4, None]):\n for j, min_samples_leaf in enumerate([15, 5, 1]):\n clf = DecisionTreeRegressor(max_depth=max_depth, min_samples_leaf=min_samples_leaf)\n clf.fit(data_x, data_y)\n xx, yy = get_grid(data_x)\n predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n \n plt.subplot2grid((3, 3), (i, j))\n plt.pcolormesh(xx, yy, predicted, cmap='spring')\n plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='spring', edgecolor='k')\n plt.title('max_depth=' + str(max_depth) + ', min_samples_leaf: ' + str(min_samples_leaf))", "To overcome this disadvantages, we will consider bagging or bootstrap aggregation \n<a id='bootbag'></a>\nBagging\n<a id='bootbag'></a>\nBootstrap\nUsually, we apply the following approach to the ML problem:\n\nWe have some finite sample $X={x_i}_{i=1}^{N}$, $x_i\\in\\mathbb{R}^{d}$ from unknown complex distirubution $F$\nInference some machine learning algorithm $T = T(x_1,\\dots,x_N)$\n\nHowever, if we want to study statistical proporities of the algorithm, we are in the trouble. For variance:\n$$\n\\mathbb{V}T = \\int_{\\text{range } x}(T(x))^2dF(x) - \\left(\\int_{\\text{range } x}T(x)dF(x)\\right)^2\n$$\n Troubles: \n * We do not have the true distribution $F(x)$ \n * We can not analytically integrate over complex ml-algorithm $T$ as tree, or even median\n Solutions: \n * Model $F(y)$ with emperical density $p_e(y)$:\n$$ p_{e}(y) = \\sum\\limits_{i=1}^{N}\\frac{1}{N}\\delta(y-x_i) $$\n\nEsitemate any integral of the form $\\int f(T(x))dF(x)\\approx \\int f(T(x))dF_{e}(x)$ via Mone-Carlo:\n\n$$\n \\int f(T(x))dF(x)\\approx \\int f(T(x))dF_{e}(x) \\approx \\frac{1}{B}\\sum\\limits_{j=1}^{B}f(T_j),\\text{where } T_j = T(X^j), X^j\\sim F_e\n $$\nNote, that sampling from $p_e(y)$ is just selection with repetition from $X={x_i}_{i=1}^{N}$. So it is the cheap and simple procedure. 
\nLet's play with a model example and estimate the variance of the algorithm:\n$$\n\\begin{aligned}\n& x_i \\in \\mathbb{R}, \\\\\n& T(X) = \\text{median }X.\n\\end{aligned}\n$$\n\nTask 1\nFor this example, let's make simulated data from the Cauchy distribution.", "def median(X):\n    return np.median(X)\n\ndef make_sample_cauchy(n_samples):\n    sample = np.random.standard_cauchy(size=n_samples)\n    return sample\n\nX = make_sample_cauchy(int(1e2))\nplt.hist(X, bins=int(1e1));", "So, our model median will be:", "med = median(X)\nmed", "The exact variance formula for the sample Cauchy median is the following:\n$$\n\\mathbb{V}\\text{med($X_n$)} = \\dfrac{2n!}{(k!)^2\\pi^n}\\int\\limits_{0}^{\\pi/2}x^k(\\pi-x)^k(\\text{cot}x)^2dx\n$$\nSo hard! We will find it by the bootstrap method instead.\nNow, please apply the bootstrap algorithm to calculate its variance. \nFirst, you need to write a bootstrap sampler. This will be useful: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html#numpy.random.choice", "def make_sample_bootstrap(X):\n    size = X.shape[0]\n    idx_range = range(size)\n    new_idx = np.random.choice(idx_range, size, replace=True)\n    return X[new_idx]", "Second, for each of the $K$ bootstrap samples you should estimate its median:\n\nMake K=500 samples\nFor each sample, estimate the median on it\nSave it in the median_boot_samples array", "K = 500\nmedian_boot_samples = []\nfor i in range(K):\n    boot_sample = make_sample_bootstrap(X)\n    median_boot_sample = median(boot_sample)\n    median_boot_samples.append(median_boot_sample)\nmedian_boot_samples = np.array(median_boot_samples)", "Now we can obtain the mean and standard deviation from median_boot_samples as we usually do in statistics", "mean = np.mean(median_boot_samples)\nstd = np.std(median_boot_samples)\nprint(mean, std)", "Please put your estimate of the std, rounded to 3 decimals, into the form:\nhttps://goo.gl/forms/Qgs4O7U1Yvs5csnM2", "plt.hist(median_boot_samples, bins=int(50));", "<a id='rf'></a>\nTree + Bootstrap = Random Forest\nWe want to build many different trees and then aggregate their predictions. So, we need to specify what is different and how to aggregate.\n How to aggregate \nFor base algorithms $b_1(x),\\dots, b_N(x)$\n\nFor a classification task => majority vote: $a(x) = \\text{arg}\\max_{y}\\sum_{i=1}^N[b_i(x) = y]$ \nFor a regression task => $a(x) = \\frac{1}{N}\\sum_{i=1}^{N}b_i(x)$\n\n Different trees \n\nNote that the more different the trees are, the less covariance their predictions have, and hence the more gain we get from aggregation.\nOne source of difference: the bootstrap sample, as considered above\nAnother one: selecting a random subset of features when fitting each $b_i(x)$\n\nLet's see how it works on our toy task", "from sklearn.ensemble import RandomForestRegressor\nclf = RandomForestRegressor(n_estimators=100)\nclf.fit(data_x, data_y)\n\nxx, yy = get_grid(data_x)\n\npredicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n\nplt.figure(figsize=(8, 8));\nplt.pcolormesh(xx, yy, predicted, cmap='spring');\nplt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring', edgecolor='k');", "You can see that all the boundaries have become much smoother. 
Now we will compare the methods on the Boston dataset", "from sklearn.datasets import load_boston\n\ndata = load_boston()\nX = data.data\ny = data.target", "Task 1\nGet the cross-validation score for a variety of algorithms: BaggingRegressor and RandomForestRegressor with different parameters.\nFor example, for a simple decision tree:", "from sklearn.model_selection import KFold, cross_val_score\ncv = KFold(shuffle=True, random_state=1011)\nregr = DecisionTreeRegressor()\nprint(cross_val_score(regr, X, y, cv=cv,\n                      scoring='r2').mean())", "Find the best parameters with CV. Please put your score at https://goo.gl/forms/XZ7xHR54Fjk5cBy92", "from sklearn.ensemble import BaggingRegressor\nfrom sklearn.ensemble import RandomForestRegressor\n\n# usual CV code" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
feststelltaste/software-analytics
notebooks/Generating Synthetic Data based on a Git Log.ipynb
gpl-3.0
[ "Context\nOften, it isn't possible to get the real data where we applied our analysis. In these cases, we can generate similar dataset that contain similar phenomena based on real data. This notebook shows an example about how we can do it.\nGet base data\nThe data, we want to derive another dataset. It's just there to get some realistic file names", "from lib.ozapfdis import git_tc\n\nlog = git_tc.log_numstat(\"C:/dev/repos/buschmais-spring-petclinic\")\nlog.head()\n\nlog = log[log.file.str.contains(\".java\")]\nlog.loc[log.file.str.contains(\"/jdbc/\"), 'type'] = \"jdbc\"\nlog.loc[log.file.str.contains(\"/jpa/\"), 'type'] = \"jpa\"\nlog.loc[log.type.isna(), 'type'] = \"other\"\nlog.head()", "Create synthetic dataset 1\nFor the first technology, where \"JDBC\" was used.\nCreate committed lines", "import numpy as np\nimport pandas as pd\n\nnp.random.seed(0)\n# adding period\nadded_lines = [int(np.random.normal(30,50)) for i in range(0,600)]\n# deleting period\nadded_lines.extend([int(np.random.normal(-50,100)) for i in range(0,200)])\nadded_lines.extend([int(np.random.normal(-2,20)) for i in range(0,200)])\nadded_lines.extend([int(np.random.normal(-3,10)) for i in range(0,200)])\ndf_jdbc = pd.DataFrame()\ndf_jdbc['lines'] = added_lines\ndf_jdbc.head()", "Add timestamp", "times = pd.timedelta_range(\"00:00:00\",\"23:59:59\", freq=\"s\")\ntimes = pd.Series(times)\ntimes.head()\n\ndates = pd.date_range('2013-05-15', '2017-07-23')\ndates = pd.to_datetime(dates)\ndates = dates[~dates.dayofweek.isin([5,6])]\ndates = pd.Series(dates)\ndates = dates.add(times.sample(len(dates), replace=True).values)\ndates.head()\n\ndf_jdbc['timestamp'] = dates.sample(len(df_jdbc), replace=True).sort_values().reset_index(drop=True)\ndf_jdbc = df_jdbc.sort_index()\ndf_jdbc.head()", "Treat first commit separetely\nSet a fixed value because we have to start with some code at the beginning", "df_jdbc.loc[0, 'lines'] = 250\ndf_jdbc.head()\n\ndf_jdbc = df_jdbc", "Add file names\nSample file names including their paths from an existing dataset", "df_jdbc['file'] = log[log['type'] == 'jdbc']['file'].sample(len(df_jdbc), replace=True).values", "Check dataset", "%matplotlib inline\ndf_jdbc.lines.hist()", "Sum up the data and check if it was created as wanted.", "df_jdbc_timed = df_jdbc.set_index('timestamp')\ndf_jdbc_timed['count'] = df_jdbc_timed.lines.cumsum()\ndf_jdbc_timed['count'].plot()\n\nlast_non_zero_timestamp = df_jdbc_timed[df_jdbc_timed['count'] >= 0].index.max()\nlast_non_zero_timestamp\n\ndf_jdbc = df_jdbc[df_jdbc.timestamp <= last_non_zero_timestamp]\ndf_jdbc.head()", "Create synthetic dataset 2", "df_jpa = pd.DataFrame([int(np.random.normal(20,50)) for i in range(0,600)], columns=['lines'])\ndf_jpa.loc[0,'lines'] = 150\ndf_jpa['timestamp'] = pd.DateOffset(years=2) + dates.sample(len(df_jpa), replace=True).sort_values().reset_index(drop=True)\ndf_jpa = df_jpa.sort_index()\ndf_jpa['file'] = log[log['type'] == 'jpa']['file'].sample(len(df_jpa), replace=True).values\ndf_jpa.head()", "Check dataset", "df_jpa.lines.hist()\n\ndf_jpa_timed = df_jpa.set_index('timestamp')\ndf_jpa_timed['count'] = df_jpa_timed.lines.cumsum()\ndf_jpa_timed['count'].plot()", "Add some noise", "dates_other = pd.date_range(df_jdbc.timestamp.min(), df_jpa.timestamp.max())\ndates_other = pd.to_datetime(dates_other)\ndates_other = dates_other[~dates_other.dayofweek.isin([5,6])]\ndates_other = pd.Series(dates_other)\ndates_other = dates_other.add(times.sample(len(dates_other), replace=True).values)\ndates_other.head()\n\ndf_other = 
pd.DataFrame([int(np.random.normal(5,100)) for i in range(0,40000)], columns=['lines'])\ndf_other['timestamp'] = dates_other.sample(len(df_other), replace=True).sort_values().reset_index(drop=True)\ndf_other = df_other.sort_index()\ndf_other['file'] = log[log['type'] == 'other']['file'].sample(len(df_other), replace=True).values\ndf_other.head()", "Check dataset", "df_other.lines.hist()\n\ndf_other_timed = df_other.set_index('timestamp')\ndf_other_timed['count'] = df_other_timed.lines.cumsum()\ndf_other_timed['count'].plot()", "Concatenate all datasets", "df = pd.concat([df_jpa, df_jdbc, df_other], ignore_index=True).sort_values(by='timestamp')\ndf.loc[df.lines > 0, 'additions'] = df.lines\ndf.loc[df.lines < 0, 'deletions'] = df.lines * -1\ndf = df.fillna(0).reset_index(drop=True)\ndf = df[['additions', 'deletions', 'file', 'timestamp']]\ndf.loc[(df.deletions > 0) & (df.loc[0].timestamp == df.timestamp),'additions'] = df.deletions\ndf.loc[df.loc[0].timestamp == df.timestamp,'deletions'] = 0\ndf['additions'] = df.additions.astype(int)\ndf['deletions'] = df.deletions.astype(int)\ndf = df.sort_values(by='timestamp', ascending=False)\ndf.head()", "Truncate data until fixed date", "df = df[df.timestamp < pd.Timestamp('2018-01-01')]\ndf.head()", "Export the data", "df.to_csv(\"datasets/git_log_refactoring.gz\", index=None, compression='gzip')", "Check loaded data", "df_loaded = pd.read_csv(\"datasets/git_log_refactoring.gz\")\ndf_loaded.head()\n\ndf_loaded.info()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mspcvsp/cincinnati311Data
ClusterServiceCodes.ipynb
gpl-3.0
[ "Setup Code Environment", "import csv\nimport re\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport nltk\nfrom sklearn.cluster import KMeans\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom collections import defaultdict\nimport seaborn as sns\n%matplotlib inline", "Initialize service code data structures\n\nService code / service name map\nService code histogram", "h_file = open(\"./serviceCodesCount.tsv\",\"r\")\n\ncode_name_map = {}\ncode_histogram = {}\n \npatternobj = re.compile('^([0-9a-z]+)\\s\\|\\s([0-9a-z\\s]+)$')\n\nfor fields in csv.reader(h_file, delimiter=\"\\t\"):\n matchobj = patternobj.match(fields[0])\n \n cur_code = matchobj.group(1)\n code_name_map[cur_code] = matchobj.group(2)\n code_histogram[cur_code] = float(fields[1])\n \nh_file.close()", "Plot Cincinnati 311 Service Code Statistics\n\nReferences\nDescending Array Sort\nChange Plot Font Size", "total_count_fraction = code_histogram.values()\ntotal_count_fraction.sort()\n\ntotal_count_fraction = total_count_fraction[::-1]\ntotal_count_fraction /= np.sum(total_count_fraction)\ntotal_count_fraction = np.cumsum(total_count_fraction)\n\nsns.set(font_scale=2)\nf,h_ax = plt.subplots(1,2,figsize=(12,6))\nh_ax[0].bar(range(0,len(code_histogram.values())),\n code_histogram.values())\nh_ax[0].set_xlim((0,len(total_count_fraction)))\nh_ax[0].set_xlabel('Service Code #')\nh_ax[0].set_ylabel('Service Code Count')\nh_ax[0].set_title('Cincinnati 311\\nService Code Histogram')\n\nh_ax[1].plot(total_count_fraction, linewidth=4)\nh_ax[1].set_xlim((0,len(total_count_fraction)))\nh_ax[1].set_xlabel('Sorted Service Code #')\nh_ax[1].set_ylabel('Total Count Fraction')\nf.tight_layout()\nplt.savefig(\"./cincinatti311Stats.png\")", "Cluster service code names\n\nCompute Term Frequency Inverse Document Frequency (TF-IDF) feature vectors\nApply the K-means algorithm to cluster service code names based on their TF-IDF feature vector\nReferences:\nRose, B. \"Document Clustering in Python\"\nText pre-processing to reduce dictionary size", "from nltk.stem.snowball import SnowballStemmer\n\ndef tokenize(text):\n \"\"\" Extracts unigrams (i.e. words) from a string that contains\n a service code name.\n \n Args:\n text: String that stores a service code name\n \n Returns:\n filtered_tokens: List of words contained in a service code name\"\"\"\n tokens = [word.lower() for word in nltk.word_tokenize(text)]\n\n filtered_tokens =\\\n filter(lambda elem: re.match('^[a-z]+$', elem) != None,\n tokens)\n \n filtered_tokens =\\\n map(lambda elem: re.sub(\"\\s+\",\" \", elem),\n filtered_tokens)\n\n return filtered_tokens\n\ndef tokenize_and_stem(text):\n \"\"\" Applies the Snowball stemmer to unigrams (i.e. 
words) extracted\n from a string that contains a service code name.\n \n Args:\n text: String that stores a service code name\n \n Returns:\n filtered_tokens: List of words contained in a service code name\"\"\"\n stemmer = SnowballStemmer('english')\n\n tokens = [word.lower() for word in nltk.word_tokenize(text)]\n\n filtered_tokens =\\\n filter(lambda elem: re.match('^[a-z]+$', elem) != None,\n tokens)\n\n filtered_tokens =\\\n map(lambda elem: re.sub(\"\\s+\",\" \", elem),\n filtered_tokens)\n\n filtered_tokens = [stemmer.stem(token) for token in filtered_tokens]\n\n return filtered_tokens\n\ndef compute_tfidf_features(code_name_map,\n tokenizer,\n params):\n \"\"\" Constructs a Term Frequency Inverse Document Frequency (TF-IDF)\n matrix for the Cincinnati 311 service code names.\n \n Args:\n code_name_map: Dictionary that stores the mapping of service\n codes to service names\n \n tokenizer: Function that transforms a string into a list of\n words\n \n params: Dictionary that stores parameters that configure the\n TfidfVectorizer class constructor\n \n - mindocumentcount: Minimum number of term occurrences\n in separate service code names\n \n - maxdocumentfrequency: Maximum document frequency\n \n Returns:\n Tuple that stores a TF-IDF matrix and a TfidfVectorizer class\n object.\n \n Index: Description:\n ----- -----------\n 0 TF-IDF matrix\n 1 TfidfVectorizer class object\"\"\"\n token_count = 0\n for key in code_name_map.keys():\n token_count += len(tokenize(code_name_map[key]))\n\n num_codes = len(code_name_map.keys())\n\n min_df = float(params['mindocumentcount']) / num_codes\n \n tfidf_vectorizer =\\\n TfidfVectorizer(max_df=params['maxdocumentfrequency'],\n min_df=min_df,\n stop_words = 'english',\n max_features = token_count,\n use_idf=True,\n tokenizer=tokenizer,\n ngram_range=(1,1))\n\n tfidf_matrix =\\\n tfidf_vectorizer.fit_transform(code_name_map.values())\n\n return (tfidf_matrix,\n tfidf_vectorizer)\n\ndef cluster_311_services(tfidf_matrix,\n num_clusters,\n random_seed):\n \"\"\"Applies the K-means algorithm to cluster Cincinnati 311 service\n codes based on their service name Term Frequency Inverse Document\n Frequency (TF-IDF) feature vector.\n \n Args:\n tfidf_matrix: Cincinnati 311 service names TF-IDF feature matrix\n \n num_clusters: K-means algorithm number of clusters input\n \n random_seed: K-means algorithm random seed input:\n \n Returns:\n clusterid_code_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service code\n\n clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\"\"\"\n km = KMeans(n_clusters = num_clusters,\n random_state=np.random.RandomState(seed=random_seed))\n\n km.fit(tfidf_matrix)\n\n clusters = km.labels_.tolist()\n\n clusterid_code_map = defaultdict(list)\n clusterid_name_map = defaultdict(list)\n\n codes = code_name_map.keys()\n names = code_name_map.values()\n\n for idx in range(0, len(codes)):\n clusterid_code_map[clusters[idx]].append(codes[idx])\n clusterid_name_map[clusters[idx]].append(names[idx])\n \n return (clusterid_code_map,\n clusterid_name_map)\n\ndef compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram):\n \"\"\" Computes the total Cincinnati 311 requests / service\n names cluster\n \n Args:\n clusterid_code_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service code\n\n code_histogram: Dictionary that stores the number of \n occurrences for each Cincinnati 311 service \n code\n\n 
Returns:\n clusterid_total_count: Dictionary that stores the total\n Cincinnati 311 requests / service\n names cluster\"\"\"\n clusterid_total_count = defaultdict(int)\n \n num_clusters = len(clusterid_code_map.keys())\n\n for cur_cluster_id in range(0, num_clusters):\n for cur_code in clusterid_code_map[cur_cluster_id]:\n clusterid_total_count[cur_cluster_id] +=\\\n code_histogram[cur_code]\n \n return clusterid_total_count\n\ndef print_cluster_stats(clusterid_name_map,\n clusterid_total_count):\n \"\"\" Prints the total number of codes and total requests count\n for each Cincinnati 311 service names cluster.\n \n Args:\n clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\n\n clusterid_total_count: Dictionary that stores the total\n Cincinnati 311 requests / service\n names cluster\n\n Returns:\n None\"\"\"\n num_clusters = len(clusterid_total_count.keys())\n\n for cur_cluster_id in range(0, num_clusters):\n\n print \"clusterid %d | # of codes: %d | total count: %d\" %\\\n (cur_cluster_id,\n len(clusterid_name_map[cur_cluster_id]),\n clusterid_total_count[cur_cluster_id])\n\ndef eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram):\n \"\"\" This function performs the following two operations:\n\n 1.) Plots the requests count for each service name in the\n maximum count service names cluster.\n\n 2. Prints the maximum count service name in the maximum count\n service names cluster\n \n Args:\n clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\n\n clusterid_total_count: Dictionary that stores the total\n Cincinnati 311 requests / service\n names cluster\n\n code_histogram: Dictionary that stores the number of \n occurrences for each Cincinnati 311 service \n code\n \n Returns:\n None\"\"\"\n num_clusters = len(clusterid_code_map.keys())\n\n contains_multiple_codes = np.empty(num_clusters, dtype=bool)\n\n for idx in range(0, num_clusters):\n contains_multiple_codes[idx] = len(clusterid_code_map[idx]) > 1\n\n filtered_clusterid =\\\n np.array(clusterid_total_count.keys())\n \n filtered_total_counts =\\\n np.array(clusterid_total_count.values())\n\n filtered_clusterid =\\\n filtered_clusterid[contains_multiple_codes]\n\n filtered_total_counts =\\\n filtered_total_counts[contains_multiple_codes]\n\n max_count_idx = np.argmax(filtered_total_counts)\n\n maxcount_clusterid = filtered_clusterid[max_count_idx]\n \n cluster_code_counts =\\\n np.zeros(len(clusterid_code_map[maxcount_clusterid]))\n\n for idx in range(0, len(cluster_code_counts)):\n key = clusterid_code_map[maxcount_clusterid][idx]\n cluster_code_counts[idx] = code_histogram[key]\n\n plt.bar(range(0,len(cluster_code_counts)),cluster_code_counts)\n plt.grid(True)\n plt.xlabel('Service Code #')\n plt.ylabel('Service Code Count')\n plt.title('Cluster #%d Service Code Histogram' %\\\n (maxcount_clusterid))\n\n max_idx = np.argmax(cluster_code_counts)\n print \"max count code: %s\" %\\\n (clusterid_code_map[maxcount_clusterid][max_idx])\n \ndef add_new_cluster(from_clusterid,\n service_code,\n clusterid_total_count,\n clusterid_code_map,\n clusterid_name_map):\n \"\"\"Creates a new service name(s) cluster\n\n Args:\n from_clusterid: Integer that refers to a service names\n cluster that is being split\n \n servicecode: String that refers to a 311 service code\n\n clusterid_code_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service code\n\n 
clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\n \n Returns:\n None - Service names cluster data structures are updated\n in place\"\"\"\n code_idx =\\\n np.argwhere(np.array(clusterid_code_map[from_clusterid]) ==\\\n service_code)[0][0]\n \n service_name = clusterid_name_map[from_clusterid][code_idx]\n\n next_clusterid = (clusterid_code_map.keys()[-1])+1\n\n clusterid_code_map[from_clusterid] =\\\n filter(lambda elem: elem != service_code,\n clusterid_code_map[from_clusterid])\n \n clusterid_name_map[from_clusterid] =\\\n filter(lambda elem: elem != service_name,\n clusterid_name_map[from_clusterid])\n \n clusterid_code_map[next_clusterid] = [service_code]\n clusterid_name_map[next_clusterid] = [service_name]\n\ndef print_clustered_servicenames(cur_clusterid,\n clusterid_name_map):\n \"\"\"Prints the Cincinnati 311 service names(s) for a specific \n Cincinnati 311 service names cluster\n\n Args:\n cur_clusterid: Integer that refers to a specific Cincinnati 311\n service names cluster\n \n clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\"\"\"\n for cur_name in clusterid_name_map[cur_clusterid]:\n print \"%s\" % (cur_name)\n\ndef plot_cluster_stats(clusterid_code_map,\n clusterid_total_count):\n \"\"\"Plots the following service name(s) cluster statistics:\n\n - Number of service code(s) / service name(s) cluster\n - Total number of requests / service name(s) cluster\n\n Args:\n clusterid_name_map: Dictionary that stores the mapping of\n cluster identifier to Cincinnati 311\n service name\n \n clusterid_total_count: Dictionary that stores the total\n Cincinnati 311 requests / service\n names cluster\n\n Returns:\n None\"\"\"\n codes_per_cluster =\\\n map(lambda elem: len(elem), clusterid_code_map.values())\n\n num_clusters = len(codes_per_cluster)\n\n f,h_ax = plt.subplots(1,2,figsize=(12,6))\n h_ax[0].bar(range(0,num_clusters), codes_per_cluster)\n h_ax[0].set_xlabel('Service Name(s) cluster id')\n h_ax[0].set_ylabel('Number of service codes / cluster')\n h_ax[1].bar(range(0,num_clusters), clusterid_total_count.values())\n h_ax[1].set_xlabel('Service Name(s) cluster id')\n h_ax[1].set_ylabel('Total number of requests')\n plt.tight_layout()", "Apply a word tokenizer to the service names and construct a TF-IDF feature matrix", "params = {'maxdocumentfrequency': 0.25,\n 'mindocumentcount': 10}\n\n(tfidf_matrix,\n tfidf_vectorizer) = compute_tfidf_features(code_name_map,\n tokenize,\n params)\n\nprint \"# of terms: %d\" % (tfidf_matrix.shape[1])\nprint tfidf_vectorizer.get_feature_names()", "Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector", "num_clusters = 20\nkmeans_seed = 3806933558\n\n(clusterid_code_map,\n clusterid_name_map) = cluster_311_services(tfidf_matrix,\n num_clusters,\n kmeans_seed)\n\nclusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)", "Plot the service code histogram for the maximum size cluster", "eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram)", "Apply a word tokenizer (with stemming) to the service names and construct a TF-IDF feature matrix", "params = {'maxdocumentfrequency': 0.25,\n 'mindocumentcount': 10}\n\n(tfidf_matrix,\n tfidf_vectorizer) = compute_tfidf_features(code_name_map,\n tokenize_and_stem,\n params)\n\nprint \"# of 
terms: %d\" % (tfidf_matrix.shape[1])\nprint tfidf_vectorizer.get_feature_names()", "Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector", "num_clusters = 20\nkmeans_seed = 3806933558\n\n(clusterid_code_map,\n clusterid_name_map) = cluster_311_services(tfidf_matrix,\n num_clusters,\n kmeans_seed)\n\nclusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)\n\nplot_cluster_stats(clusterid_code_map,\n clusterid_total_count)", "Plot the service code histogram for the maximum size cluster", "eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram)", "Create a separate service name(s) cluster for the 'mtlfrn' service code", "add_new_cluster(1,\n 'mtlfrn',\n clusterid_total_count,\n clusterid_code_map,\n clusterid_name_map)", "Evaluate the service name(s) cluster statistics", "clusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)", "Plot the service code histogram for the maximum size cluster", "eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram)", "Create a separate service name(s) cluster for the 'ydwstaj' service code", "add_new_cluster(1,\n 'ydwstaj',\n clusterid_total_count,\n clusterid_code_map,\n clusterid_name_map)", "Evaluate the service name(s) cluster statistics", "clusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)", "Plot the service code histogram for the maximum size cluster", "eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram)", "Create a separate service name(s) cluster for the 'grfiti' service code", "add_new_cluster(1,\n 'grfiti',\n clusterid_total_count,\n clusterid_code_map,\n clusterid_name_map)", "Evaluate the service name(s) cluster statistics", "clusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)", "Plot the service code histogram for the maximum size cluster", "eval_maxcount_clusterid(clusterid_code_map,\n clusterid_total_count,\n code_histogram)", "Create a separate service name(s) cluster for the 'dapub1' service code", "add_new_cluster(1,\n 'dapub1',\n clusterid_total_count,\n clusterid_code_map,\n clusterid_name_map)", "Evaluate the service name(s) cluster statistics", "clusterid_total_count =\\\n compute_clusterid_totalcounts(clusterid_code_map,\n code_histogram)\n \nprint_cluster_stats(clusterid_name_map,\n clusterid_total_count)\n\nplot_cluster_stats(clusterid_code_map,\n clusterid_total_count)", "Label each service name(s) cluster", "cur_clusterid = 0\nclusterid_category_map = {}\nclusterid_category_map[cur_clusterid] = 'streetmaintenance'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'miscellaneous'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'trashcart'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'buildinghazzard'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 
1\n\nclusterid_category_map[cur_clusterid] = 'buildingcomplaint'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'repairrequest'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'propertymaintenance'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'defaultrequest'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'propertycomplaint'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'trashcomplaint'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'servicecompliment'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'inspection'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'servicecomplaint'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'buildinginspection'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'buildingcomplaint'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'signmaintenance'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'requestforservice'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'litter'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'recycling'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid +=1 \n\nclusterid_category_map[cur_clusterid] = 'treemaintenance'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'metalfurniturecollection'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'yardwaste'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'graffitiremoval'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map[cur_clusterid] = 'deadanimal'\n\nprint_clustered_servicenames(cur_clusterid,\n clusterid_name_map)\n\ncur_clusterid += 1\n\nclusterid_category_map", "Plot Cincinnati 311 Service Name Categories", "import pandas as pd\n\ncategory_totalcountdf =\\\n pd.DataFrame({'totalcount': clusterid_total_count.values()},\n index=clusterid_category_map.values())\n \nsns.set(font_scale=1.5)\ncategory_totalcountdf.plot(kind='barh')", "Write service code / category map to disk\n\nStoring Python Dictionaries", "servicecode_category_map = {}\n\nfor clusterid in clusterid_name_map.keys():\n cur_category = clusterid_category_map[clusterid]\n \n for 
servicecode in clusterid_code_map[clusterid]:\n\n servicecode_category_map[servicecode] = cur_category\n \nwith open('serviceCodeCategory.txt', 'w') as fp:\n num_names = len(servicecode_category_map)\n\n keys = servicecode_category_map.keys()\n values = servicecode_category_map.values()\n\n for idx in range(0, num_names):\n if idx == 0:\n fp.write(\"%s{\\\"%s\\\": \\\"%s\\\",\\n\" % (\" \" * 12,\n keys[idx],\n values[idx]))\n #----------------------------------------\n elif idx > 0 and idx < num_names-1:\n fp.write(\"%s\\\"%s\\\": \\\"%s\\\",\\n\" % (\" \" * 13,\n keys[idx],\n values[idx]))\n #----------------------------------------\n else:\n fp.write(\"%s\\\"%s\\\": \\\"%s\\\"}\" % (\" \" * 13,\n keys[idx],\n values[idx]))" ]
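As a side note on the final cell above, which writes `servicecode_category_map` to disk by hand-formatting a Python-literal-style text file, the standard `json` module gives a less error-prone round trip. The sketch below is a minimal, self-contained illustration: the three-entry map reuses labels that appear in this notebook, but the file name `serviceCodeCategory.json` is purely illustrative, not the notebook's actual output file.

```python
import json

# Hypothetical stand-in for the servicecode_category_map built above.
servicecode_category_map = {'mtlfrn': 'metalfurniturecollection',
                            'ydwstaj': 'yardwaste',
                            'grfiti': 'graffitiremoval'}

# One call handles quoting, commas, and indentation.
with open('serviceCodeCategory.json', 'w') as fp:
    json.dump(servicecode_category_map, fp, indent=4, sort_keys=True)

# Reading it back is equally direct.
with open('serviceCodeCategory.json') as fp:
    restored = json.load(fp)
assert restored == servicecode_category_map
```

The hand-rolled writer in the notebook is left untouched above because downstream code may expect that exact text format; JSON is only an alternative worth considering.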
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
solved - 02 - Data structures.ipynb
bsd-2-clause
[ "import pandas as pd\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\ntry:\n import seaborn\nexcept ImportError:\n pass", "Tabular data", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "Starting from reading this dataset, to answering questions about this data in a few lines of code:\nWhat is the age distribution of the passengers?", "df['Age'].hist()", "How does the survival rate of the passengers differ between sexes?", "df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))", "Or how does it differ between the different classes?", "df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')", "Are young people more likely to survive?", "df['Survived'].sum() / df['Survived'].count()\n\ndf25 = df[df['Age'] <= 25]\ndf25['Survived'].sum() / len(df25['Survived'])", "All the needed functionality for the above examples will be explained throughout this tutorial.\nData structures\nPandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).\nSeries\nA Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:", "s = pd.Series([0.1, 0.2, 0.3, 0.4])\ns", "Attributes of a Series: index and values\nThe series has a built-in concept of an index, which by default is the numbers 0 through N - 1", "s.index", "You can access the underlying numpy array representation with the .values attribute:", "s.values", "We can access series values via the index, just like for NumPy arrays:", "s[0]", "Unlike the NumPy array, though, this index can be something other than integers:", "s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])\ns2\n\ns2['c']", "In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.\nIn fact, it's possible to construct a series directly from a Python dictionary:", "pop_dict = {'Germany': 81.3, \n 'Belgium': 11.3, \n 'France': 64.3, \n 'United Kingdom': 64.9, \n 'Netherlands': 16.9}\npopulation = pd.Series(pop_dict)\npopulation", "We can index the populations like a dict as expected:", "population['France']", "but with the power of numpy arrays:", "population * 1000", "DataFrames: Multi-dimensional Data\nA DataFrame is a tablular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. 
You can think of it as multiple Series object which share the same index.\n<img src=\"img/dataframe.png\" width=110%>\nOne of the most common ways of creating a dataframe is from a dictionary of arrays or lists.\nNote that in the IPython notebook, the dataframe will display in a rich HTML view:", "data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}\ncountries = pd.DataFrame(data)\ncountries", "Attributes of the DataFrame\nA DataFrame has besides a index attribute, also a columns attribute:", "countries.index\n\ncountries.columns", "To check the data types of the different columns:", "countries.dtypes", "An overview of that information can be given with the info() method:", "countries.info()", "Also a DataFrame has a values attribute, but attention: when you have heterogeneous data, all values will be upcasted:", "countries.values", "If we don't like what the index looks like, we can reset it and set one of our columns:", "countries = countries.set_index('country')\ncountries", "To access a Series representing a column in the data, use typical indexing syntax:", "countries['area']", "Basic operations on Series/Dataframes\nAs you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.", "# redefining the example objects\n\npopulation = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3, \n 'United Kingdom': 64.9, 'Netherlands': 16.9})\n\ncountries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})", "Elementwise-operations (like numpy)\nJust like with numpy arrays, many operations are element-wise:", "population / 100\n\ncountries['population'] / countries['area']", "Alignment! 
(unlike numpy)\nOnly, pay attention to alignment: operations between series will align on the index:", "s1 = population[['Belgium', 'France']]\ns2 = population[['France', 'Germany']]\n\ns1\n\ns2\n\ns1 + s2", "Reductions (like numpy)\nThe average population number:", "population.mean()", "The minimum area:", "countries['area'].min()", "For dataframes, often only the numeric columns are included in the result:", "countries.median()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Calculate the population numbers relative to Belgium\n</div>", "population / population['Belgium'].mean()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe.\n</div>", "countries['population']*1000000 / countries['area']\n\ncountries['density'] = countries['population']*1000000 / countries['area']\ncountries", "Some other useful methods\nSorting the rows of the DataFrame according to the values in a column:", "countries.sort_values('density', ascending=False)", "One useful method to use is the describe method, which computes summary statistics for each column:", "countries.describe()", "The plot method can be used to quickly visualize the data in different ways:", "countries.plot()", "However, for this dataset, it does not say that much:", "countries['population'].plot(kind='bar')", "You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'\nImporting and exporting data\nA wide range of input/output formats are natively supported by pandas:\n\nCSV, text\nSQL database\nExcel\nHDF5\njson\nhtml\npickle\n...", "pd.read\n\nstates.to", "Other features\n\nWorking with missing data (.dropna(), pd.isnull())\nMerging and joining (concat, join)\nGrouping: groupby functionality\nReshaping (stack, pivot)\nTime series manipulation (resampling, timezones, ..)\nEasy plotting\n\nThere are many, many more interesting operations that can be done on Series and DataFrame objects, but rather than continue using this toy data, we'll instead move to a real-world example, and illustrate some of the advanced concepts along the way.\nSee the next notebooks!\nAcknowledgement\n\nยฉ 2015, Stijn Van Hoey and Joris Van den Bossche (&#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons\nThis notebook is partly based on material of Jake Vanderplas (https://github.com/jakevdp/OsloWorkshop2014)." ]
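One point in the alignment discussion above that often surprises NumPy users deserves a tiny standalone demonstration: labels present in only one Series come out as NaN, and `Series.add` with `fill_value` is one way to control that. The population figures below simply reuse the toy numbers from this notebook.

```python
import pandas as pd

s1 = pd.Series({'Belgium': 11.3, 'France': 64.3})
s2 = pd.Series({'France': 64.3, 'Germany': 81.3})

# Plain '+' aligns on the index; Belgium and Germany become NaN.
print(s1 + s2)

# add() with fill_value=0 keeps the unmatched labels instead.
print(s1.add(s2, fill_value=0))
```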
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
quantopian/research_public
notebooks/lectures/Spearman_Rank_Correlation/questions/notebook.ipynb
apache-2.0
[ "Exercises: Spearman Rank Correlation\nLecture Link\nThis exercise notebook refers to this lecture. Please use the lecture for explanations and sample code.\nhttps://www.quantopian.com/lectures#Spearman-Rank-Correlation\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public", "import numpy as np\nimport pandas as pd\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nimport math", "Exercise 1: Finding Correlations of Non-Linear Relationships\na. Traditional (Pearson) Correlation\nFind the correlation coefficient for the relationship between x and y.", "n = 100\nx = np.linspace(1, n, n)\ny = x**5\n\n#Your code goes here", "b. Spearman Rank Correlation\nFind the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula \n$$r_S = 1 - \\frac{6 \\sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$\nwhere $d_i$ is the difference in rank of the ith pair of x and y values.", "#Your code goes here", "Check your results against scipy's Spearman rank function. stats.spearmanr", "# Your code goes here", "Exercise 2: Limitations of Spearman Rank Correlation\na. Lagged Relationships\nFirst, create a series b that is identical to a but lagged one step (b[i] = a[i-1]). Then, find the Spearman rank correlation coefficient of the relationship between a and b.", "n = 100\na = np.random.normal(0, 1, n)\n\n#Your code goes here", "b. Non-Monotonic Relationships\nFirst, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank rorrelation coefficient of the relationship between c and d.", "n = 100\nc = np.random.normal(0, 2, n)\n\n#Your code goes here", "Exercise 3: Real World Example\na. Factor and Forward Returns\nHere we'll define a simple momentum factor (model). To evaluate it we'd need to look at how its predictions correlate with future returns over many days. 
We'll start by just evaluating the Spearman rank correlation between our factor values and forward returns on just one day.\nCompute the Spearman rank correlation between factor values and 10 trading day forward returns on 2015-1-2.\nFor help on the pipeline API, see this tutorial: https://www.quantopian.com/tutorials/pipeline", "#Pipeline Setup\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.data.builtin import USEquityPricing\nfrom quantopian.pipeline.factors import CustomFactor, Returns, RollingLinearRegressionOfReturns\nfrom quantopian.pipeline.classifiers.morningstar import Sector\nfrom quantopian.pipeline.filters import QTradableStocksUS\nfrom time import time\n\n#MyFactor is our custom factor, based off of asset price momentum\n\nclass MyFactor(CustomFactor):\n \"\"\" Momentum factor \"\"\"\n\n inputs = [USEquityPricing.close] \n window_length = 60\n\n def compute(self, today, assets, out, close): \n out[:] = close[-1]/close[0]\n \nuniverse = QTradableStocksUS()\n\npipe = Pipeline(\n columns = {\n 'MyFactor' : MyFactor(mask=universe),\n },\n screen=universe\n)\n\nstart_timer = time()\nresults = run_pipeline(pipe, '2015-01-01', '2015-06-01')\nend_timer = time()\nresults.fillna(value=0);\n\nprint \"Time to run pipeline %.2f secs\" % (end_timer - start_timer)\n\nmy_factor = results['MyFactor']\n\nn = len(my_factor)\n\nasset_list = results.index.levels[1].unique()\nprices_df = get_pricing(asset_list, start_date='2015-01-01', end_date='2016-01-01', fields='price')\n\n# Compute 10-day forward returns, then shift the dataframe back by 10\nforward_returns_df = prices_df.pct_change(10).shift(-10)\n\n# The first trading day is actually 2015-1-2\nsingle_day_factor_values = my_factor['2015-1-2']\n\n# Because prices are indexed over the total time period, while the factor values dataframe\n# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in \n# the returns dataframe that are not present in the factor values dataframe. We have to filter down\n# as a result.\nsingle_day_forward_returns = forward_returns_df.loc['2015-1-2'][single_day_factor_values.index]\n\n#Your code goes here", "b. Rolling Spearman Rank Correlation\nRepeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.\nWhat we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources:\nA basic tutorial:\nhttps://www.quantopian.com/tutorials/getting-started#lesson4\nAn in-depth lecture:\nhttps://www.quantopian.com/lectures/factor-analysis", "rolling_corr = pd.Series(index=None, data=None)\n\n#Your code goes here", "b. Rolling Spearman Rank Correlation\nPlot out the rolling correlation as a time series, and compute the mean and standard deviation.", "# Your code goes here", "Congratulations on completing the Spearman rank correlation exercises!\nAs you learn more about writing trading models and the Quantopian platform, enter a daily Quantopian Contest. 
Your strategy will be evaluated for a cash prize every day.\nStart by going through the Writing a Contest Algorithm tutorial.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
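For readers who want to check the rank-correlation formula from these exercises without the Quantopian research environment, the following self-contained sketch compares the manual formula against `scipy.stats.spearmanr` on synthetic data; the sample size, seed, and noise level are arbitrary.

```python
import numpy as np
from scipy import stats

np.random.seed(0)
x = np.random.normal(size=200)
y = x ** 3 + np.random.normal(scale=0.1, size=200)   # monotonic but non-linear

# Manual Spearman: r_S = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1)),
# valid here because continuous data essentially never produce ties.
xr, yr = stats.rankdata(x), stats.rankdata(y)
d = xr - yr
n = len(x)
r_manual = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

r_scipy, p_value = stats.spearmanr(x, y)
print(r_manual, r_scipy)   # the two estimates should agree closely
```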
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
UCBerkeleySETI/breakthrough
GBT/pulsar_searches/Pulsar_Search/Pulsar_DedisperseV3.ipynb
gpl-3.0
[ "Pulsar Folding and Searching\nBy Peter Ma\nPulsar searching is a very compute-intensive task. Searching for repeating signals within noisy data is difficult because pulses tend to have a low signal to noise ratio. Our goal is to process, clean, identify potential periods, and fold the pulses to increase the SNR. This notebook demonstrates the algorithms used for searching regular pulses within radio spectrograms.\nKeep in mind, there are faster algorithms used in state-of-the-art pulsar search pipelines [ex. Tree dedispersion]. This notebook implements the simplest pulsar searching technique.\nFirst, we start with downloading the data and BLIMPY which is an I/O tool developed to interface with the radio data.\nNote: Run this notebook in COLAB as some operations are resource-intensive. For example, downloading and loading +5GB files into RAM. This notebook is not optimized.", "!pip install blimpy\n# Pulsar data\n!wget http://blpd13.ssl.berkeley.edu/borisov/AGBT19B_999_124/spliced_blc40414243444546o7o0515253545556o7o061626364656667_guppi_58837_86186_PSR_B0355+54_0013.gpuspec.8.0001.fil\n# For more info on pulsar searches check out this deck\n# http://ipta.phys.wvu.edu/files/student-week-2017/IPTA2017_KuoLiu_pulsartiming.pdf", "Loading Data\nFirst, we load the data. NOTE, targets with the starting name of PSR are radio scans of known pulsars PSR_B0355+54_0013. But, files with HIP65960 cataloged targets that shouldn't have pulsar characteristics. If you wish to learn more about the data check out https://ui.adsabs.harvard.edu/abs/2019PASP..131l4505L/abstract\nThe header information gives vital information about the observational setup of the telescope. For example, the coarse channel width or the observation time and duration, etc.", "from blimpy import Waterfall\nimport pylab as plt\nimport numpy as np\nimport math\nfrom scipy import stats, interpolate\nfrom copy import deepcopy\n%matplotlib inline\n\n\n\nobs = Waterfall('/content/spliced_blc40414243444546o7o0515253545556o7o061626364656667_guppi_58837_86186_PSR_B0355+54_0013.gpuspec.8.0001.fil', \n t_start=0,t_stop= 80000,max_load=10)\nobs.info()\n# Loads data into numpy array \ndata = obs.data\ndata.shape\ncoarse_channel_width = np.int(np.round(187.5/64/abs(obs.header['foff'])))\n# Here we plot the integrated signal over time.\nobs.plot_spectrum()\n\nfig = plt.figure(figsize=(10,8))\nplt.title('Spectrogram With Bandpass')\nplt.xlabel(\"Fchans\")\nplt.ylabel(\"Time\")\nplt.imshow(data[:3000,0,1500:3000], aspect='auto')\nplt.colorbar()", "Band Pass Removal\nThe goal of this process is to clean the data of its artifacts created by combining multiple bands. Our data is created by taking sliding windows of the raw voltage data and computing an FFT of that sliding window. With these FFTs (each containing frequency information about a timestamp) for each coarse channel, we use a bandpass filter to cut off frequencies that donโ€™t belong to that coarse channelโ€™s frequency range. But we canโ€™t achieve a perfect cut, and thatโ€™s why there's a falling off at the edges. \nTheyโ€™re called band-pass because they only allow signals in a particular frequency range, called a band, to pass-through. When we assemble the products we see these dips in the spectrogram. In other words - they aren't real signals.\nTo remove the bandpass features, we use spline lines to fit each channel to get a model of the bandpass of that channel. 
By using splines, we can fit the bandpass without fitting the more significant signals.\nIf you want more details on this check out https://github.com/FX196/SETI-Energy-Detection for a detailed explanation.", "average_power = np.zeros((data.shape[2]))\nshifted_power = np.zeros((int(data.shape[2]/8)))\nx=[]\nspl_order = 2\nprint(\"Fitting Spline\")\ndata_adjust = np.zeros(data.shape)\naverage_power = data.mean(axis=0)\n# Note the value 8 is the COARSE CHANNEL WIDTH\n# We adjust each coarse channel to correct the bandpass artifacts\nfor i in range(0, data.shape[2], 8):\n average_channel = average_power[0,i:i+8]\n x = np.arange(0,coarse_channel_width,1)\n knots = np.arange(0, coarse_channel_width, coarse_channel_width//spl_order+1)\n\n tck = interpolate.splrep(x, average_channel, s=knots[1:])\n xnew = np.arange(0, coarse_channel_width,1)\n ynew = interpolate.splev(xnew, tck, der=0)\n data_adjust[:,0,i:i+8] = data[:,0,i:i+8] - ynew\n\nplt.figure()\nplt.plot( data_adjust.mean(axis=0)[0,:])\nplt.title('Spline Fit - adjusted')\nplt.xlabel(\"Fchans\")\nplt.ylabel(\"Power\")\nfig = plt.figure(figsize=(10,8))\nplt.title('After bandpass correction')\nplt.imshow(data_adjust[:3000,0,:], aspect='auto')\nplt.colorbar()", "Dedispersion\nWhen pulses reach Earth they reach the observer at different times due to dispersion. This dispersion is the result of the interstellar medium causing time delays. This creates a \"swooping curve\" on the radio spectrogram instead of plane waves. If we are going to fold the pulses to increase the SNR then we're making the assumption that the pulses arrive at the same time. Thus we need to correct the dispersion by shifting each channel down a certain time delay relative to its frequency channel. We index a frequency column in the spectrogram. Then we split it between a time delay and original data and swap the positions.\nHowever, the problem is, we don't know the dispersion measure DM of the signal. 
The DM is the path integral of the signal through the interstellar medium with an electron density measure of.\n$$DM =\\int_0^d n_e dl$$ \nWhat we do is we brute force the DM by executing multiple trials DMs and we take the highest SNR created by the dedispersion with the given trial DM.", "def delay_from_DM(DM, freq_emitted):\n if (type(freq_emitted) == type(0.0)):\n if (freq_emitted > 0.0):\n return DM / (0.000241 * freq_emitted * freq_emitted)\n else:\n return 0.0\n else:\n return Num.where(freq_emitted > 0.0,\n DM / (0.000241 * freq_emitted * freq_emitted), 0.0)\n\ndef de_disperse(data,DM,fchan,width,tsamp):\n clean = deepcopy(data)\n for i in range(clean.shape[1]):\n end = clean.shape[0]\n freq_emitted = i*width+ fchan\n time = int((delay_from_DM(DM, freq_emitted))/tsamp)\n if time!=0 and time<clean.shape[0]:\n # zero_block = np.zeros((time))\n zero_block = clean[:time,i]\n shift_block = clean[:end-time,i]\n clean[time:end,i]= shift_block\n clean[:time,i]= zero_block\n\n elif time!=0:\n clean[:,i]= np.zeros(clean[:,i].shape)\n return clean\n\ndef DM_can(data, data_base, sens, DM_base, candidates, fchan,width,tsamp ):\n snrs = np.zeros((candidates,2))\n for i in range(candidates):\n DM = DM_base+sens*i\n data = de_disperse(data, DM, fchan,width,tsamp)\n time_series = data.sum(axis=1)\n snrs[i,1] = SNR(time_series)\n snrs[i,0] =DM\n if int((delay_from_DM(DM, fchan))/tsamp)+1 > data.shape[0]:\n break\n if i %1==0:\n print(\"Candidate \"+str(i)+\"\\t SNR: \"+str(round(snrs[i,1],4)) + \"\\t Largest Time Delay: \"+str(round(delay_from_DM(DM, fchan), 6))+' seconds'+\"\\t DM val:\"+ str(DM)+\"pc/cm^3\")\n data = data_base\n return snrs\n\n# Functions to determine SNR and TOP candidates\ndef SNR(arr):\n index = np.argmax(arr)\n average_noise = abs(arr.mean(axis=0))\n return math.log(arr[index]/average_noise) \n\ndef top(arr, top = 10):\n candidate = []\n # Delete the first and second element fourier transform\n arr[0]=0\n arr[1]=0\n for i in range(top):\n # We add 1 as the 0th index = period of 1 not 0\n index = np.argmax(arr)\n candidate.append(index+1)\n arr[index]=0\n return candidate ", "Dedispersion Trials\nThe computer now checks multiple DM values and adjust each frequency channel where it records its SNR. We increment the trial DM by a tunable parameter sens. After the trials, we take the largest SNR created by adjusting the time delays. We use that data to perform the FFT's and record the folded profiles.", "small_data = data_adjust[:,0,:]\ndata_base = data_adjust[:,0,:]\nsens =0.05\nDM_base = 6.4 \ncandidates = 50\nfchan = obs.header['fch1']\nwidth = obs.header['foff']\ntsamp = obs.header['tsamp']\nfchan = fchan+ width*small_data.shape[1]\nsnrs = DM_can(small_data, data_base, sens, DM_base, candidates, fchan, abs(width),tsamp)\nplt.plot(snrs[:,0], snrs[:,1])\nplt.title('DM values vs SNR')\nplt.xlabel(\"DM values\")\nplt.ylabel(\"SNR of Dedispersion\")\n\nDM = snrs[np.argmax(snrs[:,1]),0]\nprint(DM)\nfchan = fchan+ width*small_data.shape[1]\ndata_adjust[:,0,:] = de_disperse(data_adjust[:,0,:], DM, fchan,abs(width),tsamp)\nfig = plt.figure(figsize=(10, 8))\nplt.imshow(data_adjust[:,0,:], aspect='auto')\nplt.title('De-dispersed Data')\nplt.xlabel(\"Fchans\")\nplt.ylabel(\"Time\")\nplt.colorbar()\nplt.show() ", "Detecting Pulses - Fourier Transforms and Folding\nNext, we apply the discrete Fourier transform on the data to detect periodic pulses. To do so, we look for the greatest magnitude of the Fourier transform. This indicates potential periods within the data. 
Then we check for consistency by folding the data by the period which the Fourier transform indicates.\nThe folding algorithm is simple. You take each period and you fold the signals on top of itself. If the period you guessed matches the true period then by the law of superposition it will increase the SNR. This spike in signal to noise ratio appears in the following graph. This algorithm is the following equation.", "# Preforming the fourier transform.\n%matplotlib inline\nimport scipy.fftpack\nfrom scipy.fft import fft\nN = 1000\nT = 1.0 / 800.0\nx = np.linspace(0.0, N*T, N)\ny = abs(data_adjust[:,0,:].mean(axis=1))\nyf = fft(y)\n\nxf = np.linspace(0.0, 1.0/(2.0*T), N//2)\n\n# Magintude of the fourier transform\n# Between 0.00035 and 3.5 seconds\nmag = np.abs(yf[:60000])\ncandidates = top(mag, top=15)\nplt.plot(2.0/N * mag[1:])\nplt.grid()\nplt.title('Fourier Transform of Signal')\nplt.xlabel(\"Periods\")\nplt.ylabel(\"Magnitude of Fourier Transform\")\nplt.show()\n\nprint(\"Signal To Noise Ratio for the Fourier Transform is: \"+str(SNR(mag)))\nprint(\"Most likely Candidates are: \"+str(candidates))", "Folding Algorithm\nThe idea of the folding algorithm is to see if the signal forms a consistent profile as you fold/integrate the values together. If the profile appears consistent/stable then you're looking at an accurate reading of the pulsar's period. This confirms the implications drawn from the Fourier transform. This is profiling the pulsar. When folding the pulses it forms a \"fingerprint\" of the pulsar. These folds are unique to the pulsar detected. \n$$s_j = \\sum^{N/P-1}{K=0} D{j+kP} $$\nWe are suming over the regular intervals of period P. This is implemented below.", "# Lets take an example of such a period!\n# The 0th candidate is the top ranked candidate by the FFT\nperiod = 895\nfold = np.zeros((period, data.shape[2]))\nmultiples = int(data.data.shape[0]/period)\nresults = np.zeros((period))\n\nfor i in range(multiples-1):\n fold[:,:]=data_adjust[i*period:(i+1)*period,0,:]+ fold\n\nresults = fold.mean(axis=1)\nresults = results - results.min()\nresults = results / results.max()\n\nprint(SNR(results))\n\nplt.plot(results)\nplt.title('Folded Signal Profile With Period: '+str(round(period*0.000349,5)))\nplt.xlabel(\"Time (Multiples of 0.00035s)\")\nplt.ylabel(\"Normalized Integrated Signal\")\n\n# Lets take an example of such a period!\n# The 0th candidate is the top ranked candidate by the FFT\ncan_snr =[]\nfor k in range(len(candidates)):\n period = candidates[k]\n fold = np.zeros((period, data.shape[2]))\n multiples = int(data.data.shape[0]/period)\n results = np.zeros((period))\n\n for i in range(multiples-1):\n fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold\n\n results = fold.mean(axis=1)\n results = results - results.min()\n results = results / results.max()\n can_snr.append(SNR(results))\n # print(SNR(results))\n\nprint(\"Max SNR of Fold Candidates: \"+ str(max(can_snr)))\n\n# Generates multiple images saved to create a GIF \nfrom scipy import stats \ndata = data\nperiod = candidates[0]\nfold = np.zeros((period, data.shape[2]))\nmultiples = int(data.data.shape[0]/period)\nresults = np.zeros((period))\n\nfor i in range(multiples-1):\n fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold\n results = fold.mean(axis=1)\n results = results - results.min()\n results = results / results.max()\n # Generates multiple frames of the graph as it folds! 
\n plt.plot(results)\n plt.title('Folded Signal Period '+str(period*0.000349)+\" seconds| Fold Iteration: \"+str(i))\n plt.xlabel(\"Time (Multiples of 0.00035s)\")\n plt.ylabel(\"Normalized Integrated Signal\")\n plt.savefig('/content/drive/My Drive/Deeplearning/Pulsars/output/candidates/CAN_3/multi_chan_'+str(period)+'_'+str(i)+'.png')\n plt.close()\n \nresults = fold.mean(axis=1)\nresults = results - results.min()\nresults = results / results.max()\n\nprint(\"The Signal To Noise of the Fold is: \"+str(SNR(results)))\nplt.plot(results)", "What Happens If The Data Doesn't Contain Pulses?\nBelow we will show you that this algorithm detects pulses and excludes targets that do not include this feature. We will do so by loading a target that isn't known to be a pulsar. HIP65960 is a target that doesn't contain repeating signals.\nBelow we will repeat and apply the same algorithm but on a target that isn't a pulsar. We won't reiterate the explanations again.", "!wget http://blpd13.ssl.berkeley.edu/dl/GBT_58402_66282_HIP65960_time.h5\n\nfrom blimpy import Waterfall\nimport pylab as plt\nimport numpy as np\nimport math\nfrom scipy import stats, interpolate\n\n%matplotlib inline\n\nobs = Waterfall('/content/GBT_58402_66282_HIP65960_time.h5', \n f_start=0,f_stop= 361408,max_load=5)\nobs.info()\n# Loads data into numpy array \ndata = obs.data\ncoarse_channel_width = np.int(np.round(187.5/64/abs(obs.header['foff'])))\nobs.plot_spectrum()\n\naverage_power = np.zeros((data.shape[2]))\nshifted_power = np.zeros((int(data.shape[2]/8)))\nx=[]\nspl_order = 2\nprint(\"Fitting Spline\")\ndata_adjust = np.zeros(data.shape)\naverage_power = data.mean(axis=0)\n# Note the value 8 is the COARSE CHANNEL WIDTH\n# We adjust each coarse channel to correct the bandpass artifacts\nfor i in range(0, data.shape[2], coarse_channel_width):\n average_channel = average_power[0,i:i+coarse_channel_width]\n x = np.arange(0,coarse_channel_width,1)\n knots = np.arange(0, coarse_channel_width, coarse_channel_width//spl_order+1)\n\n tck = interpolate.splrep(x, average_channel, s=knots[1:])\n xnew = np.arange(0, coarse_channel_width,1)\n ynew = interpolate.splev(xnew, tck, der=0)\n data_adjust[:,0,i:i+coarse_channel_width] = data[:,0,i:i+coarse_channel_width] - ynew\n\n\nfrom copy import deepcopy\nsmall_data = data[:,0,:]\ndata_base = data[:,0,:]\nsens =0.05\nDM_base = 6.4 \ncandidates = 50\nfchan = obs.header['fch1']\nwidth = obs.header['foff']\ntsamp = obs.header['tsamp']\n# fchan = fchan+ width*small_data.shape[1]\nfchan = 7501.28173828125\nsnrs = DM_can(small_data, data_base, sens, DM_base, candidates, fchan, abs(width),tsamp)\nplt.plot(snrs[:,0], snrs[:,1])\nplt.title('DM values vs SNR')\nplt.xlabel(\"DM values\")\nplt.ylabel(\"SNR of Dedispersion\")\n\nDM = snrs[np.argmax(snrs[:,1]),0]\nprint(DM)\nfchan = fchan+ width*small_data.shape[1]\ndata_adjust[:,0,:] = de_disperse(data_adjust[:,0,:], DM, fchan,abs(width),tsamp)\n\n# Preforming the fourier transform.\n%matplotlib inline\nimport scipy.fftpack\nfrom scipy.fft import fft\nN = 60000\nT = 1.0 / 800.0\nx = np.linspace(0.0, N*T, N)\ny = data[:,0,:].mean(axis=1)\nyf = fft(y)\n\nxf = np.linspace(0.0, 1.0/(2.0*T), N//2)\n\n# Magintude of the fourier transform\n# Between 0.00035 and 3.5 seconds\n# We set this to a limit of 200 because \n# The total tchan is only 279\nmag = np.abs(yf[:200])\ncandidates = top(mag, top=15)\nplt.plot(2.0/N * mag[1:])\nplt.grid()\nplt.title('Fourier Transform of Signal')\nplt.xlabel(\"Periods\")\nplt.ylabel(\"Magnitude of Fourier 
Transform\")\nplt.show()\n\nprint(\"Signal To Noise Ratio for the Fourier Transform is: \"+str(SNR(mag)))\nprint(\"Most likely Candidates are: \"+str(candidates))\n", "NOTICE\nNotice how the signal to noise ratio is a lot smaller, It's smaller by 2 orders of magnitude (100x) than the original pulsar fold. Typically with a SNR of 1, it isn't considered a signal of interest as it's most likely just noise.", "# Lets take an example of such a period!\n# The 0th candidate is the top ranked candidate by the FFT\ncan_snr =[]\nfor k in range(len(candidates)):\n period = candidates[k]\n fold = np.zeros((period, data.shape[2]))\n multiples = int(data.data.shape[0]/period)\n results = np.zeros((period))\n\n for i in range(multiples-1):\n fold[:,:]=data[i*period:(i+1)*period,0,:]+ fold\n\n results = fold.mean(axis=1)\n results = results - results.min()\n results = results / results.max()\n can_snr.append(SNR(results))\n \n\nprint(\"Max SNR of Fold Candidates: \"+ str(max(can_snr)))", "Result\nIt is fair to conclude that given this observation of HIP65960 this target is most likely not a pulsar as the SNR of the FFT and the folding is not high enough to suggest otherwise. \nAny Questions?\nFeel free to reach out with any questions about this notebook: [email protected]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
regisDe/compagnons
Calcul symbolique.ipynb
gpl-2.0
[ "Calcul symbolique en Python", "%matplotlib inline", "Introduction\nCe notebook est la traduction franรงaise du cours sur SymPy disponible entre autre sur Wakari avec quelques modifications et complรฉments notamment pour la rรฉsolution d'รฉquations diffรฉrentielles. Il a pour but de permettre aux รฉtudiants de diffรฉrents niveaux d'expรฉrimenter des notions mathรฉmatiques en leur fournissant une base de code qu'ils peuvent modifier.\nSymPy - est un module Python qui peut รชtre utilisรฉ dans un programme Python ou dans une session IPython. Il fournit de puissantes fonctionnalitรฉs de calcul symbolique.\nPour commencer ร  utiliser SymPy dans un programme ou un notebook Python, importer le module sympy:", "from sympy import *", "Pour obtenir des sorties mathรฉmatiques formatรฉes $\\LaTeX$ :", "from sympy import init_printing\ninit_printing(use_latex=True)", "Variables symboliques\nDans SymPy on a besoin de crรฉer des symboles pour les variables qu'on souhaite employer. Pour cela on utilise la class Symbol:", "x = Symbol('x')\n\n(pi + x)**2\n\n# maniรจre alternative de dรฉfinir plusieurs symboles en une seule instruction\na, b, c = symbols(\"a, b, c\")", "On peut ajouter des contraintes sur les symboles lors de leur crรฉation :", "x = Symbol('x', real=True)\n\nx.is_imaginary\n\nx = Symbol('x', positive=True)\n\nx > 0", "Nombres complexes\nL'unitรฉ imaginaire est notรฉe I dans Sympy.", "1+1*I\n\nI**2\n\n(1 + x * I)**2", "Nombres rationnels\nIl y a trois types numรฉriques diffรฉrents dans SymPy : Real (rรฉel), Rational (rationnel), Integer (entier) :", "r1 = Rational(4,5)\nr2 = Rational(5,4)\n\nr1\n\nr1+r2\n\nr1/r2", "Evaluation numรฉrique\nSymPy permet une prรฉcision arbitraire des รฉvaluations numรฉriques et fournit des expressions pour quelques constantes comme : pi, E, oo pour l'infini.\nPour รฉvaluer numรฉriquement une expression nous utilisons la fonction evalf (ou N). Elle prend un argument n qui spรฉcifie le nombre de chiffres significatifs.", "pi.evalf(n=50)\n\nE.evalf(n=4)\n\ny = (x + pi)**2\n\nN(y, 5) # raccourci pour evalf", "Quand on รฉvalue des expressions algรฉbriques on souhaite souvent substituer un symbole par une valeur numรฉrique. Dans SymPy cela s'effectue par la fonction subs :", "y.subs(x, 1.5)\n\nN(y.subs(x, 1.5))", "La fonction subs permet de substituer aussi des symboles et des expressions :", "y.subs(x, a+pi)", "On peut aussi combiner l'รฉvolution d'expressions avec les tableaux de NumPy (pour tracer une fonction par ex) :", "import numpy\n\nx_vec = numpy.arange(0, 10, 0.1)\ny_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.plot(x_vec, y_vec);", "Manipulations algรฉbriques\nUne des principales utilisations d'un systรจme de calcul symbolique est d'effectuer des manipulations algรฉbriques d'expression. Il est possible de dรฉvelopper un produit ou bien de factoriser une expression. Les fonctions pour rรฉaliser ces opรฉrations de bases figurent dans les exemples des sections suivantes.\nDรฉvelopper et factoriser\nLes premiers pas dans la manipulation algรฉbrique", "(x+1)*(x+2)*(x+3)\n\nexpand((x+1)*(x+2)*(x+3))", "La fonction expand (dรฉvelopper) prend des mots clรฉs en arguments pour indiquer le type de dรฉveloppement ร  rรฉaliser. 
Par exemple pour dรฉvelopper une expression trigonomรจtrique on utilise l'argument trig=True :", "sin(a+b)\n\nexpand(sin(a+b), trig=True)\n\nsin(a+b)**3\n\nexpand(sin(a+b)**3, trig=True)", "Lancer help(expand) pour une explication dรฉtaillรฉe des diffรฉrents types de dรฉveloppements disponibles.\nL'opรฉration opposรฉe au dรฉveloppement est bien sur la factorisation qui s'effectue grรขce ร  la fonction factor :", "factor(x**3 + 6 * x**2 + 11*x + 6)\n\nx1, x2 = symbols(\"x1, x2\")\n\nfactor(x1**2*x2 + 3*x1*x2 + x1*x2**2)", "Simplify\nThe simplify tries to simplify an expression into a nice looking expression, using various techniques. More specific alternatives to the simplify functions also exists: trigsimp, powsimp, logcombine, etc. \nThe basic usages of these functions are as follows:", "# simplify expands a product\nsimplify((x+1)*(x+2)*(x+3))\n\n# simplify uses trigonometric identities\nsimplify(sin(a)**2 + cos(a)**2)\n\nsimplify(cos(x)/sin(x))", "simplify permet aussi de tester l'รฉgalitรฉ d'expressions :", "exp1 = sin(a+b)**3\nexp2 = sin(a)**3*cos(b)**3 + 3*sin(a)**2*sin(b)*cos(a)*cos(b)**2 + 3*sin(a)*sin(b)**2*cos(a)**2*cos(b) + sin(b)**3*cos(a)**3\nsimplify(exp1 - exp2)\n\nif simplify(exp1 - exp2) == 0:\n print \"{0} = {1}\".format(exp1, exp2)\nelse:\n print \"exp1 et exp2 sont diffรฉrentes\"", "apart and together\nPour manipuler des expressions numรฉriques de fractions on dispose des fonctions apart and together :", "f1 = 1/((a+1)*(a+2))\n\nf1\n\napart(f1)\n\nf2 = 1/(a+2) + 1/(a+3)\n\nf2\n\ntogether(f2)", "Simplify combine les fractions mais ne factorise pas :", "simplify(f2)", "Calcul\nEn plus des manipulations algรฉbriques, l'autre grande utilisation d'un systรจme de calcul symbolique et d'effectuer des calculs comme des dรฉrivรฉes et intรฉgrales d'expressions algรฉbriques.\nDรฉrivation\nLa dรฉrivation est habituellement simple. On utilise la fonction diff avec pour premier argument l'expression ร  dรฉriver et comme second le symbole de la variable suivant laquelle dรฉriver :", "y", "Dรฉrivรฉe premiรจre", "diff(y**2, x)", "Pour des dรฉrivรฉes d'ordre supรฉrieur :", "diff(y**2, x, x) # dรฉrivรฉe seconde\n\ndiff(y**2, x, 2) # dรฉrivรฉe seconde avec une autre syntaxe", "Pour calculer la dรฉrivรฉe d'une expression ร  plusieurs variables :", "x, y, z = symbols(\"x,y,z\")\n\nf = sin(x*y) + cos(y*z)", "$\\frac{d^3f}{dxdy^2}$", "diff(f, x, 1, y, 2)", "Integration\nL'intรฉgration est rรฉalisรฉe de maniรจre similaire :", "f\n\nintegrate(f, x)", "En fournissant des limites pour la variable d'intรฉgration on peut รฉvaluer des intรฉgrales dรฉfinies :", "integrate(f, (x, -1, 1))", "et aussi des intรฉgrales impropres pour lesquelles on ne connait pas de primitive", "x_i = numpy.arange(-5, 5, 0.1)\ny_i = numpy.array([N((exp(-x**2)).subs(x, xx)) for xx in x_i])\nfig2, ax2 = plt.subplots()\nax2.plot(x_i, y_i)\nax2.set_title(\"$e^{-x^2}$\")\n\nintegrate(exp(-x**2), (x, -oo, oo))", "Rappel, oo est la notation SymPy pour l'infini.\nSommes et produits\nOn peut รฉvaluer les sommes et produits d'expression avec les fonctions Sum et Product :", "n = Symbol(\"n\")\n\nSum(1/n**2, (n, 1, 10))\n\nSum(1/n**2, (n,1, 10)).evalf()\n\nSum(1/n**2, (n, 1, oo)).evalf()\n\nN(pi**2/6) # fonction zeta(2) de Riemann", "Les produits sont calculรฉs de maniรจre trรจs semblables :", "Product(n, (n, 1, 10)) # 10!", "Limites\nLes limites sont รฉvaluรฉes par la fonction limit. 
Par exemple :", "limit(sin(x)/x, x, 0)", "On peut changer la direction d'approche du point limite par l'argument du mot clรฉ dir :", "limit(1/x, x, 0, dir=\"+\")\n\nlimit(1/x, x, 0, dir=\"-\")", "Sรฉries\nLe dรฉveloppement en sรฉrie est une autre fonctionnalitรฉ trรจs utile d'un systรจme de calcul symbolique. Dans SymPy on rรฉalise les dรฉveloppements en sรฉrie grรขce ร  la fonction series :", "series(exp(x), x)", "Par dรฉfaut le dรฉveloppement de l'expression s'effectue au voisinage de $x=0$, mais on peut dรฉvelopper la sรฉrie au voisinage de toute autre valeur de $x$ en incluant explicitement cette valeur lors de l'appel ร  la fonction :", "series(exp(x), x, 1)", "Et on peut explicitement dรฉfinir jusqu'ร  quel ordre le dรฉveloppement doit รชtre menรฉ :", "series(exp(x), x, 1, 10)", "Le dรฉveloppement en sรฉries inclue l'ordre d'approximation. Ceci permet de gรฉrer l'ordre du rรฉsultat de calculs utilisant des dรฉveloppements en sรฉries d'ordres diffรฉrents :", "s1 = cos(x).series(x, 0, 5)\ns1\n\ns2 = sin(x).series(x, 0, 2)\ns2\n\nexpand(s1 * s2)", "Si on ne souhaite pas afficher l'ordre on utilise la mรฉthode removeO :", "expand(s1.removeO() * s2.removeO())", "Mais cela conduit ร  des rรฉsultats incorrects pour des calculs avec plusieurs dรฉveloppements :", "(cos(x)*sin(x)).series(x, 0, 6)", "Plus sur les sรฉries\n\nhttps://fr.wikipedia.org/wiki/D%C3%A9veloppement_limit%C3%A9 - Article de Wikipedia.\n\nAlgรจbre linรฉaire\nMatrices\nLes matrices sont dรฉfinies par la classe Matrix :", "m11, m12, m21, m22 = symbols(\"m11, m12, m21, m22\")\nb1, b2 = symbols(\"b1, b2\")\n\nA = Matrix([[m11, m12],[m21, m22]])\nA\n\nb = Matrix([[b1], [b2]])\nb", "Avec les instances de la classe Matrix on peut faire les opรฉrations algรฉbriques classiques :", "A**2\n\nA * b", "Et calculer les dรฉterminants et inverses :", "A.det()\n\nA.inv()", "Rรฉsolution d'รฉquations\nPour rรฉsoudre des รฉquations et des systรจmes d'รฉquations on utilise la fonction solve :", "solve(x**2 - 1, x)\n\nsolve(x**4 - x**2 - 1, x)\n\nexpand((x-1)*(x-2)*(x-3)*(x-4)*(x-5))\n\nsolve(x**5 - 15*x**4 + 85*x**3 - 225*x**2 + 274*x - 120, x)", "Systรจme d'รฉquations :", "solve([x + y - 1, x - y - 1], [x,y])", "En termes d'autres expressions symboliques :", "solve([x + y - a, x - y - c], [x,y])", "Rรฉsolution d'รฉquations diffรฉrentielles\nPour rรฉsoudre des รฉquations difรฉrentielles et des systรจmes d'รฉquations diffรฉrentielles on utilise la fonction dsolve :", "from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols\nfrom sympy.abc import x", "Exemple d'รฉquation diffรฉrentielle du 2e ordre", "f = Function('f')\ndsolve(Derivative(f(x), x, x) + 9*f(x), f(x))\n\ndsolve(diff(f(x), x, 2) + 9*f(x), f(x), hint='default', ics={f(0):0, f(1):10})\n\n# Essai de rรฉcupรฉration de la valeur de la constante C1 quand une condition initiale est fournie\neqg = Symbol(\"eqg\")\ng = Function('g')\neqg = dsolve(Derivative(g(x), x) + g(x), g(x), ics={g(2): 50})\neqg\n\nprint \"g(x) est de la forme {}\".format(eqg.rhs)\n\n# recherche manuelle de la valeur de c1 qui vรฉrifie la condition initiale\nc1 = Symbol(\"c1\")\nc1 = solve(Eq(c1*E**(-2),50), c1)\nprint c1", "SymPy ne sait pas rรฉsoudre cette equation diffรฉrentielle non linรฉaire avec $h(x)^2$ :", "h = Function('h')\ntry:\n dsolve(Derivative(h(x), x) + 0.001*h(x)**2 - 10, h(x))\nexcept:\n print \"une erreur s'est produite\"", "On peut rรฉsoudre cette รฉquation diffรฉrentielle avec une mรฉthode numรฉrique fournie par la fonction odeint de SciPy :\nMรฉthode numรฉrique pour รฉquations 
diffรฉrentielles (non SymPy)", "from scipy.integrate import odeint\n\ndef dv_dt(vec, t, k, m, g):\n z, v = vec[0], vec[1]\n dz = -v\n dv = -k/m*v**2 + g\n return [dz, dv]\n\nvec0 = [0, 0] # conditions initiales [altitude, vitesse]\nt_si = numpy.linspace (0, 30 ,150) # de 0 ร  30 s, 150 points\nk = 0.1 # coefficient aรฉrodynamique\nm = 80 # masse (kg)\ng = 9.81 # accรฉlรฉration pesanteur (m/s/s)\nv_si = odeint(dv_dt, vec0, t_si, args=(k, m, g))\n\nprint \"vitesse finale : {0:.1f} m/s soit {1:.0f} km/h\".format(v_si[-1, 1], v_si[-1, 1] * 3.6)\n\nfig_si, ax_si = plt.subplots()\nax_si.set_title(\"Vitesse en chute libre\")\nax_si.set_xlabel(\"s\")\nax_si.set_ylabel(\"m/s\")\nax_si.plot(t_si, v_si[:,1], 'b')", "Pour aller plus loin\n\nhttp://sympy.org/fr/index.html - La page web de SymPy.\nhttps://github.com/sympy/sympy - Le code source de SymPy.\nhttp://live.sympy.org - Version en ligne de SymPy pour des tests et des dรฉmonstrations." ]
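A brief English aside on the numerical-evaluation pattern used earlier in this notebook (calling `N(expr.subs(x, xx))` inside a Python loop): SymPy's `lambdify` turns a symbolic expression into a vectorized NumPy function, which is usually much faster for plotting. The particular expression below is only an example.

```python
import numpy as np
from sympy import symbols, sin, pi, lambdify

x = symbols('x')
expr = (x + pi) ** 2 * sin(x)

# Compile the symbolic expression into a NumPy-aware function.
f = lambdify(x, expr, 'numpy')

x_vec = np.arange(0, 10, 0.1)
y_vec = f(x_vec)            # one vectorized call, no Python-level loop
print(y_vec[:5])
```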
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jpilgram/phys202-2015-work
assignments/midterm/AlgorithmsEx03.ipynb
mit
[ "Algorithms Exercise 3\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact", "Character counting and entropy\nWrite a function char_probs that takes a string and computes the probabilities of each character in the string:\n\nFirst do a character count and store the result in a dictionary.\nThen divide each character counts by the total number of character to compute the normalized probabilties.\nReturn the dictionary of characters (keys) and probabilities (values).", "def char_probs(s):\n \"\"\"Find the probabilities of the unique characters in the string s.\n \n Parameters\n ----------\n s : str\n A string of characters.\n \n Returns\n -------\n probs : dict\n A dictionary whose keys are the unique characters in s and whose values\n are the probabilities of those characters.\n \"\"\"\n # YOUR CODE HERE\n #raise NotImplementedError()\n s=s.replace(' ','')\n l = [i for i in s]\n dic={i:l.count(i) for i in l}\n prob = [(dic[i]/len(l)) for i in dic]\n result = {i:prob[j] for i in l for j in range(len(prob))}\n return result\n\ntest1 = char_probs('aaaa')\nassert np.allclose(test1['a'], 1.0)\ntest2 = char_probs('aabb')\nassert np.allclose(test2['a'], 0.5)\nassert np.allclose(test2['b'], 0.5)\ntest3 = char_probs('abcd')\nassert np.allclose(test3['a'], 0.25)\nassert np.allclose(test3['b'], 0.25)\nassert np.allclose(test3['c'], 0.25)\nassert np.allclose(test3['d'], 0.25)", "The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:\n$$H = - \\Sigma_i P_i \\log_2(P_i)$$ \nIn this expression $\\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.\nWrite a funtion entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.\nTo compute the entropy, you should:\n\nFirst convert the values (probabilities) of the dict to a Numpy array of probabilities.\nThen use other Numpy functions (np.log2, etc.) to compute the entropy.\nDon't use any for or while loops in your code.", "def entropy(d):\n \"\"\"Compute the entropy of a dict d whose values are probabilities.\"\"\"\n # YOUR CODE HERE\n #raise NotImplementedError()\n s = char_probs(d)\n z = [(i,s[i]) for i in s]\n w=np.array(z)\n P = np.array(w[::,1])\n np.log2(P[1])\nentropy('haldjfhasdf')\n\nassert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)\nassert np.allclose(entropy({'a': 1.0}), 0.0)", "Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # use this for grading the pi digits histogram" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wtsi-medical-genomics/team-code
python-club/notebooks/regular-expressions.ipynb
gpl-2.0
[ "import re # Python's regular expression module\n\ndef re_test(regex, query):\n \"\"\"A helper function to test if a regex has a match in a query.\"\"\"\n p = re.compile(regex)\n result = 'MATCH' if p.match(query) else 'NOT FOUND'\n print '\"{}\" with regex \"{}\": {}'.format(query, regex, result)", "Regular Expressions\nDaniel Rice\n\nIntroduction\nDefinition\nExamples\nExercise 1\nDecomposing the syntax\nCharacter classes\nMetacharacters\nRepetition\nCapture groups\n\n\nRegex's in Python\nmatch\nsearch\n\n\n\nIntroduction\nDefinition\nA regular expression (also known as a RE, regex, regex pattern, or regexp) is a sequence of symbols and characters expressing a text pattern. A regular expression allows us to specify a string pattern that we can then search for within a body of text. The idea is to make a pattern template (regex), and then query some text to see if the template is present or not.\nExample 1\nLet's say we want to determine if a string begins with the word PASS. Our regular expression will simply be:", "pass_regex = 'PASS'", "This pattern will match the occurence of PASS in the query text. Now let's test it out:", "re_test(pass_regex, 'PASS: Data good')\n\nre_test(pass_regex, 'FAIL: Data bad')", "Example 2\nLet's say we have a text file that contains numerical readings that we need to perform some analysis on. Here's the first few lines from the file:", "lines = \\\n\"\"\"\nDevice-initialized.\nVersion-19.23\n12-12-2014\n12\n4353\n3452\nERROR\n498\n34598734\n345982398\n23\nERROR\n3434345798\n\"\"\"", "We don't want the header lines and those ERROR lines are going to ruin our analysis! Let's filter these out with with a regex. First we will create the pattern template (or regex) for what we want to find:\n^\\d+$\nThis regex can be split into four parts:\n\n^ This indicates the start of the string.\n\\d This specifies we want to match decimal digits (the numbers 0-9).\n+ This symbol means we want to find one or more of the previous symbol (which in this case is a decimal digit).\n$ This indicates the end of the string.\n\nPutting it all together we want to find patterns that are one or more (+) numbers (\\d) from start (^) to finish ($).\nLet's load the regex into Python's re module:", "integer_regex = re.compile('\\d+$')", "Now let's get our string of lines into a list of strings:", "lines = lines.split()\nprint lines", "Now we need to run through each of these lines and determine if it matches our regex. Converting to integer would be nice as well.", "clean_data = [] # We will accumulate our filtered integers here\nfor line in lines:\n if integer_regex.match(line):\n clean_data.append(int(line))\nprint clean_data\n\n# If you're into one liners you could also do one of these:\n# clean_data = [int(line) for line in lines if integer_regex.match(line)]\n# clean_data = map(int, filter(integer_regex.match, lines))", "It worked like a dream. 
You may be arguing that there are other non-regex solutions to this problem and indeed there are (for example integer typecasting with a catch clause) but this example was given to show you the process of:\n\nCreating a regex pattern for what you want to find.\nApplying it to some text.\nExtracting the positive hits.\n\nThere will be situations where regexes will really be the only viable solution when you want to match some super-complex strings.\nExercise 1\nYou have a file consisting of DNA bases which you want to perform analysis on:", "lines = \\\n\"\"\"\nAcme-DNA-Reader\nACTG\nAA\n-1\nCCTC\nTTTCG\nC\nTGCTA\n-1\nTCCCCCC\n\"\"\"", "The -1 values represent reading errors and we want these removed. Using the preceding example as a guide, filter out the header and the reading errors.\nHint: The bases can be represented with the pattern [ACGT].", "bases_regex = re.compile('[ACGT]+$')\nlines = lines.split()\n#print lines\nclean_data = [] # We will accumulate our filtered reads here\nfor line in lines:\n print line\n if bases_regex.match(line):\n clean_data.append(line)\nprint clean_data\n", "Decomposing the syntax\nRegexps can appear cryptic but they can be decomposed into character classes and metacharacters.\nCharacter classes\nThese allow us to concisely specify the types or classes of characters to match. In the example above \\d is a character class that represents decimal digits. There are many such character classes and we will go through these below.\nThe square brackets allow us to specify a set of characters to match. We have already seen this with [ACGT]. We can also use the hyphen - to specify ranges.\n| Character Class | Description | Match Examples |\n|:---------------:| ----------- | -------------- |\n| \\d | Matches any decimal digit; this is equivalent to the class [0-9]. | 0, 1, 2, ... |\n| \\D | Matches any non-digit character; this is equivalent to the class [^0-9]. | a, @, ; |\n| \\s | Matches any whitespace character; this is equivalent to the class [ \\t\\n\\r\\f\\v]. | space, tab, newline |\n| \\S | Matches any non-whitespace character; this is equivalent to the class [^ \\t\\n\\r\\f\\v]. | 1, A, &amp; |\n| \\w | Matches any alphanumeric character (word character); this is equivalent to the class [a-zA-Z0-9_]. | x, Z, 2 |\n| \\W | Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_]. | £, (, space |\n| . | Matches anything (except newline). | 8, (, a, space |\nThis can look like a lot to remember but there are some mnemonics here:\n| Character Class | Mnemonic |\n|:---------------:| -------- |\n| \\d | decimal digit |\n| \\D | uppercase so not \\d |\n| \\s | whitespace character |\n| \\S | uppercase so not \\s |\n| \\w | word character |\n| \\W | uppercase so not \\w |\nMetacharacters\nRepetition\nThe character classes will match only a single character. How can we match exactly 3 occurrences of Q? The metacharacters include different symbols to reflect repetition:\n| Repetition Metacharacter | Description |\n|:------------------------:| ----------- |\n| * | Matches zero or more occurrences of the previous character (class). |\n| + | Matches one or more occurrences of the previous character (class). |\n| {m,n} | With integers m and n, specifies at least m and at most n occurrences of the previous character (class). Do not put any space after the comma as this prevents the metacharacter from being recognized. 
|\nExamples", "re_test('A*', ' ')\nre_test('A*', 'A')\nre_test('A*', 'AA')\nre_test('A*', 'Z12345')\n\nre_test('A+', ' ')\nre_test('A+', 'A')\nre_test('A+', 'ZZZZ')\n\nre_test('BA{1,3}B', 'BB')\nre_test('BA{1,3}B', 'BAB')\nre_test('BA{1,3}B', 'BAAAB')\nre_test('BA{1,3}B', 'BAAAAAB')\n\nre_test('.*', 'AB12[]9023')\nre_test('\\d{1,3}B', '123B')\nre_test('\\w{1,3}\\d+', 'aaa2')\nre_test('\\w{1,3}\\d+', 'aaaa2')\n\n#http://path/ssh://dr9@farm3-login:/path\np = re.compile(r'http://(\\w+)/ssh://(\\w+)@(\\w+):/(\\w+)')\n\nm = p.match(r'http://path/ssh://dr9@farm3-login:/path')\n\nRE_SSH = re.compile(r'/ssh://(\\w+)@(.+):(.+)/(?:chr)?([mxy0-9]{1,2}):(\\d+)-(\\d+)$', re.IGNORECASE)\n\nRE_SSH = re.compile(r'/ssh://(\\w+)@(.+)$', re.IGNORECASE)\n\nt = '/ssh://dr9@farm3-login'\n\nm = RE_SSH.match(t)\n#user, server, path, lchr, lmin, lmax = m.groups()\n\nfor el in m.groups():\n print el", "Exercise 2\nDetermine if a string contains \"wazup\" or \"wazzup\" or \"wazzzup\" where the number of z's must be greater than zero. Use the following list of strings:", "L = [\n'So I said wazzzzzzzup?',\n'And she said wazup back to me',\n'waup isn\\'t a word',\n'what is up',\n'wazzzzzzzzzzzzzzzzzzzzzzzup']\n\nwazup_regex = re.compile(r'.*waz+up.*')\nmatches = [el for el in L if wazup_regex.match(el)]\nprint matches", "Example\nWe have a list of strings and some of these contain names that we want to extract. The names have the format\n0123_FirstName_LastName\nwhere the quantity of numbers at the beginning of the string are variable (e.g. 1_Bob_Smith, 12_Bob_Smith, 123456_Bob_Smith) are all valid).", "L = [\n'123_George_Washington',\n'Blah blah',\n'894542342_Winston_Churchill',\n'More blah blah',\n'String_without_numbers']", "Don't worry if the following regex looks cryptic, it will soon be broken down.", "p = re.compile(r'\\d+_([A-Z,a-z]+)_([A-Z,a-z]+)')\n\nfor el in L:\n m = p.match(el)\n if m:\n print m.groups()", "Exercise 3\nFind all occurences of AGT within a string of DNA where contiguous repeated occurences should be counted only once (e.g. AGTAGTAGT will be counted once and not three times).", "dna = 'AGTAGTACTACAAGTAGTCCAGTCCTTGGGAGTAGTAGTAGTAAGGGCCT'\n\np = re.compile(r'(AGT)+')\nm = p.finditer(dna)\nfor match in m:\n print '(start, stop): {}'.format(match.span())\n print 'matching string: {}'.format(match.group())\n\np.finditer?", "Exercise 4\nA text file contains some important information about a test that has been run. The individual who wrote this file is \ninconsistent with date formats.", "L = [\n'Test 1-2 commencing 2012-12-12 for multiple reads.',\n'Date of birth of individual 803232435345345 is 1983/06/27.',\n'Test 1-2 complete 20130420.']", "Convert all dates to the format YYYYMMDD.\nHints:\n * Use groups ()\n * Use {m, n} where m=n=2 or m=n=4\n * Use ? for the bits between date components\n * You can use either search or match, though in the latter you will need to specify what happens before and after the date (.* maybe)?\n * The second element in the list will present you with issues as there is a number there that may accidentally be captured as a date. 
Use \\D to make sure your date is not surrounded by decimal digits.", "p = re.compile(r'\\D+\\d{4,4}[-/]?\\d{2,2}[-/]?\\d{2,2}\\D')\n\ndate_regex = re.compile(r'\\D(\\d{4,4})[-/]?(\\d{2,2})[-/]?(\\d{2,2})\\D')\nstandard_dates = []\nfor el in L:\n m = date_regex.search(el)\n if m:\n standard_dates.append(''.join(m.groups()))\nprint standard_dates", "Resources\n| Resource | Description |\n| ------------------------------------------ | ----------------------------------------------------------------- |\n| https://docs.python.org/2/howto/regex.html | A great in-depth tutorial from the official Python documentation. |\n| https://www.regex101.com/#python | A useful online tool to quickly test regular expressions. |\n| http://regexcrossword.com/ | A nice way to practice your regular expression skills. |", "text = 'abcd \\e'\n\nprint text\n\nre.compile(r'\\\\')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
janusnic/21v-python
unit_20/parallel_ml/rendered_notebooks/06 - Distributed Model Selection and Assessment.ipynb
mit
[ "Distributed Model Selection and Assessment\nOutline of the session:\n\nIntroduction to IPython.parallel\nSharing Data Between Processes with Memory Mapping\nParallel Grid Search and Model Selection\nParallel Computation of Learning Curves (TODO)\nDistributed Computation on EC2 Spot Instances with StarCluster\n\nMotivation\nWhen doing model evaluations and parameters tuning, many models must be trained independently on the same data. This is an embarrassingly parallel problem but having a copy of the dataset in memory for each process is waste of RAM:\n<img src=\"files/images/grid_search_parameters.png\" style=\"display:inline; width: 49%\" />\n<img src=\"files/images/grid_search_cv_splits.png\" style=\"display:inline; width: 49%\" />\nWhen doing 3 folds cross validation on a 9 parameters grid, a naive implementation could read the data from the disk and load it in memory 27 times. If this happens concurrently (e.g. on a compute node with 32 cores) the RAM might blow up hence breaking the potential linear speed up.\nIPython.parallel, a Primer\nThis section gives a primer on some tools best utilizing computational resources when doing predictive modeling in the Python / NumPy ecosystem namely:\n\n\noptimal usage of available CPUs and cluster nodes with IPython.parallel\n\n\noptimal memory re-use using shared memory between Python processes using numpy.memmap and joblib\n\n\nWhat is so great about IPython.parallel:\n\nSingle node multi-CPUs\nMultiple node multi-CPUs\nInteractive In-memory computing\nIPython notebook integration with %px and %%px magics\nPossibility to interactively connect to individual computing processes to launch interactive debugger (#priceless)\n\nLet's get started:\nLet start an IPython cluster using the ipcluster common (usually run from your operating system console). To make sure that we are not running several clusters on the same host, let's try to shut down any running IPython cluster first:", "!ipcluster stop\n\n!ipcluster start -n=2 --daemon", "Go to the \"Cluster\" tab of the notebook and start a local cluster with 2 engines. Then come back here. We should now be able to use our cluster from our notebook session (or any other Python process running on localhost):", "from IPython.parallel import Client\nclient = Client()\n\nlen(client)", "The %px and %%px magics\nAll the engines of the client can be accessed imperatively using the %px and %%px IPython cell magics:", "%%px\n\nimport os\nimport socket\n\nprint(\"This is running in process with pid {0} on host '{1}'.\".format(\n os.getpid(), socket.gethostname()))", "The content of the __main__ namespace can also be read and written via the %px magic:", "%px a = 1\n\n%px print(a)\n\n%%px\n\na *= 2\nprint(a)", "It is possible to restrict the %px and %%px magic instructions to specific engines:", "%%px --targets=-1\na *= 2\nprint(a)\n\n%px print(a)", "The DirectView objects\nCell magics are very nice to work interactively from the notebook but it's also possible to replicate their behavior programmatically with more flexibility with a DirectView instance. 
A DirectView can be created by slicing the client object:", "all_engines = client[:]\nall_engines", "The namespace of the __main__ module of each running python engine can be accessed in read and write mode as a python dictionary:", "all_engines['a'] = 1\n\nall_engines['a']", "Direct views can also execute the same code in parallel on each engine of the view:", "def my_sum(a, b):\n return a + b\n\nmy_sum_apply_results = all_engines.apply(my_sum, 11, 31)\nmy_sum_apply_results", "The output of the apply method is an asynchronous handle returned immediately without waiting for the end of the computation. To block until the results are ready use:", "my_sum_apply_results.get()", "Here is a more useful example to fetch the network hostname of each engine in the cluster. Let's study it in more detail:", "def hostname():\n \"\"\"Return the name of the host where the function is being called\"\"\"\n import socket\n return socket.gethostname()\n\nhostname_apply_result = all_engines.apply(hostname)", "When doing the above, the hostname function is first defined locally (in the client python process). The DirectView.apply method introspects it, serializes its name and bytecode and ships it to each engine of the cluster where it is reconstructed as a local function on each engine. This function is then called on each engine of the view with the optionally provided arguments.\nIn return, the client gets a python object that serves as a handle to asynchronously fetch the list of the results of the calls:", "hostname_apply_result\n\nhostname_apply_result.get()", "It is also possible to key the results explicitly with the engine ids with the AsyncResult.get_dict method. This is a very simple idiom to fetch metadata on the runtime environment of each engine of the direct view:", "hostnames = hostname_apply_result.get_dict()\nhostnames", "It can be handy to invert this mapping to find one engine id per host in the cluster so as to execute host-specific operations:", "one_engine_by_host = dict((hostname, engine_id) for engine_id, hostname\n in hostnames.items())\none_engine_by_host\n\none_engine_by_host_ids = list(one_engine_by_host.values())\none_engine_by_host_ids\n\none_engine_per_host_view = client[one_engine_by_host_ids]\none_engine_per_host_view", "Trick: you can even use those engine ids to execute shell commands in parallel on each host of the cluster:", "one_engine_by_host.values()\n\n%%px --targets=[1]\n\n!pip install flask", "Note on Importing Modules on Remote Engines\nIn the previous example we put the import socket statement inside the body of the hostname function to make sure that it is available when the rest of the function is executed in the python processes of the remote engines.\nAlternatively it is possible to import the required modules ahead of time on all the engines of a DirectView using a context manager / with syntax:", "with all_engines.sync_imports():\n import numpy", "However this method does not support alternative import syntaxes:\n&gt;&gt;&gt; import numpy as np\n&gt;&gt;&gt; from numpy import linalg\n\nHence the method of importing in the body of the \"applied\" functions is more flexible. Additionally, this does not pollute the __main__ namespace of the engines as it only impacts the local namespace of the function itself.\nExercise:\n\nWrite a function that returns the memory usage of each engine process in the cluster.\nAllocate a largish numpy array of zeros of known size (e.g. 
100MB) on each engine of the cluster.\n\nHints:\nUse the psutil module to collect the runtime info on a specific process or host. For instance to fetch the memory usage of the currently running process in MB:\n&gt;&gt;&gt; import os\n&gt;&gt;&gt; import psutil\n&gt;&gt;&gt; psutil.Process(os.getpid()).get_memory_info().rss / 1e6\n\nTo allocate a numpy array with 1000 zeros stored as 64bit floats you can use:\n&gt;&gt;&gt; import numpy as np\n&gt;&gt;&gt; z = np.zeros(1000, dtype=np.float64)\n\nThe size in bytes of such a numpy array can then be fetched with z.nbytes:\n&gt;&gt;&gt; z.nbytes / 1e6\n0.008", "def get_engines_memory(client):\n def memory_mb():\n import os, psutil\n return psutil.Process(os.getpid()).get_memory_info().rss / 1e6\n \n return client[:].apply(memory_mb).get_dict()\n\nget_engines_memory(client)\n\nsum(get_engines_memory(client).values())\n\n%%px\nimport numpy as np\nz = np.zeros(int(1e7), dtype=np.float64)\nprint(\"Allocated {0}MB on engine.\".format(z.nbytes / 1e6))\n\nget_engines_memory(client)", "Load Balanced View\nLoadBalancedView is an alternative to the DirectView to run one function call at a time on a free engine.", "lv = client.load_balanced_view()\n\ndef slow_square(x):\n import time\n time.sleep(2)\n return x ** 2\n\nresult = lv.apply(slow_square, 4)\n\nresult\n\nresult.ready()\n\nresult.get() # blocking call", "It is possible to spread some tasks among the engines of the LB view by passing a callable and an iterable of task arguments to the LoadBalancedView.map method:", "results = lv.map(slow_square, [0, 1, 2, 3])\nresults\n\nresults.ready()\n\nresults.progress\n\n# results.abort()\n\n# Iteration on AsyncMapResult is blocking\nfor r in results:\n print(r)", "The load balanced view will be used in the following to schedule work on the cluster while being able to monitor progress and occasionally add new computing nodes to the cluster while computing to speed up the processing when using EC2 and StarCluster (see later).\nSharing Read-only Data between Processes on the Same Host with Memmapping\nLet's restart the cluster to kill the existing python processes and restart with a new client instances to be able to monitor the memory usage in details:", "!ipcluster stop\n\n!ipcluster start -n=2 --daemon\n\nfrom IPython.parallel import Client\nclient = Client()\nlen(client)", "The numpy package makes it possible to memory map large contiguous chunks of binary files as shared memory for all the Python processes running on a given host:", "%px import numpy as np", "Creating a numpy.memmap instance with the w+ mode creates a file on the filesystem and zeros its content. Let's do it from the first engine process or our current IPython cluster:", "%%px --targets=-1\n\n# Cleanup any existing file from past session (necessary for windows)\nimport os\nif os.path.exists('small.mmap'):\n os.unlink('small.mmap')\n\nmm_w = np.memmap('small.mmap', shape=10, dtype=np.float32, mode='w+')\nprint(mm_w)", "Assuming the notebook process was launched with:\ncd notebooks\nipython notebook\n\nand the cluster was launched from the ipython notebook UI, the engines will have a the same current working directory as the notebook process, hence we can find the small.mmap file the current folder:", "ls -lh small.mmap", "This binary file can then be mapped as a new numpy array by all the engines having access to the same filesystem. 
The mode='r+' opens this shared memory area in read-write mode:", "%%px\n\nmm_r = np.memmap('small.mmap', dtype=np.float32, mode='r+')\nprint(mm_r)\n\n%%px --targets=-1\n\nmm_w[0] = 42\nprint(mm_w)\nprint(mm_r)\n\n%px print(mm_r)", "Memory mapped arrays created with mode='r+' can be modified and the modifications are shared with all the engines:", "%%px --targets=1\n\nmm_r[1] = 43", "%%px\nprint(mm_r)", "Be careful though: there is no builtin read or write lock available on such datastructures, so it's better to avoid concurrent read & write operations on the same array segments unless the engine operations are made to cooperate with some synchronization or scheduling orchestrator.\nMemmap arrays generally behave very much like regular in-memory numpy arrays:", "%%px\nprint(\"sum={:.3}, mean={:.3}, std={:.3}\".format(\n float(mm_r.sum()), np.mean(mm_r), np.std(mm_r)))", "Before allocating more data in memory on the cluster let us define a couple of utility functions from the previous exercise (and more) to monitor what is used by which engine and what is still free on the cluster as a whole:", "def get_engines_memory(client):\n \"\"\"Gather the memory allocated by each engine in MB\"\"\"\n def memory_mb():\n import os\n import psutil\n return psutil.Process(os.getpid()).get_memory_info().rss / 1e6\n \n return client[:].apply(memory_mb).get_dict()\n\ndef get_host_free_memory(client):\n \"\"\"Free memory on each host of the cluster in MB.\"\"\"\n all_engines = client[:]\n def hostname():\n import socket\n return socket.gethostname()\n \n hostnames = all_engines.apply(hostname).get_dict()\n one_engine_per_host = dict((hostname, engine_id)\n for engine_id, hostname\n in hostnames.items())\n\n def host_free_memory():\n import psutil\n return psutil.virtual_memory().free / 1e6\n \n \n one_engine_per_host_ids = list(one_engine_per_host.values())\n host_mem = client[one_engine_per_host_ids].apply(\n host_free_memory).get_dict()\n \n return dict((hostnames[eid], m) for eid, m in host_mem.items())\n\nget_engines_memory(client)\n\nget_host_free_memory(client)", "Let's allocate an 80MB memmap array in the first engine and load it in read-write mode in all the engines:", "%%px --targets=-1\n\n# Cleanup any existing file from past session (necessary for windows)\nimport os\nif os.path.exists('big.mmap'):\n os.unlink('big.mmap')\n\nnp.memmap('big.mmap', shape=10 * int(1e6), dtype=np.float64, mode='w+')\n\nls -lh big.mmap\n\nget_host_free_memory(client)", "No significant memory was used in this operation as we just asked the OS to allocate the buffer on the hard drive and just maintain a virtual memory area as a cheap reference to this buffer.\nLet's open new references to the same buffer from all the engines at once:", "%px %time big_mmap = np.memmap('big.mmap', dtype=np.float64, mode='r+')\n\n%px big_mmap\n\nget_host_free_memory(client)", "No physical memory was allocated in the operation as it just took a couple of ms to do so. 
This is also confirmed by the engines' process stats:", "get_engines_memory(client)", "Let's trigger an actual load of the data from the drive into the in-memory disk cache of the OS, this can take some time depending on the speed of the hard drive (on the order of 100MB/s to 300MB/s hence 3s to 8s for this dataset):", "%%px --targets=-1\n\n%time np.sum(big_mmap)\n\nget_engines_memory(client)\n\nget_host_free_memory(client)", "We can see that the first engine now has access to the data in memory and the free memory on the host has decreased by the same amount.\nWe can now access this data from all the engines at once much faster as the disk will no longer be used: the shared memory buffer will instead be accessed directly by all the engines:", "%px %time np.sum(big_mmap)\n\nget_engines_memory(client)\n\nget_host_free_memory(client)", "So it seems that the engines have loaded a whole copy of the data but this is actually not the case as the total amount of free memory was not impacted by the parallel access to the shared buffer. Furthermore, once the data has been preloaded from the hard drive using one process, all of the other processes on the same host can access it almost instantly, saving a lot of IO wait.\nThis strategy makes it very interesting to load the readonly datasets of machine learning problems, especially when the same data is reused over and over by concurrent processes as can be the case when doing learning curves analysis or grid search.\nMemmaping Nested Numpy-based Data Structures with Joblib\njoblib is a utility library included in the sklearn package. Among other things it provides tools to serialize objects that comprise large numpy arrays and reload them as memmap backed datastructures.\nTo demonstrate it, let's create an arbitrary python datastructure involving numpy arrays:", "import numpy as np\n\nclass MyDataStructure(object):\n \n def __init__(self, shape):\n self.float_zeros = np.zeros(shape, dtype=np.float32)\n self.integer_ones = np.ones(shape, dtype=np.int64)\n \ndata_structure = MyDataStructure((3, 4))\ndata_structure.float_zeros, data_structure.integer_ones", "We can now persist this datastructure to disk:", "from sklearn.externals import joblib\n\njoblib.dump(data_structure, 'data_structure.pkl')\n\n!ls -l data_structure*", "A memmapped copy of this datastructure can then be loaded:", "memmaped_data_structure = joblib.load('data_structure.pkl', mmap_mode='r+')\nmemmaped_data_structure.float_zeros, memmaped_data_structure.integer_ones", "Memmaping CV Splits for Multiprocess Dataset Sharing\nWe can leverage the previous tools to build a utility function that extracts Cross Validation splits ahead of time to persist them on the hard drive in a format suitable for memmaping by IPython engine processes.", "from sklearn.externals import joblib\nfrom sklearn.cross_validation import ShuffleSplit\nimport os\n\ndef persist_cv_splits(X, y, n_cv_iter=5, name='data',\n suffix=\"_cv_%03d.pkl\", test_size=0.25, random_state=None):\n \"\"\"Materialize randomized train test splits of a dataset.\"\"\"\n\n cv = ShuffleSplit(X.shape[0], n_iter=n_cv_iter,\n test_size=test_size, random_state=random_state)\n cv_split_filenames = []\n \n for i, (train, test) in enumerate(cv):\n cv_fold = (X[train], y[train], X[test], y[test])\n cv_split_filename = name + suffix % i\n cv_split_filename = os.path.abspath(cv_split_filename)\n joblib.dump(cv_fold, cv_split_filename)\n cv_split_filenames.append(cv_split_filename)\n \n return cv_split_filenames", "Let's try it on the digits dataset, we 
can run this from the notebook:", "from sklearn.datasets import load_digits\n\ndigits = load_digits()\ndigits_split_filenames = persist_cv_splits(digits.data, digits.target,\n name='digits', random_state=42)\ndigits_split_filenames\n\nls -lh digits*", "Each of the persisted CV splits can then be loaded back again using memmaping:", "X_train, y_train, X_test, y_test = joblib.load(\n 'digits_cv_002.pkl', mmap_mode='r+')\n\nX_train\n\ny_train", "Parallel Model Selection and Grid Search\nLet's leverage IPython.parallel and the Memory Mapping features of joblib to write a custom grid search utility that runs on the cluster in a memory-efficient manner.\nAssume that we want to reproduce the grid search from the previous session:", "import numpy as np\nfrom pprint import pprint\n\nsvc_params = {\n 'C': np.logspace(-1, 2, 4),\n 'gamma': np.logspace(-4, 0, 5),\n}\npprint(svc_params)", "GridSearchCV internally uses the following ParameterGrid utility iterator class to build the possible combinations of parameters:", "from sklearn.grid_search import ParameterGrid\n\nlist(ParameterGrid(svc_params))", "Let's write a function to load the data from a CV split file and compute the validation score for a given parameter set and model:", "def compute_evaluation(cv_split_filename, model, params):\n \"\"\"Function executed by a worker to evaluate a model on a CV split\"\"\"\n # All module imports should be executed in the worker namespace\n from sklearn.externals import joblib\n\n X_train, y_train, X_validation, y_validation = joblib.load(\n cv_split_filename, mmap_mode='c')\n \n model.set_params(**params)\n model.fit(X_train, y_train)\n validation_score = model.score(X_validation, y_validation)\n return validation_score\n\ndef grid_search(lb_view, model, cv_split_filenames, param_grid):\n \"\"\"Launch all grid search evaluation tasks.\"\"\"\n all_tasks = []\n all_parameters = list(ParameterGrid(param_grid))\n \n for i, params in enumerate(all_parameters):\n task_for_params = []\n \n for j, cv_split_filename in enumerate(cv_split_filenames): \n t = lb_view.apply(\n compute_evaluation, cv_split_filename, model, params)\n task_for_params.append(t) \n \n all_tasks.append(task_for_params)\n \n return all_parameters, all_tasks", "Let's try it on the digits dataset that we split previously as memmapable files:", "from sklearn.svm import SVC\nfrom IPython.parallel import Client\n\nclient = Client()\nlb_view = client.load_balanced_view()\nmodel = SVC()\nsvc_params = {\n 'C': np.logspace(-1, 2, 4),\n 'gamma': np.logspace(-4, 0, 5),\n}\n\nall_parameters, all_tasks = grid_search(\n lb_view, model, digits_split_filenames, svc_params)", "The grid_search function uses the asynchronous API of the LoadBalancedView, so we can monitor the progress:", "import time\ntime.sleep(5)\n\ndef progress(tasks):\n return np.mean([task.ready() for task_group in tasks\n for task in task_group])\n\nprint(\"Tasks completed: {0}%\".format(100 * progress(all_tasks)))", "Even better, we can introspect the completed tasks to find the best parameter set so far:", "def find_bests(all_parameters, all_tasks, n_top=5):\n \"\"\"Compute the mean score of the completed tasks\"\"\"\n mean_scores = []\n \n for param, task_group in zip(all_parameters, all_tasks):\n scores = [t.get() for t in task_group if t.ready()]\n if len(scores) == 0:\n continue\n mean_scores.append((np.mean(scores), param))\n \n return sorted(mean_scores, reverse=True, key=lambda x: x[0])[:n_top]\n\nfrom pprint import pprint\n\nprint(\"Tasks completed: {0}%\".format(100 * 
progress(all_tasks)))\npprint(find_bests(all_parameters, all_tasks))\n\n[t.wait() for tasks in all_tasks for t in tasks]\nprint(\"Tasks completed: {0}%\".format(100 * progress(all_tasks)))\npprint(find_bests(all_parameters, all_tasks))", "Optimization Trick: Truncated Randomized Search\nIt is often wasteful to search all the possible combinations of parameters as done previously, especially if the number of parameters is large (e.g. more than 3).\nTo speed up the discovery of good parameters combinations, it is often faster to randomized the search order and allocate a budget of evaluations, e.g. 10 or 100 combinations.\nSee this JMLR paper by James Bergstra for an empirical analysis of the problem. The interested reader should also have a look at hyperopt that further refines this parameter search method using meta-optimizers.\nRandomized Parameter Search has just been implemented in the master branch of scikit-learn be part of the 0.14 release.\nA More Complete Parallel Model Selection and Assessment Example", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Some nice default configuration for plots\nplt.rcParams['figure.figsize'] = 10, 7.5\nplt.rcParams['axes.grid'] = True\nplt.gray();\n\nlb_view = client.load_balanced_view()\nmodel = SVC()\n\nimport sys, imp\nfrom collections import OrderedDict\nsys.path.append('..')\nimport model_selection, mmap_utils\nimp.reload(model_selection), imp.reload(mmap_utils)\n\nlb_view.abort()\n\nsvc_params = OrderedDict([\n ('gamma', np.logspace(-4, 0, 5)),\n ('C', np.logspace(-1, 2, 4)),\n])\n\nsearch = model_selection.RandomizedGridSeach(lb_view)\nsearch.launch_for_splits(model, svc_params, digits_split_filenames)\n\ntime.sleep(5)\n\nprint(search.report())\n\ntime.sleep(5)\n\nprint(search.report())\nsearch.boxplot_parameters(display_train=False)\n\n#search.abort()", "Distributing the Computation on EC2 Spot Instances with StarCluster\nInstallation\nTo provision a cheap transient compute cluster on Amazon EC2, the first step is to register on EC2 with a credit card and put your EC2 credentials as environment variables. 
For instance under Linux / OSX:\n[laptop]% export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXX\n[laptop]% export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n\nYou can put those exports in your ~/.bashrc to automatically get those credentials loaded in new shell sessions.\nThen proceed to the installation of StarCluster it-self:\n[laptop]% pip install StarCluster\n\nConfiguration\nLet's run the help command a first time and create a template configuration file:\n[laptop]% starcluster help\nStarCluster - (http://star.mit.edu/cluster)\nSoftware Tools for Academics and Researchers (STAR)\nPlease submit bug reports to [email protected]\n\ncli.py:87 - ERROR - config file /home/user/.starcluster/config does not exist\n\nOptions:\n--------\n[1] Show the StarCluster config template\n[2] Write config template to /home/user/.starcluster/config\n[q] Quit\n\nPlease enter your selection:\n2\n\nand create a password-less ssh key that will be dedicated to this transient cluster:\n[laptop]% starcluster createkey mykey -o ~/.ssh/mykey.rsa\n\nYou can now edit the file /home/user/.starcluster/config and remplace its content with the following sample configuration:\n[global]\nDEFAULT_TEMPLATE=iptemplate\nREFRESH_INTERVAL=5\n\n[key mykey]\nKEY_LOCATION=~/.ssh/mykey.rsa\n\n[plugin ipcluster]\nSETUP_CLASS = starcluster.plugins.ipcluster.IPCluster\nENABLE_NOTEBOOK = True\nNOTEBOOK_PASSWD = aaaa\n\n[plugin ipclusterstop]\nSETUP_CLASS = starcluster.plugins.ipcluster.IPClusterStop\n\n[plugin ipclusterrestart]\nSETUP_CLASS = starcluster.plugins.ipcluster.IPClusterRestartEngines\n\n[plugin pypackages]\nsetup_class = starcluster.plugins.pypkginstaller.PyPkgInstaller\npackages = scikit-learn, psutil\n\n# Base configuration for IPython.parallel cluster\n[cluster iptemplate]\nKEYNAME = mykey\nCLUSTER_SIZE = 1\nCLUSTER_USER = ipuser\nCLUSTER_SHELL = bash\nREGION = us-east-1\nNODE_IMAGE_ID = ami-5b3fb632 # REGION and NODE_IMAGE_ID go in pair\nNODE_INSTANCE_TYPE = c1.xlarge # 8 CPUs\nDISABLE_QUEUE = True # We don't need SGE, faster cluster startup\nPLUGINS = pypackages, ipcluster\n\nLaunching a Cluster\nStart a new cluster using the myclustertemplate section of the ~/.startcluster/config file:\n[laptop]% starcluster start -c iptemplate -s 3 -b 0.5 mycluster\n\n\n\nThe -s option makes it possible to select the number of EC2 instance to start.\n\n\nThe -b option makes it possible to provision non-master instances on the Spot Instance market\n\n\nTo also provision the master node on the Spot Instance market you can further add the --force-spot-master flag to the previous commandline.\n\n\nProvisioning Spot Instances is typically up to 5x cheaper than regular instances for largish instance types such as c1.xlarge but you run the risk of having your instances shut down if the price goes up. Also provisioning new instances on the Spot market can be slower: often a couple of minutes instead of 30s for On Demand instances.\n\n\nYou can access the price history of spot instances of a specific region with:\n[laptop]% starcluster -r us-west-1 spothistory c1.xlarge\nStarCluster - (http://star.mit.edu/cluster) (v. 
0.9999)\nSoftware Tools for Academics and Researchers (STAR)\nPlease submit bug reports to [email protected]\n\n&gt;&gt;&gt; Current price: $0.11\n&gt;&gt;&gt; Max price: $0.75\n&gt;&gt;&gt; Average price: $0.13\n\n\n\nConnect to the master node via ssh:\n[laptop]% starcluster sshmaster -A -u ipuser\n\n\n\nThe -A flag makes it possible to use your local ssh agent to manage your keys: it makes it possible to git clone / git push github repositories from the master node as you would from your local folder.\n\n\nThe StarCluster AMI comes with tmux installed by default.\n\n\nIt is possible to ssh into other cluster nodes from the master using local DNS aliases such as:\n[myuser@master]% ssh node001\n\nDynamically Resizing the Cluster\nWhen using the LoadBalancedView API of IPython.parallel.Client it is possible to dynamically grow the cluster to shorten the duration of the processing of a queue of tasks without having to restart from scratch.\nThis can be achieved using the addnode command, for instance to add 3 more nodes using a $0.50 bid price on the Spot Instance market:\n[laptop]% starcluster addnode -s 3 -b 0.5 mycluster\n\nEach node will automatically run the IPCluster plugin and register new IPEngine processes to the existing IPController process running on master.\nIt is also possible to terminate individual running nodes of the cluster with the removenode command but this will kill any task running on that node and IPython.parallel will not restart the failed task automatically.\nTerminating a Cluster\nOnce you are done with your computation, don't forget to shut down the whole cluster and EBS volume so as to only pay for the resources you used.\nBefore doing so, don't forget to back up any result files you would like to keep, by either pushing them to the S3 storage service (recommended for large files that you would want to reuse on EC2 later) or fetching them locally using the starcluster get command.\nThe cluster shutdown itself can be achieved with a single command:\n[laptop]% starcluster terminate mycluster\n\nAlternatively you can also keep your data by preserving the EBS volume attached to the master node, by replacing the terminate command with the stop command:\n[laptop]% starcluster stop mycluster\n\nYou can then later restart the same cluster again with the start command to automatically remount the EBS volume." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rishuatgithub/MLPy
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/00-Working-with-Text-Files.ipynb
apache-2.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nWorking with Text Files\nIn this section we'll cover\n * Working with f-strings (formatted string literals) to format printed text\n * Working with Files - opening, reading, writing and appending text files\nFormatted String Literals (f-strings)\nIntroduced in Python 3.6, <strong>f-strings</strong> offer several benefits over the older .format() string method. <br>For one, you can bring outside variables immediately into to the string rather than pass them through as keyword arguments:", "name = 'Fred'\n\n# Using the old .format() method:\nprint('His name is {var}.'.format(var=name))\n\n# Using f-strings:\nprint(f'His name is {name}.')", "Pass !r to get the <strong>string representation</strong>:", "print(f'His name is {name!r}')", "Be careful not to let quotation marks in the replacement fields conflict with the quoting used in the outer string:", "d = {'a':123,'b':456}\n\nprint(f'Address: {d['a']} Main Street')", "Instead, use different styles of quotation marks:", "d = {'a':123,'b':456}\n\nprint(f\"Address: {d['a']} Main Street\")", "Minimum Widths, Alignment and Padding\nYou can pass arguments inside a nested set of curly braces to set a minimum width for the field, the alignment and even padding characters.", "library = [('Author', 'Topic', 'Pages'), ('Twain', 'Rafting', 601), ('Feynman', 'Physics', 95), ('Hamilton', 'Mythology', 144)]\n\nfor book in library:\n print(f'{book[0]:{10}} {book[1]:{8}} {book[2]:{7}}')", "Here the first three lines align, except Pages follows a default left-alignment while numbers are right-aligned. Also, the fourth line's page number is pushed to the right as Mythology exceeds the minimum field width of 8. When setting minimum field widths make sure to take the longest item into account.\nTo set the alignment, use the character &lt; for left-align, ^ for center, &gt; for right.<br>\nTo set padding, precede the alignment character with the padding character (- and . are common choices).\nLet's make some adjustments:", "for book in library:\n print(f'{book[0]:{10}} {book[1]:{10}} {book[2]:.>{7}}') # here .> was added", "Date Formatting", "from datetime import datetime\n\ntoday = datetime(year=2018, month=1, day=27)\n\nprint(f'{today:%B %d, %Y}')", "For more info on formatted string literals visit https://docs.python.org/3/reference/lexical_analysis.html#f-strings\n\nFiles\nPython uses file objects to interact with external files on your computer. These file objects can be any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note: You will probably need to install certain libraries or modules to interact with those various file types, but they are easily available. (We will cover downloading modules later on in the course).\nPython has a built-in open function that allows us to open and play with basic file types. First we will need a file though. We're going to use some IPython magic to create a text file!\nCreating a File with IPython\nThis function is specific to jupyter notebooks! Alternatively, quickly create a simple .txt file with Sublime text editor.", "%%writefile test.txt\nHello, this is a quick test file.\nThis is the second line of the file.", "Python Opening a File\nKnow Your File's Location\nIt's easy to get an error on this step:", "myfile = open('whoops.txt')", "To avoid this error, make sure your .txt file is saved in the same location as your notebook. 
To check your notebook location, use pwd:", "pwd", "Alternatively, to grab files from any location on your computer, simply pass in the entire file path. \nFor Windows you need to use double \\ so python doesn't treat the second \\ as an escape character, a file path is in the form:\nmyfile = open(\"C:\\\\Users\\\\YourUserName\\\\Home\\\\Folder\\\\myfile.txt\")\n\nFor MacOS and Linux you use slashes in the opposite direction:\nmyfile = open(\"/Users/YourUserName/Folder/myfile.txt\")", "# Open the text.txt file we created earlier\nmy_file = open('test.txt')\n\nmy_file", "my_file is now an open file object held in memory. We'll perform some reading and writing exercises, and then we have to close the file to free up memory.\n.read() and .seek()", "# We can now read the file\nmy_file.read()\n\n# But what happens if we try to read it again?\nmy_file.read()", "This happens because you can imagine the reading \"cursor\" is at the end of the file after having read it. So there is nothing left to read. We can reset the \"cursor\" like this:", "# Seek to the start of file (index 0)\nmy_file.seek(0)\n\n# Now read again\nmy_file.read()", ".readlines()\nYou can read a file line by line using the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course.", "# Readlines returns a list of the lines in the file\nmy_file.seek(0)\nmy_file.readlines()", "When you have finished using a file, it is always good practice to close it.", "my_file.close()", "Writing to a File\nBy default, the open() function will only allow us to read the file. We need to pass the argument 'w' to write over the file. For example:", "# Add a second argument to the function, 'w' which stands for write.\n# Passing 'w+' lets us read and write to the file\n\nmy_file = open('test.txt','w+')", "<div class=\"alert alert-danger\" style=\"margin: 20px\">**Use caution!**<br>\nOpening a file with 'w' or 'w+' *truncates the original*, meaning that anything that was in the original file **is deleted**!</div>", "# Write to the file\nmy_file.write('This is a new first line')\n\n# Read the file\nmy_file.seek(0)\nmy_file.read()\n\nmy_file.close() # always do this when you're done with a file", "Appending to a File\nPassing the argument 'a' opens the file and puts the pointer at the end, so anything written is appended. Like 'w+', 'a+' lets us read and write to a file. If the file does not exist, one will be created.", "my_file = open('test.txt','a+')\nmy_file.write('\\nThis line is being appended to test.txt')\nmy_file.write('\\nAnd another line here.')\n\nmy_file.seek(0)\nprint(my_file.read())\n\nmy_file.close()", "Appending with %%writefile\nJupyter notebook users can do the same thing using IPython cell magic:", "%%writefile -a test.txt\n\nThis is more text being appended to test.txt\nAnd another line here.", "Add a blank space if you want the first line to begin on its own line, as Jupyter won't recognize escape sequences like \\n\nAliases and Context Managers\nYou can assign temporary variable names as aliases, and manage the opening and closing of files automatically using a context manager:", "with open('test.txt','r') as txt:\n first_line = txt.readlines()[0]\n \nprint(first_line)", "Note that the with ... 
as ...: context manager automatically closed test.txt after assigning the first line of text to first_line:", "txt.read()", "Iterating through a File", "with open('test.txt','r') as txt:\n for line in txt:\n print(line, end='') # the end='' argument removes extra linebreaks", "Great! Now you should be familiar with formatted string literals and working with text files.\nNext up: Working with PDF Text" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
linglaiyao1314/maths-with-python
04-basic-plotting.ipynb
mit
[ "Plotting\nThere are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.", "from matplotlib import pyplot\n%matplotlib inline", "The command %matplotlib inline is not a Python command, but an IPython command. When using the console, or the notebook, it makes the plots appear inline. You do not want to use this in a plain Python code.", "from math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x_point = 0.01*i\n x.append(x_point)\n y.append(sin(pi*x_point)**2)\n\npyplot.plot(x, y)\npyplot.show()", "We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(&lt;filename&gt;)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.\nThis plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:", "from math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x_point = 0.01*i\n x.append(x_point)\n y.append(sin(pi*x_point)**2)\n\npyplot.plot(x, y, marker='+', markersize=8, linestyle=':', \n linewidth=3, color='b', label=r'$\\sin^2(\\pi x)$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic plot')\npyplot.show()", "Whilst most of the commands are self-explanatory, a note should be made of the strings line r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be \"raw\": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \\pi and \\sin. Most basic symbols can be easily guessed (eg \\theta or \\int), but there are useful lists of symbols, and a reverse search site available. 
We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms.\nBy combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly.\nHere are some more examples:", "from math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x_point = 1.0 + 0.01*i\n x.append(x_point)\n y1.append(exp(sin(pi*x_point)))\n y2.append(log(pi+x_point*sin(x_point)))\n\npyplot.loglog(x, y1, linestyle='--', linewidth=4, \n color='k', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.loglog(x, y2, linestyle='-.', linewidth=4, \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic logarithmic plot')\npyplot.show()\n\nfrom math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x_point = 1.0 + 0.01*i\n x.append(x_point)\n y1.append(exp(sin(pi*x_point)))\n y2.append(log(pi+x_point*sin(x_point)))\n\npyplot.semilogy(x, y1, linestyle='None', marker='o', \n color='g', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.semilogy(x, y2, linestyle='None', marker='^', \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A different logarithmic plot')\npyplot.show()", "We will look at more complex plots later, but the matplotlib documentation contains a lot of details, and the gallery contains a lot of examples that can be adapted to fit. There is also an extremely useful document as part of Johansson's lectures on scientific Python.\nExercise: Logistic map\nThe logistic map builds a sequence of numbers ${ x_n }$ using the relation\n\\begin{equation}\n x_{n+1} = r x_n \\left( 1 - x_n \\right),\n\\end{equation}\nwhere $0 \\le x_0 \\le 1$.\nExercise 1\nWrite a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).\nExercise 2\nFix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases.\nWhat does this suggest about the long-term behaviour of the sequence?\nExercise 3\nFix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).\nExercise 4\nFor iterative maps such as the logistic map, one of three things can occur:\n\nThe sequence settles down to a fixed point.\nThe sequence rotates through a finite number of values. This is called a limit cycle.\nThe sequence generates an infinite number of values. This is called deterministic chaos.\n\nUsing just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.\nExercise: Mandelbrot\nThe Mandelbrot set is also generated from a sequence, ${ z_n }$, using the relation\n\\begin{equation}\n z_{n+1} = z_n^2 + c, \\qquad z_0 = 0.\n\\end{equation}\nThe members of the sequence, and the constant $c$, are all complex. The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. 
In reality, checking the first 100 iterations is sufficient.\nNote: the Python notation for a complex number $x + \\text{i} y$ is x + yj: that is, j is used to indicate $\\sqrt{-1}$. If you know the values of x and y then x + yj constructs a complex number; if they are stored in variables you can use complex(x, y).\nExercise 1\nWrite a function that checks if the point $c$ is in the Mandelbrot set.\nExercise 2\nCheck the points $c=0$ and $c=\\pm 2 \\pm 2 \\text{i}$ and ensure they do what you expect. (What should you expect?)\nExercise 3\nWrite a function that, given $N$\n\ngenerates an $N \\times N$ grid spanning $c = x + \\text{i} y$, for $-2 \\le x \\le 2$ and $-2 \\le y \\le 2$;\nreturns an $N\\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.\n\nExercise 4\nUsing the function imshow from matplotlib, plot the resulting array for a $100 \\times 100$ array to make sure you see the expected shape.\nExercise 5\nModify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.\nExercise 6\nTry some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
robertoalotufo/ia898
master/tutorial_contraste_iterativo_2.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Realce-de-Contraste-Interativo-utilizando-Janela-e-Nรญvel\" data-toc-modified-id=\"Realce-de-Contraste-Interativo-utilizando-Janela-e-Nรญvel-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Realce de Contraste Interativo utilizando Janela e Nรญvel</a></div><div class=\"lev2 toc-item\"><a href=\"#Equaรงรฃo-da-funรงรฃo-de-realce-de-contraste\" data-toc-modified-id=\"Equaรงรฃo-da-funรงรฃo-de-realce-de-contraste-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Equaรงรฃo da funรงรฃo de realce de contraste</a></div><div class=\"lev2 toc-item\"><a href=\"#Implementaรงรฃo-da-Funรงรฃo-de-contraste-Window-&amp;-Level\" data-toc-modified-id=\"Implementaรงรฃo-da-Funรงรฃo-de-contraste-Window-&amp;-Level-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Implementaรงรฃo da Funรงรฃo de contraste Window &amp; Level</a></div><div class=\"lev2 toc-item\"><a href=\"#Imagem-original-e-seu-histograma\" data-toc-modified-id=\"Imagem-original-e-seu-histograma-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Imagem original e seu histograma</a></div><div class=\"lev2 toc-item\"><a href=\"#Calculando-e-visualizando-a-Transforma-de-Contraste-Window-&amp;-Level\" data-toc-modified-id=\"Calculando-e-visualizando-a-Transforma-de-Contraste-Window-&amp;-Level-14\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Calculando e visualizando a Transforma de Contraste Window &amp; Level</a></div><div class=\"lev2 toc-item\"><a href=\"#Aplicando-a-Transformaรงรฃo-de-Contraste\" data-toc-modified-id=\"Aplicando-a-Transformaรงรฃo-de-Contraste-15\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Aplicando a Transformaรงรฃo de Contraste</a></div><div class=\"lev2 toc-item\"><a href=\"#Visualizando-o-histograma-da-imagem-com-realce-de-contraste\" data-toc-modified-id=\"Visualizando-o-histograma-da-imagem-com-realce-de-contraste-16\"><span class=\"toc-item-num\">1.6&nbsp;&nbsp;</span>Visualizando o histograma da imagem com realce de contraste</a></div><div class=\"lev2 toc-item\"><a href=\"#Links-Interessantes\" data-toc-modified-id=\"Links-Interessantes-17\"><span class=\"toc-item-num\">1.7&nbsp;&nbsp;</span>Links Interessantes</a></div>\n\n# Realce de Contraste Interativo utilizando Janela e Nรญvel\n\n\nEm equipamentos interativos de visualizaรงรฃo de imagens, รฉ usual ter uma opรงรฃo interativa\ndenominada \"Window & Level contrast enhancement\" que permite com auxรญlio do mouse mudar\no contraste da imagem de forma seletiva. A transformaรงรฃo de intensidade que รฉ utilizada\nรฉ uma transformaรงรฃo linear aplicada na faixa de valores mรญnimo e mรกximo de nรญvel de cinza\nem que se deseja aumentar o contraste. Entretanto, em vez de alterar este dois parรขmetros,\nos dois parรขmetros alterados sรฃo: *Window* que รฉ a faixa entre o mรญnimo e mรกximo e o *Level*\nque รฉ o nรญvel de cinza do centro da faixa. A vantagem desta forma de parametrizar รฉ que\nรฉ possรญvel por exemplo deixar uma janela fixa e alterar o nรญvel de cinza do centro, dando um\nmaior controle ao usuรกrio.\n\nUma demonstraรงรฃo interativa pode ser vista em\n\n- [adessowiki:ws_demo2](http://adessowiki.fee.unicamp.br/adesso/wiki/Demo/ws_demo2/view/?usecache=0) \n\nque foi feita\nem javascript pelo Luis Tavares durante o seu mestrado na FEEC-Unicamp. 
Experimente\nesta ferramenta interativa e coloque a Janela em 5 e varie o Nรญvel para a parte mais escura\ne verifique que รฉ possรญvel verificar a distribuiรงรฃo dos pixels do ar, que estรก ao redor do\nsujeito.\n\nA demonstraรงรฃo a seguir รฉ feita neste notebook, porรฉm nรฃo de forma nรฃo interativa.\n\n## Equaรงรฃo da funรงรฃo de realce de contraste\n\nA equaรงรฃo da Transformaรงรฃo de contraste Window & Level รฉ dada pela seguinte equaรงรฃo:\n\n$$ \\begin{matrix}\n T(p) &=& \\lfloor\\frac{255 (p - P_{min})}{P_{max} - P_{min}}\\rfloor\\\\\n \\text{onde}& &\\\\\n P_{min} &=& \\max(0, L - \\lfloor\\frac{W}{2}\\rfloor)\\\\\n P_{max} &=& \\min(L + \\lfloor\\frac{W}{2}\\rfloor, 255) \n \\end{matrix}\n$$ \n\n## Implementaรงรฃo da Funรงรฃo de contraste Window & Level\n\nComo todo problema, existem inรบmeras maneiras de se implementar em NumPy a funรงรฃo de contraste Janela e Nรญvel.\nA implementaรงรฃo a seguir faz uso da funรงรฃo ``linspace`` que gera a parte linear da funรงรฃo:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport sys,os\nia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')\nif ia898path not in sys.path:\n sys.path.append(ia898path)\nimport ia898.src as ia\n\n \ndef TWL(L,W):\n Pmin = max(0,L-W//2)\n Pmax = min(255,L+W//2)\n\n T = np.zeros(256, np.uint8)\n T[Pmin:Pmax+1] = np.floor(np.linspace(0, 255, num=(Pmax - Pmin + 1)))\n T[Pmax:] = 255\n return T\n\ndef WL(f,L,W):\n T = TWL(L,W)\n return T[f]", "Imagem original e seu histograma", " # Imagem original\n f = mpimg.imread('../data/cameraman.tif')\n ia.adshow(f,'Imagem Original')\n \n h = ia.histogram(f)\n plt.plot(h), plt.title('Histograma da Imagem Original')", "Calculando e visualizando a Transforma de Contraste Window & Level", "W = 30\nL = 15\nTw = TWL(L,W)\nplt.plot(Tw)\n#plt.ylabel('Output intensity')\n#plt.xlabel('Input intensity')\nplt.title('Transformada de intensidade W=%d L=%d' % (W,L))", "Aplicando a Transformaรงรฃo de Contraste\nObserve que esta transformaรงรฃo amplia o contraste ao redor do nรญvel de\ncinza 15, tornando os detalhes do paletรณ do \"cameraman\" bem visรญveis:", "g = WL(f,L,W)\nia.adshow(g, 'Imagem com contraste ajustado, L = %d, W = %d' %(L,W))", "Visualizando o histograma da imagem com realce de contraste\nObserve que quanto menor a largura da janela, mais pixels terรฃo valores 0 e 255.\nQuando de visualiza seu histograma, aparecerรก um grande pico nestes dois valores\nque sรฃo o extremo do histograma. Para evitar que estes valores entrem no plot,\nfaz-se um fatiamento do histograma do segundo pixel ao penรบltimo: h[1:-1]. A\nseguir mostramos o histograma contendo os valores 0 e 255 e depois nรฃo\nutilizando estes valores:", "hg = ia.histogram(g)\nplt.figure(1)\nplt.plot(hg),plt.title('Histograma da Imagens realรงada')\nplt.show()\nplt.figure(2)\nplt.plot(hg[1:-1]),plt.title('Idem, porรฉm sem valores 0 e 255')\nplt.show()", "Links Interessantes\n\nDemonstraรงรฃo interativa" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gogartom/caffe-textmaps
examples/classification.ipynb
mit
[ "Classifying ImageNet: the instant Caffe way\nCaffe has a Python interface, pycaffe, with a caffe.Net interface for models. There are both Python and MATLAB interfaces. While this example uses the off-the-shelf Python caffe.Classifier interface there is also a MATLAB example at matlab/caffe/matcaffe_demo.m.\nBefore we begin, you must compile Caffe. You should add the Caffe module to your PYTHONPATH although this example includes it automatically. If you haven't yet done so, please refer to the installation instructions. This example uses our pre-trained CaffeNet model, an ILSVRC12 image classifier. You can download it by running ./scripts/download_model_binary.py models/bvlc_reference_caffenet or let the first step of this example download it for you.\nReady? Let's start.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Make sure that caffe is on the python path:\ncaffe_root = '../' # this file is expected to be in {caffe_root}/examples\nimport sys\nsys.path.insert(0, caffe_root + 'python')\n\nimport caffe\n\n# Set the right path to your model definition file, pretrained model weights,\n# and the image you would like to classify.\nMODEL_FILE = '../models/bvlc_reference_caffenet/deploy.prototxt'\nPRETRAINED = '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'\nIMAGE_FILE = 'images/cat.jpg'\n\nimport os\nif not os.path.isfile(PRETRAINED):\n print(\"Downloading pre-trained CaffeNet model...\")\n !../scripts/download_model_binary.py ../models/bvlc_reference_caffenet", "Loading a network is easy. caffe.Classifier takes care of everything. Note the arguments for configuring input preprocessing: mean subtraction switched on by giving a mean array, input channel swapping takes care of mapping RGB into the reference ImageNet model's BGR order, and raw scaling multiplies the feature scale from the input [0,1] to the ImageNet model's [0,255].\nWe will set the phase to test since we are doing testing, and will first use CPU for the computation.", "caffe.set_mode_cpu()\nnet = caffe.Classifier(MODEL_FILE, PRETRAINED,\n mean=np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1),\n channel_swap=(2,1,0),\n raw_scale=255,\n image_dims=(256, 256))", "Let's take a look at our example image with Caffe's image loading helper.", "input_image = caffe.io.load_image(IMAGE_FILE)\nplt.imshow(input_image)", "Time to classify. The default is to actually do 10 predictions, cropping the center and corners of the image as well as their mirrored versions, and average over the predictions:", "prediction = net.predict([input_image]) # predict takes any number of images, and formats them for the Caffe net automatically\nprint 'prediction shape:', prediction[0].shape\nplt.plot(prediction[0])\nprint 'predicted class:', prediction[0].argmax()", "You can see that the prediction is 1000-dimensional, and is pretty sparse.\nThe predicted class 281 is \"Tabby cat.\" Our pretrained model uses the synset ID ordering of the classes, as listed in ../data/ilsvrc12/synset_words.txt if you fetch the auxiliary imagenet data by ../data/ilsvrc12/get_ilsvrc_aux.sh. If you look at the top indices that maximize the prediction score, they are cats, foxes, and other cute mammals. Not unreasonable predictions, right?\nNow let's classify by the center crop alone by turning off oversampling. Note that this makes a single input, although if you inspect the model definition prototxt you'll see the network has a batch size of 10. 
The python wrapper handles batching and padding for you!", "prediction = net.predict([input_image], oversample=False)\nprint 'prediction shape:', prediction[0].shape\nplt.plot(prediction[0])\nprint 'predicted class:', prediction[0].argmax()", "Now, why don't we see how long it takes to perform the classification end to end? This result is run from an Intel i5 CPU, so you may observe some performance differences.", "%timeit net.predict([input_image])", "It may look a little slow, but note that time is spent on cropping, python interfacing, and running 10 images. For performance, if you really want to make prediction fast, you can optionally code in C++ and pipeline operations better. For experimenting and prototyping the current speed is fine.\nLet's time classifying a single image with input preprocessed:", "# Resize the image to the standard (256, 256) and oversample net input sized crops.\ninput_oversampled = caffe.io.oversample([caffe.io.resize_image(input_image, net.image_dims)], net.crop_dims)\n# 'data' is the input blob name in the model definition, so we preprocess for that input.\ncaffe_input = np.asarray([net.transformer.preprocess('data', in_) for in_ in input_oversampled])\n# forward() takes keyword args for the input blobs with preprocessed input arrays.\n%timeit net.forward(data=caffe_input)", "OK, so how about GPU? it is actually pretty easy:", "caffe.set_mode_gpu()", "Voila! Now we are in GPU mode. Let's see if the code gives the same result:", "prediction = net.predict([input_image])\nprint 'prediction shape:', prediction[0].shape\nplt.plot(prediction[0])", "Good, everything is the same. And how about time consumption? The following benchmark is obtained on the same machine with a GTX 770 GPU:", "# Full pipeline timing.\n%timeit net.predict([input_image])\n\n# Forward pass timing.\n%timeit net.forward(data=caffe_input)", "Pretty fast right? Not as fast as you expected? Indeed, in this python demo you are seeing only 4 times speedup. But remember - the GPU code is actually very fast, and the data loading, transformation and interfacing actually start to take more time than the actual conv. net computation itself!\nTo fully utilize the power of GPUs, you really want to:\n\nUse larger batches, and minimize python call and data transfer overheads.\nPipeline data load operations, like using a subprocess.\nCode in C++. A little inconvenient, but maybe worth it if your dataset is really, really large.\n\nParting Words\nSo this is python! We hope the interface is easy enough for one to use. The python wrapper is interfaced with boost::python, and source code can be found at python/caffe with the main interface in pycaffe.py and the classification wrapper in classifier.py. If you have customizations to make, start there! Do let us know if you make improvements by sending a pull request!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DS-100/sp17-materials
sp17/hw/hw1/hw1.ipynb
gpl-3.0
[ "Homework 1: Setup and (Re-)Introduction to Python\nCourse Policies\nHere are some important course policies. These are also located at\nhttp://www.ds100.org/sp17/.\nTentative Grading\nThere will be 7 challenging homework assignments. Homeworks must be completed\nindividually and will mix programming and short answer questions. At the end of\neach week of instruction we will have an online multiple choice quiz (\"vitamin\") that will\nhelp you stay up-to-date with lecture materials. Labs assignments will be\ngraded for completion and are intended to help with the homework assignments.\n\n40% Homeworks\n13% Vitamins\n7% Labs\n15% Midterm\n25% Final\n\nCollaboration Policy\nData science is a collaborative activity. While you may talk with others about\nthe homework, we ask that you write your solutions individually. If you do\ndiscuss the assignments with others please include their names at the top\nof your solution. Keep in mind that content from the homework and vitamins will\nlikely be covered on both the midterm and final.\nThis assignment\nIn this assignment, you'll learn (or review):\n\nHow to set up Jupyter on your own computer.\nHow to check out and submit assignments for this class.\nPython basics, like defining functions.\nHow to use the numpy library to compute with arrays of numbers.\n\n1. Setup\nIf you haven't already, read through the instructions at\nhttp://www.ds100.org/spring-2017/setup.\nThe instructions for submission are at the end of this notebook.\nFirst, let's make sure you have the latest version of okpy.", "!pip install -U okpy", "If you've set up your environment properly, this cell should run without problems:", "import math\nimport numpy as np\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\nfrom datascience import *\n\nfrom client.api.notebook import Notebook\nok = Notebook('hw1.ok')", "Now, run this cell to log into OkPy.\nThis is the submission system for the class; you will use this\nwebsite to confirm that you've submitted your assignment.", "ok.auth(inline=True)", "2. Python\nPython is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.\nBelow are some simple Python code fragments.\nYou should feel confident explaining what each fragment is doing. If not,\nplease brush up on your Python. There a number of tutorials online (search\nfor \"Python tutorial\"). 
https://docs.python.org/3/tutorial/ is a good place to\nstart.", "2 + 2\n\n# This is a comment.\n# In Python, the ** operator performs exponentiation.\nmath.e**(-2)\n\nprint(\"Hello\" + \",\", \"world!\")\n\"Hello, cell output!\"\n\ndef add2(x):\n \"\"\"This docstring explains what this function does: it adds 2 to a number.\"\"\"\n return x + 2\n\ndef makeAdder(amount):\n \"\"\"Make a function that adds the given amount to a number.\"\"\"\n def addAmount(x):\n return x + amount\n return addAmount\n\nadd3 = makeAdder(3)\nadd3(4)\n\n# add4 is very similar to add2, but it's been created using a lambda expression.\nadd4 = lambda x: x + 4\nadd4(5)\n\nsameAsMakeAdder = lambda amount: lambda x: x + amount\nadd5 = sameAsMakeAdder(5)\nadd5(6)\n\ndef fib(n):\n if n <= 1:\n return 1\n # Functions can call themselves recursively.\n return fib(n-1) + fib(n-2)\n\nfib(4)\n\n# A for loop repeats a block of code once for each\n# element in a given collection.\nfor i in range(5):\n if i % 2 == 0:\n print(2**i)\n else:\n print(\"Odd power of 2\")\n\n# A list comprehension is a convenient way to apply a function\n# to each element in a given collection.\n# The String method join appends together all its arguments\n# separated by the given string. So we append each element produced\n# by the list comprehension, each separated by a newline (\"\\n\").\nprint(\"\\n\".join([str(2**i) if i % 2 == 0 else \"Odd power of 2\" for i in range(5)]))", "Question 1\nQuestion 1a\nWrite a function nums_reversed that takes in an integer n and returns a string\ncontaining the numbers 1 through n including n in reverse order, separated\nby spaces. For example:\n&gt;&gt;&gt; nums_reversed(5)\n'5 4 3 2 1'\n\nNote: The ellipsis (...) indicates something you should fill in. It doesn't necessarily imply you should replace it with only one line of code.", "def nums_reversed(n):\n ...\n\n_ = ok.grade('q01a')\n_ = ok.backup()", "Question 1b\nWrite a function string_splosion that takes in a non-empty string like\n\"Code\" and returns a long string containing every prefix of the input.\nFor example:\n&gt;&gt;&gt; string_splosion('Code')\n'CCoCodCode'\n&gt;&gt;&gt; string_splosion('data!')\n'ddadatdatadata!'\n&gt;&gt;&gt; string_splosion('hi')\n'hhi'", "def string_splosion(string):\n ...\n\n_ = ok.grade('q01b')\n_ = ok.backup()", "Question 1c\nWrite a function double100 that takes in a list of integers\nand returns True only if the list has two 100s next to each other.\n&gt;&gt;&gt; double100([100, 2, 3, 100])\nFalse\n&gt;&gt;&gt; double100([2, 3, 100, 100, 5])\nTrue", "def double100(nums):\n ...\n\n_ = ok.grade('q01c')\n_ = ok.backup()", "Question 1d\nWrite a function median that takes in a list of numbers\nand returns the median element of the list. If the list has even\nlength, it returns the mean of the two elements in the middle.\n&gt;&gt;&gt; median([5, 4, 3, 2, 1])\n3\n&gt;&gt;&gt; median([ 40, 30, 10, 20 ])\n25", "def median(number_list):\n ...\n\n_ = ok.grade('q01d')\n_ = ok.backup()", "3. NumPy\nThe NumPy library lets us do fast, simple computing with numbers in Python.\n3.1. Arrays\nThe basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.\nLet's create some arrays:", "array1 = np.array([2, 3, 4, 5])\narray2 = np.arange(4)\narray1, array2", "Math operations on arrays happen element-wise. 
Here's what we mean:", "array1 * 2\n\narray1 * array2\n\narray1 ** array2", "This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!\nJupyter pro-tip: Pull up the docs for any function in Jupyter by running a cell with\nthe function name and a ? at the end:", "np.arange?", "Another Jupyter pro-tip: Pull up the docs for any function in Jupyter by typing the function\nname, then &lt;Shift&gt;-&lt;Tab&gt; on your keyboard. Super convenient when you forget the order\nof the arguments to a function. You can press &lt;Tab&gt; multiple tabs to expand the docs.\nTry it on the function below:", "np.linspace", "Question 2\nUsing the np.linspace function, create an array called xs that contains\n100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that\ncontains the value of $ \\sin{x} $ at each of those 100 points.\nHint: Use the np.sin function. You should be able to define each variable with one line of code.)", "xs = ...\nys = ...\n\n_ = ok.grade('q02')\n_ = ok.backup()", "The plt.plot function from another library called matplotlib lets us make plots. It takes in\nan array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.\nLet's plot the points you calculated in the previous question:", "plt.plot(xs, ys)", "This is a useful recipe for plotting any function:\n1. Use linspace or arange to make a range of x-values.\n2. Apply the function to each point to produce y-values.\n3. Plot the points.\nYou might remember from calculus that the derivative of the sin function is the cos function. That means that the slope of the curve you plotted above at any point xs[i] is given by cos(xs[i]). You can try verifying this by plotting cos in the next cell.", "# Try plotting cos here.", "Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.\nConsider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is:\n(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])\n\nIf the difference between xs[i+1] and xs[i] were infinitessimal, we'd have exactly the derivative. 
In numerical differentiation we take advantage of the fact that it's often good enough to use \"really small\" differences instead.\nQuestion 3\nDefine a function called derivative that takes in an array of x-values and their\ncorresponding y-values and computes the slope of the line connecting each point to the next point.\n&gt;&gt;&gt; derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))\nnp.array([2., 2.])\n&gt;&gt;&gt; derivative(np.arange(5), np.arange(5) ** 2)\nnp.array([0., 2., 4., 6.])\n\nNotice that the output array has one less element than the inputs since we can't\nfind the slope for the last point.\nIt's possible to do this in one short line using slicing, but feel free to use whatever method you know.\nThen, use your derivative function to compute the slopes for each point in xs, ys.\nStore the slopes in an array called slopes.", "def derivative(xvals, yvals):\n ...\n\nslopes = ...\nslopes[:5]\n\n_ = ok.grade('q03')\n_ = ok.backup()", "Question 4\nPlot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?\nNote: Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error.", "...\n...", "In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.", "plt.plot(xs[:-1], slopes, label=\"Numerical derivative\")\nplt.plot(xs[:-1], np.cos(xs[:-1]), label=\"True derivative\")\n# You can just call plt.legend(), but the legend will cover up\n# some of the graph. Use bbox_to_anchor=(x,y) to set the x-\n# and y-coordinates of the center-left point of the legend,\n# where, for example, (0, 0) is the bottom-left of the graph\n# and (1, .5) is all the way to the right and halfway up.\nplt.legend(bbox_to_anchor=(1, .5), loc=\"center left\");", "3.2. Multidimensional Arrays\nA multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.", "# The zeros function creates an array with the given shape.\n# For a 2-dimensional array like this one, the first\n# coordinate says how far the array goes *down*, and the\n# second says how far it goes *right*.\narray3 = np.zeros((4, 5))\narray3\n\n# The shape attribute returns the dimensions of the array.\narray3.shape\n\n# You can think of array3 as an array containing 4 arrays, each\n# containing 5 zeros. Accordingly, we can set or get the third\n# element of the second array in array 3 using standard Python\n# array indexing syntax twice:\narray3[1][2] = 7\narray3\n\n# This comes up so often that there is special syntax provided\n# for it. The comma syntax is equivalent to using multiple\n# brackets:\narray3[1, 2] = 8\narray3", "Arrays allow you to assign to multiple places at once. The special character : means \"everything.\"", "array4 = np.zeros((3, 5))\narray4[:, 2] = 5\narray4", "In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.", "array5 = np.zeros((3, 5))\nrows = np.array([1, 0, 2])\ncols = np.array([3, 1, 4])\n\n# Indices (1,3), (0,1), and (2,4) will be set.\narray5[rows, cols] = 3\narray5", "Question 5\nCreate a 50x50 array called twice_identity that contains all zeros except on the\ndiagonal, where it contains the value 2.\nStart by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! 
(Don't use np.eye either, though you might find that function useful later.)", "twice_identity = ...\n...\ntwice_identity\n\n_ = ok.grade('q05')\n_ = ok.backup()", "4. A Picture Puzzle\nYour boss has given you some strange text files. He says they're images,\nsome of which depict a summer scene and the rest a winter scene.\nHe demands that you figure out how to determine whether a given\ntext file represents a summer scene or a winter scene.\nYou receive 10 files, 1.txt through 10.txt. Peek at the files in a text\neditor of your choice.\nQuestion 6\nHow do you think the contents of the file are structured? Take your best guess.\nWrite your answer here, replacing this text.\nQuestion 7\nCreate a function called read_file_lines that takes in a filename as its argument.\nThis function should return a Python list containing the lines of the\nfile as strings. That is, if 1.txt contains:\n1 2 3\n3 4 5\n7 8 9\nthe return value should be: ['1 2 3\\n', '3 4 5\\n', '7 8 9\\n'].\nThen, use the read_file_lines function on the file 1.txt, reading the contents\ninto a variable called file1.\nHint: Check out this Stack Overflow page on reading lines of files.", "def read_file_lines(filename):\n ...\n ...\n\nfile1 = ...\nfile1[:5]\n\n_ = ok.grade('q07')\n_ = ok.backup()", "Each file begins with a line containing two numbers. After checking the length of\na file, you could notice that the product of these two numbers equals the number of\nlines in each file (other than the first one).\nThis suggests the rows represent elements in a 2-dimensional grid. In fact, each\ndataset represents an image!\nOn the first line, the first of the two numbers is\nthe height of the image (in pixels) and the second is the width (again in pixels).\nEach line in the rest of the file contains the pixels of the image.\nEach pixel is a triplet of numbers denoting how much red, green, and blue\nthe pixel contains, respectively.\nIn image processing, each column in one of these image files is called a channel\n(disregarding line 1). So there are 3 channels: red, green, and blue.\nQuestion 8\nDefine a function called lines_to_image that takes in the contents of a\nfile as a list (such as file1). It should return an array containing integers of\nshape (n_rows, n_cols, 3). That is, it contains the pixel triplets organized in the\ncorrect number of rows and columns.\nFor example, if the file originally contained:\n4 2\n0 0 0\n10 10 10\n2 2 2\n3 3 3\n4 4 4\n5 5 5\n6 6 6\n7 7 7\nThe resulting array should be a 3-dimensional array that looks like this:\narray([\n [ [0,0,0], [10,10,10] ],\n [ [2,2,2], [3,3,3] ],\n [ [4,4,4], [5,5,5] ],\n [ [6,6,6], [7,7,7] ]\n])\nThe string method split and the function np.reshape might be useful.\nImportant note: You must call .astype(np.uint8) on the final array before\nreturning so that numpy will recognize the array represents an image.\nOnce you've defined the function, set image1 to the result of calling\nlines_to_image on file1.", "def lines_to_image(file_lines):\n ...\n image_array = ...\n # Make sure to call astype like this on the 3-dimensional array\n # you produce, before returning it.\n return image_array.astype(np.uint8)\n\nimage1 = ...\nimage1.shape\n\n_ = ok.grade('q08')\n_ = ok.backup()", "Question 9\nImages in numpy are simply arrays, but we can also display them them as\nactual images in this notebook.\nUse the provided show_images function to display image1. You may call it\nlike show_images(image1). 
If you later have multiple images to display, you\ncan call show_images([image1, image2]) to display them all at once.\nThe resulting image should look almost completely black. Why do you suppose\nthat is?", "def show_images(images, ncols=2, figsize=(10, 7), **kwargs):\n \"\"\"\n Shows one or more color images.\n \n images: Image or list of images. Each image is a 3-dimensional\n array, where dimension 1 indexes height and dimension 2\n the width. Dimension 3 indexes the 3 color values red,\n blue, and green (so it always has length 3).\n \"\"\"\n def show_image(image, axis=plt):\n plt.imshow(image, **kwargs)\n \n if not (isinstance(images, list) or isinstance(images, tuple)):\n images = [images]\n images = [image.astype(np.uint8) for image in images]\n \n nrows = math.ceil(len(images) / ncols)\n ncols = min(len(images), ncols)\n \n plt.figure(figsize=figsize)\n for i, image in enumerate(images):\n axis = plt.subplot2grid(\n (nrows, ncols),\n (i // ncols, i % ncols),\n )\n axis.tick_params(bottom='off', left='off', top='off', right='off',\n labelleft='off', labelbottom='off')\n axis.grid(False)\n show_image(image, axis)\n\n# Show image1 here:\n...", "Question 10\nIf you look at the data, you'll notice all the numbers lie between 0 and 10.\nIn NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is\nno color (black). That's why the image is almost black. To see the image,\nwe'll need to rescale the numbers in the data to have a larger range.\nDefine a function expand_image_range that takes in an image. It returns a\nnew copy of the image with the following transformation:\nold value | new value\n========= | =========\n0 | 12\n1 | 37\n2 | 65\n3 | 89\n4 | 114\n5 | 137\n6 | 162\n7 | 187\n8 | 214\n9 | 240\n10 | 250\n\nThis expands the color range of the image. For example, a pixel that previously\nhad the value [5 5 5] (almost-black) will now have the value [137 137 137]\n(gray).\nSet expanded1 to the expanded image1, then display it with show_images.\nThis page\nfrom the numpy docs has some useful information that will allow you\nto use indexing instead of for loops.\nHowever, the slickest implementation uses one very short line of code.\nHint: If you index an array with another array or list as in question 5, your\narray (or list) of indices can contain repeats, as in array1[[0, 1, 0]].\nInvestigate what happens in that case.", "# This array is provided for your convenience.\ntransformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])\n\ndef expand_image_range(image):\n ...\n\nexpanded1 = ...\nshow_images(expanded1)\n\n_ = ok.grade('q10')\n_ = ok.backup()", "Question 11\nEureka! You've managed to reveal the image that the text file represents.\nNow, define a function called reveal_file that takes in a filename\nand returns an expanded image. This should be relatively easy since you've\ndefined functions for each step in the process.\nThen, set expanded_images to a list of all the revealed images. There are\n10 images to reveal (including the one you just revealed).\nFinally, use show_images to display the expanded_images.", "def reveal_file(filename):\n ...\n\nfilenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',\n '6.txt', '7.txt', '8.txt', '9.txt', '10.txt']\nexpanded_images = ...\n\nshow_images(expanded_images, ncols=5)", "Notice that 5 of the above images are of summer scenes; the other 5\nare of winter.\nThink about how you'd distinguish between pictures of summer and winter. 
What\nqualities of the image seem to signal to your brain that the image is one of\nsummer? Of winter?\nOne trait that seems specific to summer pictures is that the colors are warmer.\nLet's see if the proportion of pixels of each color in the image can let us\ndistinguish between summer and winter pictures.\nQuestion 12\nTo simplify things, we can categorize each pixel according to its most intense\n(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)\nFor example, we could just call a [2 4 0] pixel \"green.\" If a pixel has a\ntie between several channels, let's count it as none of them.\nWrite a function proportion_by_channel. It takes in an image. It assigns\neach pixel to its greatest-intensity channel: red, green, or blue. Then\nthe function returns an array of length three containing the proportion of\npixels categorized as red, the proportion categorized as green, and the\nproportion categorized as blue (respectively). (Again, don't count pixels\nthat are tied between 2 or 3 colors as any category, but do count them\nin the denominator when you're computing proportions.)\nFor example:\n```\n\n\n\ntest_im = np.array([\n [ [5, 2, 2], [2, 5, 10] ] \n])\nproportion_by_channel(test_im)\narray([ 0.5, 0, 0.5 ])\n\n\n\nIf tied, count neither as the highest\n\n\n\ntest_im = np.array([\n [ [5, 2, 5], [2, 50, 50] ] \n])\nproportion_by_channel(test_im)\narray([ 0, 0, 0 ])\n```\n\n\n\nThen, set image_proportions to the result of proportion_by_channel called\non each image in expanded_images as a 2d array.\nHint: It's fine to use a for loop, but for a difficult challenge, try\navoiding it. (As a side benefit, your code will be much faster.) Our solution\nuses the NumPy functions np.reshape, np.sort, np.argmax, and np.bincount.", "def proportion_by_channel(image):\n ...\n\nimage_proportions = ...\nimage_proportions\n\n_ = ok.grade('q12')\n_ = ok.backup()", "Let's plot the proportions you computed above on a bar chart:", "# You'll learn about Pandas and DataFrames soon.\nimport pandas as pd\npd.DataFrame({\n 'red': image_proportions[:, 0],\n 'green': image_proportions[:, 1],\n 'blue': image_proportions[:, 2]\n }, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\\\n .iloc[::-1]\\\n .plot.barh();", "Question 13\nWhat do you notice about the colors present in the summer images compared to\nthe winter ones?\nUse this info to write a function summer_or_winter. It takes in an image and\nreturns True if the image is a summer image and False if the image is a\nwinter image.\nDo not hard-code the function to the 10 images you currently have (eg.\nif image1, return False). We will run your function on other images\nthat we've reserved for testing.\nYou must classify all of the 10 provided images correctly to pass the test\nfor this function.", "def summer_or_winter(image):\n ...\n\n_ = ok.grade('q13')\n_ = ok.backup()", "Congrats! 
You've created your very first classifier for this class.\nQuestion 14\n\nHow do you think your classification function will perform\n in general?\nWhy do you think it will perform that way?\nWhat do you think would most likely give you false positives?\nFalse negatives?\n\nWrite your answer here, replacing this text.\nFinal note: While our approach here is simplistic, skin color segmentation\n-- figuring out which parts of the image belong to a human body -- is a\nkey step in many algorithms such as face detection.\nOptional: Our code to encode images\nHere are the functions we used to generate the text files for this assignment.\nFeel free to send not-so-secret messages to your friends if you'd like.", "import skimage as sk\nimport skimage.io as skio\n\ndef read_image(filename):\n '''Reads in an image from a filename'''\n return skio.imread(filename)\n\ndef compress_image(im):\n '''Takes an image as an array and compresses it to look black.'''\n res = im / 25\n return res.astype(np.uint8)\n\ndef to_text_file(im, filename):\n '''\n Takes in an image array and a filename for the resulting text file.\n \n Creates the encoded text file for later decoding.\n '''\n h, w, c = im.shape\n to_rgb = ' '.join\n to_row = '\\n'.join\n to_lines = '\\n'.join\n \n rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]\n lines = to_lines([to_row(row) for row in rgb])\n\n with open(filename, 'w') as f:\n f.write('{} {}\\n'.format(h, w))\n f.write(lines)\n f.write('\\n')\n\nsummers = skio.imread_collection('orig/summer/*.jpg')\nwinters = skio.imread_collection('orig/winter/*.jpg')\nlen(summers)\n\nsum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])\nwin_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])\n\nfor im, n in zip(summers, sum_nums):\n to_text_file(compress_image(im), '{}.txt'.format(n))\nfor im, n in zip(winters, win_nums):\n to_text_file(compress_image(im), '{}.txt'.format(n))", "5. Submitting this assignment\nFirst, run this cell to run all the autograder tests at once so you can double-\ncheck your work.", "_ = ok.grade_all()", "Now, run this code in your terminal to make a\ngit commit\nthat saves a snapshot of your changes in git. The last line of the cell\nruns git push, which will send your work to your personal Github repo.\n```\nTell git to commit all the changes so far\ngit add -A\nTell git to make the commit\ngit commit -m \"hw1 finished\"\nSend your updates to your personal private repo\ngit push origin master\n```\nFinally, we'll submit the assignment to OkPy so that the staff will know to\ngrade it. You can submit as many times as you want and you can choose which\nsubmission you want us to grade by going to https://okpy.org/cal/data100/sp17/.", "# Now, we'll submit to okpy\n_ = ok.submit()", "Congrats! You are done with homework 1." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/hadgem3-gc31-hh/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: HADGEM3-GC31-HH\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. 
Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. 
Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dotsdl/msmbuilder
examples/hmm-and-msm.ipynb
lgpl-2.1
[ "This example builds HMM and MSMs on the alanine_dipeptide dataset using varing lag times\nand numbers of states, and compares the relaxation timescales", "from __future__ import print_function\nimport os\n%matplotlib inline\nfrom matplotlib.pyplot import *\nfrom msmbuilder.featurizer import SuperposeFeaturizer\nfrom msmbuilder.example_datasets import AlanineDipeptide\nfrom msmbuilder.hmm import GaussianFusionHMM\nfrom msmbuilder.cluster import KCenters\nfrom msmbuilder.msm import MarkovStateModel", "First: load and \"featurize\"\nFeaturization refers to the process of converting the conformational\nsnapshots from your MD trajectories into vectors in some space $\\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent\nmixture of multivariate Gaussians.\nIn general, the featurization is somewhat of an art. For this example, we're using Mixtape's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example), and then measure the distance from each\natom to its position in the reference conformation as the 'feature'", "print(AlanineDipeptide.description())\n\ndataset = AlanineDipeptide().get()\ntrajectories = dataset.trajectories\ntopology = trajectories[0].topology\n\nindices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]\nfeaturizer = SuperposeFeaturizer(indices, trajectories[0][0])\nsequences = featurizer.transform(trajectories)", "Now sequences is our featurized data.", "lag_times = [1, 10, 20, 30, 40]\nhmm_ts0 = {}\nhmm_ts1 = {}\nn_states = [3, 5]\n\nfor n in n_states:\n hmm_ts0[n] = []\n hmm_ts1[n] = []\n for lag_time in lag_times:\n strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]\n hmm = GaussianFusionHMM(n_states=n, n_features=sequences[0].shape[1], n_init=1).fit(strided_data)\n timescales = hmm.timescales_ * lag_time\n hmm_ts0[n].append(timescales[0])\n hmm_ts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, hmm_ts0[n])\n plot(lag_times, hmm_ts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()\n\nmsmts0, msmts1 = {}, {}\nlag_times = [1, 10, 20, 30, 40]\nn_states = [4, 8, 16, 32, 64]\n\nfor n in n_states:\n msmts0[n] = []\n msmts1[n] = []\n for lag_time in lag_times:\n assignments = KCenters(n_clusters=n).fit_predict(sequences)\n msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)\n timescales = msm.timescales_\n msmts0[n].append(timescales[0])\n msmts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales[0:2]))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, msmts0[n])\n plot(lag_times, msmts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI Pipelines: TPU model train, upload, and deploy using google-cloud-pipeline-components\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis notebook shows how to use the components defined in google_cloud_pipeline_components in conjunction with an experimental run_as_aiplatform_custom_job method, to build a Vertex AI Pipelines workflow that trains a custom model using TPUs, uploads the model as a Model resource, creates an Endpoint resource, and deploys the Model resource to the Endpoint resource.\nNote: TPU VM Training is currently an opt-in feature. Your GCP project must first be added to the feature allowlist. Please email your project information(project id/number) to [email protected] for the allowlist. You will receive an email as soon as your project is ready.\nDataset\nThe dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. 
The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, or truck.\nObjective\nIn this tutorial, you create an custom model using a pipeline with components from google_cloud_pipeline_components and a custom pipeline component you build.\nIn addition, you use the kfp.v2.google.experimental.run_as_aiplatform_custom_job method to train a custom model leveraging TPUs.\nThe steps performed include:\n\nBuild a custom container for the custom model.\nTrain the custom model with TPUs.\nUploads the trained model as a Model resource.\nCreates an Endpoint resource.\nDeploys the Model resource to the Endpoint resource.\n\nThe components are documented here.\n(From that page, see also the CustomPythonPackageTrainingJobRunOp and CustomContainerTrainingJobRunOp components, which similarly run 'custom' training, but as with the related google.cloud.aiplatform.CustomContainerTrainingJob and google.cloud.aiplatform.CustomPythonPackageTrainingJob methods from the Vertex AI SDK, also upload the trained model).\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3.\n\n\nActivate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.\n\n\nRun jupyter notebook on the command line in a terminal shell to launch Jupyter.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG", "Install the latest GA version of google-cloud-pipeline-components library as well.", "! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Check the versions of the packages you installed. The KFP SDK version should be >=1.6.", "! python3 -c \"import kfp; print('KFP SDK version: {}'.format(kfp.__version__))\"\n! 
python3 -c \"import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))\"", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_URI = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_URI == \"\" or BUCKET_URI is None or BUCKET_URI == \"gs://[your-bucket-name]\":\n BUCKET_URI = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_URI", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_URI", "Service Account\nIf you don't know your service account, try to get your service account using gcloud command by executing the second cell below.", "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}\n\nif (\n SERVICE_ACCOUNT == \"\"\n or SERVICE_ACCOUNT is None\n or SERVICE_ACCOUNT == \"[your-service-account]\"\n):\n # Get your GCP project id from gcloud\n shell_output = !gcloud auth list 2>/dev/null\n SERVICE_ACCOUNT = shell_output[2].strip()\n SERVICE_ACCOUNT = SERVICE_ACCOUNT.replace(\"*\", \"\")\n SERVICE_ACCOUNT = SERVICE_ACCOUNT.replace(\" \", \"\")\n print(SERVICE_ACCOUNT)", "Set service account access for Vertex AI Pipelines\nRun the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.", "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n\n! 
gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Vertex AI Pipelines constants\nSetup up the following constants for Vertex AI Pipelines:", "PIPELINE_ROOT = \"{}/pipeline_root/tpu_cifar10_pipeline\".format(BUCKET_URI)", "Additional imports.", "import kfp\nfrom google_cloud_pipeline_components import aiplatform as gcc_aip\nfrom kfp.v2.dsl import component\nfrom kfp.v2.google import experimental", "Initialize Vertex AI SDK for Python\nInitialize the Vertex AI SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)", "Set up variables\nNext, set up some variables used throughout the tutorial.\nSet hardware accelerators\nYou can set hardware accelerators for both training and prediction.\nSet the variables TRAIN_TPU/TRAIN_NTPU to use a container training image supporting a TPU and the number of TPUs allocated and DEPLOY_GPU/DEPLOY_NGPU to user a container deployment image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. \nCurrently, while TPUs are in experimental, use the following numbers to represent the 2 TPUs available. Both have 8 accelerators:\n6 = TPU_V2\n7 = TPU_V3\nFor example, to use a TPU_V3 training container image, you would specify:\n(7, 8)\nSee the locations where accelerators are available.\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.", "from google.cloud.aiplatform import gapic\n\n# Use TPU Accelerators. Temporarily using numeric codes, until types are added to the SDK\n# 6 = TPU_V2\n# 7 = TPU_V3\nTRAIN_TPU, TRAIN_NTPU = (7, 8) # Using TPU_V3 with 8 accelerators\n\nDEPLOY_GPU, DEPLOY_NGPU = (gapic.AcceleratorType.NVIDIA_TESLA_K80, 1)", "Set pre-built containers\nVertex AI provides pre-built containers to run training and prediction.\nFor the latest list, see Pre-built containers for training and Pre-built containers for prediction", "DEPLOY_VERSION = \"tf2-gpu.2-6\"\n\nDEPLOY_IMAGE = \"us-docker.pkg.dev/cloud-aiplatform/prediction/{}:latest\".format(\n DEPLOY_VERSION\n)\n\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)", "Set machine types\nNext, set the machine types to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.\nmachine type\ncloud-tpu : used for TPU training. 
See the TPU Architecture site for details\nn1-standard: 3.75GB of memory per vCPU\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "MACHINE_TYPE = \"cloud-tpu\"\n\n# TPU VMs do not require VCPU definition\nTRAIN_COMPUTE = MACHINE_TYPE\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)\n\nif not TRAIN_NTPU or TRAIN_NTPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"tpu\"\n\nEPOCHS = 20\nSTEPS = 10000\n\nTRAINER_ARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n]\n\n# create working dir to pass to job spec\nWORKING_DIR = f\"{PIPELINE_ROOT}/model\"\n\nMODEL_DISPLAY_NAME = f\"tpu_train_deploy_{TIMESTAMP}\"\nprint(TRAINER_ARGS, WORKING_DIR, MODEL_DISPLAY_NAME)", "Create a custom container\nWe will create a directory and write all of our container build artifacts into that folder.", "CONTAINER_ARTIFACTS_DIR = \"tpu-container-artifacts\"\n\n!mkdir {CONTAINER_ARTIFACTS_DIR}\n\nimport os", "Write the Dockerfile", "dockerfile = \"\"\"FROM python:3.8\n\nWORKDIR /root\n\n# Copies the trainer code to the docker image.\nCOPY train.py /root/train.py\n\nRUN pip3 install tensorflow-datasets\n\n# Install TPU Tensorflow and dependencies.\n# libtpu.so must be under the '/lib' directory.\nRUN wget https://storage.googleapis.com/cloud-tpu-tpuvm-artifacts/libtpu/20210525/libtpu.so -O /lib/libtpu.so\nRUN chmod 777 /lib/libtpu.so\n\nRUN wget https://storage.googleapis.com/cloud-tpu-tpuvm-artifacts/tensorflow/20210525/tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl\nRUN pip3 install tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl\nRUN rm tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl\n\nENTRYPOINT [\"python3\", \"train.py\"]\n\"\"\"\n\nwith open(os.path.join(CONTAINER_ARTIFACTS_DIR, \"Dockerfile\"), \"w\") as f:\n f.write(dockerfile)", "Training script\nIn the next cell, you write the contents of the training script, train.py. In summary:\n\nGet the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. 
This variable is set by the training service.\nLoads CIFAR10 dataset from TF Datasets (tfds).\nBuilds a model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps\nSaves the trained model (save(MODEL_DIR)) to the specified model directory.\n\nTPU specific changes are listed below:\n- Added a section that finds the TPU cluster, connects to it, and sets the training strategy to TPUStrategy\n- Added a section that saves the trained TPU model to the local device, so that it can be saved to the AIP_MODEL_DIR", "%%writefile {CONTAINER_ARTIFACTS_DIR}/train.py\n# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple TPU devices\nelif args.distribute == 'tpu':\n cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=\"local\")\n tf.config.experimental_connect_to_cluster(cluster_resolver)\n tf.tpu.experimental.initialize_tpu_system(cluster_resolver)\n strategy = tf.distribute.TPUStrategy(cluster_resolver)\n print(\"All devices: \", tf.config.list_logical_devices('TPU'))\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n # Scaling CIFAR10 data from (0, 255] to (0., 1.]\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\n\n# Build the Keras model\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n 
model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\n# Train the model\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\nMODEL_DIR = os.getenv(\"AIP_MODEL_DIR\")\n\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nif args.distribute==\"tpu\":\n save_locally = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')\n model.save(MODEL_DIR, options=save_locally)\nelse:\n model.save(MODEL_DIR)", "Build Container\nRun these artifact registry and docker steps once", "!gcloud services enable artifactregistry.googleapis.com\n\n!sudo usermod -a -G docker ${USER}\n\nREPOSITORY = \"tpu-training-repository\"\nIMAGE = \"tpu-train\"\n\n!gcloud auth configure-docker us-central1-docker.pkg.dev --quiet\n\n!gcloud artifacts repositories create $REPOSITORY --repository-format=docker \\\n--location=us-central1 --description=\"Vertex TPU training repository\"", "Build the training image", "TRAIN_IMAGE = f\"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}:latest\"\n\nprint(TRAIN_IMAGE)\n\n%cd $CONTAINER_ARTIFACTS_DIR\n\n# Use quiet flag as the output is fairly large\n!docker build --quiet \\\n --tag={TRAIN_IMAGE} \\\n .\n\n!docker push {TRAIN_IMAGE}\n\n%cd ..", "Define custom model pipeline that uses components from google_cloud_pipeline_components\nNext, you define the pipeline.\nThe experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component, and the list of worker_pool_specsโ€” in this case oneโ€” with which the custom training job is configured.\nThen, google_cloud_pipeline_components components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint.\nNote: While not shown in this example, the model deploy will create an endpoint if one is not provided.", "WORKER_POOL_SPECS = [\n {\n \"containerSpec\": {\n \"args\": TRAINER_ARGS,\n \"env\": [{\"name\": \"AIP_MODEL_DIR\", \"value\": WORKING_DIR}],\n \"imageUri\": TRAIN_IMAGE,\n },\n \"replicaCount\": \"1\",\n \"machineSpec\": {\n \"machineType\": TRAIN_COMPUTE,\n \"accelerator_type\": TRAIN_TPU,\n \"accelerator_count\": TRAIN_NTPU,\n },\n }\n]", "Define pipeline components\nThe following example define a custom pipeline component for this tutorial:\n\nThis component doesn't do anything (but run a print statement).", "@component\ndef tpu_training_task_op(input1: str):\n print(\"training task: {}\".format(input1))", "The pipeline has four main steps:\n1) The run_as_experimental_custom_job runs the docker container which will execute the training task\n2) The ModelUploadOp uploads the trained model to Vertex\n3) The EndpointCreateOp creates the model endpoint\n4) Finally, the ModelDeployOp deploys the model to the endpoint", "@kfp.dsl.pipeline(name=\"train-endpoint-deploy\" + TIMESTAMP)\ndef pipeline(\n project: str = PROJECT_ID,\n model_display_name: str = MODEL_DISPLAY_NAME,\n serving_container_image_uri: str = DEPLOY_IMAGE,\n):\n\n train_task = tpu_training_task_op(\"tpu model training\")\n 
experimental.run_as_aiplatform_custom_job(\n train_task,\n worker_pool_specs=WORKER_POOL_SPECS,\n )\n\n model_upload_op = gcc_aip.ModelUploadOp(\n project=project,\n display_name=model_display_name,\n artifact_uri=WORKING_DIR,\n serving_container_image_uri=serving_container_image_uri,\n )\n model_upload_op.after(train_task)\n\n endpoint_create_op = gcc_aip.EndpointCreateOp(\n project=project,\n display_name=\"tpu-pipeline-created-endpoint\",\n )\n\n gcc_aip.ModelDeployOp(\n endpoint=endpoint_create_op.outputs[\"endpoint\"],\n model=model_upload_op.outputs[\"model\"],\n deployed_model_display_name=model_display_name,\n dedicated_resources_machine_type=DEPLOY_COMPUTE,\n dedicated_resources_min_replica_count=1,\n dedicated_resources_max_replica_count=1,\n dedicated_resources_accelerator_type=DEPLOY_GPU.name,\n dedicated_resources_accelerator_count=DEPLOY_NGPU,\n )", "Compile the pipeline\nNext, compile the pipeline.", "from kfp.v2 import compiler # noqa: F811\n\ncompiler.Compiler().compile(\n pipeline_func=pipeline,\n package_path=\"tpu train cifar10_pipeline.json\".replace(\" \", \"_\"),\n)", "Run the pipeline\nNext, run the pipeline.", "DISPLAY_NAME = \"tpu_cifar10_training_\" + TIMESTAMP\n\njob = aip.PipelineJob(\n display_name=DISPLAY_NAME,\n template_path=\"tpu train cifar10_pipeline.json\".replace(\" \", \"_\"),\n pipeline_root=PIPELINE_ROOT,\n)\n\njob.run()\n\n! rm tpu_train_cifar10_pipeline.json", "Click on the generated link to see your run in the Cloud Console.\nIn the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n\nCleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\n\n# Warning: Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\ntry:\n if delete_model and \"DISPLAY_NAME\" in globals():\n models = aip.Model.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n model = models[0]\n aip.Model.delete(model)\n print(\"Deleted model:\", model)\nexcept Exception as e:\n print(e)\n\ntry:\n if delete_endpoint and \"DISPLAY_NAME\" in globals():\n endpoints = aip.Endpoint.list(\n filter=f\"display_name={DISPLAY_NAME}_endpoint\", order_by=\"create_time\"\n )\n endpoint = endpoints[0]\n endpoint.undeploy_all()\n aip.Endpoint.delete(endpoint.resource_name)\n print(\"Deleted endpoint:\", endpoint)\nexcept Exception as e:\n print(e)\n\nif delete_dataset and \"DISPLAY_NAME\" in globals():\n if \"tabular\" == \"tabular\":\n try:\n datasets = aip.TabularDataset.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n dataset = datasets[0]\n aip.TabularDataset.delete(dataset.resource_name)\n print(\"Deleted dataset:\", dataset)\n except Exception as e:\n print(e)\n\n if \"tabular\" == \"image\":\n try:\n datasets = aip.ImageDataset.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n dataset = datasets[0]\n 
aip.ImageDataset.delete(dataset.resource_name)\n print(\"Deleted dataset:\", dataset)\n except Exception as e:\n print(e)\n\n if \"tabular\" == \"text\":\n try:\n datasets = aip.TextDataset.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n dataset = datasets[0]\n aip.TextDataset.delete(dataset.resource_name)\n print(\"Deleted dataset:\", dataset)\n except Exception as e:\n print(e)\n\n if \"tabular\" == \"video\":\n try:\n datasets = aip.VideoDataset.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n dataset = datasets[0]\n aip.VideoDataset.delete(dataset.resource_name)\n print(\"Deleted dataset:\", dataset)\n except Exception as e:\n print(e)\n\ntry:\n if delete_pipeline and \"DISPLAY_NAME\" in globals():\n pipelines = aip.PipelineJob.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n pipeline = pipelines[0]\n aip.PipelineJob.delete(pipeline.resource_name)\n print(\"Deleted pipeline:\", pipeline)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_URI\" in globals():\n ! gsutil rm -r $BUCKET_URI" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mari-linhares/tensorflow-workshop
code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb
apache-2.0
[ "Dependencies", "# Tensorflow\nimport tensorflow as tf\nprint('Tested with TensorFLow 1.2.0')\nprint('Your TensorFlow version:', tf.__version__) \n\n# Feeding function for enqueue data\nfrom tensorflow.python.estimator.inputs.queues import feeding_functions as ff\n\n# Rnn common functions\nfrom tensorflow.contrib.learn.python.learn.estimators import rnn_common\n\n# Model builder\nfrom tensorflow.python.estimator import model_fn as model_fn_lib\n\n# Run an experiment\nfrom tensorflow.contrib.learn.python.learn import learn_runner\n\n# Helpers for data processing\nimport pandas as pd\nimport numpy as np\nimport argparse\nimport random", "Loading Data\nFirst, we want to create our word vectors. For simplicity, we're going to be using a pretrained model. \nAs one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300. \nIn an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50. \nWe're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.", "# data from: http://ai.stanford.edu/~amaas/data/sentiment/\nTRAIN_INPUT = 'data/train.csv'\nTEST_INPUT = 'data/test.csv'\n\n# data manually generated\nMY_TEST_INPUT = 'data/mytest.csv'\n\n# wordtovec\n# https://nlp.stanford.edu/projects/glove/\n# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.\nword_list = np.load('word_list.npy')\nword_list = word_list.tolist() # originally loaded as numpy array\nword_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8\nprint('Loaded the word list, length:', len(word_list))\n\nword_vector = np.load('word_vector.npy')\nprint ('Loaded the word vector, shape:', word_vector.shape)", "We can also search our word list for a word like \"baseball\", and then access its corresponding vector through the embedding matrix.", "baseball_index = word_list.index('baseball')\nprint('Example: baseball')\nprint(word_vector[baseball_index])", "Now that we have our vectors, our first step is taking an input sentence and then constructing the its vector representation. Let's say that we have the input sentence \"I thought the movie was incredible and inspiring\". In order to get the word vectors, we can use Tensorflow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the wordVectors matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. 
Let's look at a quick example to make this concrete.", "max_seq_length = 10 # maximum length of sentence\nnum_dims = 50 # dimensions for each word vector\n\nfirst_sentence = np.zeros((max_seq_length), dtype='int32')\nfirst_sentence[0] = word_list.index(\"i\")\nfirst_sentence[1] = word_list.index(\"thought\")\nfirst_sentence[2] = word_list.index(\"the\")\nfirst_sentence[3] = word_list.index(\"movie\")\nfirst_sentence[4] = word_list.index(\"was\")\nfirst_sentence[5] = word_list.index(\"incredible\")\nfirst_sentence[6] = word_list.index(\"and\")\nfirst_sentence[7] = word_list.index(\"inspiring\")\n# first_sentence[8] = 0\n# first_sentence[9] = 0\n\nprint(first_sentence.shape)\nprint(first_sentence) # shows the row index for each word", "TODO### Insert image\nThe 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.", "with tf.Session() as sess:\n print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)", "Before creating the ids matrix for the whole training set, letโ€™s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have. \nThe training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.", "from os import listdir\nfrom os.path import isfile, join\npositiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]\nnegativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]\nnumWords = []\nfor pf in positiveFiles:\n with open(pf, \"r\", encoding='utf-8') as f:\n line=f.readline()\n counter = len(line.split())\n numWords.append(counter) \nprint('Positive files finished')\n\nfor nf in negativeFiles:\n with open(nf, \"r\", encoding='utf-8') as f:\n line=f.readline()\n counter = len(line.split())\n numWords.append(counter) \nprint('Negative files finished')\n\nnumFiles = len(numWords)\nprint('The total number of files is', numFiles)\nprint('The total number of words in the files is', sum(numWords))\nprint('The average number of words in the files is', sum(numWords)/len(numWords))", "We can also use the Matplot library to visualize this data in a histogram format.", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.hist(numWords, 50)\nplt.xlabel('Sequence Length')\nplt.ylabel('Frequency')\nplt.axis([0, 1200, 0, 8000])\nplt.show()", "From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.", "max_seq_len = 250", "Data", "ids_matrix = np.load('ids_matrix.npy').tolist()", "Parameters", "# Parameters for training\nSTEPS = 100000\nBATCH_SIZE = 64\n\n# Parameters for data processing\nREVIEW_KEY = 'review'\nSEQUENCE_LENGTH_KEY = 'sequence_length'", "Separating train and test data\nThe training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. 
\nLet's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.", "POSITIVE_REVIEWS = 12500\n\n# copying sequences\ndata_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]\n# generating labels\ndata_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]\n# also creating a length column, this will be used by the Dynamic RNN\n# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\ndata_length = [max_seq_len for i in range(len(ids_matrix))]", "Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.", "data = list(zip(data_sequences, data_labels, data_length))\nrandom.shuffle(data) # shuffle\n\ndata = np.asarray(data)\n# separating train and test data\nlimit = int(len(data) * 0.9)\n\ntrain_data = data[:limit]\ntest_data = data[limit:]", "Verifying if the train and test data have enough positive and negative examples", "LABEL_INDEX = 1\ndef _number_of_pos_labels(df):\n pos_labels = 0\n for value in df:\n if value[LABEL_INDEX] == [1, 0]:\n pos_labels += 1\n return pos_labels\n\npos_labels_train = _number_of_pos_labels(train_data)\ntotal_labels_train = len(train_data)\n\npos_labels_test = _number_of_pos_labels(test_data)\ntotal_labels_test = len(test_data)\n\nprint('Total number of positive labels:', pos_labels_train + pos_labels_test)\nprint('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)\nprint('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)", "Input functions", "def get_input_fn(df, batch_size, num_epochs=1, shuffle=True): \n def input_fn():\n # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data\n sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)\n labels = np.asarray([v for v in df[:,1]], dtype=np.int32)\n length = np.asarray(df[:,2], dtype=np.int32)\n\n dataset = (\n tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory\n .repeat(num_epochs) # repeat dataset the number of epochs\n .batch(batch_size)\n )\n \n # for our \"manual\" test we don't want to shuffle the data\n if shuffle:\n dataset = dataset.shuffle(buffer_size=100000)\n\n # create iterator\n review, label, length = dataset.make_one_shot_iterator().get_next()\n\n features = {\n REVIEW_KEY: review,\n SEQUENCE_LENGTH_KEY: length,\n }\n\n return features, label\n return input_fn\n\nfeatures, label = get_input_fn(train_data, 2)()\n\nwith tf.Session() as sess:\n items = sess.run(features)\n print(items[REVIEW_KEY])\n print\n\n items = sess.run(features)\n print(items[REVIEW_KEY])\n print\n\ntrain_input_fn = get_input_fn(train_data, BATCH_SIZE, None)\ntest_input_fn = get_input_fn(test_data, BATCH_SIZE)", "Creating the Estimator model", "def get_model_fn(rnn_cell_sizes,\n label_dimension,\n dnn_layer_sizes=[],\n optimizer='SGD',\n learning_rate=0.01,\n embed_dim=128):\n \n def model_fn(features, labels, mode):\n \n review = features[REVIEW_KEY]\n sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)\n\n # Creating embedding\n data = tf.Variable(tf.zeros([BATCH_SIZE, max_seq_len, 50]),dtype=tf.float32)\n data = tf.nn.embedding_lookup(word_vector, review)\n \n # Each RNN layer will consist of a LSTM cell\n rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]\n \n # Construct the layers\n multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)\n \n # Runs the RNN model 
dynamically\n # more about it at: \n # https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\n outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=data,\n dtype=tf.float32)\n\n # Slice to keep only the last cell of the RNN\n last_activations = rnn_common.select_last_activations(outputs, sequence_length)\n\n # Construct dense layers on top of the last cell of the RNN\n for units in dnn_layer_sizes:\n last_activations = tf.layers.dense(\n last_activations, units, activation=tf.nn.relu)\n \n # Final dense layer for prediction\n predictions = tf.layers.dense(last_activations, label_dimension)\n predictions_softmax = tf.nn.softmax(predictions)\n \n loss = None\n train_op = None\n \n preds_op = {\n 'prediction': predictions_softmax,\n 'label': labels\n }\n \n eval_op = {\n \"accuracy\": tf.metrics.accuracy(\n tf.argmax(input=predictions_softmax, axis=1),\n tf.argmax(input=labels, axis=1))\n }\n \n if mode != tf.estimator.ModeKeys.PREDICT: \n loss = tf.losses.softmax_cross_entropy(labels, predictions)\n \n if mode == tf.estimator.ModeKeys.TRAIN: \n train_op = tf.contrib.layers.optimize_loss(\n loss,\n tf.contrib.framework.get_global_step(),\n optimizer=optimizer,\n learning_rate=learning_rate)\n \n return model_fn_lib.EstimatorSpec(mode,\n predictions=predictions_softmax,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_op)\n return model_fn\n\nmodel_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers\n label_dimension=2, # since are just 2 classes\n dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN\n optimizer='Adam',\n learning_rate=0.001,\n embed_dim=512)", "Create and Run Experiment", "# create experiment\ndef generate_experiment_fn():\n \n \"\"\"\n Create an experiment function given hyperparameters.\n Returns:\n A function (output_dir) -> Experiment where output_dir is a string\n representing the location of summaries, checkpoints, and exports.\n this function is used by learn_runner to create an Experiment which\n executes model code provided in the form of an Estimator and\n input functions.\n All listed arguments in the outer function are used to create an\n Estimator, and input functions (training, evaluation, serving).\n Unlisted args are passed through to Experiment.\n \"\"\"\n\n def _experiment_fn(run_config, hparams):\n estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)\n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input_fn,\n eval_input_fn=test_input_fn,\n train_steps=STEPS\n )\n return _experiment_fn\n\n# run experiment \nlearn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))", "Making Predictions\nLet's generate our own sentence to see how the model classifies them.", "def generate_data_row(sentence, label):\n length = max_seq_length\n sequence = np.zeros((length), dtype='int32')\n for i, word in enumerate(sentence):\n sequence[i] = word_list.index(word)\n \n return sequence, label, length\n \ndata_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]\n# generating labels\ndata_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]\n# also creating a length column, this will be used by the Dynamic RNN\n# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\ndata_length = [max_seq_len for i in range(len(ids_matrix))]\n \n\nfirst_sentence[0] = word_list.index(\"i\")\nfirst_sentence[1] = word_list.index(\"thought\")\nfirst_sentence[2] 
= word_list.index(\"the\")\nfirst_sentence[3] = word_list.index(\"movie\")\nfirst_sentence[4] = word_list.index(\"was\")\nfirst_sentence[5] = word_list.index(\"incredible\")\nfirst_sentence[6] = word_list.index(\"and\")\nfirst_sentence[7] = word_list.index(\"inspiring\")\n# first_sentence[8] = 0\n# first_sentence[9] = 0\n\nprint(first_sentence.shape)\nprint(first_sentence) # shows the row index for each word\n\n\npreds = estimator.predict(input_fn=my_test_input_fn, as_iterable=True)\n\nsentences = _get_csv_column(MY_TEST_INPUT, 'review')\n\nprint()\nfor p, s in zip(preds, sentences):\n print('sentence:', s)\n print('bad review:', p[0], 'good review:', p[1])\n print('-' * 10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jamesnw/wtb-data
notebooks/Colors.ipynb
mit
[ "Everyone's favorite nerdy comic, XKCD, ranked colors by best tasting. I thought I would use the WTB dataset to compare and see if the data agrees.", "# Import libraries\nimport numpy as np\nimport pandas as pd\n# Import the data\nimport WTBLoad\nwtb = WTBLoad.load_frame()\n\npink = [\"watermelon\", \"cranberry\"]\nred = [\"cherry\",\"apple\",\"raspberry\",\"strawberry\", \"rose hips\", \"hibiscus\",'rhubarb', \"red wine\"]\nblue = [\"blueberry\",\"juniper berries\"]\ngreen = [\"green tea\",\"mint\",\"lemon grass\",'cucumber','basil']\nwhite = [\"pear\", \"elderflower\", \"ginger\", \"coconut\",\"piรฑa colada\",\"vanilla\",\"white wine\"]\nbrown = [ \"chai\", \"chicory\", \"coriander\", \"cardamom\", \"seeds of paradise\", \"cinnamon\", \"chocolate\", \"peanut butter\", \"hazelnut\",\"pecan\",\"bacon\",\"bourbon\",\"whiskey\",\"coffee\",\"oak\",\"rye\",\"maple\"]\norange = [\"apricot\", \"peach\", \"grapefruit\",\"orange peel\", \"pumpkin\",\"sweet potato\"]\nyellow = [\"chamomile\",\"lemon peel\"]\npurple = [ \"plum\", \"lavender\", \"port\",\"blackberry\"]\nblack = [ \"anise\", 'peppercorn', 'lemon pepper', \"smoke\"]\nadditionsColors = {\"pink\": pink,\"red\": red,\"blue\": blue,\"green\": green,\"white\": white,\"brown\": brown,\"orange\": orange,\"yellow\": yellow,\"purple\": purple, \"black\": black}\n\n# Great. Now we have a mapping from color to addition, but we really need it the other way around.\nadditionToColor = {}\nfor color in additionsColors:\n for addition in additionsColors[color]:\n additionToColor[addition] = color\nprint(additionToColor['watermelon'])", "Let's add a color column.", "def addcolor(addition):\n return additionToColor[addition]\nwtb['color'] = np.vectorize(addcolor)(wtb['addition'])", "Now group by the new color column, get the mean, and sort the values high to low.", "wtb.groupby(by='color').mean().sort_values('vote',ascending=False)", "There we have it. Blue is the best tasting color. \nBut brown is awfully close. I wonder how the ranges compare. Let's take a look at a histogram.", "%matplotlib inline\nwtb.groupby(by='color').boxplot(subplots=False,rot=45)", "There we can see that while blue has a slightly higher average, brown has a lot of very high and very low outliers.\nFor more analysis, check out the What To Brew blog." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gunan/tensorflow
tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Create and convert a TensorFlow model\nThis notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the hello_world sample for TensorFlow Lite for Microcontrollers.\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nImport dependencies\nOur first task is to import the dependencies we need. Run the following cell to do so:", "# TensorFlow is an open source machine learning library\nimport tensorflow as tf\n# Numpy is a math library\nimport numpy as np\n# Matplotlib is a graphing library\nimport matplotlib.pyplot as plt\n# math is Python's math library\nimport math", "Generate data\nDeep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.\nIn a real world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.\nIn the hello_world sample for TensorFlow Lite for Microcontrollers, we'll use this model to control LEDs that light up in a sequence.\nThe code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph:", "# We'll generate this many sample datapoints\nSAMPLES = 1000\n\n# Set a \"seed\" value, so we get the same random numbers each time we run this\n# notebook\nnp.random.seed(1337)\n\n# Generate a uniformly distributed set of random numbers in the range from\n# 0 to 2ฯ€, which covers a complete sine wave oscillation\nx_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)\n\n# Shuffle the values to guarantee they're not in order\nnp.random.shuffle(x_values)\n\n# Calculate the corresponding sine values\ny_values = np.sin(x_values)\n\n# Plot our data. The 'b.' argument tells the library to print blue dots.\nplt.plot(x_values, y_values, 'b.')\nplt.show()", "Add some noise\nSince it was generated directly by the sine function, our data fits a nice, smooth curve.\nHowever, machine learning models are good at extracting underlying meaning from messy, real world data. 
To demonstrate this, we can add some noise to our data to approximate something more life-like.\nIn the following cell, we'll add some random noise to each value, then draw a new graph:", "# Add a small random number to each y value\ny_values += 0.1 * np.random.randn(*y_values.shape)\n\n# Plot our data\nplt.plot(x_values, y_values, 'b.')\nplt.show()", "Split our data\nWe now have a noisy dataset that approximates real world data. We'll be using this to train our model.\nTo evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing) It's important in both cases that we use fresh data that was not already used to train the model.\nTo ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.\nThe following code will split our data and then plot each set as a different color:", "# We'll use 60% of our data for training and 20% for testing. The remaining 20%\n# will be used for validation. Calculate the indices of each section.\nTRAIN_SPLIT = int(0.6 * SAMPLES)\nTEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)\n\n# Use np.split to chop our data into three parts.\n# The second argument to np.split is an array of indices where the data will be\n# split. We provide two indices, so the data will be divided into three chunks.\nx_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])\ny_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])\n\n# Double check that our splits add up correctly\nassert (x_train.size + x_validate.size + x_test.size) == SAMPLES\n\n# Plot the data in each partition in different colors:\nplt.plot(x_train, y_train, 'b.', label=\"Train\")\nplt.plot(x_test, y_test, 'r.', label=\"Test\")\nplt.plot(x_validate, y_validate, 'y.', label=\"Validate\")\nplt.legend()\nplt.show()\n", "Design a model\nWe're going to build a model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression.\nTo achieve this, we're going to create a simple neural network. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.\nTo begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 16 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.\nThe activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.\nNote: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.\nThe code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. 
Once the network is defined, we compile it, specifying parameters that determine how it will be trained:", "# We'll use Keras to create a simple model architecture\nfrom tensorflow.keras import layers\nmodel_1 = tf.keras.Sequential()\n\n# First layer takes a scalar input and feeds it through 16 \"neurons\". The\n# neurons decide whether to activate based on the 'relu' activation function.\nmodel_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))\n\n# Final layer is a single neuron, since we want to output a single value\nmodel_1.add(layers.Dense(1))\n\n# Compile the model using a standard optimizer and loss function for regression\nmodel_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])", "Train the model\nOnce we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.\nTraining runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.\nDuring each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.\nThe code in the following cell uses the x and y values from our training data to train the model. It runs for 1000 epochs, with 16 pieces of data in each batch. We also pass in some data to use for validation. As you will see when you run the cell, training can take a while to complete:", "# Train the model on our training data while validating on our validation set\nhistory_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,\n validation_data=(x_validate, y_validate))", "Check the training metrics\nDuring training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.\nThe following cells will display some of that data in a graphical form:", "# Draw a graph of the loss, which is the distance between\n# the predicted and actual values during training and validation.\nloss = history_1.history['loss']\nval_loss = history_1.history['val_loss']\n\nepochs = range(1, len(loss) + 1)\n\nplt.plot(epochs, loss, 'g.', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Look closer at the data\nThe graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.\nAs we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. 
This means that the model is improving and producing more accurate predictions!\nOur goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.\nTo make the flatter part of the graph more readable, let's skip the first 50 epochs:", "# Exclude the first few epochs so the graph is easier to read\nSKIP = 50\n\nplt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')\nplt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Further metrics\nFrom the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.\nHowever, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and is sometimes even higher.\nTo gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:", "plt.clf()\n\n# Draw a graph of mean absolute error, which is another way of\n# measuring the amount of error in the prediction.\nmae = history_1.history['mae']\nval_mae = history_1.history['val_mae']\n\nplt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')\nplt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')\nplt.title('Training and validation mean absolute error')\nplt.xlabel('Epochs')\nplt.ylabel('MAE')\nplt.legend()\nplt.show()", "This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.\nIn addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.\nTo get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:", "# Use the model to make predictions from our validation data\npredictions = model_1.predict(x_train)\n\n# Plot the predictions along with to the test data\nplt.clf()\nplt.title('Training data predicted vs actual values')\nplt.plot(x_test, y_test, 'b.', label='Actual')\nplt.plot(x_train, predictions, 'r.', label='Predicted')\nplt.legend()\nplt.show()", "Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. From 0 &lt;= x &lt;= 1.1 the line mostly fits, but for the rest of our x values it is a rough approximation at best.\nThe rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.\nChange our model\nTo make our model bigger, let's add an additional layer of neurons. 
The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:", "model_2 = tf.keras.Sequential()\n\n# First layer takes a scalar input and feeds it through 16 \"neurons\". The\n# neurons decide whether to activate based on the 'relu' activation function.\nmodel_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))\n\n# The new second layer may help the network learn more complex representations\nmodel_2.add(layers.Dense(16, activation='relu'))\n\n# Final layer is a single neuron, since we want to output a single value\nmodel_2.add(layers.Dense(1))\n\n# Compile the model using a standard optimizer and loss function for regression\nmodel_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])", "We'll now train the new model. To save time, we'll train for only 600 epochs:", "history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,\n validation_data=(x_validate, y_validate))", "Evaluate our new model\nEach training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ): \nEpoch 600/600\n600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845\nYou can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.\nThe following cell will print the same graphs we used to evaluate our original model, but showing our new training history:", "# Draw a graph of the loss, which is the distance between\n# the predicted and actual values during training and validation.\nloss = history_2.history['loss']\nval_loss = history_2.history['val_loss']\n\nepochs = range(1, len(loss) + 1)\n\nplt.plot(epochs, loss, 'g.', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n\n# Exclude the first few epochs so the graph is easier to read\nSKIP = 100\n\nplt.clf()\n\nplt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')\nplt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n\nplt.clf()\n\n# Draw a graph of mean absolute error, which is another way of\n# measuring the amount of error in the prediction.\nmae = history_2.history['mae']\nval_mae = history_2.history['val_mae']\n\nplt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')\nplt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')\nplt.title('Training and validation mean absolute error')\nplt.xlabel('Epochs')\nplt.ylabel('MAE')\nplt.legend()\nplt.show()", "Great results! From these graphs, we can see several exciting things:\n\nOur network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)\nThe overall loss and MAE are much better than our previous network\nMetrics are better for validation than training, which means the network is not overfitting\n\nThe reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.\nThis all means our network seems to be performing well! 
To confirm, let's check its predictions against the test dataset we set aside earlier:", "# Calculate and print the loss on our test dataset\nloss = model_2.evaluate(x_test, y_test)\n\n# Make predictions based on our test dataset\npredictions = model_2.predict(x_test)\n\n# Graph the predictions against the actual values\nplt.clf()\nplt.title('Comparison of predictions and actual values')\nplt.plot(x_test, y_test, 'b.', label='Actual')\nplt.plot(x_test, predictions, 'r.', label='Predicted')\nplt.legend()\nplt.show()", "Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.\nThe model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.\nHowever, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.\nConvert to TensorFlow Lite\nWe now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the TensorFlow Lite Converter. The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.\nSince this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization. It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.\nThe TensorFlow Lite Converter can apply quantization while it converts the model. 
In the following cell, we'll convert the model twice: once with quantization, once without:", "# Convert the model to the TensorFlow Lite format without quantization\nconverter = tf.lite.TFLiteConverter.from_keras_model(model_2)\ntflite_model = converter.convert()\n\n# Save the model to disk\nopen(\"sine_model.tflite\", \"wb\").write(tflite_model)\n\n# Convert the model to the TensorFlow Lite format with quantization\nconverter = tf.lite.TFLiteConverter.from_keras_model(model_2)\nconverter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]\ntflite_model = converter.convert()\n\n# Save the model to disk\nopen(\"sine_model_quantized.tflite\", \"wb\").write(tflite_model)", "Test the converted models\nTo prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:", "# Instantiate an interpreter for each model\nsine_model = tf.lite.Interpreter('sine_model.tflite')\nsine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')\n\n# Allocate memory for each model\nsine_model.allocate_tensors()\nsine_model_quantized.allocate_tensors()\n\n# Get the input and output tensors so we can feed in values and get the results\nsine_model_input = sine_model.tensor(sine_model.get_input_details()[0][\"index\"])\nsine_model_output = sine_model.tensor(sine_model.get_output_details()[0][\"index\"])\nsine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0][\"index\"])\nsine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0][\"index\"])\n\n# Create arrays to store the results\nsine_model_predictions = np.empty(x_test.size)\nsine_model_quantized_predictions = np.empty(x_test.size)\n\n# Run each model's interpreter for each value and store the results in arrays\nfor i in range(x_test.size):\n sine_model_input().fill(x_test[i])\n sine_model.invoke()\n sine_model_predictions[i] = sine_model_output()[0]\n\n sine_model_quantized_input().fill(x_test[i])\n sine_model_quantized.invoke()\n sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]\n\n# See how they line up with the data\nplt.clf()\nplt.title('Comparison of various models against actual values')\nplt.plot(x_test, y_test, 'bo', label='Actual')\nplt.plot(x_test, predictions, 'ro', label='Original predictions')\nplt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')\nplt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')\nplt.legend()\nplt.show()\n", "We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!\nWe can print the difference in file size:", "import os\nbasic_model_size = os.path.getsize(\"sine_model.tflite\")\nprint(\"Basic model is %d bytes\" % basic_model_size)\nquantized_model_size = os.path.getsize(\"sine_model_quantized.tflite\")\nprint(\"Quantized model is %d bytes\" % quantized_model_size)\ndifference = basic_model_size - quantized_model_size\nprint(\"Difference is %d bytes\" % difference)", "Our quantized model is only 16 bytes smaller than the original version, which only a tiny reduction in size! 
At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.\nMore complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.\nRegardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller!\nWrite to a C file\nThe final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in hello_world/sine_model_data.cc.\nTo do so, we can use a command line utility named xxd. The following cell runs xxd on our quantized model and prints the output:", "# Install xxd if it is not available\n!apt-get -qq install xxd\n# Save the file as a C source file\n!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc\n# Print the source file\n!cat sine_model_quantized.cc", "We can either copy and paste this output into our project's source code, or download the file using the collapsible menu on the left hand side of this Colab." ]
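A side note on the `xxd` step in the notebook above: on systems where `xxd` or `apt-get` is unavailable (for example Windows), an equivalent C source file can be produced in plain Python. The sketch below is only an illustrative alternative, not part of the original notebook; the helper name `tflite_to_c_array` and the generated variable name are made up, and the only thing reused from the notebook is the `sine_model_quantized.tflite` file it saves to disk.

```python
# Hypothetical helper (not from the original notebook): emit a C array from a
# .tflite flatbuffer, roughly mimicking the output of `xxd -i`.
def tflite_to_c_array(tflite_path, c_path, var_name="sine_model_quantized_tflite"):
    with open(tflite_path, "rb") as f:
        data = f.read()

    lines = ["unsigned char %s[] = {" % var_name]
    # 12 bytes per row keeps the generated lines short, similar to xxd's layout.
    # The trailing comma after the last byte is legal in C initializers.
    for i in range(0, len(data), 12):
        chunk = data[i:i + 12]
        lines.append("  " + ", ".join("0x%02x" % b for b in chunk) + ",")
    lines.append("};")
    lines.append("unsigned int %s_len = %d;" % (var_name, len(data)))

    with open(c_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example usage, assuming the quantized model saved earlier exists on disk:
# tflite_to_c_array("sine_model_quantized.tflite", "sine_model_quantized.cc")
```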
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fonnesbeck/ngcm_pandas_2016
notebooks/1.4 - Pandas Best Practices.ipynb
cc0-1.0
[ "Idomatic Pandas\n\nQ: How do I make my pandas code faster with parallelism?\nA: You donโ€™t need parallelism, you can use Pandas better.\n-- Matthew Rocklin\n\nNow that we have been exposed to the basic functionality of pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.\nAs most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.\nAs you may already have noticed, there are sometimes mutliple ways to achieve the same goal using pandas. Importantly, some approaches are better than others, in terms of performance, readability and ease of use. We will cover some important ways of maximizing your pandas efficiency.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "Reshaping DataFrame objects\nIn the context of a single DataFrame, we are often interested in re-arranging the layout of our data. \nThis dataset in from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites.\n\nRandomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)\nResponse variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)\nTWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began", "cdystonia = pd.read_csv(\"../data/cdystonia.csv\", index_col=None)\ncdystonia.head()", "This dataset includes repeated measurements of the same individuals (longitudinal data). Its possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing multiple measurements.\nThe stack method rotates the data frame so that columns are represented in rows:", "stacked = cdystonia.stack()\nstacked", "Have a peek at the structure of the index of the stacked data (and the data itself).\nTo complement this, unstack pivots from rows back to columns.", "stacked.unstack().head()", "Exercise\nWhich columns uniquely define a row? 
Create a DataFrame called cdystonia2 with a hierarchical index based on these columns.", "# Write your answer here", "If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.", "twstrs_wide = cdystonia2['twstrs'].unstack('obs')\ntwstrs_wide.head()", "We can now merge these reshaped outcomes data with the other variables to create a wide format DataFrame that consists of one row for each patient.", "cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))\ncdystonia_wide.head()", "A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:", "(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']\n .unstack('week').head())", "To convert our \"wide\" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one\nor more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are \"unpivoted\" to\nthe row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.", "pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], \n var_name='obs', value_name='twsters').head()", "This illustrates the two formats for longitudinal data: long and wide formats. Its typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.\nThe preferable format for analysis depends entirely on what is planned for the data, so it is imporant to be able to move easily between them.\nMethod chaining\nIn the DataFrame reshaping section above, you probably noticed how several methods were strung together to produce a wide format table:", "(cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner')\n .head())", "This approach of seqentially calling methods is called method chaining, and despite the fact that it creates very long lines of code that must be properly justified, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than in-place operations. This allows methods from the returned object to be immediately called, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following:", "cdystonia_subset = cdystonia[['patient','site','id','treat','age','sex']]\ncdystonia_complete = cdystonia_subset.drop_duplicates()\ncdystonia_merged = cdystonia_complete.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')\ncdystonia_merged.head()", "This necessitates the creation of a slew of intermediate variables that we really don't need.\nLet's transform another dataset using method chaining. The measles.csv file contains de-identified cases of measles from an outbreak in Sao Paulo, Brazil in 1997. 
The file contains rows of individual records:", "measles = pd.read_csv(\"../data/measles.csv\", index_col=0, encoding='latin-1', parse_dates=['ONSET'])\nmeasles.head()", "The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak.\nThe best approach is to build up the chain incrementally. We can begin by generating the age groups (using cut) and grouping by age group and the date (ONSET):", "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP']))", "What we then want is the number of occurences in each combination, which we can obtain by checking the size of each grouping:", "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()).head(10)", "This results in a hierarchically-indexed Series, which we can pivot into a DataFrame by simply unstacking:", "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()).head(5)", "Now, fill replace the missing values with zeros:", "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()\n .fillna(0)).head(5)", "Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields our the table of interest:", "case_counts_2w = (measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()\n .fillna(0)\n .resample('2W')\n .sum())\n\ncase_counts_2w", "From this, it is easy to create meaningful plots and conduct analyses:", "case_counts_2w.plot(cmap='hot')", "Pivoting\nThe pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.\nFor example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:", "cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()", "Exercise\nTry pivoting the cdystonia DataFrame without specifying a variable for the cell values:", "# Write your answer here", "A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.", "cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', \n aggfunc=max).head(20)", "For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.", "pd.crosstab(cdystonia.sex, cdystonia.site)", "Data transformation\nThere are a slew of additional operations for DataFrames that we would collectively refer to as transformations which include tasks such as:\n\nremoving duplicate values\nreplacing values\ngrouping values.\n\nDealing with duplicates\nWe can easily identify and remove duplicate values from DataFrame objects. 
For example, say we want to remove ships from our vessels dataset that have the same name:", "vessels = pd.read_csv('../data/AIS/vessel_information.csv')\nvessels.tail(10)\n\nvessels.duplicated(subset='names').tail(10)", "These rows can be removed using drop_duplicates", "vessels.drop_duplicates(['names']).tail(10)", "Value replacement\nFrequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:", "cdystonia.treat.value_counts()", "A logical way to specify these numerically is to change them to integer values, perhaps using \"Placebo\" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.", "treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}\n\ncdystonia['treatment'] = cdystonia.treat.map(treatment_map)\ncdystonia.treatment", "Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method. \nAn example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.", "scores = pd.Series([99, 76, 85, -999, 84, 95])", "In such situations, we can use replace to substitute nan where the sentinel values occur.", "scores.replace(-999, np.nan)", "We can also perform the same replacement that we used map for with replace:", "cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})", "Inidcator variables\nFor some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.\nLet's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, lets filter out the 5 most common types of ships.\nExercise\nCreate a subset of the vessels DataFrame called vessels5 that only contains the 5 most common types of vessels, based on their prevalence in the dataset.", "# Write your answer here", "We can now apply get_dummies to the vessel type to create 5 indicator variables.", "pd.get_dummies(vessels5.type).head(10)", "Discretization\nPandas' cut function can be used to group continuous or countable data in to bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!\nLets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:", "cdystonia.age.describe()", "Let's transform these data into decades, beginnnig with individuals in their 20's and ending with those in their 80's:", "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]", "The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. 
We can switch the closure to the left side by setting the right flag to False:", "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]", "Since the data are now ordinal, rather than numeric, we can give them labels:", "pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]", "A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-70%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:", "pd.qcut(cdystonia.age, 4)[:30]", "Alternatively, one can specify custom quantiles to act as cut points:", "quantiles = pd.qcut(vessels.max_loa, [0, 0.01, 0.05, 0.95, 0.99, 1])\nquantiles[:30]", "Exercise\nUse the discretized segment lengths as the input for get_dummies to create 5 indicator variables for segment length:", "# Write your answer here", "Categorical Variables\nOne of the keys to maximizing performance in pandas is to use the appropriate types for your data wherever possible. In the case of categorical data--either the ordered categories as we have just created, or unordered categories like race, gender or country--the use of the categorical to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses.\nWhen text data are imported into a DataFrame, they are endowed with an object dtype. This will result in relatively slow computation because this dtype runs at Python speeds, rather than as Cython code that gives much of pandas its speed. We can ameliorate this by employing the categorical dtype on such data.", "cdystonia_cat = cdystonia.assign(treatment=cdystonia.treat.astype('category')).drop('treat', axis=1)\ncdystonia_cat.dtypes\n\ncdystonia_cat.treatment.head()\n\ncdystonia_cat.treatment.cat.codes", "This creates an unordered categorical variable. To create an ordinal variable, we can specify order=True as an argument to astype:", "cdystonia.treat.astype('category', ordered=True).head()", "However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need. \nTo specify an arbitrary order, we can used the set_categories method, as follows:", "cdystonia.treat.astype('category').cat.set_categories(['Placebo', '5000U', '10000U'], ordered=True).head()", "Notice that we obtained set_categories from the cat attribute of the categorical variable. 
This is known as the category accessor, and is a device for gaining access to Categorical variables' categories, analogous to the string accessor that we have seen previously from text variables.", "cdystonia_cat.treatment.cat", "Additional categoried can be added, even if they do not currently exist in the DataFrame, but are part of the set of possible categories:", "cdystonia_cat['treatment'] = (cdystonia.treat.astype('category').cat\n .set_categories(['Placebo', '5000U', '10000U', '20000U'], ordered=True))", "To complement this, we can remove categories that we do not wish to retain:", "cdystonia_cat.treatment.cat.remove_categories('20000U').head()", "Or, even more simply:", "cdystonia_cat.treatment.cat.remove_unused_categories().head()", "For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.", "vessels_merged = (pd.read_csv('../data/AIS/vessel_information.csv', index_col=0)\n .merge(pd.read_csv('../data/AIS/transit_segments.csv'), left_index=True, right_on='mmsi'))\n\nvessels_merged['registered'] = vessels_merged.flag.astype('category')\n\n%timeit vessels_merged.groupby('flag').avg_sog.mean().sort_values()\n\n%timeit vessels_merged.groupby('registered').avg_sog.mean().sort_values()\n\nvessels_merged[['flag','registered']].memory_usage()", "Data aggregation and GroupBy operations\nOne of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For exmaple:\n\naggregation, such as computing the sum of mean of each group, which involves applying a function to each group and returning the aggregated results\nslicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)\ngroup-wise transformation, such as standardization/normalization", "cdystonia_grouped = cdystonia.groupby(cdystonia.patient)", "This grouped dataset is hard to visualize", "cdystonia_grouped", "However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:", "for patient, group in cdystonia_grouped:\n print('patient', patient)\n print('group', group)", "A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.\nFor example, we may want to aggregate our data with with some function.\n\n<div align=\"right\">*(figure taken from \"Python for Data Analysis\", p.251)*</div>\n\nWe can aggregate in Pandas using the aggregate (or agg, for short) method:", "cdystonia_grouped.agg(np.mean).head()", "Notice that the treat and sex variables are not included in the aggregation. 
Since it does not make sense to aggregate non-string variables, these columns are simply ignored by the method.\nSome aggregation functions are so common that Pandas has a convenience method for them, such as mean:", "cdystonia_grouped.mean().head()", "The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:", "cdystonia_grouped.mean().add_suffix('_mean').head()", "Exercise\nUse the quantile method to generate the median values of the twstrs variable for each patient.", "# Write your answer here", "If we wish, we can easily aggregate according to multiple keys:", "cdystonia.groupby(['week','site']).mean().head()", "Alternately, we can transform the data, using a function of our choice with the transform method:", "normalize = lambda x: (x - x.mean())/x.std()\n\ncdystonia_grouped.transform(normalize).head()", "It is easy to do column selection within groupby operations, if we are only interested split-apply-combine operations on a subset of columns:", "%timeit cdystonia_grouped['twstrs'].mean().head()", "Or, as a DataFrame:", "cdystonia_grouped[['twstrs']].mean().head()", "If you simply want to divide your DataFrame into chunks for later use, its easy to convert them into a dict so that they can be easily indexed out as needed:", "chunks = dict(list(cdystonia_grouped))\n\nchunks[4]", "By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:", "dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))", "Its also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:", "cdystonia2.head(10)", "The level argument specifies which level of the index to use for grouping.", "cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()", "Apply\nWe can generalize the split-apply-combine methodology by using apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.\nThe function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.", "def top(df, column, n=5):\n return df.sort_index(by=column, ascending=False)[:n]", "To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:", "goo = vessels_merged.groupby('mmsi')\n\ntop3segments = vessels_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]\ntop3segments.head(15)", "Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.\nExercise\nLoad the dataset in titanic.xls. 
It contains data on all the passengers that travelled on the Titanic.", "from IPython.core.display import HTML\nHTML(filename='../data/titanic.html')", "Women and children first?\n\nUse the groupby method to calculate the proportion of passengers that survived by sex.\nCalculate the same proportion, but by class and sex.\nCreate age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.", "# Write your answer here", "References\nPython for Data Analysis Wes McKinney" ]
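Because the notebook above depends on local files (`cdystonia.csv`, the AIS tables, `measles.csv`, `titanic.xls`) that are not reproduced in this dump, here is a minimal self-contained sketch of the same long/wide reshaping and split-apply-combine ideas on an invented toy table. The column names and values are placeholders, not data from the cervical dystonia study.

```python
import pandas as pd

# Invented long-format data: three patients measured at three weeks
long_df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "week":    [0, 2, 4, 0, 2, 4, 0, 2, 4],
    "score":   [45, 40, 38, 52, 50, 47, 30, 28, 25],
})

# Long -> wide: one row per patient, one column per measurement week
wide_df = long_df.pivot(index="patient", columns="week", values="score")
print(wide_df)

# Wide -> long again with melt (round trip back to the original layout)
back_to_long = (wide_df
                .reset_index()
                .melt(id_vars="patient", var_name="week", value_name="score")
                .sort_values(["patient", "week"]))
print(back_to_long)

# Split-apply-combine: z-score each patient's trajectory with groupby + transform
long_df["score_z"] = (long_df.groupby("patient")["score"]
                      .transform(lambda s: (s - s.mean()) / s.std()))
print(long_df)
```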
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anandha2017/udacity
nd101 Deep Learning Nanodegree Foundation/notebooks/1 - playing with jupyter/keyboard-shortcuts.ipynb
mit
[ "Keyboard shortcuts\nIn this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.\nFirst up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and openning the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.\nBy default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.\n\nExercise: Click on this cell, then press Enter + Shift to get to the next cell. Switch between edit and command mode a few times.", "# mode practice", "Help with commands\nIf you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.\nCreating new cells\nOne of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.\n\nExercise: Create a cell above this cell using the keyboard command.\nExercise: Create a cell below this cell using the keyboard command.\n\nSwitching between Markdown and code\nWith keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press Y. To switch from code to Markdown, press M.\n\nExercise: Switch the cell below between Markdown and code cells.", "## Practice here\n\ndef fibo(n): # Recursive Fibonacci sequence!\n if n == 0:\n return 0\n elif n == 1:\n return 1\n return fibo(n-1) + fibo(n-2)", "Line numbers\nA lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.\n\nExercise: Turn line numbers on and off in the above code cell.\n\nDeleting cells\nDeleting cells is done by pressing D twice in a row so D, D. This is to prevent accidently deletions, you have to press the button twice!\n\nExercise: Delete the cell below.\n\nSaving the notebook\nNotebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the book, press S. So easy!\nThe Command Palette\nYou can easily access the command palette by pressing Shift + Control/Command + P. \n\nNote: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.\n\nThis will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. 
To move a cell down, you can open up the command palette and type in \"move\" which will bring up the move commands.\n\nExercise: Use the command palette to move the cell below down one position.", "# below this cell\n\n# Move this cell down", "Finishing up\nThere is plenty more you can do such as copying, cutting, and pasting cells. I suggest getting used to using the keyboard shortcuts, you'll be much quicker at working in notebooks. When you become proficient with them, you'll rarely need to move your hands away from the keyboard, greatly speeding up your work.\nRemember, if you ever need to see the shortcuts, just press H in command mode." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kkkddder/dmc
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
apache-2.0
[ "Lab 6.2 - Using a pre-trained model with Keras\nIn this section of the lab, we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.\nLet's start by importing the libraries we will be using:", "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import LSTM\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.utils import np_utils\n\nimport sys\nimport re\nimport pickle", "Next, we will import the data we saved previously using the pickle library.", "pickle_file = '-basic_data.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n X = save['X']\n y = save['y']\n char_to_int = save['char_to_int'] \n int_to_char = save['int_to_char'] \n del save # hint to help gc free up memory\n print('Training set', X.shape, y.shape)", "Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.", "# define the LSTM model\nmodel = Sequential()\nmodel.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))\n# model.add(Dropout(0.50))\nmodel.add(Dense(y.shape[1], activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')", "Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.", "# load the parameters from the pretrained model\nfilename = \"-basic_LSTM.hdf5\"\nmodel.load_weights(filename)\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')", "We also need to rewrite the sample() and generate() helper functions so that we can use them in our code:", "def sample(preds, temperature=1.0):\n preds = np.asarray(preds).astype('float64')\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n return np.argmax(probas)\n\ndef generate(sentence, sample_length=50, diversity=0.35):\n generated = sentence\n sys.stdout.write(generated)\n\n for i in range(sample_length):\n x = np.zeros((1, X.shape[1], X.shape[2]))\n for t, char in enumerate(sentence):\n x[0, t, char_to_int[char]] = 1.\n\n preds = model.predict(x, verbose=0)[0]\n next_index = sample(preds, diversity)\n next_char = int_to_char[next_index]\n\n generated += next_char\n sentence = sentence[1:] + next_char\n\n sys.stdout.write(next_char)\n sys.stdout.flush()\n print", "Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section). \nIn this case, we will test the overfitting of the model by supplying it two seeds:\n\none which comes verbatim from the training text, and\none which comes from another earlier speech by Obama\n\nIf the model has not overfit our training data, we should expect it to produce reasonable results for both seeds. If it has overfit, it might produce pretty good results for something coming directly from the training set, but perform poorly on a new seed. 
This means that it has learned to replicate our training text, but cannot generalize to produce text based on other inputs. Since the original article was very short, however, the entire vocabulary of the model might be very limited, which is why as input we use a part of another speech given by Obama, instead of completely random text.\nSince we have not trained the model for that long, we will also use a lower temperature to get the model to generate more accurate if less diverse results. Try running the code a few times with different temperature settings to generate different results.", "prediction_length = 500\nseed_from_text = \"america has shown that progress is possible. last year, income gains were larger for households at t\"\nseed_original = \"and as people around the world began to hear the tale of the lowly colonists who overthrew an empire\"\n\nfor seed in [seed_from_text, seed_original]:\n generate(seed, prediction_length, .50)\n print \"-\" * 20" ]
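Running generate() above requires the pretrained weight file and pickle archive, which are not bundled with this dump. To see what the diversity (temperature) argument does on its own, the following self-contained sketch applies the same log/exp re-weighting used inside sample() to a made-up next-character distribution; the four probabilities are invented purely for illustration.

```python
import numpy as np

def reweight(preds, temperature=1.0):
    # Same temperature scaling as in sample(), but returning the full
    # re-normalized distribution instead of drawing a single index
    preds = np.asarray(preds).astype("float64")
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

# Made-up distribution over four hypothetical characters
probs = [0.5, 0.3, 0.15, 0.05]

for t in (0.2, 0.5, 1.0, 2.0):
    print(t, np.round(reweight(probs, t), 3))
```

Low temperatures sharpen the distribution toward the most likely character (more conservative, repetitive text), while high temperatures flatten it toward uniform (more diverse but error-prone text), which is why the notebook samples with a temperature of 0.5.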
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
feststelltaste/software-analytics
notebooks/Committer Distribution.ipynb
gpl-3.0
[ "Introduction\nIn the last notebook, I showed you how easy it is to connect jQAssistant/neo4j with Python Pandas/py2neo. In this notebook, I show you a (at first glance) simple analysis of the Git repository https://github.com/feststelltaste/spring-petclinic. This repository is a fork of the demo repository for jQAssistant (https://github.com/feststelltaste/spring-petclinic) therefore it integrates jQAssistant already.\nAs analysis task, we want to know who are the Top 10 committers and how the distribution of the commits is. This could be handy if you want to identify your main contributors of a project e. g. to send them a gift at Christmas ;-) .\nBut first, you might ask yourself: \"Why do I need a fully fledged data analysis framework like Pandas for such a simple task? Are there no standard tools out there?\" Well, I'll show you why (OK, and you got me there: I needed another reason to go deeper with Python, Pandas, jQAssistant and Neo4j to get some serious software data analysis started) \nSo let's go!\nPreparation\nThis notebook assumes that \n- there is a running Neo4j server with the default configuration. \n- the graph database is filled with the scan results of jQAssistant (happens for the repository above automatically with an <tt>mvn clean install</tt>)\n- you use a standard Anaconda installation with Python 3+\n- you installed the py2neo connector\nIf everything is set up, we just import the usual suspects: py2neo for connecting to Neo4j and Pandas for data analysis. We also want to plot some graphics later on, so we import matplotlib accordingly as the convention suggests.", "import py2neo\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# display graphics directly in the notebook\n%matplotlib inline", "Data input\nWe need some data to get started. Luckily, we have jQAssistant at our hand. It's integrated into the build process of Spring PetClinic repository above and scanned the Git repository information automatically with every executed build. \nSo let's query our almighty Neo4j graph database that holds all the structural data about the software project.", "graph = py2neo.Graph()\nquery = \"\"\"\nMATCH (author:Author)-[:COMMITED]-> (commit:Commit)\nRETURN author.name as name, author.email as email\n\"\"\"\nresult = graph.data(query)\n# just how first three entries\nresult[0:3]", "The query returns all commits with their authors and the author's email addresses. We get some nice, tabular data that we put into Pandas's DataFrame.", "commits = pd.DataFrame(result)\ncommits.head()", "Familiarization\nFirst, I like to check the raw data a little bit. I often do this by first having a look at the data types the data source is returning. It's a good starting point to check that Pandas recognizes the data types accordingly. You can also use this approach to check for skewed data columns very quickly (especially necessary when reading CSV or Excel files): If there should be a column with a specific data type (e. g. because the documentation of the dataset said so), the data type should be recognized automatically as specified. If not, there is a high probability that the imported data source isn't correct (and we have a data quality problem).", "commits.dtypes", "That's OK for our simple scenario. The two columns with texts are objects &ndash; nothing spectacular.\nIn the next step, I always like to get a \"feeling\" of all the data. Primarily, I want to get a quick impression of the data quality again. 
It could always be that there is \"dirty data\" in the dataset or that there are outliers that would screw up the analysis. With such a small amount of data we have, we can simply list all unique values that occur in the columns. I just list the top 10's for both columns.", "commits['name'].value_counts()[0:10]", "OK, at first glance, something seems awkward. Let's have a look at the email addresses.", "commits['email'].value_counts()[0:10]", "OK, the bad feeling is strengthening. We might have a problem with multiple authors having multiple email addresses. Let me show you the problem by better representing the problem.\nInterlude - begin\nIn the interlude section, I take you to a short, mostly undocumented excursion with probably messy code (don't do this at home!) to make a point. If you like, you can skip that section.\nGoal: Create a diagram that shows the relationship between the authors and the emails addresses.\n(Note to myself: It's probably better to solve that directly in Neo4j the next time ;-) )\nI need a unique index for each name and I have to calculate the number of different email addresses per author.", "grouped_by_authors = commits[['name', 'email']]\\\n .drop_duplicates().groupby('name').count()\\\n .sort_values('email', ascending=False).reset_index().reset_index()\ngrouped_by_authors.head()", "Same procedure for the email addresses.", "grouped_by_email = commits[['name', 'email']]\\\n .drop_duplicates().groupby('email').count()\\\n .sort_values('name', ascending=False).reset_index().reset_index()\ngrouped_by_email.head()", "Then I merge the two DataFrames with a subset of the original data. I get each author and email index as well as the number of occurrences for author respectively emails. I only need the ones that are occurring multiple times, so I check for > 2.", "plot_data = commits.drop_duplicates()\\\n .merge(grouped_by_authors, left_on='name', right_on=\"name\", suffixes=[\"\", \"_from_authors\"], how=\"outer\")\\\n .merge(grouped_by_email, left_on='email', right_on=\"email\", suffixes=[\"\", \"_from_emails\"], how=\"outer\")\nplot_data = plot_data[\\\n (plot_data['email_from_authors'] > 1) | \\\n (plot_data['name_from_emails'] > 1)]\nplot_data", "I just add some nicely normalized indexes for plotting (note: there might be a method that's easier)", "from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(plot_data['index'])\nplot_data['normalized_index_name'] = le.transform(plot_data['index']) * 10\nle.fit(plot_data['index_from_emails'])\nplot_data['normalized_index_email'] = le.transform(plot_data['index_from_emails']) * 10\nplot_data.head()", "Plot an assignment table with the relationships between authors and email addresses.", "fig1 = plt.figure(facecolor='white')\nax1 = plt.axes(frameon=False)\nax1.set_frame_on(False)\nax1.get_xaxis().tick_bottom()\nax1.axes.get_yaxis().set_visible(False)\nax1.axes.get_xaxis().set_visible(False)\n\n# simply plot all the data (imperfection: duplicated will be displayed in bold font)\nfor data in plot_data.iterrows():\n row = data[1]\n plt.text(0, row['normalized_index_name'], row['name'], fontsize=15, horizontalalignment=\"right\")\n plt.text(1, row['normalized_index_email'], row['email'], fontsize=15, horizontalalignment=\"left\")\n plt.plot([0,1],[row['normalized_index_name'],row['normalized_index_email']],'grey', linewidth=1.0)", "Alright! Here we are! We see that multiple authors use multiple email addresses. And I see a pattern that could be used to get better data. Do you, too? 
\nInterlude - end\nIf you skipped the interlude section: I just visualized / demonstrated that there are different email addresses per author (and vise versa). Some authors choose to use another email address and some choose a different name for committing to the repositories (and a few did both things).\nData Wrangling\nThe situation above is a typical case of a little data messiness and &ndash; to demotivate you &ndash; absolutely normal. So we have to do some data correction before we start our analysis. Otherwise, we would ignore reality completely and deliver wrong results. This could damage our reputation as a data analyst and is something we have to avoid at all costs!\nWe want to fix the problem with the multiple authors having multiple email addresses (but are the same persons). We need a mapping between them. Should we do it manually? That would be kind of crazy. As mentioned above, there is a pattern in the data to fix that. We simply use the name of the email address as an identifier for a person.\nLet's give it a try by extracting the name part from the email address with a simple split.", "commits['nickname'] = commits['email'].apply(lambda x : x.split(\"@\")[0])\ncommits.head()", "That looks pretty good. Now we want to get only the person's real name instead of the nickname. We use a little heuristic to determine the \"best fitting\" real name and replace all the others. For this, we need group by nicknames and determine the real names.", "def determine_real_name(names):\n \n real_name = \"\"\n \n for name in names:\n # assumption: if there is a whitespace in the name, \n # someone thought about it to be first name and surname\n if \" \" in name:\n return name\n # else take the longest name\n elif len(name) > len(real_name):\n real_name = name\n \n return real_name\n \ncommits_grouped = commits[['nickname', 'name']].groupby(['nickname']).agg(determine_real_name)\ncommits_grouped = commits_grouped.rename(columns={'name' : 'real_name'})\ncommits_grouped.head()", "That looks great! Now we switch back to our previous DataFrame by joining in the new information.", "commits = commits.merge(commits_grouped, left_on='nickname', right_index=True)\n# drop duplicated for better displaying\ncommits.drop_duplicates().head()", "That should be enough data cleansing for today!\nAnalysis\nNow that we have valid data, we can produce some new insights.\nTop 10 committers\nEasy tasks first: We simply produce a table with the Top 10 committers. We group by the real name and count every commit by using a subset (only the <tt>email</tt> column) of the DataFrame to only get on column returned. We rename the returned columns to <tt>commits</tt> for displaying reasons (would otherwise be <tt>email</tt>). Then we just list the top 10 entries after sorting appropriately.", "committers = commits.groupby('real_name')[['email']]\\\n .count().rename(columns={'email' : 'commits'})\\\n .sort_values('commits', ascending=False)\ncommitters.head(10)\n\ncommitters.head(10)", "Committer Distribution\nNext, we create a pie chart to get a good impression of the committers.", "committers['commits'].plot(kind='pie')", "Uhh...that looks ugly and kind of weird. Let's first try to fix the mess on the right side that shows all authors with minor changes by summing up their number of commits. We will use a threshold value that makes sense with our data (e. g. the committers that contribute more than 3/4 to the code) to identify them. 
A nice start is the description of the current data set.", "committers_description = committers.describe()\ncommitters_description", "OK, we want the 3/4 main contributors...", "threshold = committers_description.loc['75%'].values[0]\nthreshold", "...that is > 75% of the commits of all contributors.", "minor_committers = committers[committers['commits'] <= threshold]\nminor_committers.head()", "These are the entries we want to combine to our new \"Others\" section. But we don't want to loose the number of changes, so we store them for later usage.", "others_number_of_changes = minor_committers.sum()\nothers_number_of_changes", "Now we are deleting all authors that are in the <tt>author_minor_changes</tt>'s DataFrame. To not check on the threshold value from above again, we reuse the already calculated DataFrame.", "main_committers = committers[~committers.isin(minor_committers)]\nmain_committers.tail()", "This gives us for the contributors with just a few commits missing values for the <tt>changes</tt> column, because these values were in the <tt>author_minor_changes</tt> DataFrame. We drop all Nan values to get only the major contributors.", "main_committers = main_committers.dropna()\nmain_committers", "We add the \"Others\" row by appending to the DataFrame", "main_committers.loc[\"Others\"] = others_number_of_changes\nmain_committers", "Almost there, you redraw with some styling and minor adjustments.", "# some configuration for displaying nice diagrams\nplt.style.use('fivethirtyeight')\nplt.figure(facecolor='white')\n\nax = main_committers['commits'].plot(\n kind='pie', figsize=(6,6), title=\"Main committers\", \n autopct='%.2f', fontsize=12)\n# get rid of the distracting label for the y-axis\nax.set_ylabel(\"\")", "Summary\nI hope you saw that there are some minor difficulties in working with data. We got the big problem with the authors and email addresses that we solved by correcting the names. We also transformed an ugly pie chart into a management-grade one.\nThis analysis also gives you some handy lines of code for some common data analysis tasks." ]
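The committer analysis above needs a running Neo4j server populated by jQAssistant, which readers of this dump will usually not have. As a self-contained stand-in, the sketch below repeats the "fold minor contributors into an Others slice at the 75% quantile" step on an invented commit-count table; the names and counts are placeholders, not real Spring PetClinic history.

```python
import pandas as pd

# Invented commit counts standing in for the Neo4j/jQAssistant query result
committers = pd.DataFrame(
    {"commits": [120, 45, 30, 9, 4, 3, 2, 1, 1]},
    index=["Alice", "Bob", "Carol", "Dave", "Eve", "Frank", "Grace", "Heidi", "Ivan"])
committers.index.name = "real_name"

# Everyone at or below the 75% quantile of commit counts is a "minor" committer
threshold = committers["commits"].quantile(0.75)
minor = committers[committers["commits"] <= threshold]
main = committers[committers["commits"] > threshold].copy()

# Collapse the minor contributors into a single "Others" row
main.loc["Others"] = minor["commits"].sum()

print(main.sort_values("commits", ascending=False))
# main['commits'].plot(kind='pie')  # same pie-chart call as in the notebook
```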
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jedludlow/sp-for-ds
sp_for_ds.ipynb
mit
[ "%matplotlib widget\n\nimport warnings; warnings.simplefilter('ignore')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport scipy.signal as signal\nimport ipywidgets as widgets\nfrom IPython.display import display\nsns.set_context('notebook')", "<hr>\n\nSignal Processing for Data Scientists\nJed Ludlow\nUnitedHealth Group\n<hr>\n\nGet the code at https://github.com/jedludlow/sp-for-ds\nOverview\n\nSignal processing: Tools to separate the useful information from the nuisance information in a time series.\nCover three areas today\nFourier analysis in the frequency domain\nDiscrete-time sampling\nDigital filtering\n\nFourier Analysis in the Frequency Domain\nFourier Series\nA periodic signal $s(t)$ can be expressed as a (possibly infininte) sum of simple sinusoids. Usually we approximate it by truncating the series to $N$ terms as\n$$s_N(t) = \\frac{A_0}{2} + \\sum_{n=1}^N A_n \\sin(\\tfrac{2\\pi nt}{P}+\\phi_n) \\quad \\scriptstyle \\text{for integer}\\ N\\ \\ge\\ 1$$\nDiscrete Fourier Transform\nIf we have a short sample of a periodic signal, the discrete Fourier transform allows us to recover its sinusoidal frequency components. Numerically, the problem of computing the discrete Fourier transform has been studied for many years, and the result is the Fast Fourier Transform (FFT).\n\nIn 1994, Gilbert Strang described the FFT as \"the most important numerical algorithm of our lifetime\" and it was included in Top 10 Algorithms of 20th Century by the IEEE journal Computing in Science & Engineering. (source: https://en.wikipedia.org/wiki/Fast_Fourier_transform)\n\nIn Python, this transform is available in the numpy.fft package.", "def fft_scaled(x, axis=-1, samp_freq=1.0, remove_mean=True):\n \"\"\"\n Fully scaled and folded FFT with physical amplitudes preserved.\n\n Arguments\n ---------\n\n x: numpy n-d array\n array of signal information.\n\n axis: int\n array axis along which to compute the FFT.\n\n samp_freq: float\n signal sampling frequency in Hz.\n\n remove_mean: boolean\n remove the mean of each signal prior to taking the FFT so the DC\n component of the FFT will be zero.\n\n Returns\n --------\n\n (fft_x, freq) where *fft_x* is the full complex FFT, scaled and folded\n so that only positive frequencies remain, and *freq* is a matching\n array of positive frequencies.\n\n Examples\n --------\n\n A common use case would present the signals in a 2-D array\n where each row contains a signal trace. Columns would\n then represent time sample intervals of the signals. The rows of\n the returned *fft_x* array would contain the FFT of each signal, and\n each column would correspond to an entry in the *freq* array.\n\n \"\"\"\n # Get length of the requested array axis.\n n = x.shape[axis]\n\n # Use truncating division here since for odd n we want to\n # round down to the next closest integer. See docs for numpy fft.\n half_n = n // 2\n\n # Remove the mean if requested\n if remove_mean:\n ind = [slice(None)] * x.ndim\n ind[axis] = np.newaxis\n x = x - x.mean(axis)[ind]\n\n # Compute fft, scale, and fold negative frequencies into positive.\n def scale_and_fold(x):\n # Scale by length of original signal\n x = (1.0 / n) * x[:half_n + 1]\n # Fold negative frequency\n x[1:] *= 2.0\n return x\n\n fft_x = np.fft.fft(x, axis=axis)\n fft_x = np.apply_along_axis(scale_and_fold, axis, fft_x)\n\n # Matching frequency array. 
The abs takes care of the case where n\n # is even, and the Nyquist frequency is usually negative.\n freq = np.fft.fftfreq(n, 1.0 / samp_freq)\n freq = np.abs(freq[:half_n + 1])\n\n return (fft_x, freq)", "1 Hz Square Wave", "f_s = 1000.0 # Sampling frequency in Hz\ntime = np.arange(0.0, 100.0 + 1.0/f_s, 1.0/f_s)\nsquare_wave = signal.square(2 * np.pi * time)\nplt.figure(figsize=(9, 5))\nplt.plot(time, square_wave), plt.xlabel('time (s)'), plt.ylabel('x(t)'), plt.title('1 Hz Square Wave')\nplt.xlim((0, 3)), plt.ylim((-1.1, 1.1));", "Fourier Analysis of Square Wave", "fft_x, freq_sq = fft_scaled(square_wave, samp_freq=f_s)\nf_max = 24.0\nplt.figure(figsize=(9, 5)), plt.plot(freq_sq, np.abs(fft_x))\nplt.xticks(np.arange(0.0, f_max + 1.0, 1.0))\nplt.xlim((0, f_max)), plt.xlabel('Frequency (Hz)'), plt.ylabel('Amplitude')\nplt.title('Frequency Spectrum of Square Wave');", "Approximate 1 Hz Square Wave\nLet's sythesize an approximation to a square wave by summing a reduced number of sinusoidal components.", "# Set frequency components and amplitudes.\n# Square waves contain all the odd harmonics\n# of the fundamental frequency.\nf_components = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]\n# f_components = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0,\n# 13.0, 15.0, 17.0, 19.0, 21.0]\namplitudes = [1.28 / f for f in f_components]\n\n# Generate the square wave\ns_t = np.zeros_like(time)\nfor f, amp in zip(f_components, amplitudes):\n s_t += amp * np.sin(2 * np.pi * f * time)\n\nplt.figure(figsize=(9, 5)), plt.plot(time, s_t)\nplt.xlabel('time (s)'), plt.ylabel('$s(t)$'), plt.xlim((0, 3))\nplt.title('Approximate Square Wave');", "Fourier Analysis of Approximate Square Wave", "freq_spec, freq = fft_scaled(s_t, samp_freq=f_s)\nf_max = 12.0\nplt.figure(figsize=(9, 5)), plt.plot(freq, np.abs(freq_spec))\nplt.xticks(np.arange(0.0, f_max + 1.0, 1.0))\nplt.xlim((0, f_max)), plt.xlabel('Frequency (Hz)'), plt.ylabel('Amplitude')\nplt.title('Frequency Spectrum of Approximate Square Wave');", "Discrete-Time Sampling\nNyquist-Shannon Sampling Theorem\nConsider a continuous signal $x(t)$ with Fourier transfom $X(f)$. Assume:\n\nA sampled version of the signal is constructed as\n\n$$x_k = x(kT), k \\in \\mathbb{I}$$\n\n$x(t)$ is band-limited such that\n\n$$X(f) = 0 \\ \\forall \\ |f| > B$$\n<center><img src=\"images/Bandlimited.svg\" width=\"300\"></center>\nThen $x(t)$ is uniquely recoverable from $x_k$ if\n$$\\frac{1}{T} \\triangleq f_s > 2B$$\nThis critical frequency shows up so frequently that is has its own name, the Nyquist frequency.\n$$f_N = \\frac{f_s}{2}$$\nA note about frequency: Most theoretical signal processing work is done using circular frequency $\\omega$ in units of rad/sec. This is done to eliminate the factor of of $2 \\pi$ which shows up in many equations when true ordinary frequency $f$ is used. That said, nearly all practical signal processing is done with ordinary frequency. The relationship between the two frequencies is\n$$ \\omega = 2 \\pi f$$\n<center><img src=\"images/ideal_sampler.png\" width=\"800\"></center>\nimage credit: MIT Open Courseware, Signals and Systems, Oppenheimer\nPractical Realities\n\nFor complete recoverability, Nyquist requires an ideal sampler and an ideal interpolator. In practice, these are not physically realizable.\n\n$$x(t) = \\mathrm{IdealInterpolator}_T(\\mathrm{IdealSampler}_T(x(t))$$\n\n\nReal signals are never perfectly band-limited. There are always some noise components out past the Nyquist sampling rate. 
\n\n\nYou will often be given sampled data but have very little insight into the system that generated the data. In that situation, you really have no guarantees that any estimates of frequency content for the underlying continuous time process are correct. You may be observing alias frequencies. A frequency $f_a$ is an alias of $f$ if\n\n\n$$ f_a = |nf_s - f|, n \\in \\mathbb{I}$$\nAliasing\nWhen your signal contains frequency components that are above the Nyquist frequency then those high frequency components show up at lower frequencies. These lower frequencies are called aliases of the higher frequencies.", "def scale_and_fold(x):\n n = len(x)\n half_n = n // 2\n # Scale by length of original signal\n x = (1.0 / n) * x[:half_n + 1]\n # Fold negative frequency\n x[1:] *= 2.0\n return x\n\ndef aliasing_demo():\n f_c = 1000.0 # Hz\n f_s = 20.0 # Hz\n f_end = 25.0 # Hz\n f = 1.0 # Hz\n\n time_c = np.arange(0.0, 10.0 + 1.0/f_c, 1/f_c)\n time_s = np.arange(0.0, 10.0 + 1.0/f_s, 1/f_s)\n freq_c = np.fft.fftfreq(len(time_c), 1.0 / f_c)\n freq_c = np.abs(freq_c[:len(time_c) // 2 + 1])\n freq_s = np.fft.fftfreq(len(time_s), 1.0 / f_s)\n freq_s = np.abs(freq_s[:len(time_s) // 2 + 1])\n\n f=widgets.FloatSlider(value=1.0, min=0.0, max=f_end, step=0.1, description='Frequency (Hz)')\n phi = widgets.FloatSlider(value=0.0, min=0.0, max=2.0*np.pi, step=0.1, description=\"Phase (rad)\")\n\n x_c = np.sin(2 * np.pi * f.value * time_c + phi.value)\n x_s = np.sin(2 * np.pi * f.value * time_s + phi.value)\n fig, ax = plt.subplots(2, 1, figsize=(9, 6))\n fig.subplots_adjust(hspace=0.3)\n line1 = ax[0].plot(time_c, x_c, alpha=0.9, lw=2.0)[0]\n line2 = ax[0].plot(time_s, x_s, marker='o', color='r', ls=':')[0]\n ax[0].set_xlabel(\"Time (s)\")\n ax[0].set_ylabel(\"$x$\")\n ax[0].set_title('Sine Wave Sampled at {} Hz'.format(int(f_s)))\n ax[0].set_ylim((-1, 1))\n ax[0].set_xlim((0, 1))\n\n window_c = 2 * np.hanning(len(time_c))\n window_s = 2 * np.hanning(len(time_s))\n fft_c = scale_and_fold(np.fft.fft(x_c * window_c))\n fft_s = scale_and_fold(np.fft.fft(x_s * window_s))\n\n line3 = ax[1].plot(freq_c, np.abs(fft_c), alpha=0.5, lw=2)[0]\n line4 = ax[1].plot(freq_s, np.abs(fft_s), 'r:', lw=2)[0]\n line5 = ax[1].axvline(f_s / 2.0, color='0.75', ls='--')\n plt.axvline(f_s, color='0.75')\n ax[1].text(1.02 * f_s / 2, 0.93, '$f_N$', {'size':14})\n ax[1].text(1.01 * f_s, 0.93, '$f_s$', {'size':14})\n ax[1].set_xlabel(\"Frequency (Hz)\")\n ax[1].set_ylabel(\"$X(f)$\")\n ax[1].set_xlim((0, f_end))\n\n def on_slider(s): \n x_c = np.sin(2 * np.pi * f.value * time_c + phi.value)\n x_s = np.sin(2 * np.pi * f.value * time_s + phi.value)\n fft_c = scale_and_fold(np.fft.fft(x_c * window_c))\n fft_s = scale_and_fold(np.fft.fft(x_s * window_s))\n\n # line1.set_xdata(time_c)\n line1.set_ydata(x_c)\n # line2.set_xdata(time_s)\n line2.set_ydata(x_s)\n line3.set_ydata(np.abs(fft_c))\n line4.set_ydata(np.abs(fft_s))\n plt.draw()\n\n f.on_trait_change(on_slider)\n phi.on_trait_change(on_slider)\n\n display(f)\n display(phi)\n\naliasing_demo()", "Avoiding Aliasing\nIf you have control over the sampling process, specify a sampling frequency that is at least twice the highest frequency component of your signal. If you really want to preserve high fidelity, specify a sampling frequency that is ten times the highest frequency component in your signal.\nDigital Filtering\nReshaping the Signal\nSo far we've discussed analysis techniques for characterizing the frequency content of a signal. 
Now we discuss how to modify the frequency content of the signal to emphasize some of the information in it while removing other aspects. Generally accomplish this using digital filters.\nMoving Average as a Digital Filter\nLet's express a moving average of five in the language of digital filtering. The output $y$ at the $k$-th sample is a function of the last five inputs $x$.\n$$y_k = \\frac{x_k + x_{k-1} + x_{k-2} + x_{k-3} + x_{k-4}}{5}$$\nMore generally, this looks like\n$$y_k = b_0 x_k + b_1 x_{k-1} + b_2 x_{k-2} + b_3 x_{k-3} + b_4 x_{k-4}$$\nwhere all the $b_i = 0.2$. But they don't have to be equal. We could select each of the $b_i$ independently to be whatever we want. Then the filter looks like a weighted average.\nUsing Previous Outputs\nEven more generally, the current output can be a function of previous outputs as well as inputs if we desire.\n$$y_k = \\frac{1}{a_0} \\left(\\frac{b_0 x_k + b_1 x_{k-1} + b_2 x_{k-2} + b_3 x_{k-3} + b_4 x_{k-4}, + \\cdots}\n {a_1 y_{k-1} + a_2 y_{k-2} + a_3 y_{k-3} + a_4 y_{k-4} + \\cdots}\n \\right)$$\nBut how do we choose the $b_i$ and the $a_i$ to get a filter with a particular desired behavior?\nStandard Digital Filter Designs\nLuckily, standard filter designs already exist to create filters that have certain response characteristics, either in the time domain or the frequency domain.\n\nButterworth\nChebyshev\nElliptic\nBessel\n\nWhen in doubt, use the Butterworth filter since it's a great general purpose filter and is easier to specify. All of these filter designs are available in scipy.signal.", "def butter_filt(x, sampling_freq_hz, corner_freq_hz=4.0, lowpass=True, filtfilt=False):\n \"\"\"\n Smooth data with a low-pass or high-pass filter.\n\n Apply a 2nd order Butterworth filter. Note that if filtfilt\n is True the applied filter is effectively a 4th order Butterworth.\n \n Parameters\n ----------\n x: 1D numpy array\n Array containing the signal to be smoothed.\n sampling_freq_hz: float\n Sampling frequency of the signal in Hz.\n corner_freq_hz: float\n Corner frequency of the Butterworth filter in Hz.\n lowpass: bool\n If True (default), apply a low-pass filter. If False,\n apply a high-pass filter.\n filtfilt: bool\n If True, apply the filter forward and then backward\n to elminate delay. 
If False (default), apply the\n filter only in the forward direction.\n\n Returns\n -------\n filtered: 1D numpy array\n Array containing smoothed signal\n b, a: 1D numpy arrays\n Polynomial coefficients of the smoothing filter as returned from\n the Butterworth design function.\n\n \"\"\"\n nyquist = sampling_freq_hz / 2.0\n f_c = np.array([corner_freq_hz, ], dtype=np.float64) # Hz\n # Normalize by Nyquist\n f_c /= nyquist\n # Second order Butterworth filter at corner frequency\n btype = 'low' if lowpass else 'high'\n b, a = signal.butter(2, f_c, btype=btype)\n # Apply the filter either in forward direction or forward-back.\n if filtfilt:\n filtered = signal.filtfilt(b, a, x)\n else:\n filtered = signal.lfilter(b, a, x)\n return (filtered, b, a)\n\nf_c_low = 2.0 # Corner frequency in Hz\ns_filtered, b, a = butter_filt(s_t, f_s, f_c_low)\nw, h = signal.freqz(b, a, 2048)\nw *= (f_s / (2 * np.pi))\nfig, ax = plt.subplots(2, 1, sharex=True, figsize=(9, 5))\nax[0].plot(w, abs(h)), plt.xlim((0, 12)), ax[1].plot(w, np.angle(h, deg=True))\nax[0].set_ylabel('Attenuation Factor'), ax[1].set_ylabel('Phase Angle (deg)')\nax[1].set_xlabel('Frequency (Hz)')\nax[0].set_title('Filter Frequency Response - 2nd Order Butterworth Low-Pass');\n\nplt.figure(figsize=(9, 5))\nplt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered')\nplt.xlim((0, 3))\nplt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('Low-Pass, Forward Filtering');\n\ns_filtered, b, a = butter_filt(s_t, f_s, f_c_low, filtfilt=True)\nplt.figure(figsize=(9, 5))\nplt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered')\nplt.xlim((0, 3))\nplt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('Low-Pass, Forward-Backward Filtering');\n\nf_c_high = 6.0 # Corner frequency in Hz\ns_filtered, b, a = butter_filt(s_t, f_s, f_c_high, lowpass=False, filtfilt=True)\nw, h = signal.freqz(b, a, 2048)\nw *= (f_s / (2 * np.pi))\nfig, ax = plt.subplots(2, 1, sharex=True, figsize=(9, 5))\nax[0].plot(w, abs(h)), plt.xlim((0, 12)), ax[1].plot(w[1:], np.angle(h, deg=True)[1:])\nax[0].set_ylabel('Attenuation Factor'), ax[1].set_ylabel('Phase Angle (deg)')\nax[1].set_xlabel('Frequency (Hz)')\nax[0].set_title('Filter Frequency Response - 2nd Order Butterworth High-Pass');\n\ns_filtered, b, a = butter_filt(s_t, f_s, f_c_high, lowpass=False, filtfilt=True)\nplt.figure(figsize=(9, 5))\nplt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered')\nplt.xlim((0, 3))\nplt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('High-Pass, Forward-Backward Filtering');", "Thank you!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scikit-optimize/scikit-optimize.github.io
0.7/notebooks/auto_examples/strategy-comparison.ipynb
bsd-3-clause
[ "%matplotlib inline", "Comparing surrogate models\nTim Head, July 2016.\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nBayesian optimization or sequential model-based optimization uses a surrogate\nmodel to model the expensive to evaluate function func. There are several\nchoices for what kind of surrogate model to use. This notebook compares the\nperformance of:\n\ngaussian processes,\nextra trees, and\nrandom forests\n\nas surrogate models. A purely random optimization strategy is also used as\na baseline.", "print(__doc__)\nimport numpy as np\nnp.random.seed(123)\nimport matplotlib.pyplot as plt", "Toy model\nWe will use the :class:benchmarks.branin function as toy model for the expensive function.\nIn a real world application this function would be unknown and expensive\nto evaluate.", "from skopt.benchmarks import branin as _branin\n\ndef branin(x, noise_level=0.):\n return _branin(x) + noise_level * np.random.randn()\n\nfrom matplotlib.colors import LogNorm\n\n\ndef plot_branin():\n fig, ax = plt.subplots()\n\n x1_values = np.linspace(-5, 10, 100)\n x2_values = np.linspace(0, 15, 100)\n x_ax, y_ax = np.meshgrid(x1_values, x2_values)\n vals = np.c_[x_ax.ravel(), y_ax.ravel()]\n fx = np.reshape([branin(val) for val in vals], (100, 100))\n\n cm = ax.pcolormesh(x_ax, y_ax, fx,\n norm=LogNorm(vmin=fx.min(),\n vmax=fx.max()))\n\n minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])\n ax.plot(minima[:, 0], minima[:, 1], \"r.\", markersize=14,\n lw=0, label=\"Minima\")\n\n cb = fig.colorbar(cm)\n cb.set_label(\"f(x)\")\n\n ax.legend(loc=\"best\", numpoints=1)\n\n ax.set_xlabel(\"X1\")\n ax.set_xlim([-5, 10])\n ax.set_ylabel(\"X2\")\n ax.set_ylim([0, 15])\n\n\nplot_branin()", "This shows the value of the two-dimensional branin function and\nthe three minima.\nObjective\nThe objective of this example is to find one of these minima in as\nfew iterations as possible. One iteration is defined as one call\nto the :class:benchmarks.branin function.\nWe will evaluate each model several times using a different seed for the\nrandom number generator. Then compare the average performance of these\nmodels. This makes the comparison more robust against models that get\n\"lucky\".", "from functools import partial\nfrom skopt import gp_minimize, forest_minimize, dummy_minimize\n\nfunc = partial(branin, noise_level=2.0)\nbounds = [(-5.0, 10.0), (0.0, 15.0)]\nn_calls = 60\n\ndef run(minimizer, n_iter=5):\n return [minimizer(func, bounds, n_calls=n_calls, random_state=n)\n for n in range(n_iter)]\n\n# Random search\ndummy_res = run(dummy_minimize)\n\n# Gaussian processes\ngp_res = run(gp_minimize)\n\n# Random forest\nrf_res = run(partial(forest_minimize, base_estimator=\"RF\"))\n\n# Extra trees\net_res = run(partial(forest_minimize, base_estimator=\"ET\"))", "Note that this can take a few minutes.", "from skopt.plots import plot_convergence\n\nplot = plot_convergence((\"dummy_minimize\", dummy_res),\n (\"gp_minimize\", gp_res),\n (\"forest_minimize('rf')\", rf_res),\n (\"forest_minimize('et)\", et_res),\n true_minimum=0.397887, yscale=\"log\")\n\nplot.legend(loc=\"best\", prop={'size': 6}, numpoints=1)", "This plot shows the value of the minimum found (y axis) as a function\nof the number of iterations performed so far (x axis). 
The dashed red line\nindicates the true value of the minimum of the :class:benchmarks.branin function.\nFor the first ten iterations all methods perform equally well as they all\nstart by creating ten random samples before fitting their respective model\nfor the first time. After iteration ten the next point at which\nto evaluate :class:benchmarks.branin is guided by the model, which is where differences\nstart to appear.\nEach minimizer only has access to noisy observations of the objective\nfunction, so as time passes (more iterations) it will start observing\nvalues that are below the true value simply because they are fluctuations." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdss/marvin
docs/sphinx/jupyter/first-steps.ipynb
bsd-3-clause
[ "First Steps\nNow that you have installed Marvin, it's time to take your first steps. If you want to learn more about how Marvin works, then go see General Info to learn about Marvin Modes, Versions, or Downloading. If you just want to play, then read on.\nFirst let's run some boilerplate code for Python 2/3 compatibility and plotting in the notebook:", "from __future__ import print_function, division, absolute_import\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Now, letโ€™s import Marvin:", "import marvin", "Let's see what release we're using. Releases can be either MPLs (e.g. MPL-5) or DRs (e.g. DR13), however DRs are currently disabled in Marvin.", "marvin.config.release", "On intial import, Marvin will set the default data release to use the latest MPL available, currently MPL-6. You can change the version of MaNGA data using the Marvin Config.", "from marvin import config\nconfig.setRelease('MPL-5')\n\nprint('MPL:', config.release)", "But let's work with MPL-6:", "config.setRelease('MPL-6')\n\n# check designated version\nconfig.release", "My First Cube\nNow letโ€™s play with a Marvin Cube!\nImport the Marvin-Tools Cube class:", "from marvin.tools.cube import Cube", "Let's load a cube from a local file. Start by specifying the full path and name of the file, such as:\n/Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz\nEDIT Next Cell", "#----- EDIT THIS CELL -----#\n\n# filename = '/Users/Brian/Work/Manga/redux/v1_5_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'\nfilename = 'path/to/manga/cube/manga-8485-1901-LOGCUBE.fits.gz'\n\nfilename = '/Users/andrews/manga/spectro/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'\nfilename = '/Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'", "Create a Cube object:", "cc = Cube(filename=filename)", "Now we have a Cube object:", "print(cc)", "How about we look at some meta-data", "cc.ra, cc.dec, cc.header['SRVYMODE']", "...and the quality and target bits", "cc.target_flags\n\ncc.quality_flag", "Get a Spaxel\nCubes have several functions currently available: getSpaxel, getMaps, getAperture. Let's look at spaxels. We can retrieve spaxels from a cube easily via indexing. In this manner, spaxels are 0-indexed from the lower left corner. Let's get spaxel (x=10, y=10):", "spax = cc[10,10]\n\n# print the spaxel to see the x,y coord from the lower left, and the coords relative to the cube center, x_cen/y_cen\nspax", "Spaxels have a spectrum associated with it. It has the wavelengths and fluxes of each spectral channel:\nAlternatively grab a spaxel with getSpaxel. Use the xyorig keyword to set the coordinate origin point: 'lower' or 'center'. The default is \"center\"", "# let's grab the central spaxel\nspax = cc.getSpaxel(x=0, y=0)\nspax\n\nspax.flux.wavelength\n\nspax.flux", "Plot the spectrum!", "# turn on interactive plotting\n%matplotlib notebook\n\nspax.flux.plot()", "Save plot to Downloads directory:", "# To save the plot, we need to draw it in the same cell as the save command.\nspax.flux.plot()\n\nimport os\nplt.savefig(os.getenv('HOME') + '/Downloads/my-first-spectrum.png')\n\n# NOTE - if you are using the latest version of iPython and Jupyter notebooks, then interactive matplotlib plots \n# should be enabled. You can save the figure with the save icon in the interactive toolbar." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ctroupin/CMEMS_INSTAC_Training
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
mit
[ "%matplotlib inline\nimport netCDF4\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom matplotlib import dates\nimport numpy as np", "Read variables and units\nWe assume the data file is present in the following directory:", "datafile = \"~/CMEMS_INSTAC/INSITU_MED_NRT_OBSERVATIONS_013_035/history/mooring/IR_TS_MO_61198.nc\"", "We use the os mudule to extend the ~.", "import os\ndatafile = os.path.expanduser(datafile)\n\nwith netCDF4.Dataset(datafile, 'r') as ds:\n time_values = ds.variables['TIME'][:]\n temperature_values = ds.variables['TEMP'][:]\n temperatureQC = ds.variables['TEMP_QC'][:]\n time_units = ds.variables['TIME'].units\n temperature_units = ds.variables['TEMP'].units\n time2 = netCDF4.num2date(time_values, time_units)", "We also mask the temperature values that have quality flag not equal to 1.", "temperature_values = np.ma.masked_where(temperatureQC != 1, temperature_values)", "Basic plot\nWe create the most simple plot, without any additional option.", "fig = plt.figure()\nplt.plot(time2, temperature_values)\nplt.ylabel(temperature_units)", "Main problems:\n* The figure is not large enough.\n* The labels are too small.\nImproved plot\nWith some commands the previous plot can be improved:\n* The figure size is increased\n* The font size is set to 20 (pts)\n* The year labels are rotated 45ยบ", "mpl.rcParams.update({'font.size': 20})\n\nfig = plt.figure(figsize=(15, 8))\nax = fig.add_subplot(111)\nplt.plot(time2, temperature_values, linewidth=0.5)\nplt.ylabel(temperature_units)\nplt.xlabel('Year')\nfig.autofmt_xdate()\nplt.grid()", "Final version\nWe want to add a title containing the coordinates of the station. Longitude and latitude are both stored as vectors, but we will only keep the mean position to be included in the title.\nLaTeX syntax can be used, as in this example, with the degree symbol.", "with netCDF4.Dataset(datafile, 'r') as ds:\n lon = ds.variables['LONGITUDE'][:]\n lat = ds.variables['LATITUDE'][:]\nfigure_title = r'Temperature evolution at \\n%s$^\\circ$E, %s$^\\circ$N' % (lon.mean(), lat.mean())\nprint figure_title", "The units for the temperature are also changed:", "temperature_units2 = '($^{\\circ}$C)'\n\nfig = plt.figure(figsize=(15, 8))\nax = fig.add_subplot(111)\nax.xaxis.set_major_locator(dates.YearLocator(base=2))\nax.xaxis.set_minor_locator(dates.YearLocator())\nplt.plot(time2, temperature_values, linewidth=0.5)\nplt.ylabel(temperature_units2, rotation=0., horizontalalignment='right')\nplt.title(figure_title)\nplt.xlabel('Year')\nfig.autofmt_xdate()\nplt.grid()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
heatseeknyc/data-science
src/bryan analyses/Hack for Heat #1.ipynb
mit
[ "Hacking for heat\nIn this series, I'm going to be posting about the process that goes on behind some of the blog posts we end up writing. In this first entry, I'm going to be exploring a number of datsets.\nThese are the ones that I'm going to be looking at:\n\nHPD (Housing Preservation and Development) housing litigations\nHousing maintenance code complaints\nHousing maintanence code violations\nHPD complaints\n\n(for HPD datasets, some documentation can be found here)\nHPD litigation database\nFirst, we're going to look at the smallest dataset, one that contains cases against landlords. From the documentation, this file contains \"All cases commenced by HPD or by tennants (naming HPD as a party) in [housing court] since August 2006\" either seeking orders for landlords to comply with regulations, or awarding HPD civil penalties (i.e., collecting on fines).", "import pandas as pd\n\nlitigation = pd.read_csv(\"Housing_Litigations.csv\")\n\nlitigation.head()", "Let's take a look at unique values for some of the columns:", "litigation['Boro'].unique()\n\nlitigation.groupby(by = ['Boro','CaseJudgement']).count()", "The above table tells us that Manhattan has the lowest proportion of cases that receive judgement (about 1 in 80), whereas Staten Island has the highest (about 1 in 12). It may be something worth looking into, but it's also important to note that many cases settle out of court, and landlords in Manhattan may be more willing (or able) to do so.", "litigation['CaseType'].unique()\n\nlitigation.groupby(by = ['CaseType', 'CaseJudgement']).count()", "The table above shows the same case judgement proportions, but conditioned on what type of case it was. Unhelpfully, the documentation does not specify what the difference between Access Warrant - Lead and Non-Lead is. It could be one of two possibilities: The first is whether the warrants have to do with lead-based paint, which is a common problem, but perhaps still too idiosyncratic to have it's own warrant type. The second, perhaps more likely possibility is whether or not HPD was the lead party in the case.\nWe'll probably end up using these data by aggregating it and examining how complaints change over time, perhaps as a function of what type they are. There's also the possibility of looking up specific buildings' complaints and tying them to landlords. There's probably also an easy way to join this dataset with another, by converting the address information into something standardized, like borough-block-lot (BBL; http://www1.nyc.gov/nyc-resources/service/1232/borough-block-lot-bbl-lookup)\nHPD complaints\nNext, we're going to look at a dataset of HPD complaints.", "hpdcomp = pd.read_csv('Housing_Maintenance_Code_Complaints.csv')\n\nhpdcomp.head()\n\nlen(hpdcomp)", "This dataset is less useful on its own. It doesn't tell us what the type of complaint was, only the date it was received and whether or not the complaint is still open. However, it may be useful in conjunction with the earlier dataset. For example, we might be interested in how many of these complaints end up in court (or at least, have some sort of legal action taken).\nHPD violations\nThe following dataset tracks HPD violations.", "hpdviol = pd.read_csv('Housing_Maintenance_Code_Violations.csv')\n\nhpdviol.head()\n\nlen(hpdviol)", "These datasets all have different lengths, but that's not surprising, given they come from different years. 
One productive initial step would be to convert the date strings into something numerical.\nHPD complaint problems database", "hpdcompprob = pd.read_csv('Complaint_Problems.csv')\n\nhpdcompprob.head()", "Awesome! This dataset provides some more details about the complaints, and lets us join by ComplaintID.\nSummary and next steps\nIn the immediate future, I'm going to be writing a script to join and clean this dataset. This can be done either in Python or by writing some SQL. I haven't decided yet. Additionally, I'm going to be writing some code to do things like convert date strings, and perhaps scrape text." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arcyfelix/Courses
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/05-Pandas-with-Time-Series/01 - Datetime Index.ipynb
apache-2.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\n<center>Copyright Pierian Data 2017</center>\n<center>For more information, visit us at www.pieriandata.com</center>\nIntroduction to Time Series with Pandas\nA lot of our financial data will have a datatime index, so let's learn how to deal with this sort of data with pandas!", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom datetime import datetime\n\n# To illustrate the order of arguments\nmy_year = 2017\nmy_month = 1\nmy_day = 2\nmy_hour = 13\nmy_minute = 30\nmy_second = 15\n\n# January 2nd, 2017\nmy_date = datetime(my_year,my_month, my_day)\n\n# Defaults to 0:00\nmy_date \n\n# January 2nd, 2017 at 13:30:15\nmy_date_time = datetime(my_year, my_month, my_day, my_hour, my_minute, my_second)\n\nmy_date_time", "You can grab any part of the datetime object you want", "my_date.day\n\nmy_date_time.hour", "Pandas with Datetime Index\nYou'll usually deal with time series as an index when working with pandas dataframes obtained from some sort of financial API. Fortunately pandas has a lot of functions and methods to work with time series!", "# Create an example datetime list/array\nfirst_two = [datetime(2016, 1, 1), datetime(2016, 1, 2)]\nfirst_two\n\n# Converted to an index\ndt_ind = pd.DatetimeIndex(first_two)\ndt_ind\n\n# Attached to some random data\ndata = np.random.randn(2, 2)\nprint(data)\ncols = ['A','B']\n\ndf = pd.DataFrame(data,dt_ind,cols)\n\ndf\n\ndf.index\n\n# Latest Date Location\ndf.index.argmax()\n\ndf.index.max()\n\n# Earliest Date Index Location\ndf.index.argmin()\n\ndf.index.min()", "Great, let's move on!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
harmsm/pythonic-science
chapters/00_inductive-python/05_lists.ipynb
unlicense
[ "Lists\nLists are objects that let you hold on to multiple values at once in a sane and organized fashion.\nIntroduction\nLists are ordered collections of objects. Objects in lists can be of any type. You can also add and remove list entries. They are indicated by \"[\" and \"]\" brackets.\nmy_list = [1,2,3]\nIndexing\nPredict what this code does.", "some_list = [10,20,30]\nprint(some_list[2])\n\nsome_list = [10,20,30]\nprint(some_list[0])\n\nsome_list = [10,20,30]\nprint(some_list[-1])", "Summarize\nHow do you access elements in a list?\nPredict what this code does.", "some_list = [10,20,30,40]\nprint(some_list[1:3])\n\nsome_list = [10,20,30]\nprint(some_list[:3])", "Summarize\nWhat does the \":\" symbol do?\nModify\nChange the cell below so it prints the second through fourth elements in the list.", "some_list = [0,10,20,30,40,50,60,70]\nprint(some_list[2:4])", "Setting values in lists\nPredict what this code does.", "some_list = [10,20,30]\nsome_list[0] = 50\nprint(some_list)", "Predict what this code does.", "some_list = []\nfor i in range(5):\n some_list.append(i)\nprint(some_list)", "Predict what this code does.", "some_list = [1,2,3]\nsome_list.insert(2,5)\nprint(some_list)\n\nsome_list = [10,20,30]\nsome_list.pop(1)\nprint(some_list)\n\nsome_list = [10,20,30]\nsome_list.remove(30)\nprint(some_list)", "Summarize\nHow can you change entries in a list?\nImplement\nWrite a program that creates a list with all integers from 0 to 9 and then replaces the 5 with the number 423.\nMiscellaneous List Stuff", "# You can put anything in a list\nsome_list = [\"test\",1,1.52323,print]\n\n# You can even put a list in a list\nsome_list = [[1,2,3],[4,5,6],[7,8,9]] # a list of three lists!\n\n# You can get the length of a list with len(some_list)\nsome_list = [10,20,30]\nprint(len(some_list))", "Copying lists\n(a confusing point for python programmers)\nPredict what this code does.", "some_list = [10,20,30]\n\nanother_list = some_list\nsome_list[0] = 50\n\nprint(some_list)\nprint(another_list)", "Predict what this code does.", "import copy\n\nsome_list = [10,20,30]\n\nanother_list = copy.deepcopy(some_list)\nsome_list[0] = 50\n\nprint(some_list)\nprint(another_list)", "Think about it for a moment. What might be going on?\n<h3><font color=\"red\">DANGER</font></h3>\n\nThe previous cells demonstrate a common (and confusing) python gotcha when dealing with lists.\nIn the first case:\nThe statement\npython\n another_list = some_list\nsays that another_list is some_list. These are now two labels for the same underlying object. It does not create a new object. If I change some_list, it changes another_list because they are the same thing. In programming terms, another_list and some_list are both references to the same object. \nI can write:\nMary is also known as Jane.\nI can then use \"Mary\" or \"Jane\" and it will be understood both labels point to the same person. If I say, \"Jane has red hair\", it implies that \"Mary\" also has red hair because they are the same person. \nIn the second case:\nThe statement\npython\n another_list = copy.deepcopy(some_list)\nsays to make a copy of some_list and call it another_list. These are now independent. I can modify one without modifiying another. (In programming terms, another_list and some_list now refer to different objects). \nIn our Mary and Jane analogy, the sentence would be:\nI cloned Mary and named the clone Jane.\nNow the two labels point to to different people. 
If I say \"Jane has red hair\" it does not imply that \"Mary has red hair\" (Mary may have dyed her hair blue). \nSummary\n\nPython lists start at 0 and count to N-1. \nNegative numbers (starting at -1) count from the right side.\n\n\":\" lets you slice lists. \n\nUsing some_list[i:j+1] returns the $i^{th}$ through the $j^{th}$ entries in the list. \nsome_list[:3] gives the $0^{th}$ through the $2^{nd}$ entries.\nsome_list[1:-1] gives the $1^{st}$ through the last entries.\nsome_list[:] gives the whole list\n\n\n\nChange $i^{th}$ entry to $x$ by some_list[i] = x\n\nAppend $x$ to the end by some_list.append(x)\nInsert $x$ at the $i^{th}$ position by some_list.insert(i,x)\nRemove the $i^{th}$ entry by some_list.pop(i)\nRemove the first entry with value $x$ by some_list.remove(x)\n\nIf you want to copy a list, do:\n```python\nimport copy\nsome_list = [1,2,3]\nlist_copy = copy.deepcopy(some_list)\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
azhurb/deep-learning
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Project 4).ipynb
mit
[ "Sentiment Classification & How To \"Frame Problems\" for a Neural Network\nby Andrew Trask\n\nTwitter: @iamtrask\nBlog: http://iamtrask.github.io\n\nWhat You Should Already Know\n\nneural networks, forward and back-propagation\nstochastic gradient descent\nmean squared error\nand train/test splits\n\nWhere to Get Help if You Need it\n\nRe-watch previous Udacity Lectures\nLeverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)\nShoot me a tweet @iamtrask\n\nTutorial Outline:\n\n\nIntro: The Importance of \"Framing a Problem\"\n\n\nCurate a Dataset\n\nDeveloping a \"Predictive Theory\"\n\nPROJECT 1: Quick Theory Validation\n\n\nTransforming Text to Numbers\n\n\nPROJECT 2: Creating the Input/Output Data\n\n\nPutting it all together in a Neural Network\n\n\nPROJECT 3: Building our Neural Network\n\n\nUnderstanding Neural Noise\n\n\nPROJECT 4: Making Learning Faster by Reducing Noise\n\n\nAnalyzing Inefficiencies in our Network\n\n\nPROJECT 5: Making our Network Train and Run Faster\n\n\nFurther Noise Reduction\n\n\nPROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary\n\n\nAnalysis: What's going on in the weights?\n\n\nLesson: Curate a Dataset", "def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()\n\nlen(reviews)\n\nreviews[0]\n\nlabels[0]", "Lesson: Develop a Predictive Theory", "print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)", "Project 1: Quick Theory Validation", "from collections import Counter\nimport numpy as np\n\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()\n\nfor i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\npositive_counts.most_common()\n\npos_neg_ratios = Counter()\n\nfor term,cnt in list(total_counts.most_common()):\n if(cnt > 100):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\nfor word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]", "Transforming Text into Numbers", "from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')", "Project 2: Creating the Input/Output Data", "vocab = set(total_counts.keys())\nvocab_size = len(vocab)\nprint(vocab_size)\n\nlist(vocab)\n\nimport numpy as np\n\nlayer_0 = np.zeros((1,vocab_size))\nlayer_0\n\nfrom IPython.display import Image\nImage(filename='sentiment_network.png')\n\nword2index = {}\n\nfor i,word 
in enumerate(vocab):\n word2index[word] = i\nword2index\n\ndef update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0\n\ndef get_target_for_label(label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n\nlabels[0]\n\nget_target_for_label(labels[0])\n\nlabels[1]\n\nget_target_for_label(labels[1])", "Project 3: Building a Neural Network\n\nStart with your neural network from the last chapter\n3 layer neural network\nno non-linearity in hidden layer\nuse our functions to create the training data\ncreate a \"pre_process_data\" function to create vocabulary for our training data generating functions\nmodify \"train\" to train over the entire corpus\n\nWhere to Get Help if You Need it\n\nRe-watch previous week's Udacity Lectures\nChapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)", "import time\nimport sys\nimport numpy as np\n\n# Let's tweak our network from before to model these phenomena\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \n # set our random number generator \n np.random.seed(1)\n \n self.pre_process_data(reviews, labels)\n \n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n \n \n def pre_process_data(self, reviews, labels):\n \n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n self.label_vocab = list(label_vocab)\n \n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n self.learning_rate = learning_rate\n \n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] += 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n assert(len(training_reviews) == len(training_labels))\n \n correct_so_far = 0\n \n start = time.time()\n \n for i in range(len(training_reviews)):\n \n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = 
self.sigmoid(layer_1.dot(self.weights_1_2))\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # TODO: Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # TODO: Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n if(np.abs(layer_2_error) < 0.5):\n correct_so_far += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \n correct = 0\n \n start = time.time()\n \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \"% #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n if(layer_2[0] > 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n \n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\n\n# evaluate our model before training (just to show how horrible it is)\nmlp.test(reviews[-1000:],labels[-1000:])\n\n# train the network\nmlp.train(reviews[:-1000],labels[:-1000])\n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)\n\n# train the network\nmlp.train(reviews[:-1000],labels[:-1000])\n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)\n\n# train the network\nmlp.train(reviews[:-1000],labels[:-1000])", "Understanding Neural Noise", "from IPython.display import Image\nImage(filename='sentiment_network.png')\n\ndef update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0\n\nreview_counter = Counter()\n\nfor word in reviews[0].split(\" \"):\n review_counter[word] += 1\n\nreview_counter.most_common()", "Project 4: Reducing Noise in our Input Data", "import time\nimport sys\nimport numpy as np\n\n# Let's tweak our network from before to model these phenomena\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \n # set our random number generator 
\n np.random.seed(1)\n \n self.pre_process_data(reviews, labels)\n \n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n \n \n def pre_process_data(self, reviews, labels):\n \n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n self.label_vocab = list(label_vocab)\n \n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n self.learning_rate = learning_rate\n \n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] = 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n assert(len(training_reviews) == len(training_labels))\n \n correct_so_far = 0\n \n start = time.time()\n \n for i in range(len(training_reviews)):\n \n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # TODO: Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # TODO: Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n if(np.abs(layer_2_error) < 0.5):\n correct_so_far += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) + \" Training Accuracy:\" + 
str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \n correct = 0\n \n start = time.time()\n \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \"% #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n if(layer_2[0] > 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n \n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\n\nmlp.train(reviews[:-1000],labels[:-1000])\n\n# evaluate our model before training (just to show how horrible it is)\nmlp.test(reviews[-1000:],labels[-1000:])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
tutorial_part1/FIR Filter Design and C Headers.ipynb
bsd-2-clause
[ "%pylab inline\n#%matplotlib qt\nfrom __future__ import division # use so 1/2 = 0.5, etc.\nimport sk_dsp_comm.sigsys as ss\nimport sk_dsp_comm.fir_design_helper as fir_d\nimport sk_dsp_comm.coeff2header as c2h\nimport scipy.signal as signal\nimport imp # for module development and reload()\nfrom IPython.display import Audio, display\nfrom IPython.display import Image, SVG\n\npylab.rcParams['savefig.dpi'] = 100 # default 72\n#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)\n#%config InlineBackend.figure_formats=['png'] # default for inline viewing\n%config InlineBackend.figure_formats=['svg'] # SVG inline viewing\n#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX\n#%Image('fname.png',width='90%')", "FIR Filter Design\nBoth floating point and fixed-point FIR filters are the objective here. we will also need a means to export the filter coefficients to header files. Header export functions for float32_t and int16_t format are provided below. The next step is to actually design some filters using functions found in scipy.signal. To support both of these activities the Python modules fir_design_helper.py and coeff2header.py are available.\nNote: The MATLAB signal processing toolbox is extremely comprehensive in its support of digital filter design. The use of Python is adequate for this, but do not ignore the power available in MATLAB.\nWindowed (Kaiser window) and Equal-Ripple FIR Filter Design\nThe module fir_design_helper.py contains custom filter design code build on top of functions found in scipy.signal. Functions are available for winowed FIR design using a Kaiser window function and equal-ripple FIR design, both type have linear phase. \nExample: Lowpass with $f_s = 1$ Hz\nFor this 31 tap filter we choose the cutoff frequency to be $F_c = F_s/8$, or in normalized form $f_c = 1/8$.", "b_k = fir_d.firwin_kaiser_lpf(1/8,1/6,50,1.0)\nb_r = fir_d.fir_remez_lpf(1/8,1/6,0.2,50,1.0)\n\nfir_d.freqz_resp_list([b_k,b_r],[[1],[1]],'dB',fs=1)\nylim([-80,5])\ntitle(r'Kaiser vs Equal Ripple Lowpass')\nylabel(r'Filter Gain (dB)')\nxlabel(r'Frequency in kHz')\nlegend((r'Kaiser: %d taps' % len(b_k),r'Remez: %d taps' % len(b_r)),loc='best')\ngrid();", "A Highpass Design", "b_k_hp = fir_d.firwin_kaiser_hpf(1/8,1/6,50,1.0)\nb_r_hp = fir_d.fir_remez_hpf(1/8,1/6,0.2,50,1.0)\n\nfir_d.freqz_resp_list([b_k_hp,b_r_hp],[[1],[1]],'dB',fs=1)\nylim([-80,5])\ntitle(r'Kaiser vs Equal Ripple Lowpass')\nylabel(r'Filter Gain (dB)')\nxlabel(r'Frequency in kHz')\nlegend((r'Kaiser: %d taps' % len(b_k),r'Remez: %d taps' % len(b_r)),loc='best')\ngrid();", "Plot a Pole-Zero Map for the Equal-Ripple Design", "ss.zplane(b_r_hp,[1]) # the b and a coefficient arrays ", "A Bandpass Design", "b_k_bp = fir_d.firwin_kaiser_bpf(7000,8000,14000,15000,50,48000)\nb_r_bp = fir_d.fir_remez_bpf(7000,8000,14000,15000,0.2,50,48000)\n\nfir_d.freqz_resp_list([b_k_bp,b_r_bp],[[1],[1]],'dB',fs=48)\nylim([-80,5])\ntitle(r'Kaiser vs Equal Ripple Bandpass')\nylabel(r'Filter Gain (dB)')\nxlabel(r'Frequency in kHz')\nlegend((r'Kaiser: %d taps' % len(b_k_bp),\n r'Remez: %d taps' % len(b_r_bp)),\n loc='lower right')\ngrid();", "Exporting Coefficients to Header Files\nOnce a filter design is complete it can be exported as a C header file using FIR_header() for floating-point design and FIR_fix_header() for 16-bit fixed-point designs.\nFloat Header Export\npython\ndef FIR_header(fname_out,h):\n \"\"\"\n Write FIR Filter Header Files \n \"\"\"\n16 Bit Signed Integer Header Export\npython\ndef 
FIR_fix_header(fname_out,h):\n \"\"\"\n Write FIR Fixed-Point Filter Header Files \n \"\"\"\nThese functions are available in coeff2header.py, which was imported as c2h above\nWrite a Header File for the Bandpass Equal-Ripple", "# Write a C header file\nc2h.FIR_header('remez_8_14_bpf_f32.h',b_r_bp)", "The header file, remez_8_14_bpf_f32.h written above takes the form:\n\n```c\n//define a FIR coefficient Array\ninclude <stdint.h>\nifndef M_FIR\ndefine M_FIR 101\nendif\n/**********/\n/ FIR Filter Coefficients */\nfloat32_t h_FIR[M_FIR] = {-0.001475936747, 0.000735580994, 0.004771062558,\n 0.001254178712,-0.006176846780,-0.001755945520,\n 0.003667323660, 0.001589634576, 0.000242520766,\n 0.002386316353,-0.002699251419,-0.006927087152,\n 0.002072374590, 0.006247819434,-0.000017122009,\n 0.000544273776, 0.001224920394,-0.008238424843,\n -0.005846603175, 0.009688130613, 0.007237935594,\n -0.003554185785, 0.000423864572,-0.002894644665,\n -0.013460012489, 0.002388684318, 0.019352295029,\n 0.002144732872,-0.009232278407, 0.000146728997,\n -0.010111394762,-0.013491956909, 0.020872121644,\n 0.025104278030,-0.013643042233,-0.015018451283,\n -0.000068299117,-0.019644863999, 0.000002861510,\n 0.052822261169, 0.015289946639,-0.049012297911,\n -0.016642744836,-0.000164469072,-0.032121234463,\n 0.059953731027, 0.133383985599,-0.078819553619,\n -0.239811117665, 0.036017541207, 0.285529343096,\n 0.036017541207,-0.239811117665,-0.078819553619,\n 0.133383985599, 0.059953731027,-0.032121234463,\n -0.000164469072,-0.016642744836,-0.049012297911,\n 0.015289946639, 0.052822261169, 0.000002861510,\n -0.019644863999,-0.000068299117,-0.015018451283,\n -0.013643042233, 0.025104278030, 0.020872121644,\n -0.013491956909,-0.010111394762, 0.000146728997,\n -0.009232278407, 0.002144732872, 0.019352295029,\n 0.002388684318,-0.013460012489,-0.002894644665,\n 0.000423864572,-0.003554185785, 0.007237935594,\n 0.009688130613,-0.005846603175,-0.008238424843,\n 0.001224920394, 0.000544273776,-0.000017122009,\n 0.006247819434, 0.002072374590,-0.006927087152,\n -0.002699251419, 0.002386316353, 0.000242520766,\n 0.001589634576, 0.003667323660,-0.001755945520,\n -0.006176846780, 0.001254178712, 0.004771062558,\n 0.000735580994,-0.001475936747};\n/***********/\n```\n\nThis file can be included in the main module of an ARM Cortex M4 micro controller using the Cypress FM4 $50 dev kit", "f_AD,Mag_AD, Phase_AD = loadtxt('BPF_8_14_101tap_48k.csv',\n delimiter=',',skiprows=6,unpack=True)\n\nfir_d.freqz_resp_list([b_r_bp],[[1]],'dB',fs=48)\nylim([-80,5])\nplot(f_AD/1e3,Mag_AD+.5)\ntitle(r'Equal Ripple Bandpass Theory vs Measured')\nylabel(r'Filter Gain (dB)')\nxlabel(r'Frequency in kHz')\nlegend((r'Equiripple Theory: %d taps' % len(b_r_bp),\n r'AD Measured (0.5dB correct)'),loc='lower right',fontsize='medium')\ngrid();", "FIR Design Problem\nNow its time to design and implement your own FIR filter using the filter design tools of fir_design_helper.py. The assignment here is to complete a design using a sampling rate of 48 kHz having an equiripple FIR lowpass lowpass response with 1dB cutoff frequency at 5 kHz, a passband ripple of 1dB, and stopband attenuation of 60 dB starting at 6.5 kHz. 
See Figure 9 for a graphical depiction of these amplitude response requirements.", "Image('images/FIR_LPF_Design.png',width='100%')", "We can test this filter in Lab3 using PyAudio for real-time DSP.", "# Design the filter here\n", "Plot the magnitude response and phase response, and the pole-zero plot\n\nUsing the freqz_resp_list\n```Python\ndef freqz_resp_list(b,a=np.array([1]),mode = 'dB',fs=1.0,Npts = 1024,fsize=(6,4)):\n\"\"\"\n A method for displaying a list filter frequency responses in magnitude,\n phase, and group delay. A plot is produced using matplotlib\nfreqz_resp([b],[a],mode = 'dB',Npts = 1024,fsize=(6,4))\nb = ndarray of numerator coefficients\na = ndarray of denominator coefficents\n\nmode = display mode: 'dB' magnitude, 'phase' in radians, or \n 'groupdelay_s' in samples and 'groupdelay_t' in sec, \n all versus frequency in Hz\n Npts = number of points to plot; default is 1024\nfsize = figure size; defult is (6,4) inches\n\"\"\"\n```", "# fill in the plotting details\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
authman/DAT210x
Module5/Module5 - Lab4.ipynb
mit
[ "DAT210x - Programming with Python for DS\nModule5- Lab4", "import math\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nfrom sklearn import preprocessing\nfrom sklearn.decomposition import PCA\n\n# You might need to import more modules here..\n# .. your code here ..\n\nmatplotlib.style.use('ggplot') # Look Pretty\nc = ['red', 'green', 'blue', 'orange', 'yellow', 'brown']", "You can experiment with these parameters:", "PLOT_TYPE_TEXT = False # If you'd like to see indices\nPLOT_VECTORS = True # If you'd like to see your original features in P.C.-Space", "Some Convenience Functions", "def drawVectors(transformed_features, components_, columns, plt):\n num_columns = len(columns)\n\n # This function will project your *original* feature (columns)\n # onto your principal component feature-space, so that you can\n # visualize how \"important\" each one was in the\n # multi-dimensional scaling\n\n # Scale the principal components by the max value in\n # the transformed set belonging to that component\n xvector = components_[0] * max(transformed_features[:,0])\n yvector = components_[1] * max(transformed_features[:,1])\n\n ## Visualize projections\n\n # Sort each column by its length. These are your *original*\n # columns, not the principal components.\n important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }\n important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)\n print(\"Projected Features by importance:\\n\", important_features)\n\n ax = plt.axes()\n\n for i in range(num_columns):\n # Use an arrow to project each original feature as a\n # labeled vector on your principal component axes\n plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75, zorder=600000)\n plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75, zorder=600000)\n \n return ax\n\ndef doPCA(data, dimensions=2):\n model = PCA(n_components=dimensions, svd_solver='randomized', random_state=7)\n model.fit(data)\n return model\n\ndef doKMeans(data, num_clusters=0):\n # TODO: Do the KMeans clustering here, passing in the # of clusters parameter\n # and fit it against your data. Then, return a tuple containing the cluster\n # centers and the labels.\n #\n # Hint: Just like with doPCA above, you will have to create a variable called\n # `model`, which will be a SKLearn K-Means model for this to work.\n \n \n # .. your code here ..\n return model.cluster_centers_, model.labels_", "Load up the dataset. It may or may not have nans in it. Make sure you catch them and destroy them, by setting them to 0. This is valid for this dataset, since if the value is missing, you can assume no money was spent on it.", "# .. your code here ..", "As instructed, get rid of the Channel and Region columns, since you'll be investigating as if this were a single location wholesaler, rather than a national / international one. Leaving these fields in here would cause KMeans to examine and give weight to them:", "# .. your code here ..", "Before unitizing / standardizing / normalizing your data in preparation for K-Means, it's a good idea to get a quick peek at it. You can do this using the .describe() method, or even by using the built-in pandas df.plot.hist():", "# .. your code here ..", "Having checked out your data, you may have noticed there's a pretty big gap between the top customers in each feature category and the rest. 
Some feature scaling algorithms won't get rid of outliers for you, so it's a good idea to handle that manually---particularly if your goal is NOT to determine the top customers. \nAfter all, you can do that with a simple Pandas .sort_values() and not a machine learning clustering algorithm. From a business perspective, you're probably more interested in clustering your +/- 2 standard deviation customers, rather than the top and bottom customers.\nRemove top 5 and bottom 5 samples for each column:", "drop = {}\nfor col in df.columns:\n    # Bottom 5\n    sort = df.sort_values(by=col, ascending=True)\n    if len(sort) > 5: sort=sort[:5]\n    for index in sort.index: drop[index] = True # Just store the index once\n\n    # Top 5\n    sort = df.sort_values(by=col, ascending=False)\n    if len(sort) > 5: sort=sort[:5]\n    for index in sort.index: drop[index] = True # Just store the index once", "Drop rows by index. We do this all at once in case there is a collision. This way, we don't end up dropping more rows than we have to, if there is a single row that satisfies the drop for multiple columns. Since there are 6 columns, if we end up dropping fewer than 6 x 5 x 2 = 60 rows, that means there indeed were collisions:", "print(\"Dropping {0} Outliers...\".format(len(drop)))\ndf.drop(inplace=True, labels=drop.keys(), axis=0)\ndf.describe()", "What are you interested in?\nDepending on what you're interested in, you might take a different approach to normalizing/standardizing your data.\nYou should note that all columns left in the dataset are of the same unit. You might ask yourself, do I even need to normalize / standardize the data? The answer depends on what you're trying to accomplish. For instance, although all the units are the same (generic money unit), the price per item in your store isn't. There may be some cheap items and some expensive ones. If your goal is to find out what items people tend to buy together but you didn't \"unitize\" properly before running kMeans, the contribution of the lesser priced item would be dwarfed by the more expensive item. This is an issue of scale.\nFor a great overview on a few of the normalization methods supported in SKLearn, please check out: https://stackoverflow.com/questions/30918781/right-function-for-normalizing-input-of-sklearn-svm\nSuffice to say, at the end of the day, you're going to have to know what question you want answered and what data you have available in order to select the best method for your purpose. Luckily, SKLearn's interfaces are easy to switch out, so in the meantime you can experiment with all of them and see how they alter your results.\n5-sec summary before you dive deeper online:\nNormalization\nLet's say your users spend a LOT. Normalization divides each item by the sample's overall amount of spending. Stated differently, your new feature is the contribution of overall spending going into that particular item: \\$spent on feature / \\$overall spent by sample.\nMinMax\nWhat % in the overall range of \\$spent by all users on THIS particular feature is the current sample's feature at? When you're dealing with all the same units, this will produce a near face-value amount. Be careful though: if you have even a single outlier, it can cause all your data to get squashed up in lower percentages.\nImagine your buyers usually spend \\$100 on wholesale milk, but today only spent \\$20. This is the relationship you're trying to capture with MinMax. NOTE: MinMax doesn't standardize (std. dev.); it only normalizes / unitizes your feature, in the mathematical sense. MinMax can be used as an alternative to zero mean, unit variance scaling: [(sampleFeatureValue - min) / (max - min)] * (new_max - new_min) + new_min, where min and max are the overall minimum and maximum of that feature across all samples, and [new_min, new_max] is the target output range (by default [0, 1]).\nBack to The Assignment\nUn-comment just ONE of the lines at a time and see how it alters your results. Pay attention to the direction of the arrows, as well as their LENGTHS:", "#T = preprocessing.StandardScaler().fit_transform(df)\n#T = preprocessing.MinMaxScaler().fit_transform(df)\n#T = preprocessing.MaxAbsScaler().fit_transform(df)\n#T = preprocessing.Normalizer().fit_transform(df)\nT = df # No Change", "Sometimes people perform PCA before doing KMeans, so that KMeans only operates on the most meaningful features. In our case, there are so few features that doing PCA ahead of time isn't really necessary, and you can do KMeans in feature space. But keep in mind you have the option to transform your data to bring down its dimensionality. If you take that route, then your Clusters will already be in PCA-transformed feature space, and you won't have to project them again for visualization.", "# Do KMeans\n\nn_clusters = 3\ncentroids, labels = doKMeans(T, n_clusters)", "Print out your centroids. They're currently in feature-space, which is good. Print them out before you transform them into PCA space for viewing:", "# .. your code here ..", "Now that we've clustered our KMeans, let's do PCA, using it as a tool to visualize the results. Project the centroids as well as the samples into the new 2D feature space for visualization purposes:", "display_pca = doPCA(T)\nT = display_pca.transform(T)\nCC = display_pca.transform(centroids)", "Visualize all the samples. Give them the color of their cluster label:", "fig = plt.figure()\nax = fig.add_subplot(111)\nif PLOT_TYPE_TEXT:\n    # Plot the index of the sample, so you can further investigate it in your dset\n    for i in range(len(T)): ax.text(T[i,0], T[i,1], df.index[i], color=c[labels[i]], alpha=0.75, zorder=600000)\n    ax.set_xlim(min(T[:,0])*1.2, max(T[:,0])*1.2)\n    ax.set_ylim(min(T[:,1])*1.2, max(T[:,1])*1.2)\nelse:\n    # Plot a regular scatter plot\n    sample_colors = [ c[labels[i]] for i in range(len(T)) ]\n    ax.scatter(T[:, 0], T[:, 1], c=sample_colors, marker='o', alpha=0.2)", "Plot the Centroids as X's, and label them:", "ax.scatter(CC[:, 0], CC[:, 1], marker='x', s=169, linewidths=3, zorder=1000, c=c)\nfor i in range(len(centroids)):\n    ax.text(CC[i, 0], CC[i, 1], str(i), zorder=500010, fontsize=18, color=c[i])\n\n# Display feature vectors for investigation:\nif PLOT_VECTORS:\n    drawVectors(T, display_pca.components_, df.columns, plt)\n\n# Add the cluster label back into the dataframe and display it:\ndf['label'] = pd.Series(labels, index=df.index)\ndf\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dani-lbnl/2017_ucberkeley_course
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
gpl-3.0
[ "Image exploration using Python - essentials\n\nRead image from web\nQuerying image: matrix, sub-matrices, ROI\nImage transformations: filtering\nImmunohistochemistry example from scikit-image\nSegmentation and feature extraction \nSave information as a xls file \nSimulating 2D images - \"cells\"\nSimulate particles\nCheck particle neighborhood: groups (clustering algorithms)\nPandas and Seaborn\n\n1. Read image from web and scikit-image", "%matplotlib inline\nimport numpy as np\n\nfrom matplotlib import pyplot as plt\nfrom skimage import data, io\n\nfname = 'http://crl.berkeley.edu/files/2014/04/Holly-226x300.jpg'\nimageFromWeb = data.imread(fname, as_grey=False, plugin=None, flatten=None)\nplt.imshow(imageFromWeb)", "2. Querying image: matrix, sub-matrices, ROI", "print('-----------------------------------------------------------------------')\nprint('Image shape is',imageFromWeb.shape, 'and type is',type(imageFromWeb))\nprint('Min =',imageFromWeb.min(),\",Mean =\",imageFromWeb.mean(),',Max = ',imageFromWeb.max())\nprint('dtype = ',imageFromWeb.dtype)\nprint('-----------------------------------------------------------------------') \n\n# Cropping an image\nfacecolor = imageFromWeb[50:115, 95:140]\nplt.imshow(facecolor)\nplt.title('Holly')", "3. Image transformations", "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage.color import rgb2gray\nfrom skimage.filters import sobel\nfrom skimage.filters.rank import mean, equalize\n\nfrom skimage.morphology import disk\nfrom skimage import exposure\nfrom skimage.morphology import reconstruction\nfrom skimage import img_as_ubyte, img_as_float\n\n# Turn color image into grayscale representation\nface = rgb2gray(facecolor)\nface = img_as_ubyte(face) #this generates the warning\n\nhist = np.histogram(face, bins=np.arange(0, 256))\n\nfig, ax = plt.subplots(ncols=2, figsize=(10, 5))\n\nax[0].imshow(face, interpolation='nearest', cmap=plt.cm.gray)\nax[0].axis('off')\n \nax[1].plot(hist[1][:-1], hist[0], lw=1)\nax[1].set_title('Histogram of gray values')\n\nplt.tight_layout()\n\n# Smoothing\nsmoothed = img_as_ubyte(mean(face, disk(2)))\n\n#smoothPill = ndi.median_filter(edgesPill.astype(np.uint16), 3)\n# Global equalization\nequalized = exposure.equalize_hist(face)\n\n# Extract edges\nedge_sobel = sobel(face)\n\n# Masking\nmask = face < 80\nfacemask = face.copy()\n# Set to \"white\" (255) pixels where mask is True\nfacemask[mask] = 255\n#facemask = img_as_uint(facemask)\n\n\nfig, ax = plt.subplots(ncols=5, sharex=True, sharey=True,\n figsize=(10, 4))\n\nax[0].imshow(face, cmap='gray')\nax[0].set_title('Original')\n\nax[1].imshow(smoothed, cmap='gray')\nax[1].set_title('Smoothing')\n\nax[2].imshow(equalized, cmap='gray')\nax[2].set_title('Equalized')\n\nax[3].imshow(edge_sobel, cmap='gray')\nax[3].set_title('Sobel Edge Detection')\n\nax[4].imshow(facemask, cmap='gray')\nax[4].set_title('Masked <50')\n\nfor a in ax:\n a.axis('off')\n\nplt.tight_layout()\nplt.show()", "4. Immunohistochemistry example from scikit-image\n\nMore at: http://scikit-image.org/docs/dev/api/skimage.data.html#skimage.data.immunohistochemistry", "imgMicro = data.immunohistochemistry()\nplt.imshow(imgMicro)", "5. 
Segmentation and feature extraction", "import matplotlib.patches as mpatches\n\nfrom skimage import data\nfrom skimage.filters import threshold_otsu\nfrom skimage.segmentation import clear_border\nfrom skimage.measure import label, regionprops\nfrom skimage.morphology import closing, square\nfrom skimage.color import label2rgb\n\n# create a subimage for tests\nimage = imgMicro[300:550, 200:400, 2]\n\n# apply threshold\nthresh = threshold_otsu(image)\nbw = closing(image > thresh, square(3))\n\n# remove artifacts connected to image border\ncleared = clear_border(bw)\n\n# label image regions\nlabel_image = label(cleared)\nimage_label_overlay = label2rgb(label_image, image=image)\n\nfig, ax = plt.subplots(figsize=(10, 6))\nax.imshow(image_label_overlay)\n\nfor region in regionprops(label_image):\n # take regions with large enough areas\n if region.area >= 50:\n # draw rectangle around segmented coins\n minr, minc, maxr, maxc = region.bbox\n rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,\n fill=False, edgecolor='red', linewidth=2)\n ax.add_patch(rect)\n\nax.set_axis_off()\nplt.tight_layout()\nplt.show()\n#plt.imshow(bw,cmap=plt.cm.gray) ", "6. Save information as a xls file", "# Calculate regions properties from label_image\nregions = regionprops(label_image) \n\nfor i in range(len(regions)):\n all_props = {p:regions[i][p] for p in regions[i] if p not in ('image','convex_image','filled_image')}\n for p, v in list(all_props.items()):\n if isinstance(v,np.ndarray):\n if(len(v.shape)>1):\n del all_props[p]\n\n for p, v in list(all_props.items()):\n try:\n L = len(v)\n except:\n L = 1\n if L>1:\n del all_props[p]\n for n,entry in enumerate(v):\n all_props[p + str(n)] = entry\n\n k = \", \".join(all_props.keys())\n v = \", \".join([str(f) for f in all_props.values()]) #notice you need to convert numbers to strings\n if(i==0):\n with open('cellsProps.csv','w') as f:\n #f.write(k)\n f.writelines([k,'\\n',v,'\\n']) \n else:\n with open('cellsProps.csv','a') as f:\n #f.write(k)\n f.writelines([v,'\\n']) \n ", "7. Simulating 2D images - \"cells\"", "# Test\nfrom skimage.draw import circle\nimg = np.zeros((50, 50), dtype=np.uint8)\nrr, cc = circle(25, 25, 5)\nimg[rr, cc] = 1\nplt.imshow(img,cmap='gray')\n\n\n%matplotlib inline\nimport numpy as np\n\nimport random\nimport math\nfrom matplotlib import pyplot as plt\nimport matplotlib.patches as mpatches\nfrom skimage import data, io\nfrom skimage.draw import circle\n\ndef createMyCells(width, height, r, num_cells):\n \n image = np.zeros((width,height),dtype=np.uint8)\n imgx, imgy = image.shape\n nx = []\n ny = []\n ng = []\n \n #Creates a synthetic set of points\n for i in range(num_cells):\n nx.append(random.randrange(imgx))\n ny.append(random.randrange(imgy))\n ng.append(random.randrange(256))\n \n #Uses points as centers of circles \n for i in range(num_cells):\n rr, cc = circle(ny[i], nx[i], radius)\n if valid(ny[i],r,imgy) & valid(nx[i],r,imgx):\n image[rr, cc] = ng[i]\n return image\n\ndef valid(v,radius,dim):\n if v<radius:\n return False\n else: \n if v>=dim-radius:\n return False\n else:\n return True\n\nwidth = 200\nheight = 200\nradius = 5\nnum_cells = 50\nimage = createMyCells(width, height, radius, num_cells)\nplt.imshow(image)", "8. 
Simulate particles with Scikit-learn -> sklearn", "from sklearn.cluster import MeanShift, estimate_bandwidth\nfrom sklearn.datasets.samples_generator import make_blobs\n\nn = 1000\nclusterSD = 10 #proportional to the pool size\ncenters = [[50,50], [100, 100], [100, 200], [150,150], [200, 100], [200,200]]\nX, _ = make_blobs(n_samples=n, centers=centers, cluster_std=clusterSD)\nimage = np.zeros(shape=(300,300), dtype=np.uint8)\nfor i in X:\n x,y=i.astype(np.uint8)\n #print(x,',',y)\n image[x,y]=255\nplt.imshow(image,cmap=plt.cm.gray) \n\nmyquantile=0.15 #Change this parameter (smaller numbers will produce smaller clusters and more numerous)\n\nbandwidth = estimate_bandwidth(X, quantile=myquantile, n_samples=500)\n\nms = MeanShift(bandwidth=bandwidth, bin_seeding=True)\nms.fit(X)\nlabels = ms.labels_\ncluster_centers = ms.cluster_centers_\n\nlabels_unique = np.unique(labels)\nn_clusters_ = len(labels_unique)\n\nprint(\"number of estimated clusters : %d\" % n_clusters_)", "9. Check particle neighborhood: groups (clustering algorithms)", "import matplotlib.pyplot as plt\nfrom itertools import cycle \n\nplt.figure(1)\nplt.clf()\n\ncolors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')\nfor k, col in zip(range(n_clusters_), colors):\n my_members = labels == k\n cluster_center = cluster_centers[k]\n plt.plot(X[my_members, 0], X[my_members, 1], col + '.')\n plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=14)\nplt.title('Estimated number of clusters: %d' % n_clusters_)\nplt.show()", "10. Pandas and seaborn\nPandas: http://pandas.pydata.org/\nSeaborn: http://seaborn.pydata.org/", "import numpy as np\nimport pandas as pd\nfrom scipy import stats, integrate\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = pd.DataFrame(X, columns=[\"x\", \"y\"])\n# Kernel density estimation\nsns.jointplot(x=\"x\", y=\"y\", data=df, kind=\"kde\");", "Acknowledgements:\n\nA crash course on NumPy for images" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MachineLearningStudyGroup/Smart_Review_Summarization
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
mit
[ "Dynamic Aspect Extraction for camera Reviews Part B\nHan, Kehang ([email protected])\nAs a follow-up demonstration, this ipynb is focused on extracting aspects from datasets called AmazonReviews, which has much more reviews on cameras. \nSet up", "import json\nimport nltk\nimport string\n\nfrom srs.utilities import Product, AspectPattern", "s1: load raw data from AmazonReviews datasets", "product_name = 'B00000JFIF'\nreviewJsonFile = product_name + '.json'\nproduct = Product(name=product_name)\nproduct.loadReviewsFromJsonFile('../data/trainingFiles/AmazonReviews/cameras/' + reviewJsonFile)", "s2: define aspect patterns", "aspectPatterns = []\n# define an aspect pattern1\npattern_name = 'adj_nn'\npattern_structure =\"\"\"\nadj_nn:{<JJ><NN.?>}\n\"\"\"\naspectTagIndices = [1]\naspectPattern = AspectPattern(name='adj_nn', structure=pattern_structure, aspectTagIndices=aspectTagIndices)\naspectPatterns.append(aspectPattern)\n# define an aspect pattern2\npattern_name = 'nn_nn'\npattern_structure =\"\"\"\nnn_nn:{<NN.?><NN.?>}\n\"\"\"\naspectTagIndices = [0,1]\naspectPattern = AspectPattern(name='nn_nn', structure=pattern_structure, aspectTagIndices=aspectTagIndices)\naspectPatterns.append(aspectPattern)", "s3: match sentence to pattern to extract aspects", "# pos tagging\nfor review in product.reviews:\n for sentence in review.sentences:\n sentence.pos_tag()\n sentence.matchDaynamicAspectPatterns(aspectPatterns)", "s4: statistic analysis on aspects extracted across all reviews", "word_dict = {}\nfor review in product.reviews:\n for sentence in review.sentences:\n for aspect in sentence.dynamic_aspects:\n if aspect in word_dict:\n word_dict[aspect] += 1\n else:\n word_dict[aspect] = 1\n\nword_sorted = sorted(word_dict.items(), key=lambda tup:-tup[1])\nword_sorted[:15]", "s5: save most frequent dynamic aspects", "import json\nword_output = open('../data/word_list/{0}_wordlist.txt'.format(product_name), 'w')\njson.dump(word_sorted[:15], word_output)\nword_output.close()", "s6: stemming analysis", "from nltk.stem import SnowballStemmer\nstemmer = SnowballStemmer('english')\n\n# collect word with same stem\nstemmedWord_dict = {}\nfor word in word_dict:\n stemmedWord = stemmer.stem(word)\n if stemmedWord in stemmedWord_dict:\n stemmedWord_dict[stemmedWord] += word_dict[word]\n else:\n stemmedWord_dict[stemmedWord] = word_dict[word]\n\n# frequency ranking\nstemmedWord_sorted = sorted(stemmedWord_dict.items(), key=lambda tup:-tup[1])\nstemmedWord_sorted[:15]\n\n# save most frequent stemmed words\nstemmedWord_output = open('../data/word_list/{0}_stemmedwordlist.txt'.format(product_name), 'w')\njson.dump(stemmedWord_sorted[:15], stemmedWord_output)\nstemmedWord_output.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BBN-Q/Auspex
doc/examples/Example-Channel-Lib.ipynb
apache-2.0
[ "Example Q2: Save and Loading Channel Library Versions\nThis example notebook shows how one may save and load versions of the channel library.\nยฉ Raytheon BBN Technologies 2018\nSaving Channel Library Versions\nWe initialize the channel library as shown in tutorial Q1:", "from QGL import *\n\ncl = ChannelLibrary(\":memory:\")\nq1 = cl.new_qubit(\"q1\")\naps2_1 = cl.new_APS2(\"BBNAPS1\", address=\"192.168.5.101\") \naps2_2 = cl.new_APS2(\"BBNAPS2\", address=\"192.168.5.102\")\ndig_1 = cl.new_X6(\"X6_1\", address=0)\nh1 = cl.new_source(\"Holz1\", \"HolzworthHS9000\", \"HS9004A-009-1\", power=-30)\nh2 = cl.new_source(\"Holz2\", \"HolzworthHS9000\", \"HS9004A-009-2\", power=-30) \ncl.set_control(q1, aps2_1, generator=h1)\ncl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2)\ncl.set_master(aps2_1, aps2_1.ch(\"m2\"))\ncl[\"q1\"].measure_chan.frequency = 0e6\ncl.commit()", "Let us save this channel library for posterity:", "cl.save_as(\"NoSidebanding\")", "Now we adjust some parameters and save another version of the channel library", "cl[\"q1\"].measure_chan.frequency = 50e6\ncl.commit()\ncl.save_as(\"50MHz-Sidebanding\")", "Maybe we forgot to change something. No worries! We can just update the parameter and create a new copy.", "cl[\"q1\"].pulse_params['length'] = 400e-9\ncl.commit()\ncl.save_as(\"50MHz-Sidebanding\")\ncl.ls()", "We see the various versions of the channel library here. Note that the user is always modifying the working version of the database: all other versions are archival, but they can be restored to the current working version as shown below.\nLoading Channel Library Versions\nLet us load a previous version of the channel library, noting that the former value of our parameter is restored in the working copy. CRUCIAL POINT: do not use the old reference q1, which is no longer pointing to the database since the working db has been replaced with the saved version. Instead use dictionary access cl[\"q1\"] on the channel library to return the first qubit:", "cl.load(\"NoSidebanding\")\ncl[\"q1\"].measure_chan.frequency", "Now let's load the second oldest version of the 50MHz-sidebanding library:", "cl.load(\"50MHz-Sidebanding\", -1)\ncl[\"q1\"].pulse_params['length'], cl[\"q1\"].measure_chan.frequency\n\n# q1 = QubitFactory(\"q1\")\nplot_pulse_files(RabiAmp(cl[\"q1\"], np.linspace(-1, 1, 11)), time=True)", "", "cl.ls()\n\ncl.rm(\"NoSidebanding\")\n\ncl.ls()\n\ncl.rm(\"50MHz-Sidebanding\")\n\ncl.ls()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ikegwukc/INFO597-DeepLearning-GameTheory
basicGames/Basic Games.ipynb
mit
[ "Basic Games\nIterated Prisoner's Dilemma\nPrisoner's Dilemma is a pretty standard game that is commonly used in game theory (for a comprehensive defintion of Prisoner's dilemma see this link). Iterated Prisoner's Dilemma is simply repeating the game and allowing the agents to choose again. Prisoner's Dilemma is implemented in a module I wrote called game_types. For an example using the PrisonersDilemma class see the pseudocode snippet below (I dicuss different types of strategies availiable in subsequent cells):\npython\nfrom game_types import PrisonersDilemma\n...\n...\n...\nagent1 = Strategy() # Some Strategy for agent1\nagent2 = Strategy() # Some Strategy for agent2\ngame = PrisonersDilemma(agent1, agent2) # Play Prisoners Dilemma with agent1 and agent2\nnum_iter = 10000 # Play the game 10000 times\ngame.play(num_iter)\ndata = game.data # grab data from game\nScenario - TCP User's Game\nIn this scenario Robert Brunner uploads a dataset and asks two graduate students Edward and Will to download and perform a variety of tasks. For the sake of simplicity in this game Edward and Will are the only ones on a special network and they recieve no interference from other people.\nThe traffic for this network is governed by the TCP Protocol and one feature of TCP is the backoff mechanism. If the rates that Edward and Will are sending packets to this network causes congestion they (backoff and) each reduce the rate for a while until network congestion subsides. This is the correct implementation. A defective implementation of TCP would be one that does not backoff if the network is congested. The users in this game have 2 choices. To Cooperate (use the correct implementation of the TCP protocol) or to defect (use an incorrect implementation of the TCP protocol).\nThe payoff matrix is below. The numbers in the box are the utility values. For this example the higher the utlity value the faster the dataset is downloaded. Will is the first player and is the first number in each box. Edward is the second player and is the the second number in each box. \nIf Edward and Will follow the correct protocol they will both download the dataset in a reasonable amout of time (top left). If Will decides he wants the data faster and uses a defective TCP protocol while Edward still follows the correct protocol Will downloads the dataset much faster than Edward (Top Right). Vise-versa (bottom left). If they both defect they download the dataset significantly slower than if they both cooperated.\n\nTypes of Strategies\nThe strategies of Will and Edward will depend on what they want to achieve. Edward's part may be based on Will's work so it would make sense for Will to defect and Edward cooperates on purpose. If it's a competition they may try to defect to get a head start. If both split up the work then it may be in there interest to Cooperate. There are a lot of scenarios that can unfold but in most games agents do not know the other agents intentions or strategies so for the sake of this game we assume Edward and Will know nothing about it.\nI've implemented the following basic strategies in a directory called strategies:\nCooperate\nCooperate: With this strategy Will or Edward (the agent) always cooperates. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import cooperate as c\nagent = c.Cooperate()\nDefect\nDefect: With this strategy the agent always defects. 
To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import defect as d\nagent = d.Defect()\nChaos\nChaos: An agent uses this strategu to cause chaos by cooperating or defecting at random. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import chaos as ch\nagent = ch.Chaos()\nGrim\nGrim: This strategy is the most unforgiving. The agent cooperates until the opponent defects, in which case it will defect for the remainder of the game. This is also known as a trigger strategy. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import grim as g\nagent = g.Grim()\nPavlov\nPavlov: This is another trigger strategy where initally the agent will cooperate until it loses then it will change it's strategy (defect). The agent will continue to change it's strategy if it loses. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import pavlov as p\nagent = p.Pavlov()\nQ-Learning\nQ-Learning: This agent uses Q-learning for it's strategy where Q Learning is a model-free reinforcement learning technique. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import machine_learning as ml\nagent = ml.QLearn()\nHuman\nHuman: This agent recieves the action as input from a human player. To create an agent that uses this strategy you can do the following in Python:\npython\nfrom strategies import human as h\nagent = h.Human()", "%matplotlib inline\nfrom collections import Counter\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nsns.set()\n\n# Importing Games\nfrom game_types import PrisonersDilemma\nfrom game_types import Coordination\n\n# Importing Strategies\nfrom strategies import chaos as c\nfrom strategies import defect as d\nfrom strategies import machine_learning as ml\nfrom strategies import pavlov as p\nfrom strategies import grim as g\n", "Simulating Games:\nChaos VS Defect", "# Create agents and play the game for 10000 iteratations\nagent1 = c.Chaos()\nagent2 = d.Defect()\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(10000)\n\n# Grab Data\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\n\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('Chaos Agent Vs Defect Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add([1,2,5],width-.05))\n_ = ax.set_xticklabels(('1', '2', '5'))\n_ = ax.legend((a1[0], a2[0]), ('Chaos Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Defect Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "In this scenario defecting is the domiant strategy. 
Where the agent is better off defecting no matter what other agents do.\nGrim VS Pavlov", "# play the game\nagent1 = g.Grim()\nagent2 = p.Pavlov()\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(10000)\n\n# get data from game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\n\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('Grim Agent Vs Pavlov Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add([0,4,5],width/2))\n_ = ax.set_xticklabels(('0', '4', '5'))\n_ = ax.legend((a1[0], a2[0]), ('Grim Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Pavlov Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "Both strategies start out cooperating, Grim never defects because pavlov never defects. Pavlov never loses a round so it doesn't change it's strategy.\nQ-Learning VS Pavlov", "# Play the Game\nagent1 = ml.QLearn()\nagent2 = p.Pavlov()\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(10000)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\n\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs Pavlov Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add([1,2,4,5],width/2))\n_ = ax.set_xticklabels(('1', '2', '4', '5'))\n_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Pavlov Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()\n", "Pavlov's simple rules out performs Q Learning here which is interesting.", "print(agent1_util_vals, agent2_util_vals)", "Q-Learning VS Chaos", "# Play the Game\nN = 10000\nagent1 = ml.QLearn()\nagent2 = c.Chaos()\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(N)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs Chaos Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add(x2,width/2))\n_ = ax.set_xticklabels(('1', '2', '4', '5'))\n_ = ax.legend((a1[0], 
a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Chaos Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()\n", "Q Learning significantly outperforms the Chaos Agent because the Q Learning Agent learns pretty quickly that defecting yields the highest expected utility (talked about more in appendix).\nQ Learning VS Q Learning", "# Play the Game\nN = 10000\nagent1 = ml.QLearn()\nagent2 = ml.QLearn()\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(N)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs QLearning Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add(x2,width/2))\n_ = ax.set_xticklabels(('1', '2', '4', '5'))\n_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'QLearning Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "Here both QLearning Agents tend to mirror each other. I assume this is because they have the same inital parameters which will yield the same expected utility.\nQLearning Vs QLearning (Longer Game; Different Starting Parameters)", "# Play the Game\nN = 200000 # Play a longer game\n# agent 1's parameters are bit more short sighted\nagent1 = ml.QLearn(decay=0.4, lr=0.03, explore_period=30000, explore_random_prob=0.4, exploit_random_prob=0.2)\n# agent 2's parameters think more about the future\nagent2 = ml.QLearn(decay=0.6, lr=0.2, explore_period=40000, explore_random_prob=0.4, exploit_random_prob=0.1)\ngame = PrisonersDilemma(agent1, agent2)\ngame.play(N)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs QLearning Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add(x2,width/2))\n_ = ax.set_xticklabels(('1', '2', '4', '5'))\n_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'QLearning Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "(I haven't had the time to look through the actions of both agents but one is short sighted and the other is not, which yields the Orange QLearning agent a higher total utility score.)", "print(agent1_util_vals, agent2_util_vals)", "Iterated Coordination Game\nScenario - Choosing Movies\nIn this scenario Vincent and Maghav want to see 
different movies. Vincent wants to see Guardians of the Galaxy 2 and Maghav wants to see Wonder Woman. They are willing to go see the movie that don't really care for but they both don't want to go see a movie alone. They both have 2 choices to defect (see the other persons movie person), or to cooperate go and see the movie they want. \nThe payoff matrix is below:\n\nChaos VS Defect", "# Create agents and play the game for 10000 iteratations\nagent1 = c.Chaos()\nagent2 = d.Defect()\ngame = Coordination(agent1, agent2)\ngame.play(10000)\n\n# Grab Data\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\n\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('Chaos Agent Vs Defect Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add([0,1, 2],width-.05))\n_ = ax.set_xticklabels(('0','1','2'))\n_ = ax.legend((a1[0], a2[0]), ('Chaos Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Defect Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "Here Defect isn't a domiant strategy. The defect agent only recieves a non 0 utility value if the chaos agent sees the movie they intended to see. A Mixed Strategy is needed.", "print(agent1_util_vals,agent2_util_vals)", "Grim VS Pavlov", "# play the game\nagent1 = g.Grim()\nagent2 = p.Pavlov()\ngame = Coordination(agent1, agent2)\ngame.play(10000)\n\n# get data from game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\n\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('Grim Agent Vs Pavlov Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add([0,1,2],width/2))\n_ = ax.set_xticklabels(('0', '1', '2'))\n_ = ax.legend((a1[0], a2[0]), ('Grim Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Pavlov Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "Grim loses in the first round and always goes to other movie, the Pavlov Agent even won a round where they both ended up at the same movie and never changed it's strategy.", "print(agent1_util_vals, agent2_util_vals)", "Q-Learning Vs Chaos", "# Play the Game\nN = 10000\nagent1 = ml.QLearn()\nagent2 = c.Chaos()\ngame = Coordination(agent1, agent2)\ngame.play(N)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n 
y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs Chaos Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add(x2,width/2))\n_ = ax.set_xticklabels(('0', '1', '2'))\n_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'Chaos Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "This is different from Prisoner's Dilema, the QLearning Agent is trying to cooperate with the chaos agent but can never predict which movie he is going to.\nQLearning Vs QLearning", "# Play the Game\nN = 10000\nagent1 = ml.QLearn()\nagent2 = ml.QLearn()\ngame = Coordination(agent1, agent2)\ngame.play(N)\n\n# Get Data from Game\nagent1_util_vals = Counter(game.data['A'])\nagent2_util_vals = Counter(game.data['B'])\na1_total_score = sum(game.data['A'])\na2_total_score = sum(game.data['B'])\n\n\n# Plot the results\nx1, y1, x2, y2 = [], [], [], []\n\nfor i, j in zip(agent1_util_vals, agent2_util_vals):\n x1.append(i)\n y1.append(agent1_util_vals[i])\n x2.append(j)\n y2.append(agent2_util_vals[j])\n\nfig, ax = plt.subplots(figsize=(12,6))\nwidth = 0.35\na1 = ax.bar(x1, y1, width, color='#8A9CEF')\na2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')\n\n_ = ax.set_title('QLearning Agent Vs QLearning Agent')\n_ = ax.set_ylabel('Number of Games')\n_ = ax.set_xlabel('Utility Values')\nax.set_xticks(np.add(x2,width/2))\n_ = ax.set_xticklabels(('0','1', '2'))\n_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\\nTotal Utility Score: {}'.format(str(a1_total_score)),\n 'QLearning Agent\\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))\nplt.show()", "Still playing around with this one, but both do pretty bad here.", "print(agent1_util_vals, agent2_util_vals)", "Next Up?\nImmediately: more reading, Deep QLearning, more advanced games, N>2 player games\nLater: more reading, develop games (mechanism design) given data (Cool Paper entitled Mechanism Design for Data Science), apply other deep learning models that I learn about to games\nMuch Later: more reading, apply to some financial applications\nAppendix\nA Few Cool Things about Q Learning:\n\nQ Learning a model-free reinforcement learning technique\nmodel free meaning it doesn't a need a model of the environment to determine the next action to take. It makes pretty good choices with it's action value function. This is useful for large complex environments where the rules of the environment aren't all known.\nThe action value function will find expected utility of an action in a given state.\nA correctly implemented Q Learning model will follow a policy (set of rules) for choosing the best action, in this case the action that gives the highest expected utility.\nThe expected utility for each state determined by using the following update rule\n\nthe discount factor determines the importance for future rewards. The closer the discount factor is 0 the more short sighted it is.\n\nSome topics in Game Theory that were indirectly dicussed above but not formally defined:\n\nBoth games above are two player normal form games. 
Here a normal form game is a game with a finite number of players, each of whom chooses from a finite set of actions and receives a real-valued payoff.\nThe Payoff Matrix is a common way to represent normal form games.\nThe agents use a utility function, which maps the states of the world around them to real numbers. In the TCP example the utility values (numbers) are the download rates. The rates may be significantly different and have units associated with them, but for each action and state the numbers will be relative to their real values. Ergo, in the TCP example discussed above, if one person doesn't follow the TCP protocol and the other does, that person's download rate will be faster than that of the person who has reduced the number of packets they are sending/receiving. \nA Nash Equilibrium is a stable state where no agent can gain a better outcome as long as the strategies of the other agents remain unchanged.\nIn the Prisoner's Dilemma Game the Nash Equilibrium is for both players to defect.\nIn the Coordination Game two Nash Equilibria exist: both agents going to either movie, as long as it is the same one." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]