repo_name | path | license | cells | types
---|---|---|---|---
kaiping/incubator-singa | doc/en/docs/notebook/regression.ipynb | apache-2.0 | [
"Train a linear regression model\nIn this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of tensor of PySINGA. Please refer the documentation page to for more tensor functions provided by PySINGA.",
"from __future__ import division\nfrom __future__ import print_function\nfrom builtins import range\nfrom past.utils import old_div\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt",
"To import the tensor module of PySINGA, run",
"from singa import tensor",
"The ground-truth\nOur problem is to find a line that fits a set of 2-d data points.\nWe first plot the ground truth line,",
"a, b = 3, 2\nf = lambda x: a * x + b\ngx = np.linspace(0.,1,100)\ngy = [f(x) for x in gx]\nplt.plot(gx, gy, label='y=f(x)')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend(loc='best')",
"Generating the trainin data\nThen we generate the training data points by adding a random error to sampling points from the ground truth line.\n30 data points are generated.",
"nb_points = 30\n\n# generate training data\ntrain_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32)\ntrain_y = np.asarray(f(train_x) + np.random.rand(30), np.float32)\nplt.plot(train_x, train_y, 'bo', ms=7)",
"Training via SGD\nAssuming that we know the training data points are sampled from a line, but we don't know the line slope and intercept. The training is then to learn the slop (k) and intercept (b) by minimizing the error, i.e. ||kx+b-y||^2. \n1. we set the initial values of k and b (could be any values).\n2. we iteratively update k and b by moving them in the direction of reducing the prediction error, i.e. in the gradient direction. For every iteration, we plot the learned line.",
"def plot(idx, x, y):\n global gx, gy, axes\n # print the ground truth line\n axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)') \n # print the learned line\n axes[idx//5, idx%5].plot(x, y, label='y=kx+b')\n axes[idx//5, idx%5].legend(loc='best')\n\n# set hyper-parameters\nmax_iter = 15\nalpha = 0.05\n\n# init parameters\nk, b = 2.,0.",
"SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random function (e.g., Gaussian and Uniform).\nSINGA Tensor instances could be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or random functions) before reading data from it. You can also create Tensor instances from numpy arrays,\n\nnumpy array could be converted into SINGA tensor via tensor.from_numpy(np_ary) \nSINGA tensor could be converted into numpy array via tensor.to_numpy(); Note that the tensor should be on the host device. tensor instances could be transferred from other devices to host device via to_host()\n\nUsers cannot read a single cell of the Tensor instance. To read a single cell, users need to convert the Tesnor into a numpy array.",
"# to plot the intermediate results\nfig, axes = plt.subplots(3, 5, figsize=(12, 8))\nx = tensor.from_numpy(train_x)\ny = tensor.from_numpy(train_y)\n# sgd\nfor idx in range(max_iter):\n y_ = x * k + b\n err = y_ - y\n loss = old_div(tensor.sum(err * err), nb_points)\n print('loss at iter %d = %f' % (idx, loss))\n da1 = old_div(tensor.sum(err * x), nb_points)\n db1 = old_div(tensor.sum(err), nb_points)\n # update the parameters\n k -= da1 * alpha\n b -= db1 * alpha\n plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))",
"We can see that the learned line is becoming closer to the ground truth line (in blue color).\nNext: MLP example",
"# to plot the intermediate results\nfig, axes = plt.subplots(3, 5, figsize=(12, 8))\nx = tensor.from_numpy(train_x)\ny = tensor.from_numpy(train_y)\n# sgd\nfor idx in range(max_iter):\n y_ = x * k + b\n err = y_ - y\n loss = old_div(tensor.sum(err * err), nb_points)\n print('loss at iter %d = %f' % (idx, loss))\n da1 = old_div(tensor.sum(err * x), nb_points)\n db1 = old_div(tensor.sum(err), nb_points)\n # update the parameters\n k -= da1 * alpha\n b -= db1 * alpha\n plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aschaffn/phys202-2015-work | days/day12/Integration.ipynb | mit | [
"Numerical Integration\nLearning Objectives: Learn how to numerically integrate 1d and 2d functions that are represented as Python functions or numerical arrays of data using scipy.integrate.\nThis lesson was orginally developed by Jennifer Klay under the terms of the MIT license. The original version is in this repo (https://github.com/Computing4Physics/C4P). Her materials was in turn based on content from the Computational Physics book by Mark Newman at University of Michigan, materials developed by Matt Moelter and Jodi Christiansen for PHYS 202 at Cal Poly, as well as the SciPy tutorials.\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np",
"Introduction\nWe often calculate integrals in physics (electromagnetism, thermodynamics, quantum mechanics, etc.). In calculus, you learned how to evaluate integrals analytically. Some functions are too difficult to integrate analytically and for these we need to use the computer to integrate numerically. A numerical integral goes back to the basic principles of calculus. Given a function $f(x)$, we need to find the area under the curve between two limits, $a$ and $b$:\n$$\nI(a,b) = \\int_a^b f(x) dx\n$$\nThere is no known way to calculate such an area exactly in all cases on a computer, but we can do it approximately by dividing up the area into rectangular slices and adding them all together. Unfortunately, this is a poor approximation, since the rectangles under and overshoot the function:\n<img src=\"rectangles.png\" width=400>\n\nTrapezoidal Rule\nA better approach, which involves very little extra work, is to divide the area into trapezoids rather than rectangles. The area under the trapezoids is a considerably better approximation to the area under the curve, and this approach, though simple, often gives perfectly adequate results.\n<img src=\"trapz.png\" width=420>\nWe can improve the approximation by making the size of the trapezoids smaller. Suppose we divide the interval from $a$ to $b$ into $N$ slices or steps, so that each slice has width $h = (b − a)/N$ . Then the right-hand side of the $k$ th slice falls at $a+kh$, and the left-hand side falls at $a+kh−h$ = $a+(k−1)h$ . Thus the area of the trapezoid for this slice is\n$$\nA_k = \\tfrac{1}{2}h[ f(a+(k−1)h)+ f(a+kh) ]\n$$\nThis is the trapezoidal rule. It gives us a trapezoidal approximation to the area under one slice of our function.\nNow our approximation for the area under the whole curve is the sum of the areas of the trapezoids for all $N$ slices\n$$\nI(a,b) \\simeq \\sum\\limits_{k=1}^N A_k = \\tfrac{1}{2}h \\sum\\limits_{k=1}^N [ f(a+(k−1)h)+ f(a+kh) ] = h \\left[ \\tfrac{1}{2}f(a) + \\tfrac{1}{2}f(b) + \\sum\\limits_{k=1}^{N-1} f(a+kh)\\right]\n$$\nNote the structure of the formula: the quantity inside the square brackets is a sum over values of $f(x)$ measured at equally spaced points in the integration domain, and we take a half of the values at the start and end points but one times the value at all the interior points.\nApplying the Trapezoidal rule\nUse the trapezoidal rule to calculate the integral of $x^4 − 2x + 1$ from $x$ = 0 to $x$ = 2.\nThis is an integral we can do by hand, so we can check our work. To define the function, let's use a lambda expression (you learned about these in the advanced python section of CodeCademy). It's basically just a way of defining a function of some variables in one line. For this case, it is just a function of x:",
"func = lambda x: x**4 - 2*x + 1\n\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\nk = np.arange(1,N)\nI = h*(0.5*func(a) + 0.5*func(b) + func(a+k*h).sum())\n\nprint(I)",
"The correct answer is\n$$\nI(0,2) = \\int_0^2 (x^4-2x+1)dx = \\left[\\tfrac{1}{5}x^5-x^2+x\\right]_0^2 = 4.4\n$$\nSo our result is off by about 2%.\nSimpson's Rule\nThe trapezoidal rule estimates the area under a curve by approximating the curve with straight-line segments. We can often get a better result if we approximate the function instead with curves of some kind. Simpson's rule uses quadratic curves. In order to specify a quadratic completely one needs three points, not just two as with a straight line. So in this method we take a pair of adjacent slices and fit a quadratic through the three points that mark the boundaries of those slices. \nGiven a function $f(x)$ and spacing between adjacent points $h$, if we fit a quadratic curve $ax^2 + bx + c$ through the points $x$ = $-h$, 0, $+h$, we get\n$$\nf(-h) = ah^2 - bh + c, \\hspace{1cm} f(0) = c, \\hspace{1cm} f(h) = ah^2 +bh +c\n$$\nSolving for $a$, $b$, and $c$ gives:\n$$\na = \\frac{1}{h^2}\\left[\\tfrac{1}{2}f(-h) - f(0) + \\tfrac{1}{2}f(h)\\right], \\hspace{1cm} b = \\frac{1}{2h}\\left[f(h)-f(-h)\\right], \\hspace{1cm} c = f(0)\n$$\nand the area under the curve of $f(x)$ from $-h$ to $+h$ is given approximately by the area under the quadratic:\n$$\nI(-h,h) \\simeq \\int_{-h}^h (ax^2+bx+c)dx = \\tfrac{2}{3}ah^3 + 2ch = \\tfrac{1}{3}h[f(-h)+4f(0)+f(h)]\n$$\nThis is Simpson’s rule. It gives us an approximation to the area under two adjacent slices of our function. Note that the final formula for the area involves only $h$ and the value of the function at evenly spaced points, just as with the trapezoidal rule. So to use Simpson’s rule we don’t actually have to worry about the details of fitting a quadratic—we just plug numbers into this formula and it gives us an answer. This makes Simpson’s rule almost as simple to use as the trapezoidal rule, and yet Simpson’s rule often gives much more accurate answers.\nApplying Simpson’s rule involves dividing the domain of integration into many slices and using the rule to separately estimate the area under successive pairs of slices, then adding the estimates for all pairs to get the final answer.\nIf we are integrating from $x = a$ to $x = b$ in slices of width $h$ then Simpson’s rule gives the area under the $k$ th pair, approximately, as\n$$\nA_k = \\tfrac{1}{3}h[f(a+(2k-2)h)+4f(a+(2k-1)h) + f(a+2kh)]\n$$\nWith $N$ slices in total, there are $N/2$ pairs of slices, and the approximate value of the entire integral is given by the sum\n$$\nI(a,b) \\simeq \\sum\\limits_{k=1}^{N/2}A_k = \\tfrac{1}{3}h\\left[f(a)+f(b)+4\\sum\\limits_{k=1}^{N/2}f(a+(2k-1)h)+2\\sum\\limits_{k=1}^{N/2-1}f(a+2kh)\\right]\n$$\nNote that the total number of slices must be even for Simpson's rule to work.\nApplying Simpson's rule\nNow let's code Simpson's rule to compute the integral of the same function from before, $f(x) = x^4 - 2x + 1$ from 0 to 2.",
"N = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\nk1 = np.arange(1,N/2+1)\nk2 = np.arange(1,N/2)\nI = (1./3.)*h*(func(a) + func(b) + 4.*func(a+(2*k1-1)*h).sum() + 2.*func(a+2*k2*h).sum())\n \nprint(I)",
"Adaptive methods and higher order approximations\nIn some cases, particularly for integrands that are rapidly varying, a very large number of steps may be needed to achieve the desired accuracy, which means the calculation can become slow. \nSo how do we choose the number $N$ of steps for our integrals? In our example calculations we just chose round numbers and looked to see if the results seemed reasonable. A more common situation is that we want to calculate the value of an integral to a given accuracy, such as four decimal places, and we would like to know how many steps will be needed. So long as the desired accuracy does not exceed the fundamental limit set by the machine precision of our computer— the rounding error that limits all calculations—then it should always be possible to meet our goal by using a large enough number of steps. At the same time, we want to avoid using more steps than are necessary, since more steps take more time and our calculation will be slower. \nIdeally we would like an $N$ that gives us the accuracy we want and no more. A simple way to achieve this is to start with a small value of $N$ and repeatedly double it until we achieve the accuracy we want. This method is an example of an adaptive integration method, one that changes its own parameters to get a desired answer.\nThe trapezoidal rule is based on approximating an integrand $f(x)$ with straight-line segments, while Simpson’s rule uses quadratics. We can create higher-order (and hence potentially more accurate) rules by using higher-order polynomials, fitting $f(x)$ with cubics, quartics, and so forth. The general form of the trapezoidal and Simpson rules is\n$$\n\\int_a^b f(x)dx \\simeq \\sum\\limits_{k=1}^{N}w_kf(x_k)\n$$\nwhere the $x_k$ are the positions of the sample points at which we calculate the integrand and the $w_k$ are some set of weights. In the trapezoidal rule, the first and last weights are $\\tfrac{1}{2}$ and the others are all 1, while in Simpson’s rule the weights are $\\tfrac{1}{3}$ for the first and last slices and alternate between $\\tfrac{4}{3}$ and $\\tfrac{2}{3}$ for the other slices. For higher-order rules the basic form is the same: after fitting to the appropriate polynomial and integrating we end up with a set of weights that multiply the values $f(x_k)$ of the integrand at evenly spaced sample points. \nNotice that the trapezoidal rule is exact if the function being integrated is actually a straight line, because then the straight-line approximation isn’t an approximation at all. Similarly, Simpson’s rule is exact if the function being integrated is a quadratic, and so on for higher order polynomials.\nThere are other more advanced schemes for calculating integrals that can achieve high accuracy while still arriving at an answer quickly. These typically combine the higher order polynomial approximations with adaptive methods for choosing the number of slices, in some cases allowing their sizes to vary over different regions of the integrand. \nOne such method, called Gaussian Quadrature - after its inventor, Carl Friedrich Gauss, uses Legendre polynomials to choose the $x_k$ and $w_k$ such that we can obtain an integration rule accurate to the highest possible order of $2N−1$. It is beyond the scope of this course to derive the Gaussian quadrature method, but you can learn more about it by searching the literature. 
\nNow that we understand the basics of numerical integration and have even coded our own trapezoidal and Simpson's rules, we can feel justified in using scipy's built-in library of numerical integrators that build on these basic ideas, without coding them ourselves.\nscipy.integrate\nIt is time to look at scipy's built-in functions for integrating functions numerically. Start by importing the library.",
"import scipy.integrate as integrate\n\nintegrate?",
"An overview of the module is provided by the help command, but it produces a lot of output. Here's a quick summary:\nMethods for Integrating Functions given function object.\nquad -- General purpose integration.\ndblquad -- General purpose double integration.\ntplquad -- General purpose triple integration.\nfixed_quad -- Integrate func(x) using Gaussian quadrature of order n.\nquadrature -- Integrate with given tolerance using Gaussian quadrature.\nromberg -- Integrate func using Romberg integration.\n\nMethods for Integrating Functions given fixed samples.\ntrapz -- Use trapezoidal rule to compute integral from samples.\ncumtrapz -- Use trapezoidal rule to cumulatively compute integral.\nsimps -- Use Simpson's rule to compute integral from samples.\nromb -- Use Romberg Integration to compute integral from (2**k + 1) evenly-spaced samples.\n\nSee the <code>special</code> module's orthogonal polynomials (<code>scipy.special</code>) for Gaussian quadrature roots and weights for other weighting factors and regions.\nInterface to numerical integrators of ODE systems.\nodeint -- General integration of ordinary differential equations.\node -- Integrate ODE using VODE and ZVODE routines.\n\nGeneral integration (quad)\nThe scipy function quad is provided to integrate a function of one variable between two points. The points can be $\\pm\\infty$ ($\\pm$ np.infty) to indicate infinite limits. For example, suppose you wish to integrate the following: \n$$\nI = \\int_0^{2\\pi} e^{-x}\\sin(x)dx\n$$\nThis could be computed using quad as:",
"fun = lambda x : np.exp(-x)*np.sin(x) \n\nresult,error = integrate.quad(fun, 0, 2*np.pi) \n\nprint(result,error)",
"The first argument to quad is a “callable” Python object (i.e a function, method, or class instance). Notice that we used a lambda function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error.\nThe analytic solution to the integral is \n$$\n\\int_0^{2\\pi} e^{-x} \\sin(x) dx = \\frac{1}{2} - e^{-2\\pi} \\simeq \\textrm{0.499066}\n$$\nso that is pretty good.\nHere it is again, integrated from 0 to infinity:",
"I = integrate.quad(fun, 0, np.infty)\n\nprint(I)",
"In this case the analytic solution is exactly 1/2, so again pretty good.\nWe can calculate the error in the result by looking at the difference between the exact result and the numerical value from quad with",
"print(abs(I[0]-0.5))",
"In this case, the numerically-computed integral is within $10^{-16}$ of the exact result — well below the reported error bound.\nIntegrating array data\nWhen you want to compute the integral for an array of data (such as our thermistor resistance-temperature data from the Interpolation lesson), you don't have the luxury of varying your choice of $N$, the number of slices (unless you create an interpolated approximation to your data).\nThere are three functions for computing integrals given only samples: trapz , simps, and romb. The trapezoidal rule approximates the function as a straight line between adjacent points while Simpson’s rule approximates the function between three adjacent points as a parabola, as we have already seen. The first two functions can also handle non-equally-spaced samples (something we did not code ourselves) which is a useful extension to these integration rules.\nIf the samples are equally-spaced and the number of samples available is $2^k+1$ for some integer $k$, then Romberg integration can be used to obtain high-precision estimates of the integral using the available samples. Romberg integration is an adaptive method that uses the trapezoid rule at step-sizes related by a power of two and then performs something called Richardson extrapolation on these estimates to approximate the integral with a higher-degree of accuracy. (A different interface to Romberg integration useful when the function can be provided is also available as romberg).\nApplying simps to array data\nHere is an example of using simps to compute the integral for some discrete data:",
"x = np.arange(0, 20, 2)\ny = np.array([0, 3, 5, 2, 8, 9, 0, -3, 4, 9], dtype = float)\nplt.plot(x,y)\nplt.xlabel('x')\nplt.ylabel('y')\n#Show the integration area as a filled region\nplt.fill_between(x, y, y2=0,color='red',hatch='//',alpha=0.2);\n\nI = integrate.simps(y,x) \nprint(I)",
"Multiple Integrals\nMultiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad and tplquad. The function dblquad performs double integration. Use the help function to be sure that you define the arguments in the correct order. The limits on all inner integrals are actually functions (which can be constant).\nDouble integrals using dblquad\nSuppose we want to integrate $f(x,y)=y\\sin(x)+x\\cos(y)$ over $\\pi \\le x \\le 2\\pi$ and $0 \\le y \\le \\pi$:\n$$\\int_{x=\\pi}^{2\\pi}\\int_{y=0}^{\\pi} y \\sin(x) + x \\cos(y) dxdy$$\nTo use dblquad we have to provide callable functions for the range of the x-variable. Although here they are constants, the use of functions for the limits enables freedom to integrate over non-constant limits. In this case we create trivial lambda functions that return the constants. Note the order of the arguments in the integrand. If you put them in the wrong order you will get the wrong answer.",
"from scipy.integrate import dblquad\n\n#NOTE: the order of arguments matters - inner to outer\nintegrand = lambda x,y: y * np.sin(x) + x * np.cos(y)\n\nymin = 0\nymax = np.pi\n\n#The callable functions for the x limits are just constants in this case:\nxmin = lambda y : np.pi\nxmax = lambda y : 2*np.pi\n\n#See the help for correct order of limits\nI, err = dblquad(integrand, ymin, ymax, xmin, xmax)\nprint(I, err)\n\ndblquad?",
"Triple integrals using tplquad\nWe can also numerically evaluate a triple integral:\n$$ \\int_{x=0}^{\\pi}\\int_{y=0}^{1}\\int_{z=-1}^{1} y\\sin(x)+z\\cos(x) dxdydz$$",
"from scipy.integrate import tplquad\n\n#AGAIN: the order of arguments matters - inner to outer\nintegrand = lambda x,y,z: y * np.sin(x) + z * np.cos(x)\n\nzmin = -1\nzmax = 1\n\nymin = lambda z: 0\nymax = lambda z: 1\n\n#Note the order of these arguments:\nxmin = lambda y,z: 0\nxmax = lambda y,z: np.pi\n\n#Here the order of limits is outer to inner\nI, err = tplquad(integrand, zmin, zmax, ymin, ymax, xmin, xmax)\nprint(I, err)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
opengeostat/pygslib | doc/source/Ipython_templates/gamv3D.ipynb | mit | [
"PyGSLIB\nIntroduction\nThis is a simple example on how to use raw pyslib to compute variograms",
"#general imports\nimport pygslib ",
"Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.",
"#get the data in gslib format into a pandas Dataframe\nmydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat') \n\n# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code\n# so, we are adding constant elevation = 0 and a dummy BHID = 1 \nmydata['Zlocation']=0.\nmydata['bhid']=1.\n\n# printing to verify results\nprint (' \\n **** 5 first rows in my datafile \\n\\n ', mydata.head(n=5))",
"Testing variogram function gamv",
"# these are the parameters we need. Note that at difference of GSLIB this dictionary also stores \n# the actual data (ex, X, Y, etc.). \n\n#important! python is case sensitive 'bhid' is not equal to 'BHID'\n\n\nparameters = {\n 'x' : mydata['Xlocation'].values,\n 'y' : mydata['Ylocation'].values,\n 'z' : mydata['Zlocation'].values, \n 'bhid' : mydata['bhid'].values,\n 'vr' : mydata['Primary'].values,\n 'tmin' : -1.0e21,\n 'tmax' : 1.0e21,\n 'nlag' : 10,\n 'xlag' : 1,\n 'ndir' : 10,\n 'ndip' : 10,\n 'orgdir': 0.,\n 'orgdip': 0.,\n 'isill' : 1,\n 'sills' : [mydata['Primary'].var()],\n 'ivtail' : [1],\n 'ivhead' : [1],\n 'ivtype' : [1]\n }\n\n\n#Now we are ready to calculate the veriogram\nnp, dis, gam, hm, tm, hv, tv = pygslib.gslib.gamv3D(parameters)\n\nnp\n\n# create structured grid with data \n\nimport vtk\nimport vtk.util.numpy_support as vtknumpy\nimport math\nimport numpy as np\n\nXYZPts = vtk.vtkPoints()\nXYZPts.SetNumberOfPoints(parameters['ndip']*parameters['nlag']*parameters['ndir']*2)\n\nangdir = (math.pi/180.)*180./(parameters['ndir'])\nangdip = (math.pi/180.)*90./(parameters['ndip'])\norgdir = parameters['orgdir'] * math.pi/180.\norgdip = parameters['orgdip'] * math.pi/180.\n\nid=-1\nfor k in range(-parameters['ndip']+1,parameters['ndip']+1):\n for j in range(parameters['nlag']):\n for i in range(parameters['ndir']):\n id+=1\n \n x= parameters['xlag']*(j+1)*math.cos(angdir*i-orgdir)*math.cos(angdip*k-orgdip)\n y= parameters['xlag']*(j+1)*math.sin(angdir*i-orgdir)*math.cos(angdip*k-orgdip)\n z= parameters['xlag']*(j+1)* math.sin(angdip*k-orgdip)\n \n print (id, i,j,k, angdir*i*(180/math.pi), angdip*k*(180/math.pi),x,y,z)\n #print math.cos(angdip*k-orgdip)\n XYZPts.SetPoint(id,x,y,z)\n \n\nXYZGrid = vtk.vtkStructuredGrid()\nXYZGrid.SetDimensions(parameters['ndir'],parameters['nlag'],parameters['ndip']*2-1)\nXYZGrid.SetPoints(XYZPts)\n\n\nptid = np.arange(2*parameters['ndip']*parameters['nlag']*(parameters['ndir']))\ncscalars = vtknumpy.numpy_to_vtk(ptid)\ncscalars.SetName('PointID|') \nXYZGrid.GetPointData().AddArray(cscalars)\n\n\n#Write file\nwriter = vtk.vtkXMLStructuredGridWriter()\nwriter.SetFileName(\"output.vts\")\nwriter.SetInputData(XYZGrid)\nwriter.Write()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
IBMDecisionOptimization/docplex-examples | examples/cp/jupyter/scheduling_tuto.ipynb | apache-2.0 | [
"Tutorial: Getting started with Scheduling in CPLEX for Python\nThis notebook introduces the basic building blocks of a scheduling model that can be solved using Constraint Programming Optimizer (named CP Optimizer in the following) that is included in CPLEX for Python. \n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition\n\nTo follow the examples in this section, some knowledge about optimization (math programming or constraint programming) and about modeling optimization problems is necessary.\nFor beginners in optimization, following the online free Decision Optimization tutorials (here and here) might help to get a better understanding of Mathematical Optimization.\nEach chapter of this notebook is a self-contained separate lesson.\n\nChapter 1. Introduction to Scheduling\nChapter 2. Modeling and solving a simple problem: house building\nChapter 3. Adding workers and transition times to the house building problem\nChapter 4. Adding calendars to the house building problem\nChapter 5. Using cumulative functions in the house building problem\nChapter 6. Using alternative resources in the house building problem\nChapter 7. Using state functions: house building with state incompatibilities\nSummary\nReferences\n\nChapter 1. Introduction to Scheduling\nThis chapter describes the basic characteristics of a scheduling program.\nSet up the model solving\nSolving capabilities are required to solve example models that are given in the following. \nThere are several ways to solve a model:\n\nUse a local solver, a licensed installation of CPLEX Optimization Studio to run the notebook locally.\nUse DSX Desktop or Local version that contain a pre-installed version of CPLEX Community Edition\nSubscribe to the private cloud offer or Decision Optimization on Cloud solve service here.\n\nScheduling building blocks\nScheduling is the act of creating a schedule, which is a timetable for planned occurrences. \nScheduling may also involve allocating resources to activities over time. \nA scheduling problem can be viewed as a constraint satisfaction problem or as a constrained optimization problem. Regardless of how it is viewed, a scheduling problem is defined by:\n* A set of time intervals, to define activities, operations, or tasks to be completed\n* A set of temporal constraints, to define possible relationships between the start and end times of the intervals\n* A set of specialized constraints, to specify of the complex relationships on a set of intervals due to the state and finite capacity of resources.\nCreation of the model\nA scheduling model starts with the creation of the model container, as follows",
"import sys\nfrom docplex.cp.model import *\n\nmdl0 = CpoModel()",
"This code creates a CP model container that allows the use of constraints that are specific to constraint programming or to\nscheduling.\nDeclarations of decision variables\nVariable declarations define the type of each variable in the model. For example, to create a variable that equals the amount of material shipped from location i to location j, a variable named ship can be created as follows:\n<code>\n ship = [[integer_var(min=0) for j in range(N)] for i in range(N)]\n</code>\nThis code declares an array (list of lists in Python) of non-negative integer decision variables; <code>ship[i][j]</code> is the decision variable handling the amount of material shipped from location i to location j.\nFor scheduling there are specific additional decision variables, namely:\n * interval variables\n * sequence variables.\nActivities, operations andtasks are represented as interval decision variables.\nAn interval has a start, a end, a length, and a size. An interval variable allows for these values to be variable within the model. \nThe start is the lower endpoint of the interval and the end is the upper endpoint of the interval. \nBy default, the size is equal to the length, which is the difference between the end and the start of the interval. \nIn general, the size is a lower bound on the length.\nAn interval variable may also be optional, and its presence in the solution is represented by a decision variable. \nIf an interval is not present in the solution, this means that any constraints on this interval acts like the interval is “not there”.\nThe exact semantics will depend on the specific constraint.\nThe following example contains a dictionary of interval decision variables where the sizes of the interval variables are fixed and the keys are 2 dimensional:\n<code>\n itvs = {(h,t) : mdl.interval_var(size = Duration[t]) for h in Houses for t in TaskNames}\n</code>\nObjective function\nThe objective function is an expression that has to be optimized. This function consists of variables and data that have been declared earlier in the model.\nThe objective function is introduced by either the minimize or the maximize function. \nFor example:\n<code>\n mdl.add(mdl.minimize(mdl.endOf(tasks[\"moving\"])))\n</code>\nindicates that the end of the interval variable <code>tasks[\"moving\"]</code> needs to be minimized.\nConstraints\nThe constraints indicate the conditions that are necessary for a feasible solution to the model.\nSeveral types of constraints can be placed on interval variables:\n* precedence constraints, which ensure that relative positions of intervals in the solution (For example a precedence constraint can model a requirement that an interval a must end before interval b starts, optionally with some minimum delay z);\n* no overlap constraints, which ensure that positions of intervals in the solution are disjointed in time;\n* span constraints, which ensure that one interval to cover those intervals in a set of intervals;\n* alternative constraints, which ensure that exactly one of a set of intervals be present in the solution;\n* synchronize constraints, which ensure that a set of intervals start and end at the same time as a given interval variable if it is present in the solution;\n* cumulative expression constraints, which restrict the bounds on the domains of cumulative expressions.\nExample\nThis section provides a completed example model that can be tested.\nThe problem is a house building problem. 
There are ten tasks of fixed size, and each of them needs to be assigned a starting time. \nThe statements for creating the interval variables that represent the tasks are:",
"masonry = mdl0.interval_var(size=35)\ncarpentry = mdl0.interval_var(size=15)\nplumbing = mdl0.interval_var(size=40)\nceiling = mdl0.interval_var(size=15)\nroofing = mdl0.interval_var(size=5)\npainting = mdl0.interval_var(size=10)\nwindows = mdl0.interval_var(size=5)\nfacade = mdl0.interval_var(size=10)\ngarden = mdl0.interval_var(size=5)\nmoving = mdl0.interval_var(size=5)",
"Adding the constraints\nThe constraints in this problem are precedence constraints; some tasks cannot start until other tasks have ended. \nFor example, the ceilings must be completed before painting can begin. \nThe set of precedence constraints for this problem can be added to the model with the block:",
"mdl0.add( mdl0.end_before_start(masonry, carpentry) )\nmdl0.add( mdl0.end_before_start(masonry, plumbing) )\nmdl0.add( mdl0.end_before_start(masonry, ceiling) )\nmdl0.add( mdl0.end_before_start(carpentry, roofing) )\nmdl0.add( mdl0.end_before_start(ceiling, painting) )\nmdl0.add( mdl0.end_before_start(roofing, windows) )\nmdl0.add( mdl0.end_before_start(roofing, facade) )\nmdl0.add( mdl0.end_before_start(plumbing, facade) )\nmdl0.add( mdl0.end_before_start(roofing, garden) )\nmdl0.add( mdl0.end_before_start(plumbing, garden) )\nmdl0.add( mdl0.end_before_start(windows, moving) )\nmdl0.add( mdl0.end_before_start(facade, moving) )\nmdl0.add( mdl0.end_before_start(garden, moving) )\nmdl0.add( mdl0.end_before_start(painting, moving) )",
"Here, the special constraint end_before_start() ensures that one interval variable ends before the other starts. \nIf one of the interval variables is not present, the constraint is automatically satisfied.\nCalling the solve",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol0 = mdl0.solve(TimeLimit=10)\nprint(\"done\")",
"Displaying the solution\nThe interval variables and precedence constraints completely describe this simple problem. \nPrint statements display the solution, after values have been assigned to the start and end of each of the interval variables in the model.",
"if msol0:\n var_sol = msol0.get_var_solution(masonry)\n print(\"Masonry : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(carpentry)\n print(\"Carpentry : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(plumbing)\n print(\"Plumbing : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(ceiling)\n print(\"Ceiling : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(roofing)\n print(\"Roofing : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(painting)\n print(\"Painting : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(windows)\n print(\"Windows : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(facade)\n print(\"Facade : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol0.get_var_solution(moving)\n print(\"Moving : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\nelse:\n print(\"No solution found\")",
"To understand the solution found by CP Optimizer to this satisfiability scheduling problem, consider the line:\n<code>Masonry : 0..35</code>\nThe interval variable representing the masonry task, which has size 35, has been assigned the interval [0,35). \nMasonry starts at time 0 and ends at the time point 35.\nGraphical view of these tasks can be obtained with following additional code:",
"import docplex.cp.utils_visu as visu\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#Change the plot size\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 3\n\nif msol0:\n wt = msol0.get_var_solution(masonry) \n visu.interval(wt, 'lightblue', 'masonry') \n wt = msol0.get_var_solution(carpentry) \n visu.interval(wt, 'lightblue', 'carpentry')\n wt = msol0.get_var_solution(plumbing) \n visu.interval(wt, 'lightblue', 'plumbing')\n wt = msol0.get_var_solution(ceiling) \n visu.interval(wt, 'lightblue', 'ceiling')\n wt = msol0.get_var_solution(roofing) \n visu.interval(wt, 'lightblue', 'roofing')\n wt = msol0.get_var_solution(painting) \n visu.interval(wt, 'lightblue', 'painting')\n wt = msol0.get_var_solution(windows) \n visu.interval(wt, 'lightblue', 'windows')\n wt = msol0.get_var_solution(facade) \n visu.interval(wt, 'lightblue', 'facade')\n wt = msol0.get_var_solution(moving) \n visu.interval(wt, 'lightblue', 'moving')\n visu.show()",
"Note on interval variables\nAfter a time interval has been assigned a start value (say s) and an end value (say e), the interval is written as [s,e). \nThe time interval does not include the endpoint e. \nIf another interval variable is constrained to be placed after this interval, it can start at the time e.\nChapter 2. Modeling and solving house building with an objective\nThis chapter presents the same house building example in such a manner that minimizes an objective.\nIt intends to present how to:\n* use the interval variable,\n* use the constraint endBeforeStart,\n* use the expressions startOf and endOf.\nThe objective to minimize is here the cost associated with performing specific tasks before a preferred earliest start date or after a preferred latest end date. \nSome tasks must necessarily take place before other tasks, and each task has a given duration. \nTo find a solution to this problem, a three-stage method is used: describe, model, and solve.\nProblem to be solved\nThe problem consists of assigning start dates to tasks in such a way that the resulting schedule satisfies precedence constraints and minimizes a criterion. \nThe criterion for this problem is to minimize the earliness costs associated with starting certain tasks earlier than a given date, and tardiness costs associated with completing certain tasks later than a given date.\nFor each task in the house building project, the following table shows the duration (measured in days) of the task along with the tasks that must finish before the task can start.\nNote:\nThe unit of time represented by an interval variable is not defined. As a result, the size of the masonry task in this problem could be 35 hours or 35 weeks or 35 months.\nHouse construction tasks:\n| Task | Duration | Preceding tasks |\n|-----------|----------|-----------------------------------|\n| masonry | 35 | |\n| carpentry | 15 | masonry |\n| plumbing | 40 | masonry |\n| ceiling | 15 | masonry |\n| roofing | 5 | carpentry |\n| painting | 10 | ceiling |\n| windows | 5 | roofing |\n| facade | 10 | roofing, plumbing |\n| garden | 5 | roofing, plumbing |\n| moving | 5 | windows, facade, garden, painting |\nThe other information for the problem includes the earliness and tardiness costs associated with some tasks.\nHouse construction task earliness costs:\n| Task | Preferred earliest start date | Cost per day for starting early |\n|-----------|-------------------------------|---------------------------------|\n| masonry | 25 | 200.0 |\n| carpentry | 75 | 300.0 |\n| ceiling |75 | 100.0 |\nHouse construction task tardiness costs:\n| Task | Preferred latest end date | Cost per day for ending late |\n|--------|---------------------------|------------------------------|\n| moving | 100 | 400.0 |\nSolving the problem consists of identifying starting dates for the tasks such that the total cost, determined by the earliness and lateness costs, is minimized.\nStep 1: Describe the problem\nThe first step in modeling the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.\nWriting a natural language description of this problem requires to answer these questions:\n* What is the known information in this problem ?\n* What are the decision variables or unknowns in this problem ?\n* What are the constraints on these variables ?\n* What is the objective ?\n\nWhat is the known information in this problem ?\n\nThere are ten house building tasks, each with a given duration. 
For each task,\nthere is a list of tasks that must be completed before the task can start. Some\ntasks also have costs associated with an early start date or late end date.\n\nWhat are the decision variables or unknowns in this problem ?\n\nThe unknowns are the date that each task will start. The cost is determined by the assigned start dates.\n\nWhat are the constraints on these variables ?\n\nIn this case, each constraint specifies that a particular task may not begin until one or more given tasks have been completed.\n\nWhat is the objective ?\n\nThe objective is to minimize the cost incurred through earliness and tardiness costs.\nStep 2: Declare the interval variables\nIn the model, each task is represented by an interval variables. \nEach variable represents the unknown information, the scheduled interval for each activity. \nAfter the model is executed, the values assigned to these interval variables will represent the solution to the problem.\nDeclaration of engine\nA scheduling model starts with the declaration of the engine as follows:",
"import sys\nfrom docplex.cp.model import *\n\nmdl1 = CpoModel()",
"The declaration of necessary interval variables is done as follows:",
"masonry = mdl1.interval_var(size=35)\ncarpentry = mdl1.interval_var(size=15)\nplumbing = mdl1.interval_var(size=40)\nceiling = mdl1.interval_var(size=15)\nroofing = mdl1.interval_var(size=5)\npainting = mdl1.interval_var(size=10)\nwindows = mdl1.interval_var(size=5)\nfacade = mdl1.interval_var(size=10)\ngarden = mdl1.interval_var(size=5)\nmoving = mdl1.interval_var(size=5)",
"Step 3: Add the precedence constraints\nIn this example, certain tasks can start only after other tasks have been completed.\nCP Optimizer allows to express constraints involving temporal relationships between pairs of interval variables using <i>precedence constraints</i>.\nPrecedence constraints are used to specify when an interval variable must start or end with respect to the start or end time of another interval variable. \nThe following types of precedence constraints are available; if a and b denote interval variables, both interval variables are present, and delay is a number or integer expression (0 by default), then:\n* end_before_end(a, b, delay) constrains at least the given delay to elapse between the end of a and the end of b. It imposes the inequality endTime(a) + delay <= endTime(b).\n* end_before_start(a, b, delay) constrains at least the given delay to elapse between the end of a and the start of b. It imposes the inequality endTime(a) + delay <= startTime(b).\n* end_at_end(a, b, delay) constrains the given delay to separate the end of a and the end of ab. It imposes the equality endTime(a) + delay == endTime(b).\n* end_at_start(a, b, delay) constrains the given delay to separate the end of a and the start of b. It imposes the equality endTime(a) + delay == startTime(b).\n* start_before_end(a, b, delay) constrains at least the given delay to elapse between the start of a and the end of b. It imposes the inequality startTime(a) + delay <= endTime(b).\n* start_before_start(a, b, delay) constrains at least the given delay to elapse between the start of act1 and the start of act2. It imposes the inequality startTime(a) + delay <= startTime(b).\n* start_at_end(a, b, delay) constrains the given delay to separate the start of a and the end of b. It imposes the equality startTime(a) + delay == endTime(b).\n* start_at_start(a, b, delay) constrains the given delay to separate the start of a and the start of b. It imposes the equality startTime(a) + delay == startTime(b).\nIf either interval a or b is not present in the solution, the constraint is automatically satisfied, and it is as if the constraint was never imposed.\nFor our model, precedence constraints can be added with the following code:",
"mdl1.add( mdl1.end_before_start(masonry, carpentry) )\nmdl1.add( mdl1.end_before_start(masonry, plumbing) )\nmdl1.add( mdl1.end_before_start(masonry, ceiling) )\nmdl1.add( mdl1.end_before_start(carpentry, roofing) )\nmdl1.add( mdl1.end_before_start(ceiling, painting) )\nmdl1.add( mdl1.end_before_start(roofing, windows) )\nmdl1.add( mdl1.end_before_start(roofing, facade) )\nmdl1.add( mdl1.end_before_start(plumbing, facade) )\nmdl1.add( mdl1.end_before_start(roofing, garden) )\nmdl1.add( mdl1.end_before_start(plumbing, garden) )\nmdl1.add( mdl1.end_before_start(windows, moving) )\nmdl1.add( mdl1.end_before_start(facade, moving) )\nmdl1.add( mdl1.end_before_start(garden, moving) )\nmdl1.add( mdl1.end_before_start(painting, moving) )",
"To model the cost for starting a task earlier than the preferred starting date, the expression start_of() can be used. \nIt represents the start of an interval variable as an integer expression.\nFor each task that has an earliest preferred start date, the number of days before the preferred date it is scheduled to start can be determined using the expression start_of().\nThis expression can be negative if the task starts after the preferred date. \nTaking the maximum of this value and 0 using max() allows to determine how many days early the task is scheduled to start. \nWeighting this value with the cost per day of starting early determines the cost associated with the task.\nThe cost for ending a task later than the preferred date is modeled in a similar manner using the expression endOf(). \nThe earliness and lateness costs can be summed to determine the total cost.\nStep 4: Add the objective\nThe objective function to be minimized can be written as follows:",
"obj = mdl1.minimize( 400 * mdl1.max([mdl1.end_of(moving) - 100, 0]) \n + 200 * mdl1.max([25 - mdl1.start_of(masonry), 0]) \n + 300 * mdl1.max([75 - mdl1.start_of(carpentry), 0]) \n + 100 * mdl1.max([75 - mdl1.start_of(ceiling), 0]) )\nmdl1.add(obj)",
"Solving a problem consists of finding a value for each decision variable so that all constraints are satisfied. \nIt is not always know beforehand whether there is a solution that satisfies all the constraints of the problem. \nIn some cases, there may be no solution. In other cases, there may be many solutions to a problem.\nStep 5: Solve the model and display the solution",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol1 = mdl1.solve(TimeLimit=20)\nprint(\"done\")\n\nif msol1:\n print(\"Cost will be \" + str(msol1.get_objective_values()[0]))\n \n var_sol = msol1.get_var_solution(masonry)\n print(\"Masonry : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(carpentry)\n print(\"Carpentry : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(plumbing)\n print(\"Plumbing : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(ceiling)\n print(\"Ceiling : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(roofing)\n print(\"Roofing : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(painting)\n print(\"Painting : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(windows)\n print(\"Windows : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(facade)\n print(\"Facade : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\n var_sol = msol1.get_var_solution(moving)\n print(\"Moving : {}..{}\".format(var_sol.get_start(), var_sol.get_end()))\nelse:\n print(\"No solution found\")",
"Graphical display of the same result is available with:",
"import docplex.cp.utils_visu as visu\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#Change the plot size\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 3\n\nif msol1:\n wt = msol1.get_var_solution(masonry) \n visu.interval(wt, 'lightblue', 'masonry') \n wt = msol1.get_var_solution(carpentry) \n visu.interval(wt, 'lightblue', 'carpentry')\n wt = msol1.get_var_solution(plumbing) \n visu.interval(wt, 'lightblue', 'plumbing')\n wt = msol1.get_var_solution(ceiling) \n visu.interval(wt, 'lightblue', 'ceiling')\n wt = msol1.get_var_solution(roofing) \n visu.interval(wt, 'lightblue', 'roofing')\n wt = msol1.get_var_solution(painting) \n visu.interval(wt, 'lightblue', 'painting')\n wt = msol1.get_var_solution(windows) \n visu.interval(wt, 'lightblue', 'windows')\n wt = msol1.get_var_solution(facade) \n visu.interval(wt, 'lightblue', 'facade')\n wt = msol1.get_var_solution(moving) \n visu.interval(wt, 'lightblue', 'moving')\n visu.show()",
"The overall cost is 5000 and moving will be completed by day 110.\nChapter 3. Adding workers and transition times to the house building problem\nThis chapter introduces workers and transition times to the house building problem described in the previous chapters. It allows to learn the following concepts:\n* use the interval variable sequence;\n* use the constraints span and no_overlap;\n* use the expression length_of.\nThe problem to be solved is the scheduling of tasks involved in building multiple houses in a manner that minimizes the costs associated with completing each house after a given due date and with the length of time it takes to build each house. \nSome tasks must necessarily take place before other tasks, and each task has a predefined duration. \nEach house has an earliest starting date.\nMoreover, there are two workers, each of whom must perform a given subset of the necessary tasks, and there is a transition time associated with a worker transferring from one house to another house. \nA task, once started, cannot be interrupted. \nThe objective is to minimize the cost, which is composed of tardiness costs for certain tasks as well as a cost associated with the length of time it takes to complete each house. \nProblem to be solved\nThe problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes a criterion. \nThe criterion for this problem is to minimize the tardiness costs associated with completing each house later than its specified due date and the cost associated with the length of time it takes to complete each house.\nFor each type of task, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. \nIn addition, each type of task must be performed by a specific worker, Jim or Joe. \nA worker can only work on one task at a time. \nA task, once started, may not be interrupted. \nThe time required to transfer from one house to another house is determined by a function based on the location of the two houses.\nThe following table indicates these details for each task:\n| Task | Duration | Worker | Preceding tasks |\n|-----------|----------|--------|-------------------|\n| masonry | 35 | Joe | |\n| carpentry | 15 | Joe | masonry |\n| plumbing | 40 | Jim | masonry |\n| ceiling | 15 | Jim | masonry |\n| roofing | 5 | Joe | carpentry |\n| painting | 10 | Jim | ceiling |\n| windows | 5 | Jim | roofing |\n| facade | 10 | Joe | roofing, plumbing |\n| garden | 5 | Joe | roofing, plumbing |\n| moving | 5 | Jim | windows, facade,garden, painting|\nFor each of the five houses that must be built, there is an earliest starting date, a due date and a cost per day of completing the house later than the preferred due date.\nThe house construction tardiness costs is indicated in the following table:\n| House | Earliest start date | Preferred latest end date | Cost per day for ending late |\n|-------|---------------------|---------------------------|------------------------------|\n| 0 | 0 | 120 | 100.0 |\n| 1 | 0 | 212 | 100.0 |\n| 2 | 151 | 304 | 100.0 |\n| 3 | 59 | 181 | 200.0 |\n| 4 | 243 | 425 | 100.0 |\nSolving the problem consists of determining starting dates for the tasks such that the cost, where the cost is determined by the lateness costs and length costs, is minimized.\nStep 1: Describe the problem\n\n\nWhat is the known information in this problem ?\nThere are five houses to be built by two workers. 
For each house, there are ten house building tasks, each with a given duration, or size. Each house also has a given earliest starting date. For each task, there is a list of tasks that must be completed before the task can start. Each task must be performed by a given worker, and there is a transition time associated with a worker transferring from one house to another house. There are costs associated with completing eachhouse after its preferred due date and with the length of time it takes to complete each house.\n\n\nWhat are the decision variables or unknowns in this problem ?\n\n\nThe unknowns are the start and end dates of the interval variables associated with the tasks. Once fixed, these interval variables also determine the cost of the solution. For some of the interval variables, there is a fixed minimum start date.\n\nWhat are the constraints on these variables ?\n\nThere are constraints that specify a particular task may not begin until one or more given tasks have been completed. In addition, there are constraints that specify that a worker can be assigned to only one task at a time and that it takes time for a worker to travel from one house to the other.\n\nWhat is the objective ?\n\nThe objective is to minimize the cost incurred through tardiness and length costs.\nStep2: Prepare data\nFirst coding step is to prepare model data:",
"NbHouses = 5\n\nWorkerNames = [\"Joe\", \"Jim\"]\n\nTaskNames = [\"masonry\", \"carpentry\", \"plumbing\", \n \"ceiling\", \"roofing\", \"painting\", \n \"windows\", \"facade\", \"garden\", \"moving\"]\n\nDuration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]\n\nWorker = {\"masonry\" : \"Joe\" , \n \"carpentry\": \"Joe\" , \n \"plumbing\" : \"Jim\" , \n \"ceiling\" : \"Jim\" , \n \"roofing\" : \"Joe\" , \n \"painting\" : \"Jim\" , \n \"windows\" : \"Jim\" , \n \"facade\" : \"Joe\" , \n \"garden\" : \"Joe\" , \n \"moving\" : \"Jim\"}\n\nReleaseDate = [ 0, 0, 151, 59, 243]\nDueDate = [120, 212, 304, 181, 425]\nWeight = [100.0, 100.0, 100.0, 200.0, 100.0]\n\nPrecedences = [(\"masonry\", \"carpentry\"),(\"masonry\", \"plumbing\"),\n (\"masonry\", \"ceiling\"), (\"carpentry\", \"roofing\"),\n (\"ceiling\", \"painting\"), (\"roofing\", \"windows\"), \n (\"roofing\", \"facade\"), (\"plumbing\", \"facade\"),\n (\"roofing\", \"garden\"), (\"plumbing\", \"garden\"),\n (\"windows\", \"moving\"), (\"facade\", \"moving\"), \n (\"garden\", \"moving\"), (\"painting\", \"moving\")]\n\nHouses = range(NbHouses)",
"One part of the objective is based on the time it takes to build a house.\nTo model this, one interval variable is used for each house, and is later constrained to span the tasks associated with the given house. \nAs each house has an earliest starting date, and each house interval variable is declared to have a start date no earlier than that release date. \nThe ending date of the task is not constrained, so the upper value of the range for the variable is maxint.\nStep 3: Create the house interval variables",
"import sys\nfrom docplex.cp.model import *\n\nmdl2 = CpoModel()\n\nhouses = [mdl2.interval_var(start=(ReleaseDate[i], INTERVAL_MAX), name=\"house\"+str(i)) for i in Houses]",
"Step 4: Create the task interval variables\nEach house has a list of tasks that must be scheduled. \nThe duration, or size, of each task t is Duration[t]. \nThis information allows to build the matrix itvs of interval variables.",
"TaskNames_ids = {}\nitvs = {}\nfor h in Houses:\n for i,t in enumerate(TaskNames):\n _name = str(h)+\"_\"+str(t)\n itvs[(h,t)] = mdl2.interval_var(size=Duration[i], name=_name)\n TaskNames_ids[_name] = i",
"Step 5: Add the precedence constraints\nThe tasks of the house building project have precedence constraints that are added to the model.",
"for h in Houses:\n for p in Precedences:\n mdl2.add(mdl2.end_before_start(itvs[(h,p[0])], itvs[(h,p[1])]) )",
"To model the cost associated with the length of time it takes to build a single house, the interval variable associated with the house is constrained to start at the start of the first task of the house and end at the end of the last task. \nThis interval variable must span the tasks.\nStep 6: Add the span constraints\nThe constraint span allows to specify that one interval variable must exactly cover a set of interval variables.\nIn other words, the spanning interval is present in the solution if and only if at least one of the spanned interval variables is present and, in this case, the spanning interval variable starts at the start of the interval variable scheduled earliest in the set and ends at the end of the interval variable scheduled latest in the set.\nFor house h, the interval variable houses[h] is constrained to cover the interval variables in itvs that are associated with the tasks of the given house.",
"for h in Houses:\n mdl2.add( mdl2.span(houses[h], [itvs[(h,t)] for t in TaskNames] ) )",
"Step 7: Create the transition times\nTransition times can be modeled using tuples with three elements. \nThe first element is the interval variable type of one task, the second is the interval variable type of the other task and the third element of the tuple is the transition time from the first to the second. \nAn integer interval variable type can be associated with each interval variable.\nGiven an interval variable a1 that precedes (not necessarily directly) an interval variable a2 in a sequence of non-overlapping interval variables, the transition time between a1 and a2 is an amount of time that must elapse between the end of a1 and the beginning of a2.",
"transitionTimes = transition_matrix([[int(abs(i - j)) for j in Houses] for i in Houses])",
"Each of the tasks requires a particular worker. \nAs a worker can perform only one task at a time, it is necessary to know all of the tasks that a worker must perform and then constrain that these intervals not overlap and respect the transition times.\nA sequence variable represents the order in which the workers perform the tasks.\nNote that the sequence variable does not force the tasks to not overlap or the order of tasks. In a later step, a constraint is created that enforces these relations on the sequence of interval variables.\nStep 8: Create the sequence variables\nUsing the decision variable type sequence, variable can be created to represent a sequence of interval variables. The sequence can contain a subset of the variables or be empty. \nIn a solution, the sequence will represent a total order over all the intervals in the set that are present in the solution. \nThe assigned order of interval variables in the sequence does not necessarily determine their relative positions in time in the schedule. \nThe sequence variable takes an array of interval variables as well as the transition types for each of those variables. \nInterval sequence variables are created for Jim and Joe, using the arrays of their tasks and the task locations.",
"workers = {w : mdl2.sequence_var([ itvs[(h,t)] for h in Houses for t in TaskNames if Worker[t]==w ], \n types=[h for h in Houses for t in TaskNames if Worker[t]==w ], name=\"workers_\"+w) \n for w in WorkerNames}",
"Step 9: Add the no overlap constraint\nNow that the sequence variables have been created, each sequence must be constrained such that the interval variables do not overlap in the solution, that the transition times are respected, and that the sequence represents the relations of the interval variables in time. \nThe constraint no_overlap allows to constrain an interval sequence variable to define a chain of non-overlapping intervals that are present in the solution. \nIf a set of transition tuples is specified, it defines the minimal time that must elapse between two intervals in the chain.\nNote that intervals which are not present in the solution are automatically removed from the sequence.\nOne no overlap constraint is created for the sequence interval variable for each worker.",
"for w in WorkerNames:\n mdl2.add( mdl2.no_overlap(workers[w], transitionTimes) )",
"The cost for building a house is the sum of the tardiness cost and the number of days it takes from start to finish building the house. \nTo model the cost associated with a task being completed later than its preferred latest end date, the expression endOf() can be used to determine the end date of the house interval variable. \nTo model the cost of the length of time it takes to build the house, the expression lengthOf() can be used, which returns an expression representing the length of an interval variable. \nStep 10: Add the objective\nThe objective of this problem is to minimize the cost as represented by the cost expression.",
"# create the obj and add it.\nmdl2.add( \n mdl2.minimize( \n mdl2.sum(Weight[h] * mdl2.max([0, mdl2.end_of(houses[h])-DueDate[h]]) + mdl2.length_of(houses[h]) for h in Houses) \n ) \n)",
"Step 11: Solve the model\nThe search for an optimal solution in this problem can potentiality take a long time. A fail limit can be placed on the solve process to limit the search process. \nThe search stops when the fail limit is reached, even if optimality of the current best solution is not guaranteed. \nThe code for limiting the solve process is provided below:",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol2 = mdl2.solve(FailLimit=30000)\nprint(\"done\")\n\nif msol2:\n print(\"Cost will be \" + str(msol2.get_objective_values()[0]))\nelse:\n print(\"No solution found\")\n\n# Viewing the results of sequencing problems in a Gantt chart\n# (double click on the gantt to see details)\nimport docplex.cp.utils_visu as visu\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#Change the plot size\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 3\n\ndef showsequence(msol, s, setup, tp):\n seq = msol.get_var_solution(s)\n visu.sequence(name=s.get_name())\n vs = seq.get_value()\n for v in vs:\n nm = v.get_name()\n visu.interval(v, tp[TaskNames_ids[nm]], nm)\n for i in range(len(vs) - 1):\n end = vs[i].get_end()\n tp1 = tp[TaskNames_ids[vs[i].get_name()]]\n tp2 = tp[TaskNames_ids[vs[i + 1].get_name()]]\n visu.transition(end, end + setup.get_value(tp1, tp2))\nif msol2:\n visu.timeline(\"Solution for SchedSetup\")\n for w in WorkerNames:\n types=[h for h in Houses for t in TaskNames if Worker[t]==w]\n showsequence(msol2, workers[w], transitionTimes, types)\n visu.show()",
"Chapter 4. Adding calendars to the house building problem\nThis chapter introduces calendars into the house building problem, a problem of scheduling the tasks involved in building multiple houses in such a manner that minimizes the overall completion date of the houses.\nThere are two workers, each of whom must perform a given subset of the necessary tasks. \nEach worker has a calendar detailing on which days he does not work, such as weekends and holidays. \nOn a worker’s day off, he does no work on his tasks, and his tasks may not be scheduled to start or end on these days. \nTasks that are in process by the worker are suspended during his days off.\nFollowing concepts are demonstrated:\n* use of the step functions,\n* use an alternative version of the constraint no_overlap,\n* use intensity expression,\n* use the constraints forbid_start and forbid_end,\n* use the length and size of an interval variable.\nProblem to be solved\nThe problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes a criterion. \nThe criterion for this problem is to minimize the overall completion date.\nFor each task type in the house building project, the following table shows the size of the task in days along with the tasks that must be finished before the task can start. \nIn addition, each type of task can be performed by a given one of the two workers, Jim and Joe. \nA worker can only work on one task at a time. \nOnce started, Problem to be solveda task may be suspended during a worker’s days off, but may not be interrupted by another task.\nHouse construction tasks are detailed in the folowing table:\n| Task | Duration | Worker | Preceding tasks |\n|-----------|----------|--------|-----------------------------------|\n| masonry | 35 | Joe | |\n| carpentry | 15 | Joe | masonry |\n| plumbing | 40 | Jim | masonry |\n| ceiling | 15 | Jim | masonry |\n| roofing | 5 | Joe | carpentry |\n| painting | 10 | Jim | ceiling |\n| windows | 5 | Jim | roofing |\n| facade | 10 | Joe | roofing, plumbing |\n| garden | 5 | Joe | roofing, plumbing |\n| moving | 5 | Jim | windows, facade, garden, painting |\nSolving the problem consists of determining starting dates for the tasks such that\nthe overall completion date is minimized.\nStep 1: Describe the problem\nThe first step in modeling the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.\n\nWhat is the known information in this problem ?\n\nThere are five houses to be built by two workers. For each house, there are ten house building tasks, each with a given size. For each task, there is a list of tasks that must be completed before the task can start. Each task must be performed by a given worker, and each worker has a calendar listing his days off.\n\nWhat are the decision variables or unknowns in this problem ?\n\nThe unknowns are the start and end times of tasks which also determine the overall completion time. The actual length of a task depends on its position in time and on the calendar of the associated worker.\n\nWhat are the constraints on these variables ?\n\nThere are constraints that specify that a particular task may not begin until one or more given tasks have been completed. In addition, there are constraints that specify that a worker can be assigned to only one task at a time. 
A task cannot start or end during the associated worker’s days off.\n\nWhat is the objective ?\n\nThe objective is to minimize the overall completion date.\nStep 2: Prepare data\nA scheduling model starts with the declaration of the engine as follows:",
"import sys\nfrom docplex.cp.model import *\n\nmdl3 = CpoModel()\n\nNbHouses = 5;\n\nWorkerNames = [\"Joe\", \"Jim\" ]\n\nTaskNames = [\"masonry\",\"carpentry\",\"plumbing\",\"ceiling\",\"roofing\",\"painting\",\"windows\",\"facade\",\"garden\",\"moving\"]\n\nDuration = [35,15,40,15,5,10,5,10,5,5]\n\nWorker = {\"masonry\":\"Joe\",\"carpentry\":\"Joe\",\"plumbing\":\"Jim\",\"ceiling\":\"Jim\",\n \"roofing\":\"Joe\",\"painting\":\"Jim\",\"windows\":\"Jim\",\"facade\":\"Joe\",\n \"garden\":\"Joe\",\"moving\":\"Jim\"}\n\n\nPrecedences = { (\"masonry\",\"carpentry\"),(\"masonry\",\"plumbing\"),\n (\"masonry\",\"ceiling\"),(\"carpentry\",\"roofing\"),\n (\"ceiling\",\"painting\"),(\"roofing\",\"windows\"),\n (\"roofing\",\"facade\"),(\"plumbing\",\"facade\"),\n (\"roofing\",\"garden\"),(\"plumbing\",\"garden\"),\n (\"windows\",\"moving\"),(\"facade\",\"moving\"), \n (\"garden\",\"moving\"),(\"painting\",\"moving\") }\n\nHouses = range(NbHouses)",
"Step 3: Add the intensity step functions\nTo model the availability of a worker with respect to his days off, a step function is created to represents his intensity over time. \nThis function has a range of [0..100], where the value 0 represents that the worker is not available and the value 100 represents that the worker is available with regard to his calendar.\nStep functions are created by the method step_function().\nEach interval [x1, x2) on which the function has the same value is called a step. \nWhen two consecutive steps of the function have the same value, these steps are merged so that the function is always represented with the minimal number of steps.\nFor each worker, a sorted tupleset is created. At each point in time where the worker’s availability changes, a tuple is created. \nThe tuple has two elements; the first element is an integer value that represents the worker’s availability (0 for on a break, 100 for fully available to work, 50 for a half-day), and the other element represents the date at which the availability changes to this value. \nThis tupleset, sorted by date, is then used to create a step function to represent the worker’s intensity over time. \nThe value of the function after the final step is set to 100.",
"Breaks ={\n \"Joe\" : [\n (5,14),(19,21),(26,28),(33,35),(40,42),(47,49),(54,56),(61,63),\n (68,70),(75,77),(82,84),(89,91),(96,98),(103,105),(110,112),(117,119),\n (124,133),(138,140),(145,147),(152,154),(159,161),(166,168),(173,175),\n (180,182),(187,189),(194,196),(201,203),(208,210),(215,238),(243,245),(250,252),\n (257,259),(264,266),(271,273),(278,280),(285,287),(292,294),(299,301),\n (306,308),(313,315),(320,322),(327,329),(334,336),(341,343),(348,350),\n (355,357),(362,364),(369,378),(383,385),(390,392),(397,399),(404,406),(411,413),\n (418,420),(425,427),(432,434),(439,441),(446,448),(453,455),(460,462),(467,469),\n (474,476),(481,483),(488,490),(495,504),(509,511),(516,518),(523,525),(530,532),\n (537,539),(544,546),(551,553),(558,560),(565,567),(572,574),(579,602),(607,609),\n (614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),(663,665),\n (670,672),(677,679),(684,686),(691,693),(698,700),(705,707),(712,714),\n (719,721),(726,728)\n ],\n \"Jim\" : [\n (5,7),(12,14),(19,21),(26,42),(47,49),(54,56),(61,63),(68,70),(75,77),\n (82,84),(89,91),(96,98),(103,105),(110,112),(117,119),(124,126),(131,133),\n (138,140),(145,147),(152,154),(159,161),(166,168),(173,175),(180,182),(187,189),\n (194,196),(201,225),(229,231),(236,238),(243,245),(250,252),(257,259),\n (264,266),(271,273),(278,280),(285,287),(292,294),(299,301),(306,315),\n (320,322),(327,329),(334,336),(341,343),(348,350),(355,357),(362,364),(369,371),\n (376,378),(383,385),(390,392),(397,413),(418,420),(425,427),(432,434),(439,441),\n (446,448),(453,455),(460,462),(467,469),(474,476),(481,483),(488,490),(495,497),\n (502,504),(509,511),(516,518),(523,525),(530,532),(537,539),(544,546),\n (551,553),(558,560),(565,581),(586,588),(593,595),(600,602),(607,609),\n (614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),\n (663,665),(670,672),(677,679),(684,686),(691,693),(698,700),(705,707),\n (712,714),(719,721),(726,728)]\n }\n\nfrom collections import namedtuple\nBreak = namedtuple('Break', ['start', 'end'])\n\nCalendar = {}\nmymax = max(max(v for k,v in Breaks[w]) for w in WorkerNames)\nfor w in WorkerNames:\n step = CpoStepFunction()\n step.set_value(0, mymax, 100)\n for b in Breaks[w]:\n t = Break(*b)\n step.set_value(t.start, t.end, 0)\n Calendar[w] = step",
"This intensity function is used in creating the task variables for the workers. \nThe intensity step function of the appropriate worker is passed to the creation of each interval variable. \nThe size of the interval variable is the time spent at the house to process the task, not including the worker’s day off. \nThe length is the difference between the start and the end of the interval.\nStep 4: Create the interval variables",
"#TaskNames_ids = {}\nitvs = {}\nfor h in Houses:\n for i,t in enumerate(TaskNames):\n _name = str(h) + \"_\" + str(t)\n itvs[(h,t)] = mdl3.interval_var(size=Duration[i], intensity=Calendar[Worker[t]], name=_name)",
"The tasks of the house building project have precedence constraints that are added to the model.\nStep 5: Add the precedence constraints",
"for h in Houses:\n for p in Precedences:\n mdl3.add( mdl3.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )",
"Step 6: Add the no overlap constraints\nTo add the constraints that a worker can perform only one task at a time, the interval variables associated with that worker are constrained to not overlap in the solution. \nTo do this, the specialized constraint no_overlap() is used, but with a slightly different form than was used in the section Chapter 3, “Adding workers and transition times to the house building problem,”.\nThis form is a shortcut that avoids the need to explicitly define the interval sequence variable when no additional constraints are required on the sequence variable. A single no_overlap() constraint is added on the array of interval variables for each worker.",
"for w in WorkerNames:\n mdl3.add( mdl3.no_overlap( [itvs[h,t] for h in Houses for t in TaskNames if Worker[t]==w] ) )",
"Step 7: Create the forbidden start and end constraints\nWhen an intensity function is set on an interval variable, the tasks which overlap weekends and/or holidays will be automatically prolonged. \nA task could still be scheduled to start or end in a weekend, but, in this problem, a worker’s tasks cannot start or end during the worker’s days off. \nCP Optimizer provides the constraints forbid_start and forbid_end to model these types of constraints.\nWith the constraint forbid_start, a constraint is created to specifies that an interval variable must not be scheduled to start at certain times.\nThe constraint takes as parameters an interval variable and a step function. \nIf the interval variable is present in the solution, then it is constrained to not start at a time when the value of the step function is zero.\nCP Optimizer also provides forbid_end and forbid_extent, which respectively constrain an interval variable to not end and not overlap where the associated step function is valued zero.\nThe first argument of the constraint forbid_start is the interval variable on which the constraint is placed.\nThe second argument is the step function that defines a set of forbidden values for the start of the interval variable: the interval variable cannot start at a point where the step function is 0.",
"for h in Houses:\n for t in TaskNames:\n mdl3.add(mdl3.forbid_start(itvs[h,t], Calendar[Worker[t]]))\n mdl3.add(mdl3.forbid_end (itvs[h,t], Calendar[Worker[t]]))",
"Step 8: Create the objective\nThe objective of this problem is to minimize the overall completion date (the completion date of the house that is completed last). \nThe maximum completion date among the individual house projects is determined using the expression end_of() on the last task in building each house (here, it is the moving task) and minimize the maximum of these expressions.",
"mdl3.add( mdl3.minimize(mdl3.max(mdl3.end_of(itvs[h,\"moving\"]) for h in Houses)))",
"Step 9: Solve the model\nThe search for an optimal solution in this problem could potentiality take a long time, so a fail limit has been placed on the solve process. \nThe search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. \nThe code for limiting the solve process is provided below:",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol3 = mdl3.solve(FailLimit=30000)\nprint(\"done\")\n\nif msol3:\n print(\"Cost will be \" + str( msol3.get_objective_values()[0] )) # Allocate tasks to workers\n tasks = {w : [] for w in WorkerNames}\n for k,v in Worker.items():\n tasks[v].append(k)\n\n types = {t : i for i,t in enumerate(TaskNames)}\n\n import docplex.cp.utils_visu as visu\n import matplotlib.pyplot as plt\n %matplotlib inline\n #Change the plot size\n from pylab import rcParams\n rcParams['figure.figsize'] = 15, 3\n\n visu.timeline('Solution SchedCalendar')\n for w in WorkerNames:\n visu.panel()\n visu.pause(Calendar[w])\n visu.sequence(name=w,\n intervals=[(msol3.get_var_solution(itvs[h,t]), types[t], t) for t in tasks[w] for h in Houses])\n visu.show()\nelse:\n print(\"No solution found\")",
"Chapter 5. Using cumulative functions in the house building problem\nSome tasks must necessarily take place before other tasks, and each task has a predefined duration. \nMoreover, there are three workers, and each task requires any one of the three workers. \nA worker can be assigned to at most one task at a time. \nIn addition, there is a cash budget with a starting balance. \nEach task consumes a certain amount of the budget at the start of the task, and the cash balance is increased every 60 days. \nThis chapter introduces:\n* use the modeling function cumul_function,\n* use the functions pulse, step, step_at_start and step_at_end.\nProblem to be solved\nThe problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes a criterion. The criterion\nfor this problem is to minimize the overall completion date. Each task requires 200 dollars per day of the task, payable at the start of the task. Every 60 days, starting\nat day 0, the amount of 30,000 dollars is added to the cash balance.\nFor each task type in the house building project, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. Each task requires any one of the three workers. A worker can only work on one task at a time; each task, once started, may not be interrupted.\nHouse construction tasks:\n| Task | Duration | Preceding tasks |\n|-----------|----------|----------------------|\n| masonry | 35 | | \n| carpentry | 15 | masonry | \n| plumbing | 40 | masonry | \n| ceiling | 15 | masonry | \n| roofingv | 5 | carpentry | \n| painting | 10 | ceiling | \n| windows | 5 | roofing | \n| facade | 10 | roofing, plumbing | \n| garden | 5 | roofing, plumbing | \n| moving | 5 | windows, facade, garden,painting | \nThere is an earliest starting date for each of the five houses that must be built.\n| House | Earliest starting date |\n|---|----|\n| 0 | 31 |\n| 1 | 0 |\n| 2 | 90 |\n| 3 | 120|\n| 4 | 90 |\nSolving the problem consists of determining starting dates for the tasks such that\nthe overall completion date is minimized.\nStep 1: Describe the problem\nThe first step in modeling and solving the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.\n\nWhat is the known information in this problem ?\n\nThere are five houses to be built by three workers. For each house, there are ten house building tasks, each with a given size and cost. For each task, there is a list of tasks that must be completed before the task can start. There is a starting cash balance of a given amount, and, each sixty days, the cash balance is increased by a given amount.\n\nWhat are the decision variables or unknowns in this problem ?\n\nThe unknown is the point in time that each task will start. Once starting dates have been fixed, the overall completion date will also be fixed.\n\nWhat are the constraints on these variables ?\n\nThere are constraints that specify that a particular task may not begin until one or more given tasks have been completed. Each task requires any one of the three workers. In addition, there are constraints that specify that a worker can be assigned to only one task at a time. 
Before a task can start, the cash balance must be large enough to pay the cost of the task.\n\nWhat is the objective ?\n\nThe objective is to minimize the overall completion date.\nStep 2: Prepare data\nIn the related data file, the data provided includes the number of houses (NbHouses), the number of workers (NbWorkers), the names of the tasks (TaskNames), the sizes of the tasks (Duration), the precedence relations (Precedences), and the earliest start dates of the houses (ReleaseDate).\nAs each house has an earliest starting date, the task interval variables are declared to have a start date no earlier than that release date of the associated house. The ending dates of the tasks are not constrained, so the upper value of the range for the variables is maxint.",
"NbWorkers = 3\nNbHouses = 5\n\nTaskNames = {\"masonry\",\"carpentry\",\"plumbing\",\n \"ceiling\",\"roofing\",\"painting\",\n \"windows\",\"facade\",\"garden\",\"moving\"}\n\nDuration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]\n\nReleaseDate = [31, 0, 90, 120, 90]\n\nPrecedences = [(\"masonry\", \"carpentry\"), (\"masonry\", \"plumbing\"), (\"masonry\", \"ceiling\"),\n (\"carpentry\", \"roofing\"), (\"ceiling\", \"painting\"), (\"roofing\", \"windows\"),\n (\"roofing\", \"facade\"), (\"plumbing\", \"facade\"), (\"roofing\", \"garden\"),\n (\"plumbing\", \"garden\"), (\"windows\", \"moving\"), (\"facade\", \"moving\"),\n (\"garden\", \"moving\"), (\"painting\", \"moving\")]\n\nHouses = range(NbHouses)",
"Step 3: Create the interval variables",
"import sys\nfrom docplex.cp.model import *\n\nmdl4 = CpoModel()\n\nitvs = {}\nfor h in Houses:\n for i,t in enumerate(TaskNames):\n itvs[h,t] = mdl4.interval_var(start = [ReleaseDate[h], INTERVAL_MAX], size=Duration[i])",
"As the workers are equivalent in this problem, it is better to represent them as one pool of workers instead of as individual workers with no overlap constraints as was done in the earlier examples. \nThe expression representing usage of this pool of workers can be modified by the interval variables that require a worker.\nTo model both the limited number of workers and the limited budget, we need to represent the sum of the individual contributions associated with the interval variables. \nIn the case of the cash budget, some tasks consume some of the budget at the start. \nIn the case of the workers, a task requires the worker only for the duration of the task.\nStep 4: Declare the worker usage function\nA cumulative function expression, can be used to model a resource usage function over time. \nThis function can be computed as a sum of interval variable demands on a resource over time.\nAn interval usually increases the cumulated resource usage function at its start time and decreases it when it releases the resource at its end time (pulse function).\nFor resources that can be produced and consumed by activities (for instance the contents of an inventory or a tank), the resource level can also be described as a function of time. \nA production activity will increase the resource level at the start or end time of the activity whereas a consuming activity will decrease it. \nThe cumulated contribution of activities on the resource can be represented by a function of time, and constraints can be modeled on this function (for instance, a maximal or a safety level).\nThe value of the expression at any given moment in time is constrained to be non-negative. A cumulative function expression can be modified with the atomic demand functions:\n* step(), which increases or decreases the level of the function by a given amount at a given time,\n* pulse(), which increases or decreases the level of the function by a given amount for the length of a given interval variable or fixed interval,\n* step_at_start(), which increases or decreases the level of the function by a given amount at the start of a given interval variable,\n* step_at_end(), which increases or decreases the level of the function by a given amount at the end of a given interval variable.\nA cumulative function expression can be constrained to model limited resource capacity by constraining that the function be ≤ the capacity.\nTwo cumulative functions are required, one to represent the usage of the workers and the other to represent the cash balance.\nEach task requires one worker from the start to the end of the task interval. \nA cumulative function expression, workerUsage is used to represent the fact that a worker is required for the task.\nThis function is constrained to not exceed the number of workers at any point in time. \nThe function pulse() adjusts the expression by a given amount on the interval. \nSumming these pulse atoms over all the interval variables results in an expression that represents worker usage over the entire time frame for building the houses.",
"workers_usage = step_at(0, 0)\nfor h in Houses:\n for t in TaskNames:\n workers_usage += mdl4.pulse(itvs[h,t],1)",
"Step 5: Declare the cash budget function\nA cumulative function cach is also used to model the cash budget. \nTo set the initial cash balance of 30,000 dollars and increase the balance by 30,000 every sixty days, the function step_at() is used to increment or decrement the cumulative function expression by a fixed amount on a given date.\nEach task requires a cash payment equal to 200 dollars a day for the length of the task, payable at the start of the task. \nThe function step_at_start() is used to adjust the cash balance cumulative function expression the appropriate amount for every task.",
"cash = step_at(0, 0)\nfor p in Houses:\n cash += mdl4.step_at(60*p, 30000)\n\nfor h in Houses:\n for i,t in enumerate(TaskNames):\n cash -= mdl4.step_at_start(itvs[h,t], 200*Duration[i])",
"Step 6: Add the temporal constraints\nThe tasks have precedence constraints that are added to the model.",
"for h in Houses:\n for p in Precedences:\n mdl4.add( mdl4.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )",
"Step 7: Add the worker usage constraint\nThere is a limited number of workers, and the cumulative function expression representing worker usage must be constrained to not be greater than the number of workers..",
"mdl4.add( workers_usage <= NbWorkers )",
"Step 8: Add the cash budget constraint\nThe budget must always be nonnegative, and the cumulative function expression representing the cash budget must be greater than 0.",
"mdl4.add( cash >= 0 )",
"Step 9: Add the objective\nThe objective of this problem is to minimize the overall completion date (the completion date of the house that is completed last). \nThe maximum completion date among the individual house projects is determined using the expression end_of() on the last task in building each house (here, it is the moving task) and minimize the maximum of these expressions.",
"mdl4.add(\n mdl4.minimize( \n mdl4.max( mdl4.end_of(itvs[h,\"moving\"]) for h in Houses)\n )\n)",
"Step 10: Solve the model\nThe search for an optimal solution in this problem could potentiality take a long time, so a fail limit has been placed on the solve process. \nThe search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. \nThe code for limiting the solve process is:",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol4 = mdl4.solve(FailLimit=30000)\nprint(\"done\")\n\nif msol4:\n print(\"Cost will be \" + str( msol4.get_objective_values()[0] ))\n\n import docplex.cp.utils_visu as visu\n import matplotlib.pyplot as plt\n %matplotlib inline\n #Change the plot size\n from pylab import rcParams\n rcParams['figure.figsize'] = 15, 3\n\n workersF = CpoStepFunction()\n cashF = CpoStepFunction()\n for p in range(5):\n cashF.add_value(60 * p, INT_MAX, 30000)\n for h in Houses:\n for i,t in enumerate(TaskNames):\n itv = msol4.get_var_solution(itvs[h,t])\n workersF.add_value(itv.get_start(), itv.get_end(), 1)\n cashF.add_value(itv.start, INT_MAX, -200 * Duration[i])\n\n visu.timeline('Solution SchedCumul')\n visu.panel(name=\"Schedule\")\n for h in Houses:\n for i,t in enumerate(TaskNames):\n visu.interval(msol4.get_var_solution(itvs[h,t]), h, t)\n visu.panel(name=\"Workers\")\n visu.function(segments=workersF, style='area')\n visu.panel(name=\"Cash\")\n visu.function(segments=cashF, style='area', color='gold')\n visu.show()\nelse:\n print(\"No solution found\")",
"Chapter 6. Using alternative resources in the house building problem\nThis chapter presents how to use alternative resources in the house building problem. The following concepts are presented:\n* use the constraints alternative and presence_of,\n* use the function optional.\nEach house has a maximal completion date. \nMoreover, there are three workers, and one of the three is required for each task. \nThe three workers have varying levels of skills with regard to the various tasks; if a worker has no skill for a particular task, he may not be assigned to the task. \nFor some pairs of tasks, if a particular worker performs one of the pair on a house, then the same worker must be assigned to the other of the pair for that house. \nThe objective is to find a solution that maximizes the task associated skill levels of the workers assigned to the tasks. \nProblem to be solved\nThe problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and maximizes a criterion. The criterion for this problem is to maximize the task associated skill levels of the workers assigned to the tasks.\nFor each task type in the house building project, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. A worker can only work on one task at a time; each task, once started, may not be interrupted.\nHouse construction tasks:\n| Task | Duration | Preceding tasks |\n|-----------|----------|-----------------|\n| masonry | 35 | |\n| carpentry | 15 | masonry |\n| plumbing | 40 | masonry |\n| ceiling | 15 | masonry |\n| roofing | 5 | carpentry |\n| painting | 10 | ceiling |\n| windows | 5 | roofing |\n| facade | 10 | roofing, plumbing |\n| garden | 5 | roofing, plumbing |\n| moving | 5 | windows, facade, garden,painting |\nEvery house must be completed within 300 days. There are three workers with varying skill levels in regard to the ten tasks. If a worker has a skill level of zero for a task, he may not be assigned to the task.\nWorker-task skill levels:\n| Task | Joe | Jack | Jim |\n|-----------|-----|------|-----|\n| masonry | 9 | 5 | 0 | \n| carpentry | 7 | 0 | 5 | \n| plumbing | 0 | 7 | 0 | \n| ceiling | 5 | 8 | 0 | \n| roofing | 6 | 7 | 0 | \n| painting | 0 | 9 | 6 | \n| windows | 8 | 0 | 5 | \n| façade | 5 | 5 | 0 | \n| garden | 5 | 5 | 9 | \n| moving | 6 | 0 | 8 | \nFor Jack, if he performs the roofing task or facade task on a house, then he must perform the other task on that house. For Jim, if he performs the garden task or moving task on a house, then he must perform the other task on that house. For\nJoe, if he performs the masonry task or carpentry task on a house, then he must perform the other task on that house. Also, if Joe performs the carpentry task or roofing task on a house, then he must perform the other task on that house.\nStep 1: Describe the problem\nThe first step in modeling and solving the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.\n\nWhat is the known information in this problem ?\n\nThere are five houses to be built by three workers. For each house, there are ten house building tasks, each with a given size. For each task, there is a list of tasks that must be completed before the task can start. Each worker has a skill level associated with each task. 
There is an overall deadline for the work to be completed on the five houses.\n\nWhat are the decision variables or unknowns in this problem ?\n\nThe unknown is the point in time that each task will start. It is also unknown which worker will be assigned to each task.\n\nWhat are the constraints on these variables ?\n\nThere are constraints that specify that a particular task may not begin until one or more given tasks have been completed. In addition, there are constraints that specify that each task must have one worker assigned to it, that a worker can be assigned to only one task at a time and that a worker can be assigned only to tasks for which he has some level of skill. There are pairs of tasks such that, if one task of the pair is done by a particular worker for a house, then the other task for that house must be done by the same worker.\n\nWhat is the objective ?\n\nThe objective is to maximize the skill levels used.\nStep 2: Prepare data\nIn the related data file, the data provided includes the number of houses (NbHouses), the names of the workers (Workers), the names of the tasks (Tasks), the sizes of the tasks (Durations), the precedence relations (Precedences), and the overall deadline for the construction of the houses (Deadline).\nThe data also includes a tupleset, Skills. Each tuple in the set consists of a worker, a task, and the skill level that the worker has for the task. In addition, there is a tupleset, Continuities, which is a set of triples (a pair of tasks and a worker). If one of the two tasks in a pair is performed by the worker for a given house, then the other task in the pair must be performed by the same worker for that house.\nTwo matrices of interval variables are created in this model. \nThe first, tasks, is indexed on the houses and tasks and must be scheduled in the interval [0..Deadline]. \nThe other matrix of interval variables is indexed on the houses and the Skills tupleset. \nThese interval variables are optional and may or may not be present in the solution. \nThe intervals that are performed will represent which worker performs which task.",
"NbHouses = 5\nDeadline = 318\n\nWorkers = [\"Joe\", \"Jack\", \"Jim\"]\n\nTasks = [\"masonry\", \"carpentry\", \"plumbing\", \"ceiling\",\"roofing\", \"painting\", \"windows\", \"facade\",\"garden\", \"moving\"]\n\nDurations = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]\n\nSkills = [(\"Joe\",\"masonry\",9),(\"Joe\",\"carpentry\",7),(\"Joe\",\"ceiling\",5),(\"Joe\",\"roofing\",6), \n (\"Joe\",\"windows\",8),(\"Joe\",\"facade\",5),(\"Joe\",\"garden\",5),(\"Joe\",\"moving\",6),\n (\"Jack\",\"masonry\",5),(\"Jack\",\"plumbing\",7),(\"Jack\",\"ceiling\",8),(\"Jack\",\"roofing\",7),\n (\"Jack\",\"painting\",9),(\"Jack\",\"facade\",5),(\"Jack\",\"garden\",5),(\"Jim\",\"carpentry\",5),\n (\"Jim\",\"painting\",6),(\"Jim\",\"windows\",5),(\"Jim\",\"garden\",9),(\"Jim\",\"moving\",8)]\n\nPrecedences = [(\"masonry\",\"carpentry\"),(\"masonry\",\"plumbing\"),(\"masonry\",\"ceiling\"),\n (\"carpentry\",\"roofing\"),(\"ceiling\",\"painting\"),(\"roofing\",\"windows\"),\n (\"roofing\",\"facade\"),(\"plumbing\",\"facade\"),(\"roofing\",\"garden\"),\n (\"plumbing\",\"garden\"),(\"windows\",\"moving\"),(\"facade\",\"moving\"),\n (\"garden\",\"moving\"),(\"painting\",\"moving\")\n ]\n \nContinuities = [(\"Joe\",\"masonry\",\"carpentry\"),(\"Jack\",\"roofing\",\"facade\"), \n (\"Joe\",\"carpentry\", \"roofing\"),(\"Jim\",\"garden\",\"moving\")]\n\nnbWorkers = len(Workers)\nHouses = range(NbHouses)",
"Step 3: Create the interval variables",
"import sys\nfrom docplex.cp.model import *\n\nmdl5 = CpoModel()\n\ntasks = {}\nwtasks = {}\nfor h in Houses:\n for i,t in enumerate(Tasks):\n tasks[(h,t)] = mdl5.interval_var(start=[0,Deadline], size=Durations[i])\n for s in Skills:\n wtasks[(h,s)] = mdl5.interval_var(optional=True)",
"Step 4: Add the temporal constraints\nThe tasks in the model have precedence constraints that are added to the model.",
"for h in Houses:\n for p in Precedences:\n mdl5.add( mdl5.end_before_start(tasks[h,p[0]], tasks[h,p[1]]) )",
"Step 5: Add the alternative constraints\nthe specialized constraint alternative() is used to constrain the solution so that exactly one of the interval variables tasks associated with a given task of a given house is to be present in the solution, \nThe constraint alternative() creates a constraint between an interval and a set of intervals that specifies that if the given interval is present in the solution, then exactly one interval variable of the set is present in the solution.\nIn other words, consider an alternative constraint created with an interval variable a and an array of interval variables bs. If a is present in the solution, then exactly one of the interval variables in bs will be present, and a starts and ends together with this chosen interval.",
"for h in Houses:\n for t in Tasks:\n mdl5.add( mdl5.alternative(tasks[h,t], [wtasks[h,s] for s in Skills if s[1]==t]) )",
"The expression presence_of() is used to represent whether a task is performed by a worker. \nThe constraint presence_of() is true if the interval variable is present in and is false if the interval variable is absent from the solution.\nFor each house and each given pair of tasks and worker that must have continuity, a constraint states that if the interval variable for one of the two tasks for the worker is present, the interval variable associated with that worker and the other task must also be present.",
"for h in Houses:\n for c in Continuities:\n for (worker1, task1, l1) in Skills:\n if worker1 == c[0] and task1 == c[1]:\n for (worker2, task2, l2) in Skills:\n if worker2 == c[0] and task2 == c[2]:\n mdl5.add(\n mdl5.presence_of(wtasks[h,(c[0], task1, l1)]) \n == \n mdl5.presence_of(wtasks[h,(c[0], task2, l2)])\n )",
"Step 7: Add the no overlap constraints\nThe constraint no_overlap() allows to specify that a given worker can be assigned only one task at a given moment in time.",
"for w in Workers:\n mdl5.add( mdl5.no_overlap([wtasks[h,s] for h in Houses for s in Skills if s[0]==w]) )",
"Step 8: Add the objective\nThe presence of an interval variable in the solution must be accounted in the objective. Thus for each of these possible tasks, the cost is incremented by the product of the skill level and the expression representing the presence of the interval variable in the solution.\nThe objective of this problem is to maximize the skill levels used for all the tasks, then to maximize the expression.",
"mdl5.add(\n mdl5.maximize(\n mdl5.sum( s[2] * mdl5.presence_of(wtasks[h,s]) for h in Houses for s in Skills)\n )\n)",
"Step 9: Solve the model\nThe search for an optimal solution in this problem could potentiality take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed.",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol5 = mdl5.solve(FailLimit=30000)\nprint(\"done\")\n\nif msol5:\n print(\"Cost will be \"+str( msol5.get_objective_values()[0] ))\n\n worker_idx = {w : i for i,w in enumerate(Workers)}\n worker_tasks = [[] for w in range(nbWorkers)] # Tasks assigned to a given worker\n for h in Houses:\n for s in Skills:\n worker = s[0]\n wt = wtasks[(h,s)]\n worker_tasks[worker_idx[worker]].append(wt)\n\n import docplex.cp.utils_visu as visu\n import matplotlib.pyplot as plt\n %matplotlib inline\n #Change the plot size\n from pylab import rcParams\n rcParams['figure.figsize'] = 15, 3\n\n visu.timeline('Solution SchedOptional', 0, Deadline)\n for i,w in enumerate(Workers):\n visu.sequence(name=w)\n for t in worker_tasks[worker_idx[w]]:\n wt = msol5.get_var_solution(t)\n if wt.is_present():\n #if desc[t].skills[w] == max(desc[t].skills):\n # Green-like color when task is using the most skilled worker\n # color = 'lightgreen'\n #else:\n # Red-like color when task does not use the most skilled worker\n # color = 'salmon'\n color = 'salmon'\n visu.interval(wt, color, wt.get_name())\n visu.show()\nelse:\n print(\"No solution found\")",
"Chapter 7. Using state functions: house building with state incompatibilities\nThis chapter describes how to use state functions to take into account incompatible states as tasks finish. Following concepts are presented:\n* use the stateFunction,\n* use the constraint alwaysEqual.\nThere are two workers, and each task requires either one of the two workers. \nA subset of the tasks require that the house be clean, whereas other tasks make the house dirty. \nA transition time is needed to change the state of the house from dirty to clean. \nProblem to be solved\nThe problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes an expression. The objective for this problem is to minimize the overall completion date.\nFor each task type in the house building project, the following table shows the duration of the task in days along with state of the house during the task. A worker can only work on one task at a time; each task, once started, may not be interrupted.\nHouse construction tasks:\n| Task | Duration | State | Preceding tasks |\n|-----------|----------|-------|-----------------|\n| masonry | 35 | dirty | | \n| carpentry | 15 | dirty | masonry | \n| plumbing | 40 | clean | masonry | \n| ceiling | 15 | clean | masonry | \n| roofing | 5 | dirty | carpentry | \n| painting | 10 | clean | ceiling |\n| windows | 5 | dirty | roofing | \n| facade | 10 | | roofing, plumbing| \n| garden | 5 | | roofing, plumbing| \n| moving | 5 | | windows, facade,garden, painting| \nSolving the problem consists of determining starting dates for the tasks such that\nthe overall completion date is minimized.\nStep 1: Describe the problem\nThe first step in modeling and solving the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.\n\nWhat is the known information in this problem ?\n\nThere are five houses to be built by two workers. For each house, there are ten house building tasks, each with a given size. For each task, there is a list of tasks that must be completed before the task can start. There are two workers. There is a transition time associated with changing the state of a house from dirty to clean.\n\nWhat are the decision variables or unknowns in this problem ?\n\nThe unknowns are the date that each task will start. The cost is determined by the assigned start dates.\n\nWhat are the constraints on these variables ?\n\nThere are constraints that specify that a particular task may not begin until one or more given tasks have been completed. Each task requires either one of the two workers. Some tasks have a specified house cleanliness state.\n\nWhat is the objective ?\n\nThe objective is to minimize the overall completion date.\nStep 2: Prepare data\nIn the related data, the data provided includes the number of houses (NbHouses), the number of workers (NbWorkers), the names of the tasks (TaskNames), the sizes of the tasks (Duration), the precedence relations (Precedences), and the cleanliness state of each task (AllStates).\nEach house has a list of tasks that must be scheduled. The duration, or size, of each task t is Duration[t]. Using this information, a matrix task of interval variables can be built.",
"NbHouses = 5\nNbWorkers = 2\nAllStates = [\"clean\", \"dirty\"]\n\nTaskNames = [\"masonry\",\"carpentry\", \"plumbing\", \"ceiling\",\"roofing\",\"painting\",\"windows\",\"facade\",\"garden\",\"moving\"]\n\nDuration = [35,15,40,15,5,10,5,10,5,5]\n\nStates = [(\"masonry\",\"dirty\"),(\"carpentry\",\"dirty\"),(\"plumbing\",\"clean\"),\n (\"ceiling\",\"clean\"),(\"roofing\",\"dirty\"),(\"painting\",\"clean\"),\n (\"windows\",\"dirty\")]\n\nPrecedences = [(\"masonry\",\"carpentry\"),(\"masonry\",\"plumbing\"),(\"masonry\",\"ceiling\"),\n (\"carpentry\",\"roofing\"),(\"ceiling\",\"painting\"),(\"roofing\",\"windows\"),\n (\"roofing\",\"facade\"),(\"plumbing\",\"facade\"),(\"roofing\",\"garden\"),\n (\"plumbing\",\"garden\"),(\"windows\",\"moving\"),(\"facade\",\"moving\"),\n (\"garden\",\"moving\"),(\"painting\",\"moving\")]\n\nHouses = range(NbHouses)",
"Step 3: Create the interval variables",
"import sys\nfrom docplex.cp.model import *\n\nmdl6 = CpoModel()\n\ntask = {}\nfor h in Houses:\n for i,t in enumerate(TaskNames):\n task[(h,t)] = mdl6.interval_var(size = Duration[i])",
"Step 4: Declare the worker usage functions\nAs in the example Chapter 5, “Using cumulative functions in the house building problem”, each task requires one worker from the start to the end of the task interval. To represent the fact that a worker is required for the task, a cumulative function expression workers is created. \nThis function is constrained to not exceed the number of workers at any point in time.\nhe function pulse adjusts the expression by a given amount on the interval. \nSumming these pulse atoms over all the interval variables results in an expression that represents worker usage over the entire time frame for building the houses.",
"workers = step_at(0, 0)\nfor h in Houses:\n for t in TaskNames:\n workers += mdl6.pulse(task[h,t], 1)",
"Step 5: Create the transition times\nThe transition time from a dirty state to a clean state is the same for all houses. \nAs in the example Chapter 3, “Adding workers and transition times to the house building problem”, a tupleset ttime is created to represent the transition time between cleanliness states.",
"Index = {s : i for i,s in enumerate(AllStates)}\n\nttvalues = [[0, 0], [0, 0]]\nttvalues[Index[\"dirty\"]][Index[\"clean\"]] = 1\nttime = transition_matrix(ttvalues, name='TTime')",
"Step 6: Declare the state function\nCertain tasks require the house to be clean, and other tasks cause the house to be dirty. \nTo model the possible states of the house, the state function function is used to represent the disjoint states through time.\nA state function is a function describing the evolution of a given feature of the environment. \nThe possible evolution of this feature is constrained by interval variables of the problem. \nFor example, a scheduling problem may contain a resource whose state changes over time. \nThe resource state can change because of scheduled activities or because of exogenous events; some activities in the schedule may need a particular resource state in order to execute.\nInterval variables have an absolute effect on a state function, requiring the function value to be equal to a particular state or in a set of possible states.",
"state = { h : state_function(ttime, name=\"house\"+str(h)) for h in Houses}",
"Step 7: Add the constraints\nTo model the state required or imposed by a task, a constraint is created to specifies the state of the house throughout the interval variable representing that task.\nThe constraint always_equal(), specifies the value of a state function over the interval variable.\nThe constraint takes as parameters a state function, an interval variable, and a state value.\nWhenever the interval variable is present, the state function is defined everywhere between the start and the end of the interval variable and remains equal to the specified state value over this interval.\nThe state function is constrained to take the appropriate values during the tasks that require the house to be in a specific state.\nTo add the constraint that there can be only two workers working at a given time, the cumulative function expression representing worker usage is constrained to not be greater than the value NbWorkers.",
"for h in Houses:\n for p in Precedences:\n mdl6.add( mdl6.end_before_start(task[h,p[0]], task[h,p[1]]) )\n\n for s in States:\n mdl6.add( mdl6.always_equal(state[h], task[h,s[0]], Index[s[1]]) )\n\nmdl6.add( workers <= NbWorkers )",
"Step 8: Add the objective\nThe objective of this problem is to minimize the overall completion date (the completion date of the house that is completed last).",
"mdl6.add(mdl6.minimize( mdl6.max( mdl6.end_of(task[h,\"moving\"]) for h in Houses )))",
"Step 9: Solve the model\nThe search for an optimal solution in this problem could potentiality take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. \nThe code for limiting the solve process is given below:",
"# Solve the model\nprint(\"\\nSolving model....\")\nmsol6 = mdl6.solve(FailLimit=30000)\nprint(\"done\")\n\nif msol6:\n print(\"Cost will be \" + str( msol6.get_objective_values()[0] ))\n\n import docplex.cp.utils_visu as visu\n import matplotlib.pyplot as plt\n %matplotlib inline\n #Change the plot size\n from pylab import rcParams\n rcParams['figure.figsize'] = 15, 3\n\n workers_function = CpoStepFunction()\n for h in Houses:\n for t in TaskNames:\n itv = msol6.get_var_solution(task[h,t])\n workers_function.add_value(itv.get_start(), itv.get_end(), 1)\n\n visu.timeline('Solution SchedState')\n visu.panel(name=\"Schedule\")\n for h in Houses:\n for t in TaskNames:\n visu.interval(msol6.get_var_solution(task[h,t]), h, t)\n\n\n visu.panel(name=\"Houses state\")\n for h in Houses:\n f = state[h]\n visu.sequence(name=f.get_name(), segments=msol6.get_var_solution(f))\n visu.panel(name=\"Nb of workers\")\n visu.function(segments=workers_function, style='line')\n visu.show()\nelse:\n print(\"No solution found\")",
"Summary\nHaving completed this notebook, the reader should be able to:\n- Describe the characteristics of a Scheduling problem in terms of the objective, decision variables and constraints\n- Formulate a simple Scheduling model on paper\n- Conceptually explain the buidling blocks of a scheduling model\n- Write a simple model with docplex\nReferences\n\nCPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here.\nContact us at [email protected].\n\nCopyright © 2017, 2021 IBM. IPLA licensed Sample Materials."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joommf/tutorial | workshops/Durham/reference/basics-python.ipynb | bsd-3-clause | [
"Python basics\nObjects, dir, help\n\nEverything in Python is an object\nObjects have attributes\nUse TAB for autocompletion\n\nExample: a list",
"a = [\"Hello\", \"World\", 42]",
"Type a., then press tab to see attributes:\nAlternatively, use the dir(a) command to see the attributes (ignore everything starting with __):",
"dir(a)",
"Imagine we want to find out what the append attribute is: use help(a.append) or a.append? to learn more about an attribute:",
"help(a.append)",
"Let's try this:",
"print(a)\n\na.append(\"New element\")\n\nprint(a)",
"Comments\nAnything following a # sign is considered a comment (to the end of the line)",
"d = 20e-9 # distance in metres",
"Importing libraries\nThe core Python commands can be extened through importing additonal libraries.\nimport syntax 1",
"import math\nmath.sin(0)",
"import syntax 2",
"import math as m\nm.sin(0)",
"Functions\nA function is defined in Python using the def keyword. For example, the greet function accepts two input arguments, and concatenates them to become a greeting:",
"def greet(greeting, name):\n \"\"\"Optional documentation string, inclosed in tripple quotes.\n Can extend over mutliple lines.\"\"\"\n print(greeting + \" \" + name)\n\ngreet(\"Hello\", \"World\")\n\ngreet(\"Bonjour\", \"tout le monde\")",
"In above examples, the input argument to the function has been identified by the order of the arguments. \nIn general, we prefer another way of passing the input arguments as \n- this provides additional clarity and \n- the order of the arguments stops to matter.",
"greet(greeting=\"Hello\", name=\"World\")\n\ngreet(name=\"World\", greeting=\"Hello\")",
"Note that the names of the input arguments can be displayed intermittently if you type greet( and then press SHIFT+TAB (the cursor needs to be just to the right of the opening paranthesis).\nA loop",
"def say_hello(name): # function\n print(\"Hello \" + name)\n\n# main program starts here\nnames = [\"Landau\", \"Lifshitz\", \"Gilbert\"]\nfor name in names:\n say_hello(name=name)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rtidatascience/connected-nx-tutorial | notebooks/5. Link Prediction.ipynb | mit | [
"Link Prediction\n\nDefinition of Link Prediction\nPerform link prediction on dataset\nJaccard coefficient\nPreferential Attachment",
"import networkx as nx\nimport matplotlib.pyplot as plt # for plotting graphs\n%matplotlib inline\n\nGA = nx.read_gexf('../data/ga_graph.gexf')",
"Link Prediction\nThe idea of link prediction was first proposed by Liben-Nowell and Kleinberg in 2004 as the following question:\n\n\"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future?\"\n\nIt's an inticing idea and has led to many interesting developments in the network literature. For our example, the question could be rephrased as:\n\n\"Given a snapshot of the Grey's Anatomy relationship network, can we infer which new relationships are likely to occur in the near future?\"\n\nSounds awesome, but how does it work? \nJaccard Coefficient\nThe most popular measures for link prediction analyze the “proximity” of nodes in a network. One way to measure proximity is to see what proportion of neighbors a pair of nodes share. This can be capture succintly with the Jaccard index. \n\n\nIn the context of a network, we're comparing sets of neighbors:\n$$ Jaccard = \\frac{|\\Gamma(u) \\cap \\Gamma(v)|}{|\\Gamma(u) \\cup \\Gamma(v)|} $$\nwhere $\\Gamma(u)$ denotes the set of neighbors of $u$.",
"preds_jc = nx.jaccard_coefficient(GA)\n\npred_jc_dict = {}\nfor u, v, p in preds_jc:\n pred_jc_dict[(u,v)] = p\n\nsorted(pred_jc_dict.items(), key=lambda x:x[1], reverse=True)[:10]\n\nextra_attrs = {'finn':('Finn Dandridge','M','S'),\n 'olivia':('Olivia Harper','F','S'),\n 'steve':('Steve Murphy','M','S'),\n 'torres':('Callie Torres','F','B'),\n 'colin':('Colin Marlow','M','S'),\n 'grey':('Meredith Grey','F','S'),\n 'mrs. seabury':('Dana Seabury','F','S'),\n 'altman':('Teddy Altman','F','S'),\n 'tucker':('Tucker Jones','M','S'),\n 'ben':('Ben Warren','M','S'),\n \"o'malley\":(\"George O'Malley\",'M','S'),\n 'thatch grey':('Thatcher Grey','M','S'),\n 'susan grey':('Susan Grey','F','S'),\n 'derek':('Derek Shepherd','M','S'),\n 'chief':('Richard Webber','M','S'),\n 'addison':('Addison Montgomery','F','S'),\n 'karev':('Alex Karev','M','S'),\n 'hank':('Hank','M','S'),\n 'lexi':('Lexie Grey','F','S'),\n 'adele':('Adele Webber','F','S'),\n 'owen':('Owen Hunt','M','S'),\n 'sloan':('Mark Sloan','M','S'),\n 'arizona':('Arizona Robbins','F','G'),\n 'izzie':('Izzie Stevens','F','S'),\n 'preston':('Preston Burke','M','S'),\n 'kepner':('April Kepner','M','S'),\n 'bailey':('Miranda Bailey','F','S'),\n 'ellis grey':('Ellis Grey','F','S'),\n 'denny':('Denny Duquette','M','S'),\n 'yang':('Cristina Yang','F','S'),\n 'nancy':('Nancy Shepherd','F','S'),\n 'avery':('Jackson Avery','M','S')}\n\nfor i in GA.nodes():\n GA.node[i][\"full_name\"] = extra_attrs[i][0]\n GA.node[i][\"gender\"] = extra_attrs[i][1]\n GA.node[i][\"orientation\"] = extra_attrs[i][2]\n\nGA.node['grey']",
"Preferential Attachment\nThe preferential attachement methods mirrors the “rich get richer” -- nodes with more connections will be the ones to be more likely to get future connections. \n\nEssentially, the measure is the product of a node pairs degrees:\n$$ PA = |\\Gamma(u)| \\bullet |\\Gamma(v)|$$\nwhere $\\Gamma(u)$ denotes the set of neighbors (degree) of $u$.",
"preds_pa = nx.preferential_attachment(GA)\n\npred_pa_dict = {}\nfor u, v, p in preds_pa:\n pred_pa_dict[(u,v)] = p\n\nsorted(pred_pa_dict.items(), key=lambda x:x[1], reverse=True)[:10]",
"Other Link Prediction Algorithms\n\nCommon Neighbors\nResource Allocation Index\nAdamic-Adar index"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tleonhardt/CodingPlayground | dataquest/JupyterNotebook/Basics.ipynb | mit | [
"White House Employee Data\nThe 2015_white_house.csv* file contains data on White House employees in 2015, and their salares. Here are the columns:\n\nName -- the name of the employee.\nStatus -- whether the employee was a White hHuse employee, or detailed to the White House.\nSalary -- the employee salary, in USD.\nPay Basis -- the time period the salary is expressed over.\nPosition Title -- the title of the employee.",
"import pandas as pd\nwhite_house = pd.read_csv(\"../data/2015_white_house.csv\")\nprint(white_house.shape)\n\nprint(white_house.iloc[-1])\n\nwhite_house\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\n\nplt.hist(white_house[\"Salary\"])\nplt.show()",
"So far we have imported a dataset from a CSV file into a Pandas DataFrame using the read_csv() function. Then we displayed the data, first as a table, and secondly as a historgram.\nQuestions About the Data\nThere are a near infinite number of questions we could possibly ask about this data. But to get started, here are a few example questions that could be asked:\n\nHow does length of employee titles correlate to salary?\nHow much does the White House pay in total salary?\nWho are the highest and lowest paid staffers?\nWhat words are the most common in titles?\n\nHow does the length of employee titles correlate to salary?\nSteps for figuring this out may look like the following:\n1. Calculate the length of each employee title - should be able to use apply() to get this\n1. Add a column to the DataFrame containing the length of the employee title\n1. Plot length of employee title versus employee salary (could also use direct correlation, but visual plot is good)",
"# Calculate the length of each employee's title and add to the DataFrame\nwhite_house['LengthOfTitle'] = white_house['Position Title'].apply(len)\nwhite_house.head()\n\n# Plot the length of employee title versus salary to look for correlation\nplt.plot(white_house['LengthOfTitle'], white_house['Salary'])\nplt.title('How does length of employee titles correlate to salary?')\nplt.xlabel('Length of Employee Title')\nplt.ylabel('Salary ($)')",
"Uh ok, maybe I was wrong about visuallizing being great for detecting correlation ;-)\nIt looks like there may be a weak positive correlation. But it is really hard to tell.\nMaybe we should just numerically calculate the correlation. \nAlso, it looks like there are some low salary outliers. Should we check to make sure we aren't mixing in monthly salaries with yearly ones?",
"# Get the values in Pay Basis and figure out how many unique ones there are\ntypes_of_pay_basis = set(white_house['Pay Basis'])\ntypes_of_pay_basis",
"Ok, only one pay basis, annually. So that wasn't an issue.",
"# Compute pairwise correlation of columns, excluding NA/null values\ncorrelations = white_house.corr()\ncorrelations\n\n# Linear Regression using ordinary least squares\nimport statsmodels.api as sm\nmodel = sm.OLS(white_house['Salary'], white_house['LengthOfTitle'])\nresiduals = model.fit()\nprint(residuals.summary())",
"So yea, there is a real positive correlation between length of employee title and salary!\nHow much does the White House pay in total salary?",
"total_salary = sum(white_house['Salary'])\ntotal_salary",
"The white house pays about $40 Million per year in total salary.\nWho are the highest and lowest paid staffers?",
"highest_paid = white_house[white_house['Salary'] == max(white_house['Salary'])]\nhighest_paid\n\nlowest_paid = white_house[white_house['Salary'] == min(white_house['Salary'])]\nlowest_paid",
"Wow, who are these poor unpaid schmucks?\nWhat words are the most common in titles?\nThis is another multi-step one that is a bit more involved. One approach to solving it might go like the following:\n1. Create an empty dictionary or Series\n1. Parse all words in each title, splititng at whitespace, possibly adding them to one bit list of words\n1. Increment count for a word each time you see it"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ogaway/Econometrics | SimultaneousEquation.ipynb | gpl-3.0 | [
"同時方程式体系\n『Rによる計量経済学』第10章「同時方程式体系」をPythonで実行する。\nテキスト付属データセット(「k1001.csv」等)については出版社サイトよりダウンロードしてください。\nまた、以下の説明は本書の一部を要約したものですので、より詳しい説明は本書を参照してください。 \n例題10.1\n次のような供給関数と需要関数を推定する。\n$Q_{t} = \\alpha_{0} + \\alpha_{1} P_{t} + \\alpha_{2} E_{t} + u_{t}$\n$Q_{t} = \\beta_{0} + \\beta_{1} P_{t} + \\beta_{2} A_{t} + v_{t}$\nただし、$Q_{t}$ は数量、$P_{t}$ は価格、$E_{t}$ は供給関数シフト要因、$A_{t}$ は需要関数シフト要因とする。",
"%matplotlib inline\n\n# -*- coding:utf-8 -*-\nfrom __future__ import print_function\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nfrom statsmodels.sandbox.regression.gmm import IV2SLS\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# データ読み込み\ndata = pd.read_csv('example/k1001.csv')\n\n# 式1説明変数設定\nX1 = data[['P', 'E']].as_matrix().reshape(-1, 2)\nX1 = sm.add_constant(X1)\n# 式2説明変数設定\nX2 = data[['P', 'A']].as_matrix().reshape(-1, 2)\nX2 = sm.add_constant(X2)\n\n# 被説明変数設定\nY = data[['Q']].as_matrix().reshape(-1)\n\n# OLSの実行(Ordinary Least Squares: 最小二乗法)\nmodel1 = sm.OLS(Y, X1)\nmodel2 = sm.OLS(Y, X2)\nresult1 = model1.fit()\nresult2 = model2.fit()\n\nprint(result1.summary())\n\nprint(result2.summary())",
"この結果から古典的最小二乗法による推定式をまとめると、\n[供給関数] \n$\\hat Q_{i} = 4.8581 + 1.5094 P_{i} - 1.5202 E_{i} $\n[需要関数]\n$\\hat Q_{i} = 16.6747 - 0.9088 P_{i} - 1.0369 A_{i}$\nとなる。 \nしかし、説明変数Pと誤差の間に関係があるため、同時方程式バイアスが生じてしまいます。 \nそこで、以下では同時方程式体系の推定法として代表的な二段階最小二乗法を用いて推定し直します。",
"# 外生変数設定\ninst = data[[ 'A', 'E']].as_matrix()\ninst = sm.add_constant(inst)\n\n# 2SLSの実行(Two Stage Least Squares: 二段階最小二乗法)\nmodel1 = IV2SLS(Y, X1, inst)\nmodel2 = IV2SLS(Y, X2, inst)\nresult1 = model1.fit()\nresult2 = model2.fit()\n\nprint(result1.summary())\n\nprint(result2.summary())",
"この結果から二段階最小二乗法による推定式をまとめると、\n[供給関数] \n$\\hat Q_{i} = 3.7867 + 2.2119 P_{i} - 2.1531 E_{i} $\n[需要関数]\n$\\hat Q_{i} = 17.8558 - 1.0249 P_{i} - 1.1484 A_{i}$\nとなる。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
llclave/Springboard-Mini-Projects | Reduce Hospital Readmissions Using EDA/sliderule_dsi_inferential_statistics_exercise_3.ipynb | mit | [
"Hospital Readmissions Data Analysis and Recommendations for Reduction\nBackground\nIn October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.\nExercise Directions\nIn this exercise, you will:\n+ critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate\n+ construct a statistically sound analysis and make recommendations of your own \nMore instructions provided below. Include your work in this notebook and submit to your Github account. \nResources\n\nData source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3\nMore information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html\nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport bokeh.plotting as bkp\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# read in readmissions data provided\nhospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')",
"Preliminary Analysis",
"# deal with missing and inconvenient portions of data \nclean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']\nclean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)\nclean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')\n\n# generate a scatterplot for number of discharges vs. excess rate of readmissions\n# lists work better with matplotlib scatterplot function\nx = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]\ny = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])\n\nfig, ax = plt.subplots(figsize=(8,5))\nax.scatter(x, y,alpha=0.2)\n\nax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)\nax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)\n\nax.set_xlim([0, max(x)])\nax.set_xlabel('Number of discharges', fontsize=12)\nax.set_ylabel('Excess rate of readmissions', fontsize=12)\nax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)\n\nax.grid(True)\nfig.tight_layout()",
"Preliminary Report\nRead the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.\nA. Initial observations based on the plot above\n+ Overall, rate of readmissions is trending down with increasing number of discharges\n+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)\n+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) \nB. Statistics\n+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 \n+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 \nC. Conclusions\n+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates. \n+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.\nD. Regulatory policy recommendations\n+ Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.\n+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.\n\nExercise\nInclude your work on the following in this notebook and submit to your Github account. \nA. Do you agree with the above analysis and recommendations? Why or why not?\nB. Provide support for your arguments and your own recommendations with a statistically sound analysis:\n\nSetup an appropriate hypothesis test.\nCompute and report the observed significance value (or p-value).\nReport statistical significance for $\\alpha$ = .01. \nDiscuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?\nLook at the scatterplot above. \nWhat are the advantages and disadvantages of using this plot to convey information?\nConstruct another plot that conveys the same information in a more direct manner.\n\n\n\nYou can compose in notebook cells using Markdown: \n+ In the control panel at the top, choose Cell > Cell Type > Markdown\n+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet",
"clean_hospital_read_df.head()\n\nclean_hospital_read_df.info()\n\nhospital_dropna_df = clean_hospital_read_df[np.isfinite(clean_hospital_read_df['Excess Readmission Ratio'])]\nhospital_dropna_df.info()",
"A. Do you agree with the above analysis and recommendations? Why or why not?\nThe analysis seems to base its conclusion on one scatterplot. I cannot agree with the above analysis and recommendations because there is not enough evidence to support it. Further investigation and statistical analysis should be completed to determine whether there is a significant correlation between hospital capacity (number of discharges) and readmission rates. It may be possible that the correlation may have come from chance.\nB. Provide support for your arguments and your own recommendations with a statistically sound analysis:\n1) Setup an appropriate hypothesis test.\n$H_0$: There is no statistically significant correlation between number of discharges and readmission rates.\n$H_A$: There is a statistically significant negative correlation between number of discharges and readmission rates.\n2) Compute and report the observed significance value (or p-value).",
"number_of_discharges = hospital_dropna_df['Number of Discharges']\nexcess_readmission_ratio = hospital_dropna_df['Excess Readmission Ratio'] \n\npearson_r = np.corrcoef(number_of_discharges, excess_readmission_ratio)[0, 1]\n\nprint('The Pearson correlation of the sample is', pearson_r)\n\npermutation_replicates = np.empty(100000)\n\nfor i in range(len(permutation_replicates)):\n number_of_discharges_perm = np.random.permutation(number_of_discharges)\n permutation_replicates[i] = np.corrcoef(number_of_discharges_perm, excess_readmission_ratio)[0, 1]\n\np = np.sum(permutation_replicates <= pearson_r) / len(permutation_replicates) \nprint('p =', p)",
"The p value was calculated to be extremely small above. This means our null hypothesis ($H_0$) should be rejected and the alternate hypothesis is more likely. There is a statistically significant negtive correlation between number of discharges and readmission rates. However the correlation is small as shown by the pearson correlation above of -0.097.\n3) Report statistical significance for α = .01.\nSince the p value was calculated to be extremely small, we can still conclude it to be statistically significant with an alpha of 0.01.\n4) Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?\nStatistical significance refers to how unlikely the Pearson correlation observed in the samples have occurred due to sampling errors. We can use it in hypothesis testing. Statisical significance tells us whether the value we were looking at, (in this case - pearson correlation) occured due to chance. Because the p value we calculated was so small, we can conclude that the correlation was statistically significant and that there likely is a negative correlation between number of discharges and readmission rates.\nPractical significance looks at whether the Pearson correlation is large enough to be a value that shows a significant relationship. Practical significance looks at the value itself. In this case, the Pearson correlation was very small (-0.097). There probably is a negative correlation. However it is so small and close to zero that it is not very significant to the relationship between number of discharges and readmission rates.\nOverall, size or number of discharges is not a good predictor for readmission rates. The recommendations above should not be followed. Instead further analysis should be performed to figure out what data has a higher correlation with readmission rates.\n5) Look at the scatterplot above.\n\nWhat are the advantages and disadvantages of using this plot to convey information?\nConstruct another plot that conveys the same information in a more direct manner.\n\nThis plot above is able to display all of the data points at once. Sometimes a quick visual like this will indicate a relationship. However, it is hard to tell with this plot alone due to the data. If the plot included a line which showed the value of the negative correlation, it would more clearly present the relationship of the data.",
"import seaborn as sns\n\nsns.set()\nplt.scatter(number_of_discharges, excess_readmission_ratio, alpha=0.5)\n\nslope, intercept = np.polyfit(number_of_discharges, excess_readmission_ratio, 1)\n\nx = np.array([0, max(number_of_discharges)])\ny = slope * x + intercept\n\nplt.plot(x, y)\n\nplt.xlabel('Number of discharges')\nplt.ylabel('Excess rate of readmissions')\nplt.title('Scatterplot of number of discharges vs. excess rate of readmissions')\n\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kanavanand/kanavanand.github.io | ZOMATO_final.ipynb | mit | [
"--Kanav Anand(IIT kharagpur)\n Inustrial and Systems Engineering\n\n\n<a href='#intro'>1. Content:</a> \n\n<a href='#rtd'>2. Retrieving the Data</a>\n\n<a href='#ll'>2.1 Load libraries</a>\n<a href='#td'>2.2 Read the Data</a>\n<a href='#ll2'>2.3 Which data to use?</a>\n\n\n\n<a href='#god'>3. Feature Engineering</a>\n\n<a href='#ot'>3.1 Drivers Engaged</a>\n<a href='#ot2'>3.2 Backbone of my Analysis(Feature Engineering)</a>\n<a href='#ot3'>3.3 Exploring hidden hints and making features.</a>\n<a href='#ot4'>3.4 Making Features using order file</a>\n<a href='#ot5'>3.5 Aggregate features on order file.</a>\n<a href='#ot6'>3.6 Number of months a person visit.</a>\n<a href='#ot7'>3.7 Approximate month interval after which person make visits.</a>\n<a href='#ot6'>3.8 Date-time based features.</a>\n\n\n<a href='#nlp'>4.Making Model.</a>\n\nInitially I have done some preprocessing in excel, So please use all the files in ZIP.\n<a id='rtd'>2. Retrieving the Data</a>\n<a id='ll'>2.1 Load libraries</a>",
"import pandas as pd\nimport numpy as np\n\ntrain=pd.read_csv('post_stockout_train_candidate.csv',parse_dates=['time_stamp_utc'])\npre_stock = pd.read_csv('pre_stockout_train_candidate.csv',parse_dates=['time_stamp_utc'])\norder =pd.read_csv('pre_and_post_orders_data_train.csv')\norder_test = pd.read_csv('test/pre_orders_data_test.csv',parse_dates=['device_acknowledge_at'])\npre_stock_test =pd.read_csv('test/pre_stockout_test_candidate.csv')\ntest =pd.read_csv('test/post_stockout_test_candidate.csv')\ndriver_log=pd.read_csv('pre_and_post_driver_log_train_candidate.csv')\ndriver_log_test=pd.read_csv('test/pre_driver_log_test_candidate.csv')\n\npd.set_option('display.max_columns', 500)\n\ntrain.head()\n\ntest.head()\n\npre_stock.head()\n\npre_stock_test.head()",
"<a id='ll'>2.3 Which data to use?</a>\nHere i am combining all the 3 data so as to collect the data of all files and removing 1st jan which is an outlier\n\nHere i have removed the data of 1st Jan because it was new year, and as we can see the number of stockouts were too high so we should remove it from our current analysis because .We are analysing for a month and it was an outlier because our test dataset does not contain any \"Holiday or festival\"\n\nSee here, there are around 8000 stockouts on single day!",
"pre_stock.groupby(by='dt')['stockout'].sum().sort_values(ascending=False)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\npre_stock.groupby(by='dt')['stockout'].sum().plot(figsize=(10,4))\nplt.xticks(rotation=40)\n\nplt.annotate(\"1St Jan\",xy=(1,8000))\n\norder['created_at']=pd.to_datetime(order.created_at)\norder['complete_at']=pd.to_datetime(order.complete_at)\n\norder_test['created_at']=pd.to_datetime(order_test.created_at)\norder_test['complete_at']=pd.to_datetime(order_test.complete_at)\n\norder['date_create']=order.created_at.dt.date\norder_test['date_create']=order_test.created_at.dt.date\n\norder['unique']=1\n\npre_stock_test.columns\n\ncol = [ 'time_stamp_utc', 'dt', 'Latitude', 'Longitude', \n 'stockout', 'hour', 'minute', 'second', 'weekday']\nalldata =pd.concat([train[col],pre_stock[col],pre_stock_test[col]],axis=0).reset_index(drop=True)\n\nalldata.dt.value_counts().index\n\nalldata =pd.concat([train[col],pre_stock[col],pre_stock_test[col]],axis=0).reset_index(drop=True)\nalldata_out = alldata.loc[alldata.dt != '1-Jan-18'].reset_index(drop=True)\nalldata_out.shape",
"<a id='god'>3. Feature Engineering</a>\n<a id='ot'>3.1 Drivers Engaged</a>\n\nHere i am calculating how many drivers are engaged in given train-test combination of times. This would give us average engagement of a driver at particular time interval.\n\n** Note: this would take 5-6 hours.",
"df = order.loc[(order.state=='COMPLETE')&(order.created_at.dt.day>29)].reset_index()\n\ndf.head()\n\ndf2 = order_test.loc[(order_test.state=='COMPLETE')&(order_test.created_at.dt.day>29)].reset_index()\n\norder_test.loc[(order_test.state=='COMPLETE')&(order_test.created_at.dt.day>25)].reset_index().shape\n\nfrom tqdm import tqdm",
"i saved a backup here(train_driver.csv which contains driver engaged)",
"train.head()\n\ntrain=pd.read_csv('train_driver.csv')\ntest =pd.read_csv('test/post_stockout_test_candidate.csv')\n\n\ntrain.head()\n\ntrain.head()",
"<a id='ot2'>3.2 Backbone of my Analysis(Feature Engineering)</a>\nMaking Aggregate features like\nConcat the train and test and make features like<br>\n\n1) Stockout on day of week<br>\n2)stockout in the hour of a day<br>\n3)stockout in the hour of second<br>\n4)stockout in the hour of every day of week<br>\n5)stockout in the minute of an hour of every day of week<br>\n6)Stockout in particular residential id.<br>\n7)Stockout in particular residential id for a particular hour<br>\n8)Stockout in particular residential id for a particular hour in that minute<br>",
"\n#train.head()\ndef upd(train,pre_stock):\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday'])['stockout'].sum()).reset_index(),on='weekday',how='left',suffixes=('','_week'))\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['hour'])['stockout'].sum()).reset_index(),on='hour',how='left',suffixes=('','_hour'))\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['second'])['stockout'].sum()).reset_index(),on='second',how='left',suffixes=('','_second'))\n\n train.head()\n\n\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday','hour'])['stockout'].sum()).reset_index(),on=['weekday','hour'],how='left',suffixes=('','_week_hour'))\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['hour','minute'])['stockout'].sum()).reset_index(),on=['hour','minute'],how='left',suffixes=('','_hour_minute'))\n\n train.fillna(0,inplace=True)\n\n train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday','hour','minute'])['stockout'].sum()).reset_index(),on=['weekday','hour','minute'],how='left',suffixes=('','_hour_week_minute'))\n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id'])['stockout'].sum()).reset_index(),on='res_id',how='left',suffixes=('','_x'))\n\n\n\n\n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id','hour'])['stockout'].sum()).reset_index(),on=['res_id','hour'],how='left',suffixes=('','_hour_res'))\n\n train.fillna(0,inplace=True)\n\n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id','hour','minute'])['stockout'].sum()).reset_index(),on=['res_id','hour','minute'],how='left',suffixes=('','_hour_res_minute'))\n#;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n \n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id'])['stockout'].count()).reset_index(),on='res_id',how='left',suffixes=('','_countx'))\n\n\n\n\n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id','hour'])['stockout'].count()).reset_index(),on=['res_id','hour'],how='left',suffixes=('','_counthour_res'))\n\n train.fillna(0,inplace=True)\n\n train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id','hour','minute'])['stockout'].count()).reset_index(),on=['res_id','hour','minute'],how='left',suffixes=('','_counthour_res_minute'))\n\n train.fillna(0,inplace=True)\n return train",
"Processing date",
"from datetime import datetime\nimport datetime\ndef dat1(X):\n return(datetime.datetime.strptime(X, \"%d%b%Y:%H:%M:%S\"))\n\ntm=train.time_stamp_utc.apply(dat1)\n\ntm2=test.time_stamp_utc.apply(dat1)\n\ntrain =upd(train,pre_stock)\ntrain.head()\n\ntest =upd(test,pre_stock_test)\ntest= test.merge(pd.DataFrame(alldata_out.groupby(by=['weekday'])['stockout'].sum()).reset_index(),on='weekday',how='left',suffixes=('','_week'))\n\ntest.head()",
"<a id='ot3'>3.3 Exploring hidden hints and making features.</a>\nThis is the minute stockout history on every day-of-week basis",
"import seaborn as sns\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(20,5))\nsns.heatmap(pre_stock.pivot_table(index='weekday',columns='minute',values='stockout',aggfunc=sum),linecolor='black')\n\nmin_graph=pre_stock.pivot_table(index='weekday',columns='minute',values='stockout',aggfunc=sum)\nmin_graph\n\nplt.figure(figsize=(20,5))\nmin_graph.loc[1].plot()\n\nsec_graph=pre_stock.pivot_table(index='weekday',columns='second',values='stockout',aggfunc=sum)\nsec_graph",
"This is the one of the most shocking observation here, as you can see there is very high probability of getting a stockout during between 11-25 and 41-55 seconds of every minute.",
"plt.figure(figsize=(20,5))\n\nsns.heatmap(pre_stock.pivot_table(index='weekday',columns='second',values='stockout',aggfunc=sum),linecolor='black')",
"you can see that is somewhat perfect normal",
"plt.figure(figsize=(20,5))\nsec_graph.loc[1].plot()",
"Inference:\n\nSo we can infer that there could be two possible case:<br>\n1)As stockout starts occuring the Zomato's iternal system may alert the Part time driver.<br>\n2)The data-organisers may have generated the random time samples where it was not uniform but somewhat normal.\n\nUsing the Above fact lets discover a new feature based on Seconds",
"a=[]\nfor i in train.second:\n if i<15:\n a.append(i)\n elif i<30:\n a.append(30-i)\n elif i<45:\n a.append(i-30)\n else:\n a.append(60-i)\n\ntrain['sec_fun']=a\na=[]\nfor i in test.second:\n if i<15:\n a.append(i)\n elif i<30:\n a.append(30-i)\n elif i<45:\n a.append(i-30)\n else:\n a.append(60-i)\ntest['sec_fun']=a\n \n\ntrain.columns\n\ncat_vars =[ 'res_id',\n 'hour', 'minute', 'second', 'weekday', ]\ncont_vars =['Latitude', 'Longitude','stockout_week',\n 'stockout_hour', 'stockout_second', 'stockout_week_hour',\n 'stockout_hour_minute', 'stockout_hour_week_minute', 'stockout_x',\n 'stockout_hour_res', 'stockout_hour_res_minute', 'stockout_countx',\n 'stockout_counthour_res', 'stockout_counthour_res_minute']\n\nfor v in cat_vars: train[v] = train[v].astype('category').cat.as_ordered()\nfor v in cont_vars: train[v] = train[v].astype('float32')\nfor v in cat_vars: test[v] = test[v].astype('category').cat.as_ordered()\nfor v in cont_vars: test[v] = test[v].astype('category').astype('float32') ",
"<a id='ot4'>3.4 Making Features using order file.</a>\nUsing the order file , Let's try to calculate the total number of orders and aggregate features",
"order_comp=order.loc[order.state=='COMPLETE']\norder_comp_test=order_test.loc[order.state=='COMPLETE']",
"Getting day time features for the order_file.",
"order_comp['day']=order_comp.created_at.dt.day\norder_comp['hour']=order_comp.created_at.dt.hour\norder_comp['weekday']=order_comp.created_at.dt.dayofweek\norder_comp['geography']=order_comp.pickup_locality\n\norder_comp_test['day']=order_comp_test.created_at.dt.day\norder_comp_test['hour']=order_comp_test.created_at.dt.hour\norder_comp_test['weekday']=order_comp_test.created_at.dt.dayofweek\norder_comp_test['geography']=order_comp_test.pickup_locality\n\norder_comp['count_order']=1\norder_comp_test['count_order']=1\n\n",
"<a id='nlp'>4.Preprocessing the data. </a>",
"def lower(x):\n return x.lower()\norder_comp.geography=order_comp.geography.apply(lower)\ntrain.geography=train.geography.apply(lower)\n\ntrain.head()\n\norder_comp.replace({'hsr layout':'hsr_layout'},inplace=True)\n\nfrom sklearn.preprocessing import LabelEncoder\nle_geo=LabelEncoder()\norder_comp.geography=le_geo.fit_transform(order_comp.geography)\ntrain.geography=le_geo.fit_transform(train.geography)\n\norder_comp_test.geography=le_geo.fit_transform(order_comp_test.geography)\ntest.geography=le_geo.fit_transform(test.geography)",
"Authors of the data have very cleverly organised data by only removing the data of 2nd Jan to 12th Jan and remaining data is now used for the predicting the orders per hour,order per week and orders per day-hour combinaiton",
"train.head()",
"correct the weekday",
"order_comp.head()\n\ntrain.weekday=train.weekday-1\ntest.weekday=test.weekday-1",
"<a id='ot5'>3.5 Aggregate features on order file.</a>\nAggregate features.\n\n1) Total orders within a geography in an hour of a weekday<br>\n2) Total orders in a weekday<br>\n3) Total orders in a day of hour.<br>\nCombiing all of them with train",
"order_comp.loc[order_comp.day>1].groupby(by=['geography','weekday','hour'],as_index=False).count_order.sum()\norder_comp.loc[order_comp.day>1].groupby(by=['weekday'],as_index=False).count_order.sum()\norder_comp.loc[order_comp.day>1].groupby(by=['day','hour'],as_index=False).count_order.sum()\n\norder_comp_test.loc[order_comp_test.day>1].groupby(by=['weekday','hour'],as_index=False).count_order.sum()\norder_comp_test.loc[order_comp_test.day>1].groupby(by=['weekday'],as_index=False).count_order.sum()\norder_comp_test.loc[order_comp_test.day>1].groupby(by=['day','hour'],as_index=False).count_order.sum()\n\ntrain=train.merge(order_comp.loc[order_comp.day>1].groupby(by=['geography','weekday','hour'],as_index=False).count_order.sum(),on=['geography','weekday','hour']\n ,how='left')\ntrain=train.merge(order_comp.loc[order_comp.day>1].groupby(by=['geography','weekday'],as_index=False).count_order.sum(),on=['geography','weekday']\n ,how='left')\n\ntrain=train.merge(order_comp.loc[order_comp.day>1].groupby(by=['weekday','hour'],as_index=False).count_order.sum(),on=['weekday','hour']\n ,how='left',suffixes=('','_1'))\ntrain=train.merge(order_comp.loc[order_comp.day>1].groupby(by=['weekday'],as_index=False).count_order.sum(),on=['weekday']\n ,how='left',suffixes=('','_2'))\n#train=train.merge(order_comp.loc[order_comp.day>1].groupby(by=['day','hour'],as_index=False).count_order.sum(),on=['day','hour']\n# ,how='left')\ntrain.head()\n\ntest=test.merge(order_comp_test.loc[order_comp_test.day>1].groupby(by=['geography','weekday','hour'],as_index=False).count_order.sum(),on=['geography','weekday','hour']\n ,how='left')\ntest=test.merge(order_comp_test.loc[order_comp_test.day>1].groupby(by=['geography','weekday'],as_index=False).count_order.sum(),on=['geography','weekday']\n ,how='left')\n\ntest=test.merge(order_comp_test.loc[order_comp_test.day>1].groupby(by=['weekday','hour'],as_index=False).count_order.sum(),on=['weekday','hour']\n ,how='left',suffixes=('','_1'))\ntest=test.merge(order_comp_test.loc[order_comp_test.day>1].groupby(by=['weekday'],as_index=False).count_order.sum(),on=['weekday']\n ,how='left',suffixes=('','_2'))\n\ndef dat1(X):\n return(datetime.datetime.strptime(X, \"%Y-%b-%d %H:%M:%S\"))\nfrom dateutil.parser import parse\n\ndef date1(x):\n return parse(x)\ndriver_log.login_time=driver_log.login_time.apply(date1)\ndriver_log.logout_time=driver_log.logout_time.apply(date1)\n\ndriver_log_test.login_time=driver_log_test.login_time.apply(date1)\ndriver_log_test.logout_time=driver_log_test.logout_time.apply(date1)\n\ndriver_log_test.head()\n\n(driver_log_test.logout_time-driver_log_test.login_time).dt.seconds.head()",
"<a id='nlp'>4.Making Model. </a>\n<a id='nlp'>4.Preparing validation set. </a>\nMaking validation set !\nHere normal train and test won't work so divide according to date and treat it as a test.",
"valid =train.loc[train.dt == '31-Jan-18'].reset_index(drop=True)\ntrain_val =train.loc[train.dt != '31-Jan-18'].reset_index(drop=True)\n\nlen(test.columns)\n\nlen(train.columns)\n\ntest.columns\n\ntrain.columns\n\ncol = [ 'Latitude', 'Longitude', 'res_id',\n 'minute', 'geography',\n 'stockout_hour', 'stockout_week_hour',\n 'stockout_hour_minute', 'stockout_hour_week_minute','stockout_x',\n 'stockout_hour_res', 'stockout_hour_res_minute',\n 'stockout_counthour_res_minute', 'count_order_x',]",
"using RandomForest model",
"from sklearn.ensemble import RandomForestClassifier,GradientBoostingClassifier,BaggingClassifier,AdaBoostClassifier\nimport xgboost as xgb\n\n\nX=train[col]\n\ny=train.stockout\n\ntrain_X=train_val[col]\n\ntrain_y=train_val.stockout\n\ntest_X=valid[col]\n\ntest_y=valid.stockout",
"<a id='nlp2'>4.2 Applying the RandomForestclassifier</a>",
"clf=RandomForestClassifier(n_estimators=30,max_depth=7)#highest_accuracy\nclf=RandomForestClassifier(n_estimators=30,max_depth=7,)#highest_accuracy\n\n#clf=AdaBoostClassifier(base_estimator=clf)\nimport lightgbm as lgb\nfrom sklearn.tree import ExtraTreeClassifier\n#clf = lgb.LGBMClassifier(max_depth=8,n_estimators=1000,random_state=5)\n#clf=ExtraTreeClassifier(max_depth=7)\n#clf=Dec\n#clf=GradientBoostingClassifier(n_estimators=100,max_depth=7)\nclf.fit(train_X[col],train_y,)\n\n#a=test_y.copy()\na=np.zeros(test_y.shape)\n\nfrom sklearn.metrics import accuracy_score,confusion_matrix\nprint('prediciton->',accuracy_score(test_y,clf.predict(test_X[col])))\nprint('zeroes->',accuracy_score(test_y,a))\nconfusion_matrix(test_y,clf.predict(test_X[col]))\n\nvalid.loc[valid.stockout==1]\n\nvalid.iloc[clf.predict(test_X[col])==1]\n\ntest",
"Feature Importance chart",
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nplt.figure(figsize=(20,5))\nsns.barplot(col,clf.feature_importances_)\n\npd.DataFrame(clf.feature_importances_,index=col).sort_values(by=0)\n\nsub=pd.read_csv('test/submission_online_testcase.csv')\n\nclf.fit(X,y)\n\nsub['stockout']=clf.predict(test[X.columns])[:len(sub)]\n\nsub.to_csv('submi/hmsub_allfeat_count.csv',index=None)\n\nsub.stockout.sum()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rvm-segfault/edx | python_for_data_sci_dse200x/week3/03_Numpy_Notebook.ipynb | apache-2.0 | [
"<p style=\"font-family: Arial; font-size:3.75em;color:purple; font-style:bold\"><br>\nIntroduction to numpy:\n</p>\n<br>\n<p style=\"font-family: Arial; font-size:1.25em;color:#2462C0; font-style:bold\"><br>\nPackage for scientific computing with Python\n</p>\n<br>\nNumerical Python, or \"Numpy\" for short, is a foundational package on which many of the most common data science packages are built. Numpy provides us with high performance multi-dimensional arrays which we can use as vectors or matrices. \nThe key features of numpy are:\n\nndarrays: n-dimensional arrays of the same data type which are fast and space-efficient. There are a number of built-in methods for ndarrays which allow for rapid processing of data without using loops (e.g., compute the mean).\nBroadcasting: a useful tool which defines implicit behavior between multi-dimensional arrays of different sizes.\nVectorization: enables numeric operations on ndarrays.\nInput/Output: simplifies reading and writing of data from/to file.\n\n<b>Additional Recommended Resources:</b><br>\n<a href=\"https://docs.scipy.org/doc/numpy/reference/\">Numpy Documentation</a><br>\n<i>Python for Data Analysis</i> by Wes McKinney<br>\n<i>Python Data science Handbook</i> by Jake VanderPlas\n<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nGetting started with ndarray<br><br></p>\n\nndarrays are time and space-efficient multidimensional arrays at the core of numpy. Like the data structures in Week 2, let's get started by creating ndarrays using the numpy package.\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nHow to create Rank 1 numpy arrays:\n</p>",
"import numpy as np\n\nan_array = np.array([3, 33, 333]) # Create a rank 1 array\n\nprint(type(an_array)) # The type of an ndarray is: \"<class 'numpy.ndarray'>\"\n\n# test the shape of the array we just created, it should have just one dimension (Rank 1)\nprint(an_array.shape)\n\n# because this is a 1-rank array, we need only one index to accesss each element\nprint(an_array[0], an_array[1], an_array[2]) \n\nan_array[0] =888 # ndarrays are mutable, here we change an element of the array\n\nprint(an_array)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nHow to create a Rank 2 numpy array:</p>\n\nA rank 2 ndarray is one with two dimensions. Notice the format below of [ [row] , [row] ]. 2 dimensional arrays are great for representing matrices which are often useful in data science.",
"another = np.array([[11,12,13],[21,22,23]]) # Create a rank 2 array\n\nprint(another) # print the array\n\nprint(\"The shape is 2 rows, 3 columns: \", another.shape) # rows x columns \n\nprint(\"Accessing elements [0,0], [0,1], and [1,0] of the ndarray: \", another[0, 0], \", \",another[0, 1],\", \", another[1, 0])",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nThere are many way to create numpy arrays:\n</p>\n\nHere we create a number of different size arrays with different shapes and different pre-filled values. numpy has a number of built in methods which help us quickly and easily create multidimensional arrays.",
"import numpy as np\n\n# create a 2x2 array of zeros\nex1 = np.zeros((2,2)) \nprint(ex1) \n\n# create a 2x2 array filled with 9.0\nex2 = np.full((2,2), 9.0) \nprint(ex2) \n\n# create a 2x2 matrix with the diagonal 1s and the others 0\nex3 = np.eye(2,2)\nprint(ex3) \n\n# create an array of ones\nex4 = np.ones((1,2))\nprint(ex4) \n\n# notice that the above ndarray (ex4) is actually rank 2, it is a 2x1 array\nprint(ex4.shape)\n\n# which means we need to use two indexes to access an element\nprint()\nprint(ex4[0,1])\n\n# create an array of random floats between 0 and 1\nex5 = np.random.random((2,2))\nprint(ex5) ",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nArray Indexing\n<br><br></p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nSlice indexing:\n</p>\n\nSimilar to the use of slice indexing with lists and strings, we can use slice indexing to pull out sub-regions of ndarrays.",
"import numpy as np\n\n# Rank 2 array of shape (3, 4)\nan_array = np.array([[11,12,13,14], [21,22,23,24], [31,32,33,34]])\nprint(an_array)",
"Use array slicing to get a subarray consisting of the first 2 rows x 2 columns.",
"a_slice = an_array[:2, 1:3]\nprint(a_slice)",
"When you modify a slice, you actually modify the underlying array.",
"print(\"Before:\", an_array[0, 1]) #inspect the element at 0, 1 \na_slice[0, 0] = 1000 # a_slice[0, 0] is the same piece of data as an_array[0, 1]\nprint(\"After:\", an_array[0, 1]) ",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nUse both integer indexing & slice indexing\n</p>\n\nWe can use combinations of integer indexing and slice indexing to create different shaped matrices.",
"# Create a Rank 2 array of shape (3, 4)\nan_array = np.array([[11,12,13,14], [21,22,23,24], [31,32,33,34]])\nprint(an_array)\n\n# Using both integer indexing & slicing generates an array of lower rank\nrow_rank1 = an_array[1, :] # Rank 1 view \n\nprint(row_rank1, row_rank1.shape) # notice only a single []\n\n# Slicing alone: generates an array of the same rank as the an_array\nrow_rank2 = an_array[1:2, :] # Rank 2 view \n\nprint(row_rank2, row_rank2.shape) # Notice the [[ ]]\n\n#We can do the same thing for columns of an array:\n\nprint()\ncol_rank1 = an_array[:, 1]\ncol_rank2 = an_array[:, 1:2]\n\nprint(col_rank1, col_rank1.shape) # Rank 1\nprint()\nprint(col_rank2, col_rank2.shape) # Rank 2",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nArray Indexing for changing elements:\n</p>\n\nSometimes it's useful to use an array of indexes to access or change elements.",
"# Create a new array\nan_array = np.array([[11,12,13], [21,22,23], [31,32,33], [41,42,43]])\n\nprint('Original Array:')\nprint(an_array)\n\n# Create an array of indices\ncol_indices = np.array([0, 1, 2, 0])\nprint('\\nCol indices picked : ', col_indices)\n\nrow_indices = np.arange(4)\nprint('\\nRows indices picked : ', row_indices)\n\n# Examine the pairings of row_indices and col_indices. These are the elements we'll change next.\nfor row,col in zip(row_indices,col_indices):\n print(row, \", \",col)\n\n# Select one element from each row\nprint('Values in the array at those indices: ',an_array[row_indices, col_indices])\n\n# Change one element from each row using the indices selected\nan_array[row_indices, col_indices] += 100000\n\nprint('\\nChanged Array:')\nprint(an_array)",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\nBoolean Indexing\n\n<br><br></p>\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nArray Indexing for changing elements:\n</p>",
"# create a 3x2 array\nan_array = np.array([[11,12], [21, 22], [31, 32]])\nprint(an_array)\n\n# create a filter which will be boolean values for whether each element meets this condition\nfilter = (an_array > 15)\nfilter",
"Notice that the filter is a same size ndarray as an_array which is filled with True for each element whose corresponding element in an_array which is greater than 15 and False for those elements whose value is less than 15.",
"# we can now select just those elements which meet that criteria\nprint(an_array[filter])\n\n# For short, we could have just used the approach below without the need for the separate filter array.\n\nan_array[(an_array % 2 == 0)]",
"What is particularly useful is that we can actually change elements in the array applying a similar logical filter. Let's add 100 to all the even values.",
"an_array[an_array % 2 == 0] +=100\nprint(an_array)",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nDatatypes and Array Operations\n<br><br></p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nDatatypes:\n</p>",
"ex1 = np.array([11, 12]) # Python assigns the data type\nprint(ex1.dtype)\n\nex2 = np.array([11.0, 12.0]) # Python assigns the data type\nprint(ex2.dtype)\n\nex3 = np.array([11, 21], dtype=np.int64) #You can also tell Python the data type\nprint(ex3.dtype)\n\n# you can use this to force floats into integers (using floor function)\nex4 = np.array([11.1,12.7], dtype=np.int64)\nprint(ex4.dtype)\nprint()\nprint(ex4)\n\n# you can use this to force integers into floats if you anticipate\n# the values may change to floats later\nex5 = np.array([11, 21], dtype=np.float64)\nprint(ex5.dtype)\nprint()\nprint(ex5)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nArithmetic Array Operations:\n\n</p>",
"x = np.array([[111,112],[121,122]], dtype=np.int)\ny = np.array([[211.1,212.1],[221.1,222.1]], dtype=np.float64)\n\nprint(x)\nprint()\nprint(y)\n\n# add\nprint(x + y) # The plus sign works\nprint()\nprint(np.add(x, y)) # so does the numpy function \"add\"\n\n# subtract\nprint(x - y)\nprint()\nprint(np.subtract(x, y))\n\n# multiply\nprint(x * y)\nprint()\nprint(np.multiply(x, y))\n\n# divide\nprint(x / y)\nprint()\nprint(np.divide(x, y))\n\n# square root\nprint(np.sqrt(x))\n\n# exponent (e ** x)\nprint(np.exp(x))",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nStatistical Methods, Sorting, and <br> <br> Set Operations:\n<br><br>\n</p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nBasic Statistical Operations:\n</p>",
"# setup a random 2 x 4 matrix\narr = 10 * np.random.randn(2,5)\nprint(arr)\n\n# compute the mean for all elements\nprint(arr.mean())\n\n# compute the means by row\nprint(arr.mean(axis = 1))\n\n# compute the means by column\nprint(arr.mean(axis = 0))\n\n# sum all the elements\nprint(arr.sum())\n\n# compute the medians\nprint(np.median(arr, axis = 1))",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nSorting:\n</p>",
"# create a 10 element array of randoms\nunsorted = np.random.randn(10)\n\nprint(unsorted)\n\n# create copy and sort\nsorted = np.array(unsorted)\nsorted.sort()\n\nprint(sorted)\nprint()\nprint(unsorted)\n\n# inplace sorting\nunsorted.sort() \n\nprint(unsorted)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nFinding Unique elements:\n</p>",
"array = np.array([1,2,1,4,2,1,4,2])\n\nprint(np.unique(array))",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nSet Operations with np.array data type:\n</p>",
"s1 = np.array(['desk','chair','bulb'])\ns2 = np.array(['lamp','bulb','chair'])\nprint(s1, s2)\n\nprint( np.intersect1d(s1, s2) ) \n\nprint( np.union1d(s1, s2) )\n\nprint( np.setdiff1d(s1, s2) )# elements in s1 that are not in s2\n\nprint( np.in1d(s1, s2) )#which element of s1 is also in s2",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nBroadcasting:\n<br><br>\n</p>\n\nIntroduction to broadcasting. <br>\nFor more details, please see: <br>\nhttps://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html",
"import numpy as np\n\nstart = np.zeros((4,3))\nprint(start)\n\n# create a rank 1 ndarray with 3 values\nadd_rows = np.array([1, 0, 2])\nprint(add_rows)\n\ny = start + add_rows # add to each row of 'start' using broadcasting\nprint(y)\n\n# create an ndarray which is 4 x 1 to broadcast across columns\nadd_cols = np.array([[0,1,2,3]])\nadd_cols = add_cols.T\n\nprint(add_cols)\n\n# add to each column of 'start' using broadcasting\ny = start + add_cols \nprint(y)\n\n# this will just broadcast in both dimensions\nadd_scalar = np.array([1]) \nprint(start+add_scalar)",
"Example from the slides:",
"# create our 3x4 matrix\narrA = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])\nprint(arrA)\n\n# create our 4x1 array\narrB = [0,1,0,2]\nprint(arrB)\n\n# add the two together using broadcasting\nprint(arrA + arrB)",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nSpeedtest: ndarrays vs lists\n<br><br>\n</p>\n\nFirst setup paramaters for the speed test. We'll be testing time to sum elements in an ndarray versus a list.",
"from numpy import arange\nfrom timeit import Timer\n\nsize = 1000000\ntimeits = 1000\n\n# create the ndarray with values 0,1,2...,size-1\nnd_array = arange(size)\nprint( type(nd_array) )\n\n# timer expects the operation as a parameter, \n# here we pass nd_array.sum()\ntimer_numpy = Timer(\"nd_array.sum()\", \"from __main__ import nd_array\")\n\nprint(\"Time taken by numpy ndarray: %f seconds\" % \n (timer_numpy.timeit(timeits)/timeits))\n\n# create the list with values 0,1,2...,size-1\na_list = list(range(size))\nprint (type(a_list) )\n\n# timer expects the operation as a parameter, here we pass sum(a_list)\ntimer_list = Timer(\"sum(a_list)\", \"from __main__ import a_list\")\n\nprint(\"Time taken by list: %f seconds\" % \n (timer_list.timeit(timeits)/timeits))",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nRead or Write to Disk:\n<br><br>\n</p>\n\n<p style=\"font-family: Arial; font-size:1.3em;color:#2462C0; font-style:bold\"><br>\n\nBinary Format:</p>",
"x = np.array([ 23.23, 24.24] )\n\nnp.save('an_array', x)\n\nnp.load('an_array.npy')",
"<p style=\"font-family: Arial; font-size:1.3em;color:#2462C0; font-style:bold\"><br>\n\nText Format:</p>",
"np.savetxt('array.txt', X=x, delimiter=',')\n\n!cat array.txt\n\nnp.loadtxt('array.txt', delimiter=',')",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nAdditional Common ndarray Operations\n<br><br></p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nDot Product on Matrices and Inner Product on Vectors:\n\n</p>",
"# determine the dot product of two matrices\nx2d = np.array([[1,1],[1,1]])\ny2d = np.array([[2,2],[2,2]])\n\nprint(x2d.dot(y2d))\nprint()\nprint(np.dot(x2d, y2d))\n\n# determine the inner product of two vectors\na1d = np.array([9 , 9 ])\nb1d = np.array([10, 10])\n\nprint(a1d.dot(b1d))\nprint()\nprint(np.dot(a1d, b1d))\n\n# dot produce on an array and vector\nprint(x2d.dot(a1d))\nprint()\nprint(np.dot(x2d, a1d))",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nSum:\n</p>",
"# sum elements in the array\nex1 = np.array([[11,12],[21,22]])\n\nprint(np.sum(ex1)) # add all members\n\nprint(np.sum(ex1, axis=0)) # columnwise sum\n\nprint(np.sum(ex1, axis=1)) # rowwise sum",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nElement-wise Functions: </p>\n\nFor example, let's compare two arrays values to get the maximum of each.",
"# random array\nx = np.random.randn(8)\nx\n\n# another random array\ny = np.random.randn(8)\ny\n\n# returns element wise maximum between two arrays\n\nnp.maximum(x, y)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nReshaping array:\n</p>",
"# grab values from 0 through 19 in an array\narr = np.arange(20)\nprint(arr)\n\n# reshape to be a 4 x 5 matrix\narr.reshape(4,5)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nTranspose:\n\n</p>",
"# transpose\nex1 = np.array([[11,12],[21,22]])\n\nex1.T",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nIndexing using where():</p>",
"x_1 = np.array([1,2,3,4,5])\n\ny_1 = np.array([11,22,33,44,55])\n\nfilter = np.array([True, False, True, False, True])\n\nout = np.where(filter, x_1, y_1)\nprint(out)\n\nmat = np.random.rand(5,5)\nmat\n\nnp.where( mat > 0.5, 1000, -1)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\n\"any\" or \"all\" conditionals:</p>",
"arr_bools = np.array([ True, False, True, True, False ])\n\narr_bools.any()\n\narr_bools.all()",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nRandom Number Generation:\n</p>",
"Y = np.random.normal(size = (1,5))[0]\nprint(Y)\n\nZ = np.random.randint(low=2,high=50,size=4)\nprint(Z)\n\nnp.random.permutation(Z) #return a new ordering of elements in Z\n\nnp.random.uniform(size=4) #uniform distribution\n\nnp.random.normal(size=4) #normal distribution",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nMerging data sets:\n</p>",
"K = np.random.randint(low=2,high=50,size=(2,2))\nprint(K)\n\nprint()\nM = np.random.randint(low=2,high=50,size=(2,2))\nprint(M)\n\nnp.vstack((K,M))\n\nnp.hstack((K,M))\n\nnp.concatenate([K, M], axis = 0)\n\nnp.concatenate([K, M.T], axis = 1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rvm-segfault/edx | python_for_data_sci_dse200x/week3/Satellite Image Analysis using numpy.ipynb | apache-2.0 | [
"<p style=\"font-family: Arial; font-size:3.75em;color:purple; font-style:bold\"><br>\nSatellite Image Data <br><br><br>Analysis using numpy</p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>Data Source: Satellite Image from WIFIRE Project</p>\n\nWIFIRE is an integrated system for wildfire analysis, with specific regard to changing urban dynamics and climate. The system integrates networked observations such as heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling, and data assimilation to provide a scalable method to monitor such phenomena as weather patterns that can help predict a wildfire's rate of spread. You can read more about WIFIRE at: https://wifire.ucsd.edu/\nIn this example, we will analyze a sample satellite image dataset from WIFIRE using the numpy Library.\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">Loading the libraries we need: numpy, scipy, matplotlib</p>",
"%matplotlib inline\nimport numpy as np\nfrom scipy import misc\nimport matplotlib.pyplot as plt",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nCreating a numpy array from an image file:</p>\n\n<br>\nLets choose a WIFIRE satellite image file as an ndarray and display its type.",
"from skimage import data\n\nphoto_data = misc.imread('./wifire/sd-3layers.jpg')\n\ntype(photo_data)\n",
"Let's see what is in this image.",
"plt.figure(figsize=(15,15))\nplt.imshow(photo_data)\n\nphoto_data.shape\n\n#print(photo_data)",
"The shape of the ndarray show that it is a three layered matrix. The first two numbers here are length and width, and the third number (i.e. 3) is for three layers: Red, Green and Blue.\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nRGB Color Mapping in the Photo:</p>\n<br>\n<ul>\n<li><p style=\"font-family: Arial; font-size:1.75em;color:red; font-style:bold\">\nRED pixel indicates Altitude</p>\n<li><p style=\"font-family: Arial; font-size:1.75em;color:blue; font-style:bold\">\nBLUE pixel indicates Aspect\n</p>\n<li><p style=\"font-family: Arial; font-size:1.75em;color:green; font-style:bold\">\nGREEN pixel indicates Slope\n</p>\n</ul>\n<br>\nThe higher values denote higher altitude, aspect and slope.",
"photo_data.size\n\nphoto_data.min(), photo_data.max()\n\nphoto_data.mean()",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nPixel on the 150th Row and 250th Column</p>",
"photo_data[150, 250]\n\nphoto_data[150, 250, 1]",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nSet a Pixel to All Zeros</p>\n<br/>\nWe can set all three layer in a pixel as once by assigning zero globally to that (row,column) pairing. However, setting one pixel to zero is not noticeable.",
"#photo_data = misc.imread('./wifire/sd-3layers.jpg')\nphoto_data[150, 250] = 0\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nChanging colors in a Range<p/>\n<br/>\nWe can also use a range to change the pixel values. As an example, let's set the green layer for rows 200 t0 800 to full intensity.",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\n\nphoto_data[200:800, : ,1] = 255\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)\n\nphoto_data = misc.imread('./wifire/sd-3layers.jpg')\n\nphoto_data[200:800, :] = 255\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)\n\nphoto_data = misc.imread('./wifire/sd-3layers.jpg')\n\nphoto_data[200:800, :] = 0\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nPick all Pixels with Low Values</p>",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\nprint(\"Shape of photo_data:\", photo_data.shape)\nlow_value_filter = photo_data < 200\nprint(\"Shape of low_value_filter:\", low_value_filter.shape)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nFiltering Out Low Values</p>\nWhenever the low_value_filter is True, set value to 0.\n<br/>",
"#import random\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)\nphoto_data[low_value_filter] = 0\nplt.figure(figsize=(10,10))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nMore Row and Column Operations</p>\n<br>\nYou can design complex patters by making cols a function of rows or vice-versa. Here we try a linear relationship between rows and columns.",
"rows_range = np.arange(len(photo_data))\ncols_range = rows_range\nprint(type(rows_range))\n\nphoto_data[rows_range, cols_range] = 255\n\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nMasking Images</p>\n<br>Now let us try something even cooler...a mask that is in shape of a circular disc.\n<img src=\"./1494532821.png\" align=\"left\" style=\"width:550px;height:360px;\"/>",
"total_rows, total_cols, total_layers = photo_data.shape\n#print(\"photo_data = \", photo_data.shape)\n\nX, Y = np.ogrid[:total_rows, :total_cols]\n#print(\"X = \", X.shape, \" and Y = \", Y.shape)\n\ncenter_row, center_col = total_rows / 2, total_cols / 2\n#print(\"center_row = \", center_row, \"AND center_col = \", center_col)\n#print(X - center_row)\n#print(Y - center_col)\ndist_from_center = (X - center_row)**2 + (Y - center_col)**2\n#print(dist_from_center)\nradius = (total_rows / 2)**2\n#print(\"Radius = \", radius)\ncircular_mask = (dist_from_center > radius)\n#print(circular_mask)\nprint(circular_mask[1500:1700,2000:2200])\n\nphoto_data = misc.imread('./wifire/sd-3layers.jpg')\nphoto_data[circular_mask] = 0\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nFurther Masking</p>\n<br/>You can further improve the mask, for example just get upper half disc.",
"X, Y = np.ogrid[:total_rows, :total_cols]\nhalf_upper = X < center_row # this line generates a mask for all rows above the center\n\nhalf_upper_mask = np.logical_and(half_upper, circular_mask)\n\nphoto_data = misc.imread('./wifire/sd-3layers.jpg')\nphoto_data[half_upper_mask] = 255\n#photo_data[half_upper_mask] = random.randint(200,255)\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\nFurther Processing of our Satellite Imagery </p>\n\n<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\">\nProcessing of RED Pixels</p>\n\nRemember that red pixels tell us about the height. Let us try to highlight all the high altitude areas. We will do this by detecting high intensity RED Pixels and muting down other areas.",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\nred_mask = photo_data[:, : ,0] < 150\n\nphoto_data[red_mask] = 0\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nDetecting Highl-GREEN Pixels</p>",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\ngreen_mask = photo_data[:, : ,1] < 150\n\nphoto_data[green_mask] = 0\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\nDetecting Highly-BLUE Pixels</p>",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\nblue_mask = photo_data[:, : ,2] < 150\n\nphoto_data[blue_mask] = 0\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)",
"<p style=\"font-family: Arial; font-size:1.75em;color:#2462C0; font-style:bold\"><br>\n\nComposite mask that takes thresholds on all three layers: RED, GREEN, BLUE</p>",
"photo_data = misc.imread('./wifire/sd-3layers.jpg')\n\nred_mask = photo_data[:, : ,0] < 150\ngreen_mask = photo_data[:, : ,1] > 100\nblue_mask = photo_data[:, : ,2] < 100\n\nfinal_mask = np.logical_and(red_mask, green_mask, blue_mask)\nphoto_data[final_mask] = 0\nplt.figure(figsize=(15,15))\nplt.imshow(photo_data)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
machlearn/ipython-notebooks | ML Algorithm - Random Forests.ipynb | mit | [
"Random Forests belong to the class of ensemble methods. The goal of ensemble methods is to combine the predictions of several base estimators built with a give learning algorithm in order to improve generalizability/ robustness over a single estimator. \nThere are two families of ensemble methods:\n\n\nAverage methods: build several estimators independently and then to average their predictions. Examples: Bagging methods, Forest of randomized trees.\n\n\nBoosting methods: base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble. Examples: AdaBoost, Gradient Tree Boosting,...\n\n\nBagging meta-estimator\n\n\nBuild several instances of a blackbox estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction. \n\n\nIn scikit-learn, bagging methods are offered as a unified BaggingClassifier meta-estimator, taking as input a user-specified base estimator along with parameters specifying the strategy to draw random subsets:",
"from sklearn.ensemble import BaggingClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nbagging = BaggingClassifier(KNeighborsClassifier(), max_samples = 0.5, max_features=0.5)",
"Forests of randomized trees\nThe sklearn.ensemble module includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. These two has perturb-and-combine style: a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers.",
"from sklearn.ensemble import RandomForestClassifier\nX = [[0,0], [1,1]]\nY = [0, 1]\nclf = RandomForestClassifier(n_estimators=10)\nclf = clf.fit(X,Y)\n",
"Random Forests\nEach tree is the ensemble built from a sample drawn with replacement from the training set. \nThe scikit-learn implementation combines classifiers by averaging their probablistic prediction, instead of letting each classifier vote for a single class.\nExtremely Randomized Trees\nRandomness goes one step further in the way splits are computed. \nAs in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random at each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. \nThis allows to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias. \nParameters\nn_estimators and max_features.\nn_estimators is the number of tres in the forest. The larger the better, but also the longer it will take to compute. \nmax_features is the size of random subsets of features to consider when splitting a node. The lower the greater the reduction of variance, but also the greater in increase in bias. \nEmpirical good default values are max_features=n_features for regression problems, and max_features=sqrt(n_features) for classification tasks. \nGood results are often achieved when max_depth=None in combination with min_samples_split=1. \nThe best parameter values should always be cross-validated. \nIn random forests, bootstrap samples are used by default (bootstrap=True) while the default strategy for extra-trees is to use the whole dataset (bootstrap=False). \nParallelization\nn_jobs\nIf n_jobs=k then computations are partitioned into k jobs, and run on k cores of the machine.\nIF n_jobs=-1 then all cores available on the machine are used. \nTodo\n\nRead about decision trees\nWhat does \"bootstrap samples\" mean? \nSee some examples in scikit-learn about the performance of random forests. \n\nhttp://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_iris.html#example-ensemble-plot-forest-iris-py\nhttp://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances_faces.html#example-ensemble-plot-forest-importances-faces-py\nhttp://scikit-learn.org/stable/auto_examples/plot_multioutput_face_completion.html#example-plot-multioutput-face-completion-py\nReferences\nhttp://scikit-learn.org/stable/modules/ensemble.html"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
massimo-nocentini/competitive-programming | tutorials/dynamic-programming.ipynb | mit | [
"<p>\n<img src=\"http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg\" \n alt=\"UniFI logo\" style=\"float: left; width: 20%; height: 20%;\">\n<div align=\"right\">\nMassimo Nocentini<br>\n<small>\n<br>December 1, 2016: `longest_common_subsequence`\n<br>November 29, 2016: `matrix_product_ordering`\n<br>November 22, 2016: `edit_distance`\n<br>September 21, 2016: Monge arrays\n</small>\n</div>\n</p>\n<br>\n<div align=\"center\">\n<b>Abstract</b><br>\nSome examples of *dynamic programming*.\n</div>",
"from functools import lru_cache\nimport operator\nfrom itertools import count, zip_longest, chain\nfrom collections import OrderedDict\nfrom random import randint\n\ndef memo_holder(optimizer):\n def f(*args, **kwds):\n return_memo_table = kwds.pop('memo_table', False)\n pair = optimized, memo_table = optimizer(*args, **kwds)\n return pair if return_memo_table else optimized\n return f",
"DP references\n\nDynamic Programming - From Novice to Advanced, by Dumitru topcoder member (link)\n\nMonge arrays\nHave a look at https://en.wikipedia.org/wiki/Monge_array",
"def parity_numbered_rows(matrix, parity, include_index=False):\n start = 0 if parity == 'even' else 1\n return [(i, r) if include_index else r \n for i in range(start, len(matrix), 2)\n for r in [matrix[i]]]\n \ndef argmin(iterable, only_index=True):\n index, minimum = index_min = min(enumerate(iterable), key=operator.itemgetter(1))\n return index if only_index else index_min\n\ndef interleaving(one, another):\n for o, a in zip_longest(one, another):\n yield o\n if a: yield a\n\ndef is_sorted(iterable, pred=lambda l, g: l <= g):\n _, *rest = iterable\n return all(pred(l, g) for l, g in zip(iterable, rest))\n \ndef minima_indexes(matrix):\n \n if len(matrix) == 1: return [argmin(matrix.pop())]\n \n recursion = minima_indexes(parity_numbered_rows(matrix, parity='even'))\n even_minima = OrderedDict((i, m) for i, m in zip(count(start=0, step=2), recursion))\n odd_minima = [argmin(odd_r[start:end]) + start\n for o, odd_r in parity_numbered_rows(matrix, parity='odd', include_index=True)\n for start in [even_minima[o-1]]\n for end in [even_minima[o+1]+1 if o+1 in even_minima else None]]\n \n return list(interleaving(even_minima.values(), odd_minima))\n\ndef minima(matrix):\n return [matrix[i][m] for i, m in enumerate(minima_indexes(matrix))]\n\ndef is_not_monge(matrix):\n return any(any(matrix[r][m] > matrix[r][i] for i in range(m)) \n for r, m in enumerate(minima_indexes(matrix)))\n ",
"The following is a Monge array:",
"matrix = [\n [10, 17, 13, 28, 23],\n [17, 22, 16, 29, 23],\n [24, 28, 22, 34, 24],\n [11, 13, 6, 17, 7],\n [45, 44, 32, 37, 23],\n [36, 33, 19, 21, 6],\n [75, 66, 51, 53, 34],\n]\n\nminima(matrix)\n\nminima_indexes(matrix)\n\nis_not_monge(matrix)",
"The following is not a Monge array:",
"matrix = [\n [37, 23, 22, 32],\n [21, 6, 7, 10],\n [53, 34, 30, 31],\n [32, 13, 9, 6],\n [43, 21, 15, 8],\n]\n\nminima(matrix) # produces a wrong answer!!!\n\nminima_indexes(matrix)\n\nis_not_monge(matrix)",
"longest_increasing_subsequence",
"@memo_holder\ndef longest_increasing_subsequence(seq):\n\n L = []\n\n for i, current in enumerate(seq):\n \"\"\"opt, arg = max([(l, j) for (l, j) in L[:i] if l[-1] < current], \n key=lambda p: len(p[0]), \n default=([], tuple()))\n L.append(opt + [current], (arg, i))\"\"\"\n L.append(max(filter(lambda prefix: prefix[-1] < current, L[:i]), key=len, default=[]) + [current])\n\n return max(L, key=len), L\n\n\ndef lis_rec(seq):\n\n @lru_cache(maxsize=None)\n def rec(i):\n current = seq[i]\n return max([rec(j) for j in range(i) if seq[j] < current], key=len, default=[]) + [current]\n\n return max([rec(i) for i, _ in enumerate(seq)], key=len)\n\n",
"a simple test case taken from page 157:",
"seq = [5, 2, 8, 6, 3, 6, 9, 7] # see page 157\n\nsubseq, memo_table = longest_increasing_subsequence(seq, memo_table=True)\n\nsubseq",
"memoization table shows that [2,3,6,7] is another solution:",
"memo_table\n\nlis_rec(seq)",
"The following is an average case where the sequence is generated randomly:",
"length = int(5e3)\nseq = [randint(0, length) for _ in range(length)]\n\n%timeit longest_increasing_subsequence(seq)\n\n%timeit lis_rec(seq)",
"worst scenario where the sequence is a sorted list, in increasing order:",
"seq = range(length)\n\n%timeit longest_increasing_subsequence(seq)\n\n%timeit lis_rec(seq)",
"edit_distance",
"@memo_holder\ndef edit_distance(xs, ys, \n gap_in_xs=lambda y: 1, # cost of putting a gap in `xs` when reading `y`\n gap_in_ys=lambda x: 1, # cost of putting a gap in `ys` when reading `x`\n mismatch=lambda x, y: 1, # cost of mismatch (x, y) in the sense of `==`\n gap = '▢',\n mark=lambda s: s.swapcase(), \n reduce=sum):\n \n T = {}\n \n T.update({ (i, 0):(xs[:i], gap * i, i) for i in range(len(xs)+1) })\n T.update({ (0, j):( gap * j,ys[:j], j) for j in range(len(ys)+1) })\n \n def combine(w, z):\n a, b, c = zip(w, z)\n return ''.join(a), ''.join(b), reduce(c)\n \n for i, x in enumerate(xs, start=1):\n for j, y in enumerate(ys, start=1):\n T[i, j] = min(combine(T[i-1, j], (x, gap, gap_in_ys(x))),\n combine(T[i, j-1], (gap, y, gap_in_xs(y))),\n combine(T[i-1, j-1], (x, y, 0) if x == y else (mark(x), mark(y), mismatch(x, y))),\n key=lambda t: t[2])\n \n \n return T[len(xs), len(ys)], T\n\n(xs, ys, cost), memo_table = edit_distance('exponential', 'polynomial', memo_table=True)\n\nprint('edit with cost {}:\\n\\n{}\\n{}'.format(cost, xs, ys))\n\nmemo_table\n\n(xs, ys, cost), memo_table = edit_distance('exponential', 'polynomial', memo_table=True, mismatch=lambda x,y: 10)\n\nprint('edit with cost {}:\\n\\n{}\\n{}'.format(cost, xs, ys))\n\nmemo_table",
"matrix_product_ordering",
"@memo_holder\ndef matrix_product_ordering(o, w, op):\n \n n = len(o)\n T = {(i,i):(lambda: o[i], 0) for i in range(n)}\n \n def combine(i,r,j):\n t_ir, c_ir = T[i, r]\n t_rj, c_rj = T[r+1, j]\n return (lambda: op(t_ir(), t_rj()), c_ir + c_rj + w(i, r+1, j+1)) # w[i]*w[r+1]*w[j+1])\n \n for d in range(1, n):\n for i in range(n-d):\n j = i + d\n T[i, j] = min([combine(i, r, j) for r in range(i, j)], key=lambda t: t[-1])\n \n opt, cost = T[0, n-1]\n return (opt(), cost), T\n\ndef parens_str_proxy(w, **kwds):\n return matrix_product_ordering(o=['▢']*(len(w)-1), \n w=lambda l, c, r: w[l]*w[c]*w[r],\n op=lambda a, b: '({} {})'.format(a, b),\n **kwds)\n\n(opt, cost), memo_table = parens_str_proxy(w={0:100, 1:20, 2:1000, 3:2, 4:50}, memo_table=True)\n\nopt, cost\n\n{k:(thunk(), cost) for k,(thunk, cost) in memo_table.items()}\n\n(opt, cost), memo_table = parens_str_proxy(w={i:(i+1) for i in range(10)}, memo_table=True)\n\nopt, cost\n\n{k:(thunk(), cost) for k,(thunk, cost) in memo_table.items()}",
"http://oeis.org/A180118",
"from sympy import fibonacci, Matrix, init_printing\n\ninit_printing()\n\n(opt, cost), memo_table = parens_str_proxy(w={i:fibonacci(i+1) for i in range(10)}, memo_table=True)\n\nopt, cost\n\n{k:(thunk(), cost) for k,(thunk, cost) in memo_table.items()}",
"http://oeis.org/A180664",
"def to_matrix_cost(dim, memo_table):\n n, m = dim\n return Matrix(n, m, lambda n,k: memo_table.get((n, k), (lambda: None, 0))[-1])\n\nto_matrix_cost(dim=(9,9), memo_table=memo_table)",
"longest_common_subsequence",
"@memo_holder\ndef longest_common_subsequence(A, B, gap_A, gap_B, \n equal=lambda a, b: 1, shrink_A=lambda a: 0, shrink_B=lambda b: 0,\n reduce=sum):\n \n T = {}\n \n T.update({(i, 0):([gap_A]*i, 0) for i in range(len(A)+1)})\n T.update({(0, j):([gap_B]*j, 0) for j in range(len(B)+1)})\n \n def combine(w, z):\n alpha, beta = zip(w, z)\n return list(chain.from_iterable(alpha)), reduce(beta)\n \n for i, a in enumerate(A, start=1):\n for j, b in enumerate(B, start=1):\n T[i, j] = combine(T[i-1, j-1], ([a], equal(a, b))) if a == b else max(\n combine(T[i, j-1], ([gap_B], -shrink_B(b))), \n combine(T[i-1, j], ([gap_A], -shrink_A(a))),\n key=lambda t: t[-1])\n \n opt, cost = T[len(A), len(B)]\n return (opt, cost), T\n\ndef pprint_memo_table(T, joiner, do=str):\n return {k:(joiner.join(map(do, v[0])), v[1]) for k, v in T.items()}\n\n(opt, cost), memo_table = longest_common_subsequence(A='ADCAAB', \n B='BAABDCDCAACACBA', gap_A='▢', gap_B='○', \n #shrink_B=lambda b:1,\n memo_table=True)\n\nprint('BAABDCDCAACACBA') \nprint(''.join(opt))\n\npprint_memo_table(memo_table, joiner='')\n\n(opt, cost), memo_table = longest_common_subsequence(A=[0,1,1,2,3,5,8,13,21,34,55], \n B=[1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, 208012, 742900, 2674440,], \n gap_A='▢', gap_B='○', \n #shrink_B=lambda b:1,\n memo_table=True)\n\nprint(','.join(map(str, opt)))\n\npprint_memo_table(memo_table, joiner=',')",
"<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
diegocavalca/Studies | programming/Python/tensorflow/exercises/Control_Flow.ipynb | cc0-1.0 | [
"Control Flow",
"from __future__ import print_function\nimport tensorflow as tf\nimport numpy as np\n\nfrom datetime import date\ndate.today()\n\nauthor = \"kyubyong. https://github.com/Kyubyong/tensorflow-exercises\"\n\ntf.__version__\n\nnp.__version__\n\nsess = tf.InteractiveSession()",
"NOTE on notation\n* _x, _y, _z, ...: NumPy 0-d or 1-d arrays\n* _X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays\n* x, y, z, ...: 0-d or 1-d tensors\n* X, Y, Z, ...: 2-d or higher dimensional tensors\nControl Flow Operations\nQ1. Let x and y be random 0-D tensors. Return x + y \nif x < y and x - y otherwise.\nQ2. Let x and y be 0-D int32 tensors randomly selected from 0 to 5. Return x + y 2 if x < y, x - y elif x > y, 0 otherwise.\nQ3. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as X. Return a boolean tensor that yields True if X equals Y elementwise.\nLogical Operators\nQ4. Given x and y below, return the truth value x AND/OR/XOR y element-wise.",
"x = tf.constant([True, False, False], tf.bool)\ny = tf.constant([True, True, False], tf.bool)\n",
"Q5. Given x, return the truth value of NOT x element-wise.",
"x = tf.constant([True, False, False], tf.bool)\n",
"Comparison Operators\nQ6. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as x. Return a boolean tensor that yields True if X does not equal Y elementwise.\nQ7. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as X. Return a boolean tensor that yields True if X is greater than or equal to Y elementwise.\nQ8. Let X be a tensor [[1, 2], [3, 4]], Y be a tensor [[5, 6], [7, 8]], and Z be a boolean tensor [[True, False], [False, True]]. Create a 2*2 tensor such that each element corresponds to X if Z is True, otherise Y."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dsiufl/2015-Fall-Hadoop | instructor-notes/1-hadoop-streaming-py-wordcount.ipynb | mit | [
"Hadoop Short Course\n1. Hadoop Distributed File System\nHadoop Distributed File System (HDFS)\nHDFS is the primary distributed storage used by Hadoop applications. A HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. The HDFS Architecture Guide describes HDFS in detail. To learn more about the interaction of users and administrators with HDFS, please refer to HDFS User Guide. \nAll HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands. For all the commands, please refer to HDFS Commands Reference\nStart HDFS",
"hadoop_root = '/home/ubuntu/shortcourse/hadoop-2.7.1/'\nhadoop_start_hdfs_cmd = hadoop_root + 'sbin/start-dfs.sh'\nhadoop_stop_hdfs_cmd = hadoop_root + 'sbin/stop-dfs.sh'\n\n# start the hadoop distributed file system\n! {hadoop_start_hdfs_cmd}\n\n# show the jave jvm process summary\n# You should see NamenNode, SecondaryNameNode, and DataNode\n! jps",
"Normal file operations and data preparation for later example",
"# list recursively everything under the root dir\n! {hadoop_root + 'bin/hdfs dfs -ls -R /'}",
"Download some files for later use.",
"# We will use three ebooks from Project Gutenberg for later example\n# Pride and Prejudice by Jane Austen: http://www.gutenberg.org/ebooks/1342.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/1342.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/pride-and-prejudice.txt\n\n# Alice's Adventures in Wonderland by Lewis Carroll: http://www.gutenberg.org/ebooks/11.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/11.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/alice.txt\n \n# The Adventures of Sherlock Holmes by Arthur Conan Doyle: http://www.gutenberg.org/ebooks/1661.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/1661.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/sherlock-holmes.txt\n\n# delete existing folders\n! {hadoop_root + 'bin/hdfs dfs -rm -R /user/ubuntu/*'}\n\n# create input folder\n! {hadoop_root + 'bin/hdfs dfs -mkdir /user/ubuntu/input'}\n\n# copy the three books to the input folder in HDFS\n! {hadoop_root + 'bin/hdfs dfs -copyFromLocal /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/'}\n\n# show if the files are there\n! {hadoop_root + 'bin/hdfs dfs -ls -R'}",
"2. WordCount Example\nLet's count the single word frequency in the uploaded three books.\nStart Yarn, the resource allocator for Hadoop.",
"# start the hadoop distributed file system\n! {hadoop_root + 'sbin/start-yarn.sh'}\n\n# wordcount 1 the scripts\n# Map: /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py\n# Test locally the map script\n! echo \"go gators gators beat everyone go glory gators\" | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py\n\n# Reduce: /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py\n# Test locally the reduce script\n! echo \"go gators gators beat everyone go glory gators\" | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py | \\\n sort -k1,1 | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py\n\n# run them with Hadoop against the uploaded three books\ncmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \\\n '-input input ' + \\\n '-output output ' + \\\n '-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \\\n '-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py'\n\n! {cmd}\n\n# list the output\n! {hadoop_root + 'bin/hdfs dfs -ls -R output'}\n\n# Let's see what's in the output file\n# delete if previous results exist\n! rm -rf /home/ubuntu/shortcourse/tmp/*\n! {hadoop_root + 'bin/hdfs dfs -copyToLocal output/part-00000 /home/ubuntu/shortcourse/tmp/wc1-part-00000'}\n! tail -n 20 /home/ubuntu/shortcourse/tmp/wc1-part-00000",
"3. Exercise: WordCount2\nCount the single word frequency, where the words are given in a pattern file. \nFor example, given pattern.txt file, which contains: \n\"a b c d\"\n\nAnd the input file is: \n\"d e a c f g h i a b c d\".\n\nThen the output shoule be:\n\"a 1\n b 1\n c 2\n d 2\"\n\nPlease copy the mapper.py and reduce.py from the first wordcount example to foler \"/home/ubuntu/shortcourse/notes/scripts/wordcount2/\". The pattern file is given in the wordcount2 folder with name \"wc2-pattern.txt\"\nHint:\n1. pass the pattern file using \"-file option\" and use -cmdenv to pass the file name as environment variable\n2. in the mapper, read the pattern file into a set\n3. only print out the words that exist in the set",
"# execise: count the words existing in the given pattern file for the three books\n\ncmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \\\n '-cmdenv PATTERN_FILE=wc2-pattern.txt ' + \\\n '-input input ' + \\\n '-output output2 ' + \\\n '-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount2/mapper.py ' + \\\n '-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount2/reducer.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/mapper.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/reducer.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt'\n\n! {cmd}",
"Verify Results\n\nCopy the output file to local\n\nrun the following command, and compare with the downloaded output\nsort -nrk 2,2 part-00000 | head -n 20\n\n\nThe wc1-part-00000 is the output of the previous wordcount (wordcount1)",
"! rm -rf /home/ubuntu/shortcourse/tmp/wc2-part-00000\n! {hadoop_root + 'bin/hdfs dfs -copyToLocal output2/part-00000 /home/ubuntu/shortcourse/tmp/wc2-part-00000'}\n! cat /home/ubuntu/shortcourse/tmp/wc2-part-00000 | sort -nrk2,2\n\n! sort -nr -k2,2 /home/ubuntu/shortcourse/tmp/wc1-part-00000 | head -n 20\n\n# stop dfs and yarn\n!{hadoop_root + 'sbin/stop-yarn.sh'}\n# don't stop hdfs for now, later use\n# !{hadoop_stop_hdfs_cmd}"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ContinuumIO/cube-explorer | doc/Introductory_Tutorial.ipynb | bsd-3-clause | [
"The Iris cubes used in this notebook are publicly available in the SciTools/iris-sample-data repository. This notebook is based on the user story described in Issue #7 with the following image suggested for inspiration.",
"import datetime\nimport iris\nimport numpy as np\nimport holoviews as hv\nimport holocube as hc\nfrom cartopy import crs\nfrom cartopy import feature as cf\nhv.notebook_extension()",
"Setting some notebook-wide options\nLet's start by setting some normalization options (discussed below) and always enable colorbars for the elements we will be displaying:",
"iris.FUTURE.strict_grib_load = True\n%opts Image {+framewise} [colorbar=True] Contours [colorbar=True] {+framewise} Curve [xrotation=60]",
"Note that it is easy to set global defaults for a project allowing any suitable settings can be made into a default on a per-element basis. Now lets specify the maximum number of frames we will be displaying:",
"%output max_frames=1000 ",
"<div class=\"alert alert-info\" role=\"alert\">When working on a live server append ``widgets='live'`` to the line above for greatly improved performance and memory usage </div>\n\nLoading our first cube\nHere is the summary of the first cube containing some surface temperature data:",
"iris_cube = iris.load_cube(iris.sample_data_path('GloSea4', 'ensemble_001.pp'))\niris_cube.coord('latitude').guess_bounds()\niris_cube.coord('longitude').guess_bounds()\n\nprint iris_cube.summary()",
"Now we can wrap this Iris cube in a HoloCube:",
"surface_temperature = hc.HoloCube(iris_cube)\nsurface_temperature",
"A Simple example\nHere is a simple example of viewing the surface_temperature cube over time with a single line of code. In HoloViews, this datastructure is a HoloMap of Image elements:",
"surface_temperature.to.image(['longitude', 'latitude'])",
"You can drag the slider to view the surface temperature at different times. Here is how you can view the values of time in the cube via the HoloViews API:",
"surface_temperature.dimension_values('time')",
"The times shown in the slider are long making the text rather small. We can use the fact that all times are recorded in the year 2011 on the 16th of each month to shorten these dates. Defining how all dates should be formatted as follows will help with readability:",
"hv.Dimension.type_formatters[datetime.datetime] = \"%m/%y %Hh\"",
"Now let us load a cube showing the pre-industrial air temperature:",
"air_temperature = hc.HoloCube(iris.load_cube(iris.sample_data_path('pre-industrial.pp')),\n group='Pre-industrial air temperature')\nair_temperature.data.coord('longitude').guess_bounds()\nair_temperature.data.coord('latitude').guess_bounds()\nair_temperature # Use air_temperature.data.summary() to see the Iris summary (.data is the Iris cube)",
"Note that we have the air_temperature available over longitude and latitude but not the time dimensions. As a result, this cube is a single frame when visualized as a temperature map.",
"(surface_temperature.to.image(['longitude', 'latitude'])+\n air_temperature.to.image(['longitude', 'latitude'])(plot=dict(projection=crs.PlateCarree())))",
"Next is a fairly involved example that plots data side-by-side in a Layout without using the + operator. \nThis shows how complex plots can be generated with little code and also demonstrates how different HoloViews elements can be combined together. In the following visualization, the curve is a sample of the surface_temperature at longitude and latitude (0,10):",
"%%opts Layout [fig_inches=(12,7)] Curve [aspect=2 xticks=4 xrotation=20] Points (color=2) Overlay [aspect='equal']\n%%opts Image [projection=crs.PlateCarree()]\n# Sample the surface_temperature at (0,10)\ntemp_curve = surface_temperature.to.curve('time', dynamic=True)[0, 10]\n# Show surface_temperature and air_temperature with Point (0,10) marked\ntemp_maps = [cb.to.image(['longitude', 'latitude']) * hc.Points([(0,10)]) \n for cb in [surface_temperature, air_temperature]]\n# Show everything in a two column layout\nhv.Layout(temp_maps + [temp_curve]).cols(2).display('all')",
"Overlaying data and normalization\nLets view the surface temperatures together with the global coastline:",
"cf.COASTLINE.scale='1000m'\n\n%%opts Image [projection=crs.Geostationary()] (cmap='Greens')\nsurface_temperature.to.image(['longitude', 'latitude']) * hc.GeoFeature(cf.COASTLINE)",
"Notice that every frame uses the full dynamic range of the Greens color map. This is because normalization is set to +framewise at the top of the notebook which means every frame is normalized independently. \nTo control normalization, we need to decide on the normalization limits. Let's see the maximum temperature in the cube:",
"max_surface_temp = surface_temperature.data.data.max()\nmax_surface_temp\n\n%%opts Image [projection=crs.Geostationary()] (cmap='Greens')\n# Declare a humidity dimension with a range from 0 to 0.01\nsurface_temp_dim = hv.Dimension('surface_temperature', range=(300, max_surface_temp))\n# Use it to declare the value dimension of a HoloCube\n(hc.HoloCube(surface_temperature, vdims=[surface_temp_dim]).to.image(['longitude', 'latitude']) * hc.GeoFeature(cf.COASTLINE))",
"By specifying the normalization range we can reveal different aspects of the data. In the example above we can see a warming effect over time as the dark green areas close to the bottom of the normalization range (200K) vanish. Values outside this range are clipped to the ends of the color map.\nLastly, here is a demo of a conversion from surface_temperature to Contours:",
"%%opts Contours [levels=10]\n(surface_temperature.to.contours(['longitude', 'latitude']) * hc.GeoFeature(cf.COASTLINE))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
celiasmith/syde556 | SYDE 556 Lecture 9 Action Selection.ipynb | gpl-2.0 | [
"SYDE 556/750: Simulating Neurobiological Systems\nReadings: Stewart et al.\nBiological Cognition -- Control\n\n\nLots of contemporary neural models are quite simple\n\nWorking memory, vision, audition, perceptual decision making, oscillations, etc.\n\n\n\nWhat happens when our models get more complex?\n\nI.e., what happens when the models:\nSwitch modalities?\nHave a complex environment?\nHave limited resources?\nCan't do everything at once?\n\n\n\n\n\nThe brain needs a way to determine how to best use the finite resources it has.\n\n\nThink about what happens when:\n\nYou have two targets to reach to at once (or 3 or 4)\nYou want to get to a goal that requires a series of actions\nYou don't know what the target is, but you know what modality it will be in\nYou don't know what the target will be, but you know where it will be\n\n\n\nIn all these cases, your brain needs to control the flow of information through it to solve the task.\n\nChapter 5 of How to build a brain is focussed on relevant neural models.\nThat chapter distinguishes two aspects of control:\ndetermining what an appropriate control signal is\napplying that signal to change the system\n\n\nThe first is a kind of decision making called 'action selection'\nThe second is more of an implementational question about how to gate information effectively (we've seen several possibilities for this already; e.g. inhibition, multiplication)\nThis lecture focusses on the first aspect of control\n\nAction Selection and the Basal Ganglia\n\nActions can be many different things\nphysical movements\nmoving attention\nchanging contents of working memory \nrecalling items from long-term memory\n\n\n\nAction Selection\n\nHow can we do this?\nSuppose we're a critter that's trying to survive in a harsh environment\nWe have a bunch of different possible actions\ngo home\nmove randomly\ngo towards food\ngo away from predators\n\n\nWhich one do we pick?\nIdeas?\n\n\n\nReinforcement Learning\n\nReinforcement learning is a biologically inspired computational approach to machine learning. 
It is based on the idea that creatures maximize reward, which seems to be the case (see, e.g., the Rescorla-Wagner model of Pavlov's experiments).\nThere have been a lot of interesting connections found between signals in these models and signals in the brain.\nSo, let's steal a simple idea from reinforcement learning:\nEach action has a utility $Q$ that depends on the current state $s$\n$Q(s, a)$ (the action value)\n\n\n\nThe best action will then be the action that has the largest $Q$\n\n\nNote\n\nLots of different variations on this\n$V(s)$ (the state value - expected reward given a state & policy)\nSoftmax: $p(a_i) = e^{Q(s, a_i)/T} / \\sum_i e^{Q(s, a_i)/T}$ (instead of max)\n\n\nIn RL research, people come up with learning algorithms for adjusting $Q$ based on rewards\nWe won't worry about that for now (see the lecture on learning) and just use the basic idea\nThere's some sort of state $s$\nFor each action $a_i$, compute $Q(s, a_i)$ which is a function that we can define\nTake the biggest $Q$ and perform that action\n\n\n\nImplementation\n\nOne group of neurons to represent the state $s$\n\nOne group of neurons for each action's utility $Q(s, a_i)$\n\nOr one large group of neurons for all the $Q$ values\n\n\n\nWhat should the output be?\n\nWe could have $index$, which is the index $i$ of the action with the largest $Q$ value\nOr we could have something like $[0,0,1,0]$, indicating which action is selected\nAdvantages and disadvantages?\n\n\nThe second option seems easier if we consider that we have to do action execution next...\n\nA Simple Example\n\nState $s$ is 2-dimensional (x,y plane)\nFour actions (A, B, C, D)\n\nDo action A if $s$ is near [1,0], B if near [-1,0], C if near [0,1], D if near [0,-1]\n\n$Q(s, a_A)=s \\cdot [1,0]$\n$Q(s, a_B)=s \\cdot [-1,0]$\n$Q(s, a_C)=s \\cdot [0,1]$\n$Q(s, a_D)=s \\cdot [0,-1]$\n\n\n\nREMINDER: COURSE EVALUATION STUFF!",
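"Before the neural implementation below, the underlying arithmetic can be sketched directly in NumPy (this snippet is an added illustration, not part of the original lecture code):",
"import numpy as np\n\n# plain-NumPy sketch of the action-value idea\npreferred = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # one row per action\ns_now = np.array([0.8, 0.3])                              # an example state\nQ = preferred.dot(s_now)                                  # Q(s, a_i) = s . preferred_i\nprint(Q)\nprint(np.argmax(Q))  # index of the selected (largest-Q) action",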
"%pylab inline\nimport nengo\n\nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])\n \n s = nengo.Ensemble(200, dimensions=2)\n Q_A = nengo.Ensemble(50, dimensions=1)\n Q_B = nengo.Ensemble(50, dimensions=1)\n Q_C = nengo.Ensemble(50, dimensions=1)\n Q_D = nengo.Ensemble(50, dimensions=1)\n\n nengo.Connection(s, Q_A, transform=[[1,0]])\n nengo.Connection(s, Q_B, transform=[[-1,0]])\n nengo.Connection(s, Q_C, transform=[[0,1]])\n nengo.Connection(s, Q_D, transform=[[0,-1]])\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qa_p = nengo.Probe(Q_A)\n qb_p = nengo.Probe(Q_B)\n qc_p = nengo.Probe(Q_C)\n qd_p = nengo.Probe(Q_D)\n s_p = nengo.Probe(s)\n \nsim = nengo.Simulator(model)\nsim.run(3.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend()\n\nfigure(figsize=(8,8))\nplot(t, sim.data[qa_p], label='Q_A')\nplot(t, sim.data[qb_p], label='Q_B')\nplot(t, sim.data[qc_p], label='Q_C')\nplot(t, sim.data[qd_p], label='Q_D')\nlegend(loc='best');",
"That behavior makes a lot of sense\n\nThe highest Q happens when an action's 'favorite state' (i.e. when the transform is equal to state) is in s\n\n\n\nIt's annoying to have all those separate $Q$ neurons\n\nPerfect opportunity to use the EnsembleArray again (see last lecture)\nDoesn't change the model at all\nIt just groups things together for you",
"import nengo\n\nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n s_p = nengo.Probe(s)\n \nsim = nengo.Simulator(model)\nsim.run(3.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend()\n\nfigure(figsize=(8,8))\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best');",
"Yay, Network Arrays make shorter code!\n\n\nBack to the model: How do we implement the $max$ function?\n\nWell, it's just a function, so let's implement it\nNeed to combine all the $Q$ values into one 4-dimensional ensemble\nWhy?",
"import nengo\n\ndef maximum(x):\n result = [0,0,0,0]\n result[np.argmax(x)] = 1\n return result\n\nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n Qall = nengo.Ensemble(400, dimensions=4)\n Action = nengo.Ensemble(200, dimensions=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(Qs.output, Qall)\n nengo.Connection(Qall, Action, function=maximum)\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n action_p = nengo.Probe(Action)\n s_p = nengo.Probe(s)\n \nsim = nengo.Simulator(model)\nsim.run(3.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend()\n\nfigure()\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[action_p], label='Action')\nlegend(loc='best');",
"Not so great (it looks pretty much the same as the linear case)\nVery nonlinear function, so neurons are not able to approximate it well\nOther options?\n\nThe Standard Neural Network Approach (modified)\n\nIf you give this problem to a standard neural networks person, what would they do?\nThey'll say this is exactly what neural networks are great at\nImplement this with mutual inhibition and self-excitation\n\n\nNeural competition\n4 \"neurons\"\nhave excitation from each neuron back to themselves\nhave inhibition from each neuron to all the others\n\n\nNow just put in the input and wait for a while and it will stablize to one option\nCan we do that?\nSure! Just replace each \"neuron\" with a group of neurons, and compute the desired function on those connections\nnote that this is a very general method of converting any non-realistic neural model into a biologically realistic spiking neuron model (though often you can do a one-for-one neuron conversion as well)",
"import nengo\n\nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(lambda t: [.5,.4] if t <1. else [0,0] )\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n \n e = 0.1\n i = -1\n\n recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]] \n \n nengo.Connection(Qs.output, Qs.input, transform=recur)\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n s_p = nengo.Probe(s)\n \nsim = nengo.Simulator(model)\nsim.run(1.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend()\n\nfigure()\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best');",
"Oops, that's not quite right\nWhy is it selecting more than one action?",
"import nengo\n\nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(lambda t: [.5,.4] if t <1. else [0,0] )\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(Qs.output, Action.input)\n \n e = 0.1\n i = -1\n\n recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]] \n\n # Let's force the feedback connection to only consider positive values\n def positive(x):\n if x[0]<0: return [0]\n else: return x\n pos = Action.add_output('positive', positive)\n \n nengo.Connection(pos, Action.input, transform=recur)\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n action_p = nengo.Probe(Action.output)\n s_p = nengo.Probe(s)\n \nsim = nengo.Simulator(model)\nsim.run(1.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[action_p], label='Action')\nlegend(loc='best');",
"Now we only influence other Actions when we have a positive value\nNote: Is there a more neurally efficient way to do this?\n\n\nMuch better\nSelects one action reliably\nBut still gives values smaller than 1.0 for the output a lot\nCan we fix that?\nWhat if we adjust e?",
"%pylab inline\nimport nengo\n\ndef stimulus(t):\n if t<.3:\n return [.5,.4]\n elif .3<t<.5:\n return [.4,.5]\n else:\n return [0,0] \n \nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(stimulus)\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(Qs.output, Action.input)\n \n e = .5\n i = -1\n\n recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]] \n\n # Let's force the feedback connection to only consider positive values\n def positive(x):\n if x[0]<0: return [0]\n else: return x\n pos = Action.add_output('positive', positive)\n \n nengo.Connection(pos, Action.input, transform=recur)\n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n action_p = nengo.Probe(Action.output)\n s_p = nengo.Probe(s)\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/action_selection.py.cfg\")\n\nsim = nengo.Simulator(model)\nsim.run(1.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[action_p], label='Action')\nlegend(loc='best');",
"That seems to introduce a new problem\nThe self-excitation is so strong that it can't respond to changes in the input\nIndeed, any method like this is going to have some form of memory effects\nNotice that what has been implemented is an integrator (sort of)\n\n\nCould we do anything to help without increasing e too much?",
"%pylab inline\nimport nengo\n\ndef stimulus(t):\n if t<.3:\n return [.5,.4]\n elif .3<t<.5:\n return [.3,.5]\n else:\n return [0,0] \n \nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(stimulus)\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(Qs.output, Action.input)\n \n e = 0.2\n i = -1\n\n recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]] \n\n def positive(x):\n if x[0]<0: return [0]\n else: return x\n pos = Action.add_output('positive', positive)\n \n nengo.Connection(pos, Action.input, transform=recur)\n \n def select(x):\n if x[0]>=0: return [1]\n else: return [0]\n sel = Action.add_output('select', select)\n \n aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(sel, aValues.input) \n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n action_p = nengo.Probe(Action.output)\n aValues_p = nengo.Probe(aValues.output)\n s_p = nengo.Probe(s)\n\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/action_selection2.py.cfg\")\n\nsim = nengo.Simulator(model)\nsim.run(1.)\n\nt = sim.trange()\n\nplot(t, sim.data[s_p], label=\"state\")\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[qs_p], label='Qs')\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[action_p], label='Action')\nlegend(loc='best')\n\nfigure()\nplot(t, sim.data[aValues_p], label='Action Values')\nlegend(loc='best');",
"Better behaviour\nBut there's still situations where there's too much memory (see the visualizer)\nWe can reduce this by reducing e",
"%pylab inline\nimport nengo\n\ndef stimulus(t):\n if t<.3:\n return [.5,.4]\n elif .3<t<.5:\n return [.3,.5]\n else:\n return [0,0] \n \nmodel = nengo.Network('Selection')\n\nwith model:\n stim = nengo.Node(stimulus)\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n nengo.Connection(Qs.output, Action.input)\n \n e = 0.1\n i = -1\n\n recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]] \n\n def positive(x):\n if x[0]<0: return [0]\n else: return x\n pos = Action.add_output('positive', positive)\n \n nengo.Connection(pos, Action.input, transform=recur)\n \n def select(x):\n if x[0]>=0: return [1]\n else: return [0]\n sel = Action.add_output('select', select)\n \n aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)\n \n nengo.Connection(sel, aValues.input) \n nengo.Connection(stim, s)\n \n model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)\n qs_p = nengo.Probe(Qs.output)\n action_p = nengo.Probe(Action.output)\n aValues_p = nengo.Probe(aValues.output)\n s_p = nengo.Probe(s)\n \n#sim = nengo.Simulator(model)\n#sim.run(1.)\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_simple1.py.cfg\")",
"Much less memory, but it's still there\nAnd slower to respond to changes\nNote that this speed is dependent on $e$, $i$, and the time constant of the neurotransmitter used\n\nCan be hard to find good values\n\n\nAnd this gets harder to balance as the number of actions increases\n\nAlso hard to balance for a wide range of $Q$ values\n(Does it work for $Q$=[0.9, 0.9, 0.95, 0.9] and $Q$=[0.2, 0.2, 0.25, 0.2]?)\n\n\n\n\n\nBut this is still a pretty standard approach\n\nNice and easy to get working for special cases\nDon't really need the NEF (if you're willing to assume non-realistic non-spiking neurons)\n(Although really, if you're not looking for biological realism, why not just compute the max function?)\n\n\n\nExample: OReilly, R.C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.\n\nLeabra\n\n\n\nThey tend to use a \"kWTA\" (k-Winners Take All) approach in their models\n\nSet up inhibition so that only $k$ neurons will be active\nBut since that's complex to do, just do the math instead of doing the inhibition\nWe think that doing it their way means that the dynamics of the model will be wrong (i.e. all the effects we saw above are being ignored).\n\n\n\nAny other options?\n\n\nBiology\n\nLet's look at the biology\nWhere is this action selection in the brain?\nGeneral consensus: the basal ganglia\n\n<img src=\"files/lecture_selection/basal_ganglia.jpg\" width=\"500\">\n\nPretty much all of cortex connects in to this area (via the striatum)\n\nOutput goes to the thalamus, the central routing system of the brain\n\n\nDisorders of this area of the brain cause problems controlling actions:\n\nParkinson's disease\nNeurons in the substantia nigra die off\nExtremely difficult to trigger actions to start\nUsually physical actions; as disease progresses and more of the SNc is gone, can get cognitive effects too\n\n\nHuntington's disease\nNeurons in the striatum die off\nActions are triggered inappropriately (disinhibition)\nSmall uncontrollable movements\nTrouble sequencing cognitive actions too\n\n\n\n\n\nAlso heavily implicated in reinforcement learning\n\nThe dopamine levels seem to map onto reward prediction error\nHigh levels when get an unexpected reward, low levels when didn't get a reward that was expected\n\n\n\n<img src=\"files/lecture_selection/dopamine.png\" width=\"500\">\n\nConnectivity diagram:\n\n<img src=\"files/lecture_selection/basal_ganglia2.gif\" width=\"500\">\n\nOld terminology:\n\"direct\" pathway: cortex -> striatum -> GPi -> thalamus\n\"indirect\" pathway: cortex -> striatum -> GPe -> STN -> GPi -> thalamus\n\n\n\nThen they found:\n\n\"hyperdirect\" pathway: cortex -> STN -> GPi -> thalamus\nand lots of other connections\n\n\n\nActivity in the GPi (output)\n\ngenerally always active\nneurons stop firing when corresponding action is chosen\nrepresenting [1, 1, 0, 1] instead of [0, 0, 1, 0]\n\n\n\nLeabra approach\n\nEach action has two groups of neurons in the striatum representing $Q(s, a_i)$ and $1-Q(s, a_i)$ (\"go\" and \"no go\")\nMutual inhibition causes only one of the \"go\" and one of the \"no go\" groups to fire\nGPi neuron get connections from \"go\" neurons, with value multiplied by -1 (direct pathway)\nGPi also gets connections from \"no go\" neurons, but multiplied by -1 (striatum->GPe), then -1 again (GPe->STN), then +1 (STN->GPi)\nResult in GPi is close to [1, 1, 0, 1] form\n\n\nSeems to match onto the biology okay\nBut why the weird double-inverting thing? 
Why not skip the GPe and STN entirely?\nAnd why split into \"go\" and \"no-go\"? Just the direct pathway on its own would be fine\nMaybe it's useful for some aspect of the learning...\nWhat about all those other connections?\n\n\n\nAn alternate model of the Basal Ganglia\n\nMaybe the weird structure of the basal ganglia is an attempt to do action selection without doing mutual inhibition\nNeeds to select from a large number of actions\n\nNeeds to do so quickly, and without the memory effects\n\n\nGurney, Prescott, and Redgrave, 2001\n\n\nLet's start with a very simple version\n\n\n<img src=\"files/lecture_selection/gpr1.png\">\n\nSort of like an \"unrolled\" version of one step of mutual inhibition\n\nNote that both A and B have surround inhibition and local excitation that is 'flipped' (in slightly different ways) on the way to the output\n\n\nUnfortunately this doesn't easily map onto the basal ganglia because of the diffuse inhibition needed from cortex to what might be the striatum (the first layer). Instead, we can get similar functionality using something like the following\n\nNotice the importance of the hyperdirect pathway (from cortex to STN).\n\n<img src=\"files/lecture_selection/gpr2.png\">\n\nBut that's only going to work for very specific $Q$ values. (Here, the winning option is the sum of the losing ones)\n\nNeed to dynamically adjust the amount of +ve and -ve weighting\n\n\nHere the GPe is adjusting the weighting by monitoring STN & D2 activity.\n\nNotice that the GPe gets the same inputs as GPi, but projects back to STN, to 'regulate' the action selection.\n\n<img src=\"files/lecture_selection/gpr3.png\">\n\nThis turns out to work surprisingly well\nBut extremely hard to analyze its behaviour\n\nThey showed that it qualitatively matches pretty well\n\n\nSo what happens if we convert this into realistic spiking neurons?\n\nUse the same approach where one \"neuron\" in their model is a pool of neurons in the NEF\nThe \"neuron model\" they use was rectified linear\nThat becomes the function the decoders are computing\n\n\nNeurotransmitter time constants are all known\n$Q$ values are between 0 and 1\nFiring rates max out around 50-100Hz\nEncoders are all positive and thresholds are chosen for efficiency",
"%pylab inline\nmm=1\nmp=1\nme=1\nmg=1\n\n#connection strengths from original model\nws=1\nwt=1\nwm=1\nwg=1\nwp=0.9\nwe=0.3\n\n#neuron lower thresholds for various populations\ne=0.2 \nep=-0.25\nee=-0.2\neg=-0.2\n\nle=0.2\nlg=0.2\n\nD = 10\ntau_ampa=0.002\ntau_gaba=0.008\nN = 50\nradius = 1.5\n\nimport nengo\nfrom nengo.dists import Uniform\n\nmodel = nengo.Network('Basal Ganglia', seed=4)\n\nwith model:\n stim = nengo.Node([0]*D)\n\n StrD1 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e,1), \n encoders=Uniform(1,1), radius=radius)\n StrD2 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e,1), \n encoders=Uniform(1,1), radius=radius)\n STN = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ep,1), \n encoders=Uniform(1,1), radius=radius)\n GPi = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(eg,1), \n encoders=Uniform(1,1), radius=radius)\n GPe = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ee,1), \n encoders=Uniform(1,1), radius=radius)\n\n nengo.Connection(stim, StrD1.input, transform=ws*(1+lg), synapse=tau_ampa)\n nengo.Connection(stim, StrD2.input, transform=ws*(1-le), synapse=tau_ampa)\n nengo.Connection(stim, STN.input, transform=wt, synapse=tau_ampa)\n \n def func_str(x): #relu-like function\n if x[0]<e: return 0\n return mm*(x[0]-e)\n strd1_out = StrD1.add_output('func_str', func_str)\n strd2_out = StrD2.add_output('func_str', func_str)\n \n nengo.Connection(strd1_out, GPi.input, transform=-wm, synapse=tau_gaba)\n nengo.Connection(strd2_out, GPe.input, transform=-wm, synapse=tau_gaba)\n \n def func_stn(x):\n if x[0]<ep: return 0\n return mp*(x[0]-ep)\n stn_out = STN.add_output('func_stn', func_stn)\n \n tr=[[wp]*D for i in range(D)] \n nengo.Connection(stn_out, GPi.input, transform=tr, synapse=tau_ampa)\n nengo.Connection(stn_out, GPe.input, transform=tr, synapse=tau_ampa)\n\n def func_gpe(x):\n if x[0]<ee: return 0\n return me*(x[0]-ee)\n gpe_out = GPe.add_output('func_gpe', func_gpe)\n \n nengo.Connection(gpe_out, GPi.input, transform=-we, synapse=tau_gaba)\n nengo.Connection(gpe_out, STN.input, transform=-wg, synapse=tau_gaba)\n\n Action = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(0.2,1), \n encoders=Uniform(1,1))\n bias = nengo.Node([1]*D)\n nengo.Connection(bias, Action.input)\n nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=tau_gaba)\n \n def func_gpi(x):\n if x[0]<eg: return 0\n return mg*(x[0]-eg)\n gpi_out = GPi.add_output('func_gpi', func_gpi)\n \n nengo.Connection(gpi_out, Action.input, transform=-3, synapse=tau_gaba)\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_good2.py.cfg\")",
"Notice that we are also flipping the output from [1, 1, 0, 1] to [0, 0, 1, 0]\n\nMostly for our convenience, but we can also add some mutual inhibition there\n\n\n\nWorks pretty well\n\nScales up to many actions\n\nSelects quickly\n\n\nGets behavioural match to empirical data, including timing predictions (!)\n\nAlso shows interesting oscillations not seen in the original GPR model\nBut these are seen in the real basal ganglia\n\n\n\n<img src=\"files/lecture_selection/gpr-latency.png\">\n\n\nDynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia\n\n\nLet's make sure this works with our original system\n\nTo make it easy to use the basal ganglia, there is a special network constructor\nSince this is a major component of the SPA, it's also in that module",
"%pylab inline\nimport nengo\nfrom nengo.dists import Uniform\n\nmodel = nengo.Network(label='Selection')\n\nD=4\n\nwith model:\n stim = nengo.Node([0,0])\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=D)\n\n nengo.Connection(stim, s)\n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n \n Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), \n encoders=Uniform(1,1))\n bias = nengo.Node([1]*D)\n nengo.Connection(bias, Action.input)\n nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) \n\n basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) \n \n nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)\n nengo.Connection(basal_ganglia.output, Action.input)\n\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_good1.py.cfg\")",
"This system seems to work well\nStill not perfect\n\nMatches biology nicely, because of how we implemented it\n\n\nSome more details on the basal ganglia implementation\n\nall those parameters come from here\n\n\n\n<img src=\"files/lecture_selection/gpr-diagram.png\" width=\"500\">\n\nIn the original model, each action has a single \"neuron\" in each area that responds like this:\n\n$$\ny = \\begin{cases}\n 0 &\\mbox{if } x < \\epsilon \\ \n m(x- \\epsilon) &\\mbox{otherwise} \n \\end{cases}\n$$\n\nThese need to get turned into groups of neurons\nWhat is the best way to do this?\n\n\n\n<img src=\"files/lecture_selection/gpr-tuning.png\">\n\nencoders are all +1\nintercepts are chosen to be $> \\epsilon$\n\nAction Execution\n\nNow that we can select an action, how do we perform it?\nDepends on what the action is\n\nLet's start with simple actions\n\nMove in a given direction\nRemember a specific vector\nSend a particular value as input into a particular cognitive system\n\n\n\nExample:\n\nState $s$ is 2-dimensional\nFour actions (A, B, C, D)\nDo action A if $s$ is near [1,0], B if near [-1,0], C if near [0,1], D if near [0,-1]\n$Q(s, a_A)=s \\cdot [1,0]$\n$Q(s, a_B)=s \\cdot [-1,0]$\n$Q(s, a_C)=s \\cdot [0,1]$\n$Q(s, a_D)=s \\cdot [0,-1]$\n\n\nTo do Action A, set $m=[1,0]$\nTo do Action B, set $m=[-1,0]$\nTo do Action C, set $m=[0,1]$\nTo do Action D, set $m=[0,-1]$",
"%pylab inline\nimport nengo\nfrom nengo.dists import Uniform\n\nmodel = nengo.Network(label='Selection')\n\nD=4\n\nwith model:\n stim = nengo.Node([0,0])\n \n s = nengo.Ensemble(200, dimensions=2)\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n\n nengo.Connection(stim, s)\n nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])\n \n Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), \n encoders=Uniform(1,1))\n bias = nengo.Node([1]*D)\n nengo.Connection(bias, Action.input)\n nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) \n\n basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) \n \n nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)\n nengo.Connection(basal_ganglia.output, Action.input)\n \n motor = nengo.Ensemble(100, dimensions=2)\n nengo.Connection(Action.output[0], motor, transform=[[1],[0]])\n nengo.Connection(Action.output[1], motor, transform=[[-1],[0]])\n nengo.Connection(Action.output[2], motor, transform=[[0],[1]])\n nengo.Connection(Action.output[3], motor, transform=[[0],[-1]])\n\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_good3.py.cfg\")",
"What about more complex actions?\nConsider a simple creature that goes where it's told, or runs away if it's scared\nAction 1: set $m$ to the direction it's told to do\nAction 2: set $m$ to the direction we started from\n\n\nNeed to pass information from one group of neurons to another\nBut only do this when the action is chosen\nHow?\n\n\nWell, let's use a function\n$m = a \\times d$\nwhere $a$ is the action selection (0 for not selected, 1 for selected)\n\n\nLet's try that with the creature",
"%pylab inline\nimport nengo\nfrom nengo.dists import Uniform\n\nmodel = nengo.Network('Creature')\n\nwith model:\n stim = nengo.Node([0,0], label='stim')\n command = nengo.Ensemble(100, dimensions=2, label='command')\n motor = nengo.Ensemble(100, dimensions=2, label='motor')\n position = nengo.Ensemble(1000, dimensions=2, label='position')\n scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction')\n\n def negative(x):\n return -x[0], -x[1]\n\n nengo.Connection(position, scared_direction, function=negative)\n nengo.Connection(position, position, synapse=.05)\n \n def rescale(x):\n return x[0]*0.1, x[1]*0.1\n nengo.Connection(motor, position, function=rescale)\n nengo.Connection(stim, command)\n \n D=4\n Q_input = nengo.Node([0,0,0,0], label='select')\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n nengo.Connection(Q_input, Qs.input)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), \n encoders=Uniform(1,1))\n bias = nengo.Node([1]*D)\n nengo.Connection(bias, Action.input)\n nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) \n\n basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) \n \n nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)\n nengo.Connection(basal_ganglia.output, Action.input)\n\n do_command = nengo.Ensemble(300, dimensions=3, label='do command')\n\n nengo.Connection(command, do_command[0:2])\n nengo.Connection(Action.output[0], do_command[2])\n \n def apply_command(x):\n return x[2]*x[0], x[2]*x[1]\n nengo.Connection(do_command, motor, function=apply_command)\n \n do_scared = nengo.Ensemble(300, dimensions=3, label='do scared')\n\n nengo.Connection(scared_direction, do_scared[0:2])\n nengo.Connection(Action.output[1], do_scared[2])\n nengo.Connection(do_scared, motor, function=apply_command)\n \n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_creature.py.cfg\")\n#first dimensions activates do_command, i.e. go in the indicated direciton\n#second dimension activates do_scared, i.e. return 'home' (0,0)\n#creature tracks the position it goes to (by integrating)\n#creature inverts direction to position via scared direction/do_scared and puts that into motor",
"There's also another way to do this\nA special case for forcing a function to go to zero when a particular group of neurons is active",
"%pylab inline\nimport nengo\nfrom nengo.dists import Uniform\n\nmodel = nengo.Network('Creature')\n\nwith model:\n stim = nengo.Node([0,0], label='stim')\n command = nengo.Ensemble(100, dimensions=2, label='command')\n motor = nengo.Ensemble(100, dimensions=2, label='motor')\n position = nengo.Ensemble(1000, dimensions=2, label='position')\n scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction')\n\n def negative(x):\n return -x[0], -x[1]\n\n nengo.Connection(position, scared_direction, function=negative)\n nengo.Connection(position, position, synapse=.05)\n \n def rescale(x):\n return x[0]*0.1, x[1]*0.1\n nengo.Connection(motor, position, function=rescale)\n nengo.Connection(stim, command)\n \n D=4\n Q_input = nengo.Node([0,0,0,0], label='select')\n Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)\n nengo.Connection(Q_input, Qs.input)\n Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1), \n encoders=Uniform(1,1))\n bias = nengo.Node([1]*D)\n nengo.Connection(bias, Action.input)\n nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008) \n\n basal_ganglia = nengo.networks.BasalGanglia(dimensions=D) \n \n nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)\n nengo.Connection(basal_ganglia.output, Action.input)\n\n do_command = nengo.Ensemble(300, dimensions=2, label='do command')\n nengo.Connection(command, do_command)\n nengo.Connection(Action.output[1], do_command.neurons, transform=-np.ones([300,1]))\n nengo.Connection(do_command, motor)\n \n do_scared = nengo.Ensemble(300, dimensions=2, label='do scared')\n nengo.Connection(scared_direction, do_scared)\n nengo.Connection(Action.output[0], do_scared.neurons, transform=-np.ones([300,1]))\n nengo.Connection(do_scared, motor) \n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_creature2.py.cfg\")",
"This is a situation where it makes sense to ignore the NEF!\nAll we want to do is shut down the neural activity\nSo just do a very inhibitory connection\n\n\n\nThe Cortex-Basal Ganglia-Thalamus loop\n\nWe now have everything we need for a model of one of the primary structures in the mammalian brain\nBasal ganglia: action selection\nThalamus: action execution\nCortex: everything else\n\n\n\n<img src=\"lecture_selection/ctx-bg-thal.png\" width=\"800\"> \n\n\nWe build systems in cortex that give some input-output functionality\n\nWe set up the basal ganglia and thalamus to make use of that functionality appropriately\n\n\n\nExample\n\nCortex stores some state (integrator)\nAdd some state transition rules\nIf in state A, go to state B\nIf in state B, go to state C\nIf in state C, go to state D\n...\n\n\nFor now, let's just have states A, B, C, D, etc be some randomly chosen vectors \n$Q(s, a_i) = s \\cdot a_i$\nThe effect of each action is to input the corresponding vector into the integrator\nThis is the basic loop of the SPA, so we can use that module",
"%pylab inline\nimport nengo\nfrom nengo import spa\n\nD = 16\n\ndef start(t):\n if t < 0.05:\n return 'A'\n else:\n return '0'\n \nmodel = spa.SPA(label='Sequence_Module', seed=5)\n\nwith model:\n model.cortex = spa.Buffer(dimensions=D, label='cortex')\n model.input = spa.Input(cortex=start, label='input') \n\n actions = spa.Actions(\n 'dot(cortex, A) --> cortex = B',\n 'dot(cortex, B) --> cortex = C',\n 'dot(cortex, C) --> cortex = D',\n 'dot(cortex, D) --> cortex = E',\n 'dot(cortex, E) --> cortex = A'\n )\n model.bg = spa.BasalGanglia(actions=actions)\n model.thal = spa.Thalamus(model.bg)\n\n cortex = nengo.Probe(model.cortex.state.output, synapse=0.01)\n actions = nengo.Probe(model.thal.actions.output, synapse=0.01)\n utility = nengo.Probe(model.bg.input, synapse=0.01)\n \nsim = nengo.Simulator(model)\nsim.run(0.5)\n\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/bg_alphabet.py.cfg\")\n\nfig = figure(figsize=(12,8))\np1 = fig.add_subplot(3,1,1)\n\np1.plot(sim.trange(), model.similarity(sim.data, cortex))\np1.legend(model.get_output_vocab('cortex').keys, fontsize='x-small')\np1.set_ylabel('State')\n\np2 = fig.add_subplot(3,1,2)\np2.plot(sim.trange(), sim.data[actions])\np2_legend_txt = [a.effect for a in model.bg.actions.actions]\np2.legend(p2_legend_txt, fontsize='x-small')\np2.set_ylabel('Action')\n\np3 = fig.add_subplot(3,1,3)\np3.plot(sim.trange(), sim.data[utility])\np3_legend_txt = [a.condition for a in model.bg.actions.actions]\np3.legend(p3_legend_txt, fontsize='x-small')\np3.set_ylabel('Utility')\n\nfig.subplots_adjust(hspace=0.2)",
"Behavioural Evidence\n\nIs there any evidence that this is the way it works in brains?\nConsistent with anatomy/connectivity\n\n\n\nWhat about behavioural evidence?\n\n\nA few sources of support\n\n\nTiming data\n\nHow long does it take to do an action?\nThere are lots of existing computational (non-neural) cognitive models that have something like this action selection loop\nUsually all-symbolic\nA set of IF-THEN rules\n\n\ne.g. ACT-R\nUsed to model mental arithmetic, driving a car, using a GUI, air-traffic control, staffing a battleship, etc etc\n\n\nBest fit across all these situations is to set the loop time to 50ms\n\n\n\nHow long does this model take?\n\nNotice that all the timing is based on neural properties, not the algorithm\nDominated by the longer neurotransmitter time constants in the basal ganglia\n\n\n\n<img src=\"files/lecture_selection/timing-simple.png\">\n<center>Simple actions</center>\n<img src=\"files/lecture_selection/timing-complex.png\">\n<center>Complex actions (routing)</center>\n\n\nThis is in the right ballpark\n\n\nBut what about this distinction between the two types of actions?\n\nNot a distinction made in the literature\nBut once we start looking for it, there is evidence\nResolves an outstanding weirdness where some actions seem to take twice as long as others\nStarting to be lots of citations for 40ms for simple tasks \nTask artifacts and strategic adaptation in the change signal task\n\n\n\n\nThis is a nice example of the usefulness of making neural models!\nThis distinction wasn't obvious from computational implementations\n\n\n\nMore complex tasks\n\nLots of complex tasks can be modelled this way\nSome basic cognitive components (cortex)\naction selection system (basal ganglia and thalamus)\n\n\n\nThe tricky part is figuring out the actions\n\n\nExample: the Tower of Hanoi task\n\n3 pegs\nN disks of different sizes on the pegs\nmove from one configuration to another\ncan only move one disk at a time\nno larger disk can be on a smaller disk\n\n\n\n<img src=\"files/lecture_selection/hanoi.png\">\n\ncan we build rules to do this?",
"from IPython.display import YouTubeVideo\nYouTubeVideo('sUvHCs5y0o8', width=640, height=390, loop=1, autoplay=0)",
"How do people do this task?\n\nStudied extensively by cognitive scientists\nSimon (1975):\nFind the largest disk not in its goal position and make the goal to get it in that position. This is the initial “goal move” for purposes of the next two steps. If all disks are in their goal positions, the problem is solved\nIf there are any disks blocking the goal move, find the largest blocking disk (either on top of the disk to be moved or at the destination peg) and make the new goal move to move this blocking disk to the other peg (i.e., the peg that is neither the source nor destination of this disk). The previous goal move is stored as the parent goal of the new goal move. Repeat this step with the new goal move.\nIf there are no disks blocking the goal move perform the goal move and (a) If the goal move had a parent goal retrieve that parent goal, make it the goal move, and go back to step 2. (b) If the goal had no parent goal, go back to step 1.\n\n\n\n\n\nWhat do the actions look like?\n\n\nState:\n\ngoal: what disk am I trying to move (D0, D1, D2)\nfocus: what disk am I looking at (D0, D1, D2)\ngoal_peg: where is the disk I am trying to move (A, B, C)\nfocus_peg: where is the disk I am looking at (A, B, C)\ntarget_peg: where am I trying to move a disk to (A, B, C)\ngoal_final: what is the overall final desired location of the disk I'm trying to move (A, B, C)\n\n\n\nNote: we're not yet modelling all the sensory and memory stuff (e.g. loading) here, so we manually set things like goal_final.\n\n\nAction effects: when an action is selected, it could do the following\n\nset focus\nset goal\nset goal_peg\nactually try to move a disk to a given location by setting move and move_peg\nNote: we're also not modelling the full motor system, so we fake this too, i.e. 
the peg moves when we want it to\n\n\n\n\n\nIs this sufficient to implement the algorithm described above?\n\n\nWhat do the action rules look like?\n\nif focus=NONE then focus=D2, goal=D2, goal_peg=goal_final\nthe antecedent is implemented as $Q$=focus $\\cdot$ NONE \n\n\nif focus=D2 and goal=D2 and goal_peg!=target_peg then focus=D1\nthe antecedent is implemented as $Q$=focus $\\cdot$ D2 + goal $\\cdot$ D2 - goal_peg $\\cdot$ target_peg\n\n\nif focus=D2 and goal=D2 and goal_peg==target_peg then focus=D1, goal=D1, goal_peg=goal_final\nif focus=D1 and goal=D1 and goal_peg!=target_peg then focus=D0\nif focus=D1 and goal=D1 and goal_peg==target_peg then focus=D0, goal=D0, goal_peg=goal_final\nif focus=D0 and goal_peg==target_peg then focus=NONE\nif focus=D0 and goal=D0 and goal_peg!=target_peg then focus=NONE, move=D0, move_peg=target_peg\nif focus!=goal and focus_peg==goal_peg and target_peg!=focus_peg then goal=focus, goal_peg=A+B+C-target_peg-focus_peg\ntrying to move something, but smaller disk is on top of this one\n\n\nif focus!=goal and focus_peg!=goal_peg and target_peg==focus_peg then goal=focus, goal_peg=A+B+C-target_peg-goal_peg\ntrying to move something, but smaller disk is on top of target peg\n\n\nif focus=D0 and goal!=D0 and target_peg!=focus_peg and target_peg!=goal_peg and focus_peg!=goal_peg then move=goal, move_peg=target_peg\nmove the disk, since there's nothing in the way\n\n\nif focus=D1 and goal!=D1 and target_peg!=focus_peg and target_peg!=goal_peg and focus_peg!=goal_peg then focus=D0\ncheck the next disk\n\n\n\n\n\nSufficient to solve any version of the problem\n\nIs it what people do?\n\nHow can we tell?\n\n\nDo science\n\nWhat predictions does the theory make\nErrors?\nReaction times?\nNeural activity?\nfMRI?\n\n\n\nTiming:\n\n\n<img src=\"files/lecture_selection/hanoi-timing.png\">\n\nThe model doesn't quite capture some aspects of the human data\nMuch longer pauses in some situations than there should be\nAt those stages, it is recomputing plans that it has made previously\nPeople probably remember those, and don't restart from scratch\nNeed to add that into the model\n\n\nNeural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task"
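, "The production rules above map naturally onto the spa.Actions syntax used in the alphabet example earlier. As a rough sketch only (this is not the actual Tower of Hanoi model; the buffer names and dimensionality below are assumptions for illustration), the first rule might look something like this:\n\n```python\nimport nengo\nfrom nengo import spa\n\nD = 32  # dimensionality of the semantic pointers (assumed)\n\nmodel = spa.SPA(label='Hanoi sketch')\nwith model:\n    # hypothetical state buffers; the full model needs several more\n    model.focus = spa.Buffer(dimensions=D)\n    model.goal = spa.Buffer(dimensions=D)\n    model.goal_peg = spa.Buffer(dimensions=D)\n    model.goal_final = spa.Buffer(dimensions=D)\n\n    actions = spa.Actions(\n        # if focus=NONE then focus=D2, goal=D2, goal_peg=goal_final\n        'dot(focus, NONE) --> focus=D2, goal=D2, goal_peg=goal_final',\n    )\n    model.bg = spa.BasalGanglia(actions=actions)\n    model.thal = spa.Thalamus(model.bg)\n```\n\nConditions that compare two state buffers (for example goal_peg and target_peg in the second rule) need the dot product of two states rather than a state and a fixed pointer, which may need extra machinery such as a spa.Compare module depending on the SPA version; the remaining rules follow the same pattern, and the full model also needs the sensory and motor pieces that set target_peg and carry out move and move_peg."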
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tleonhardt/CodingPlayground | dataquest/DataCleaning/Analyzing_NYC_High_School_Data.ipynb | mit | [
"Read in the data",
"import pandas as pd\nimport numpy as np\nimport re\n\ndata_files = [\"ap_2010.csv\",\n \"class_size.csv\",\n \"demographics.csv\",\n \"graduation.csv\",\n \"hs_directory.csv\",\n \"sat_results.csv\"]\ndata = {}\n\nfor f in data_files:\n d = pd.read_csv(\"../data/schools/{0}\".format(f))\n data[f.replace(\".csv\", \"\")] = d",
"Read in the surveys",
"all_survey = pd.read_csv(\"../data/schools/survey_all.txt\", delimiter=\"\\t\", encoding='windows-1252')\nd75_survey = pd.read_csv(\"../data/schools/survey_d75.txt\", delimiter=\"\\t\", encoding='windows-1252')\nsurvey = pd.concat([all_survey, d75_survey], axis=0)\n\nsurvey[\"DBN\"] = survey[\"dbn\"]\n\nsurvey_fields = [\n \"DBN\", \n \"rr_s\", \n \"rr_t\", \n \"rr_p\", \n \"N_s\", \n \"N_t\", \n \"N_p\", \n \"saf_p_11\", \n \"com_p_11\", \n \"eng_p_11\", \n \"aca_p_11\", \n \"saf_t_11\", \n \"com_t_11\", \n \"eng_t_10\", \n \"aca_t_11\", \n \"saf_s_11\", \n \"com_s_11\", \n \"eng_s_11\", \n \"aca_s_11\", \n \"saf_tot_11\", \n \"com_tot_11\", \n \"eng_tot_11\", \n \"aca_tot_11\",\n]\nsurvey = survey.loc[:,survey_fields]\ndata[\"survey\"] = survey",
"Add DBN columns",
"data[\"hs_directory\"][\"DBN\"] = data[\"hs_directory\"][\"dbn\"]\n\ndef pad_csd(num):\n string_representation = str(num)\n if len(string_representation) > 1:\n return string_representation\n else:\n return \"0\" + string_representation\n \ndata[\"class_size\"][\"padded_csd\"] = data[\"class_size\"][\"CSD\"].apply(pad_csd)\ndata[\"class_size\"][\"DBN\"] = data[\"class_size\"][\"padded_csd\"] + data[\"class_size\"][\"SCHOOL CODE\"]",
"Convert columns to numeric",
"cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']\nfor c in cols:\n data[\"sat_results\"][c] = pd.to_numeric(data[\"sat_results\"][c], errors=\"coerce\")\n\ndata['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]\n\ndef find_lat(loc):\n coords = re.findall(\"\\(.+, .+\\)\", loc)\n lat = coords[0].split(\",\")[0].replace(\"(\", \"\")\n return lat\n\ndef find_lon(loc):\n coords = re.findall(\"\\(.+, .+\\)\", loc)\n lon = coords[0].split(\",\")[1].replace(\")\", \"\").strip()\n return lon\n\ndata[\"hs_directory\"][\"lat\"] = data[\"hs_directory\"][\"Location 1\"].apply(find_lat)\ndata[\"hs_directory\"][\"lon\"] = data[\"hs_directory\"][\"Location 1\"].apply(find_lon)\n\ndata[\"hs_directory\"][\"lat\"] = pd.to_numeric(data[\"hs_directory\"][\"lat\"], errors=\"coerce\")\ndata[\"hs_directory\"][\"lon\"] = pd.to_numeric(data[\"hs_directory\"][\"lon\"], errors=\"coerce\")",
"Condense datasets",
"class_size = data[\"class_size\"]\nclass_size = class_size[class_size[\"GRADE \"] == \"09-12\"]\nclass_size = class_size[class_size[\"PROGRAM TYPE\"] == \"GEN ED\"]\n\nclass_size = class_size.groupby(\"DBN\").agg(np.mean)\nclass_size.reset_index(inplace=True)\ndata[\"class_size\"] = class_size\n\ndata[\"demographics\"] = data[\"demographics\"][data[\"demographics\"][\"schoolyear\"] == 20112012]\n\ndata[\"graduation\"] = data[\"graduation\"][data[\"graduation\"][\"Cohort\"] == \"2006\"]\ndata[\"graduation\"] = data[\"graduation\"][data[\"graduation\"][\"Demographic\"] == \"Total Cohort\"]",
"Convert AP scores to numeric",
"cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']\n\nfor col in cols:\n data[\"ap_2010\"][col] = pd.to_numeric(data[\"ap_2010\"][col], errors=\"coerce\")",
"Combine the datasets",
"combined = data[\"sat_results\"]\n\ncombined = combined.merge(data[\"ap_2010\"], on=\"DBN\", how=\"left\")\ncombined = combined.merge(data[\"graduation\"], on=\"DBN\", how=\"left\")\n\nto_merge = [\"class_size\", \"demographics\", \"survey\", \"hs_directory\"]\n\nfor m in to_merge:\n combined = combined.merge(data[m], on=\"DBN\", how=\"inner\")\n\ncombined = combined.fillna(combined.mean())\ncombined = combined.fillna(0)",
"Add a school district column for mapping",
"def get_first_two_chars(dbn):\n return dbn[0:2]\n\ncombined[\"school_dist\"] = combined[\"DBN\"].apply(get_first_two_chars)",
"Find correlations",
"correlations = combined.corr()\ncorrelations = correlations[\"sat_score\"]\ncorrelations = correlations.dropna()\ncorrelations.sort_values(ascending=False, inplace=True)\n\n# Interesting correlations tend to have r value > .25 or < -.25\ninteresting_correlations = correlations[abs(correlations) > 0.25]\nprint(interesting_correlations)\n\n# Setup Matplotlib to work in Jupyter notebook\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"Survey Correlations",
"# Make a bar plot of the correlations between survey fields and sat_score\ncorrelations[survey_fields].plot.bar(figsize=(9,7))",
"From the survey fields, two stand out due to their significant positive correlations:\n* N_s - Number of student respondents\n* N_p - Number of parent respondents\n* aca_s_11 - Academic expectations score based on student responses\n* saf_s_11 - Safety and Respect score based on student responses\nWhy are some possible reasons that N_s and N_p could matter?\n1. Higher numbers of students and parents responding to the survey may be an indicator that students and parents care more about the school and about academics in general.\n1. Maybe larger schools do better on the SAT and higher numbers of respondents is just indicative of a larger overall student population.\n1. Maybe there is a hidden underlying correlation, say that rich students/parents or white students/parents are more likely to both respond to surveys and to have the students do well on the SAT.\n1. Maybe parents who care more will fill out the surveys and get their kids to fill out the surveys and these same parents will push their kids to study for the SAT.\nSafety and SAT Scores\nBoth student and teacher perception of safety and respect at school correlate significantly with SAT scores. Let's dig more into this relationship.",
"# Make a scatterplot of the saf_s_11 column vs the sat-score in combined\ncombined.plot.scatter(x='sat_score', y='saf_s_11', figsize=(9,5))",
"So a high saf_s_11 student safety and respect score doesn't really have any predictive value regarding SAT score. However, a low saf_s_11 has a very strong correlation with low SAT scores.\nMap out Safety Scores",
"# Find the average values for each column for each school_dist in combined\ndistricts = combined.groupby('school_dist').agg(np.mean)\n\n# Reset the index of districts, making school_dist a column again\ndistricts.reset_index(inplace=True)\n\n# Make a map that shows afety scores by district\nfrom mpl_toolkits.basemap import Basemap\n\nplt.figure(figsize=(8,8))\n# Setup the Matplotlib Basemap centered on New York City\nm = Basemap(projection='merc',\n llcrnrlat=40.496044,\n urcrnrlat=40.915256,\n llcrnrlon=-74.255735,\n urcrnrlon=-73.700272,\n resolution='i')\nm.drawmapboundary(fill_color='white')\nm.drawcoastlines(color='blue', linewidth=.4)\nm.drawrivers(color='blue', linewidth=.4)\n\n# Convert the lat and lon columns of districts to lists\nlongitudes = districts['lon'].tolist()\nlatitudes = districts['lat'].tolist()\n\n# Plot the locations\nm.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True, \n c=districts['saf_s_11'], cmap='summer')\n\n# Add colorbar\n# add colorbar.\ncbar = m.colorbar(location='bottom',pad=\"5%\")\ncbar.set_label('saf_s_11')",
"So it looks like the safest schools are in Manhattan, while the least safe schools are in Brooklyn.\nThis jives with crime statistics by borough \nRace and SAT Scores\nThere are a few columsn that indicate the percentage of each race at a given school:\n* white_per\n* asian_per\n* black_per\n* hispanic_per\nBy plotting out the correlations between these columns and sat_score, we can see if there are any racial differences in SAT performance.",
"# Make a plot of the correlations between racial cols and sat_score\nrace_cols = ['white_per', 'asian_per', 'black_per', 'hispanic_per']\nrace_corr = correlations[race_cols]\nrace_corr.plot(kind='bar')",
"A higher percentage of white and asian students correlates positively with SAT scores and a higher percentage of black or hispanic students correlates negatively with SAT scores. I wouldn't say any of this is suprising. My guess would be that there is an underlying economic factor which is the cause - white and asian neighborhoods probably have a higher median household income and more well funded schools than black or hispanic neighborhoods.",
"# Explore schools with low SAT scores and a high hispanic_per\ncombined.plot.scatter(x='hispanic_per', y='sat_score')",
"The above scatterplot shows that a low hispanic percentage isn't particularly predictive of SAT score. However, a high hispanic percentage is highly predictive of a low SAT score.",
"# Research any schools with a greater than 95% hispanic_per\nhigh_hispanic = combined[combined['hispanic_per'] > 95]\n\n# Find the names of schools from the data\nhigh_hispanic['SCHOOL NAME']",
"The above schools appear to contain a lot of international schools focused on recent immigrants who are learning English as a 2nd language. It makes sense that they would have a harder time on the SAT which is given soley in English.",
"# Research any schools with less than 10% hispanic_per and greater than\n# 1800 average SAT score\nhigh_sat_low_hispanic = combined[(combined['hispanic_per'] < 10) & \n (combined['sat_score'] > 1800)]\nhigh_sat_low_hispanic['SCHOOL NAME']",
"Most of the schools above appear to be specialized science and technology schools which receive extra funding and require students to do well on a standardized test before being admitted. So it is reasonable that students at these schools would have a high average SAT score.\nGender and SAT Scores\nThere are two columns that indicate the percentage of each gender at a school:\n* male_per\n* female_per",
"# Investigate gender differences in SAT scores\ngender_cols = ['male_per', 'female_per']\ngender_corr = correlations[gender_cols]\ngender_corr\n\n# Make a plot of the gender correlations\ngender_corr.plot.bar()",
"In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong.\nMore data would be required before I was wiling to say that this is a significant effect.",
"# Investigate schools with high SAT scores and a high female_per\ncombined.plot.scatter(x='female_per', y='sat_score')",
"The above plot appears to show that either very low or very high percentage of females in a school leads to a low average SAT score. However, a percentage in the range 40 to 80 or so can lead to good scores. There doesn't appear to be a strong overall correlation.",
"# Research any schools with a greater than 60% female_per, and greater\n# than 1700 average SAT score.\nhigh_female_high_sat = combined[(combined['female_per'] > 60) &\n (combined['sat_score'] > 1700)]\nhigh_female_high_sat['SCHOOL NAME']",
"These schools appears to be very selective liberal arts schools that have high academic standards.\nAP Scores vs SAT Scores\nThe Advanced Placement (AP) exams are exams that high schoolers take in order to gain college credit. AP exams can be taken in many different subjects, and passing the AP exam means that colleges may grant you credits.\nIt makes sense that the number of students who took the AP exam in a school and SAT scores would be highly correlated. Let's dig into this relationship more.\nSince total_enrollment is highly correlated with sat_score, we don't want to bias our results, so we'll instead look at the percentage of students in each school who took at least one AP exam.",
"# Compute the percentage of students in each school that took the AP exam\ncombined['ap_per'] = combined['AP Test Takers '] / combined['total_enrollment']\n\n# Investigate the relationship between AP scores and SAT scores\ncombined.plot.scatter(x='ap_per', y='sat_score')",
"It looks like there is a relationship between the percentage of students in a school who take the AP exam, and their average SAT scores. It's not an extremely strong correlation, though.\nI\"m really surprised this relationship isn't stronger. This is rather counter-intuitive.\nNext Steps\nThere is still quite a bit of analysis left to do. Here are some potential next steps\n* Look at free and reduced lunch percentage and SAT scores\n * Combine current dataset with a median household income by school district dataset to see how that correlates (if we can find one)\n* Looking at class size and SAT scores\n* Figuring out the best area to live in based on school performance\n * If we combine this with a property values dataset, we could find the cheapest place where there are good schools\n* Looking into the differences between parent, teacher, and student responses to surveys\n* Assigning a score to schools based on sat_score and other attributes"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cloudmesh/book | notebooks/machinelearning/precisionrecall.ipynb | apache-2.0 | [
"Precision and Recall\nIn machine learning model, we have mentioned that, there is an important concept called metrics. However, for classifications problems, accuracy is one of the metrics. There are other important metrics.\nIn this exercise, we will test our model with new metrics: Precision and Recall\nPlease answer Questions\nTo help you understand precision and recall. Please answer questions below by searching and input your answers.\n\n\nQuestion 1: What is your understanding of these terms: true postive, false postive, true negative, false negative?\n\n\nQuestion 2: What are the relationships between those terms and precision or recall?\n\n\nPlease write down your answer by two simple methematical equation\n\n\n Answer: Please double click the cell and input your answer here.",
"from sklearn import svm, datasets\nfrom sklearn.model_selection import train_test_split\nimport numpy as np",
"Below is an example for how to get precision of your model\n Attention : You need to finish one line of code to implement the whole example.",
"#Let's load iris data again\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\n# Let's split the data to training and testing data.\nrandom_state = np.random.RandomState(0)\nn_samples, n_features = X.shape\nX = np.c_[X, random_state.randn(n_samples, 200 * n_features)]\n\n# Limit to the two first classes, and split into training and test\nX_train, X_test, y_train, y_test = train_test_split(X[y < 2], y[y < 2],\n test_size=.5,\n random_state=random_state)\n\n\n# Create a simple classifier\nclassifier = svm.LinearSVC(random_state=random_state)\n\n# How could we fit the model? Please find your solutions from our example, and write down your code to fit the svm model \n# from training data.\n\n\n# After you have fit the model, then we make predicions.\ny_score = classifier.decision_function(X_test)",
"Get the average precision score, Run the cell below",
"from sklearn.metrics import average_precision_score\naverage_precision = average_precision_score(y_test, y_score)\n\nprint('Average precision-recall score: {0:0.2f}'.format(\n average_precision))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mathemage/h2o-3 | examples/deeplearning/notebooks/deeplearning_tensorflow_mnist.ipynb | apache-2.0 | [
"DeepLearning\nMNIST Dataset using DeepWater and TensorFlow\nThe MNIST database is a well-known academic dataset used to benchmark\nclassification performance. The data consists of 60,000 training images and\n10,000 test images. Each image is a standardized $28^2$ pixel greyscale image of\na single handwritten digit. An example of the scanned handwritten digits is\nshown",
"import h2o\nh2o.init()\n\nimport os.path\nPATH = os.path.expanduser(\"~/h2o-3/\")\n\ntest_df = h2o.import_file(PATH + \"bigdata/laptop/mnist/test.csv.gz\")\n\ntrain_df = h2o.import_file(PATH + \"/bigdata/laptop/mnist/train.csv.gz\")",
"Specify the response and predictor columns",
"y = \"C785\"\nx = train_df.names[0:784]",
"Convert the number to a class",
"train_df[y] = train_df[y].asfactor()\ntest_df[y] = test_df[y].asfactor()",
"Train Deep Learning model and validate on test set\nLeNET 1989\n\nIn this demo you will learn how to use a simple LeNET Model using TensorFlow.\nUsing the LeNET model architecture for training in H2O\nWe are ready to start the training procedure.",
"from h2o.estimators.deepwater import H2ODeepWaterEstimator\n\nlenet_model = H2ODeepWaterEstimator(\n epochs=10,\n learning_rate=1e-3, \n mini_batch_size=64,\n network='lenet', \n image_shape=[28,28],\n problem_type='dataset', ## Not 'image' since we're not passing paths to image files, but raw numbers\n ignore_const_cols=False, ## We need to keep all 28x28=784 pixel values, even if some are always 0\n channels=1,\n backend=\"tensorflow\"\n)\n\nlenet_model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df)\n\nerror = lenet_model.model_performance(valid=True).mean_per_class_error()\nprint \"model error:\", error"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adrianstaniec/deep-learning | 08_transfer-learning/Transfer_Learning.ipynb | mit | [
"Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")",
"Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.",
"import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()",
"ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6).",
"import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]",
"Below I'm running images through the VGG network in batches.",
"# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 100\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\nvgg = vgg16.Vgg16()\ninput_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\nwith tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n \n\nwith tf.Session() as sess:\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)",
"Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.",
"# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))",
"Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\nFrom scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.",
"from sklearn.preprocessing import LabelBinarizer\nlb = LabelBinarizer()\nlb.fit(labels)\nlabels_vecs = lb.transform(labels)\nlabels_vecs",
"Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices.",
"from sklearn.model_selection import StratifiedShuffleSplit\n\nsss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)\nfor train_index, test_index in sss.split(codes, labels_vecs):\n train_x, rest_x = codes[train_index], codes[test_index]\n train_y, rest_y = labels_vecs[train_index], labels_vecs[test_index]\n\nsss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)\nfor train_index, test_index in sss.split(rest_x, rest_y):\n val_x, test_x = rest_x[train_index], rest_x[test_index]\n val_y, test_y = rest_y[train_index], rest_y[test_index]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)",
"Classifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.",
"from tensorflow import layers\n\ninputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\nfc = tf.layers.dense(inputs_, 2000, activation=tf.nn.relu)\nlogits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None)\n\ncost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=labels_,\n logits=logits))\n\noptimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.",
"def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y",
"Training\nHere, we'll train the network.",
"epochs = 10\nbatches = 100\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n b = 0\n for x, y in get_batches(train_x, train_y, batches):\n feed = {inputs_: x,\n labels_: y}\n batch_cost, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{} \".format(e+1, epochs),\n \"Batch: {}/{} \".format(b+1, batches),\n \"Training loss: {:.4f}\".format(batch_cost))\n b += 1 \n saver.save(sess, \"checkpoints/flowers.ckpt\")",
"Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread",
"Below, feel free to choose images and see how the trained classifier predicts the flowers in them.",
"test_img_path = 'flower_photos/daisy/144603918_b9de002f60_m.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nbatch = []\nwith tf.Session() as sess: \n img = utils.load_image(test_img_path)\n batch.append(img.reshape((1, 224, 224, 3)))\n images = np.concatenate(batch)\n\n feed_dict = {input_: images}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)",
"Find photos that were mistakenly calassified",
"data_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)] \n\nwith tf.Session() as sess: \n saver = tf.train.Saver()\n with tf.Session() as sess2:\n saver.restore(sess2, tf.train.latest_checkpoint('checkpoints'))\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n \n for file in files:\n batch = []\n labels = []\n\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(lb.transform([each])[0])\n images = np.concatenate(batch)\n\n feed_dict = {input_: images}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n\n feed = {inputs_: code, labels_: labels}\n correct, prediction = sess2.run([correct_pred, predicted], feed_dict=feed)\n \n if not correct[0]:\n #test_img = imread(os.path.join(class_path, file))\n #plt.imshow(test_img)\n #plt.barh(np.arange(5), prediction)\n #_ = plt.yticks(np.arange(5), lb.classes_)\n print(os.path.join(class_path, file))\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UWSEDS/LectureNotes | Spring2018/Debugging-and-Exceptions/Exceptions.ipynb | bsd-2-clause | [
"import numpy as np",
"Exceptions\nAn exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.\nYou've already seen some exceptions:\n- syntax errors\n- divide by 0\nMany programs want to know about exceptions when they occur. For example, if the input to a program is a file path. If the user inputs an invalid or non-existent path, the program generates an exception. It may be desired to provide a response to the user in this case.\nIt may also be that programs will generate exceptions. This is a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a python function rather than having an error return.\nCatching Exceptions\nPython provides a way to detect when an exception occurs. This is done by the use of a block of code surrounded by a \"try\" and \"except\" statement.",
"def divide1(numerator, denominator):\n try:\n result = numerator/denominator\n print(\"result = %f\" % result)\n except:\n print(\"You can't divide by 0!!\")\n\ndivide1(1.0, 2)\n\ndivide1(1.0, 0)\n\ndivide1(\"x\", 2)",
"Question: What do you do when you get an exception?\nYou can get information about exceptions.",
"#1/0\n\ndef divide2(numerator, denominator):\n try:\n result = numerator/denominator\n print(\"result = %f\" % result)\n except (ZeroDivisionError, TypeError):\n print(\"Got an exception\")\n\ndivide2(1, \"x\")\n\n# Why doesn't this catch the exception?\n# How do we fix it?\ndivide2(\"x\", 2)\n\n# Exceptions in file handling\ndef read_safely(path):\n error = None\n try:\n with open(path, \"r\") as fd:\n lines = fd.readlines()\n print ('\\n'.join(lines()))\n except FileNotFoundError as err:\n print(\"File %s does not exist. Try again.\" % path)\n\nread_safely(\"unknown.txt\")\n\n# Handle division by 0 by using a small number\nSMALL_NUMBER = 1e-3\ndef divide2(numerator, denominator):\n try:\n result = numerator/denominator\n except ZeroDivisionError:\n result = numerator/SMALL_NUMBER\n print(\"result = %f\" % result)\n\ndivide2(1,0)",
"Generating Exceptions\nWhy generate exceptions? (Don't I have enough unintentional errors?)",
"import pandas as pd\ndef func(df):\n \"\"\"\"\n :param pd.DataFrame df: should have a column named \"hours\"\n \"\"\"\n if not \"hours\" in df.columns:\n raise ValueError(\"DataFrame should have a column named 'hours'.\")\n\ndf = pd.DataFrame({'hours': range(10) })\nfunc(df)\n\ndf = pd.DataFrame({'years': range(10) })\n# Generates an exception\n#func(df)",
"Class exercise\nChoose one of the functions from the last exercise. Create two new functions:\n- The first function throws an exception if there is a negative argument.\n- The second function catches an exception if the modulo operator (%) throws an exception and attempts to correct it by coercing the argument to a positive integer."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
deepfield/ibis | docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb | apache-2.0 | [
"Additional Analytics Tools\nSetup",
"import ibis\nimport os\nhdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)\nhdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)\ncon = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',\n hdfs_client=hdfs)\nibis.options.interactive = True",
"Frequency tables\nIbis provides the value_counts API, just like pandas, for computing a frequency table for a table column or array expression. You might have seen it used already earlier in the tutorial.",
"lineitem = con.table('tpch_lineitem')\norders = con.table('tpch_orders')\n\nitems = (orders.join(lineitem, orders.o_orderkey == lineitem.l_orderkey)\n [lineitem, orders])\n\nitems.o_orderpriority.value_counts()",
"This can be customized, of course:",
"freq = (items.group_by(items.o_orderpriority)\n .aggregate([items.count().name('nrows'),\n items.l_extendedprice.sum().name('total $')]))\nfreq",
"Binning and histograms\nNumeric array expressions (columns with numeric type and other array expressions) have bucket and histogram methods which produce different kinds of binning. These produce category values (the computed bins) that can be used in grouping and other analytics.\nLet's have a look at a few examples\nI'll use the summary function to see the general distribution of lineitem prices in the order data joined above:",
"items.l_extendedprice.summary()",
"Alright then, now suppose we want to split the item prices up into some buckets of our choosing:",
"buckets = [0, 5000, 10000, 50000, 100000]",
"The bucket function creates a bucketed category from the prices:",
"bucketed = items.l_extendedprice.bucket(buckets).name('bucket')",
"Let's have a look at the value counts:",
"bucketed.value_counts()",
"The buckets we wrote down define 4 buckets numbered 0 through 3. The NaN is a pandas NULL value (since that's how pandas represents nulls in numeric arrays), so don't worry too much about that. Since the bucketing ends at 100000, we see there are 4122 values that are over 100000. These can be included in the bucketing with include_over:",
"bucketed = (items.l_extendedprice\n .bucket(buckets, include_over=True)\n .name('bucket'))\nbucketed.value_counts()",
"The bucketed object here is a special category type",
"bucketed.type()",
"Category values can either have a known or unknown cardinality. In this case, there's either 4 or 5 buckets based on how we used the bucket function.\nLabels can be assigned to the buckets at any time using the label function:",
"bucket_counts = bucketed.value_counts()\n\nlabeled_bucket = (bucket_counts.bucket\n .label(['0 to 5000', '5000 to 10000', '10000 to 50000',\n '50000 to 100000', 'Over 100000'])\n .name('bucket_name'))\n\nexpr = (bucket_counts[labeled_bucket, bucket_counts]\n .sort_by('bucket'))\nexpr",
"Nice, huh?\nhistogram is a linear (fixed size bin) equivalent:",
"t = con.table('functional_alltypes')\n\nd = t.double_col\n\ntier = d.histogram(10).name('hist_bin')\nexpr = (t.group_by(tier)\n .aggregate([d.min(), d.max(), t.count()])\n .sort_by('hist_bin'))\nexpr",
"Filtering in aggregations\nSuppose that you want to compute an aggregation with a subset of the data for only one of the metrics / aggregates in question, and the complete data set with the other aggregates. Most aggregation functions are thus equipped with a where argument. Let me show it to you in action:",
"t = con.table('functional_alltypes')\n\nd = t.double_col\ns = t.string_col\n\ncond = s.isin(['3', '5', '7'])\n\nmetrics = [t.count().name('# rows total'), \n cond.sum().name('# selected'),\n d.sum().name('total'),\n d.sum(where=cond).name('selected total')]\n\ncolor = (t.float_col\n .between(3, 7)\n .ifelse('red', 'blue')\n .name('color'))\n\nt.group_by(color).aggregate(metrics)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
abulbasar/machine-learning | Scikit - 02 Visualization.ipynb | apache-2.0 | [
"Visualization using matplotlib and seaborn\n Visualization strategy: \n- Single variable\n - numeric continuous variable\n - histogram: distribution of values\n - boxplot: outlier analysis\n - Categorical (string or discrete numeric)\n - frequency plot\n- Association plot \n - continuous vs continuous: scatter plot\n - continuous vs categorical: vertical bar and boxplot (regression problems)\n - categorical vs continuous: horizontal bar (classification problems)\n - categorical vs categorical: heapmap",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib.mlab import normpdf\n\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = 10, 6\n\ndf = pd.read_csv(\"http://www-bcf.usc.edu/~gareth/ISL/Auto.data\", sep=r\"\\s+\")\ndf.head(10)\n\ndf.info()\n\ndf[\"year\"].unique()\n\ndf.sample(10)",
"Visualization for a single continuous variable",
"plt.hist(df[\"mpg\"], bins = 30)\nplt.title(\"Histogram plot of mpg\")\nplt.xlabel(\"mpg\")\nplt.ylabel(\"Frequency\")\n\nplt.boxplot(df[\"mpg\"])\nplt.title(\"Boxplot of mpg\\n \")\nplt.ylabel(\"mpg\")\n\n#plt.figure(figsize = (10, 6))\nplt.subplot(2, 1, 1)\nn, bins, patches = plt.hist(df[\"mpg\"], bins = 50, normed = True)\nplt.title(\"Histogram plot of mpg\")\nplt.xlabel(\"MPG\")\n\npdf = normpdf(bins, df[\"mpg\"].mean(), df[\"mpg\"].std())\nplt.plot(bins, pdf, color = \"red\")\n\nplt.subplot(2, 1, 2)\nplt.boxplot(df[\"mpg\"], vert=False)\nplt.title(\"Boxplot of mpg\")\nplt.tight_layout()\nplt.xlabel(\"MPG\")\n\nnormpdf(bins, df[\"mpg\"].mean(), df[\"mpg\"].std())\n\n# using pandas plot function\nplt.figure(figsize = (10, 6))\ndf.mpg.plot.hist(bins = 50, normed = True)\nplt.title(\"Histogram plot of mpg\")\nplt.xlabel(\"mpg\")",
"Visualization for single categorical variable - frequency plot",
"counts = df[\"year\"].value_counts().sort_index()\n\nplt.figure(figsize = (10, 4))\nplt.bar(range(len(counts)), counts, align = \"center\")\nplt.xticks(range(len(counts)), counts.index)\nplt.xlabel(\"Year\")\nplt.ylabel(\"Frequency\")\nplt.title(\"Frequency distribution by year\")",
"Bar plot using matplotlib visualization",
"plt.figure(figsize = (10, 4))\ndf.year.value_counts().sort_index().plot.bar()",
"Association plot between two continuous variables\nContinuous vs continuous",
"corr = np.corrcoef(df[\"weight\"], df[\"mpg\"])[0, 1]\nplt.scatter(df[\"weight\"], df[\"mpg\"])\nplt.xlabel(\"Weight\")\nplt.ylabel(\"Mpg\")\nplt.title(\"Mpg vs Weight, correlation: %.2f\" % corr)",
"Scatter plot using pandas dataframe plot function",
"df.plot.scatter(x= \"weight\", y = \"mpg\")\nplt.title(\"Mpg vs Weight, correlation: %.2f\" % corr)",
"Continuous vs Categorical",
"mpg_by_year = df.groupby(\"year\")[\"mpg\"].agg([np.median, np.std])\nmpg_by_year.head()\n\nmpg_by_year[\"median\"].plot.bar(yerr = mpg_by_year[\"std\"], ecolor = \"red\")\nplt.title(\"MPG by year\")\nplt.xlabel(\"year\")\nplt.ylabel(\"MPG\")",
"Show the boxplot of MPG by year",
"plt.figure(figsize=(10, 5))\nsns.boxplot(\"year\", \"mpg\", data = df)",
"Association plot between 2 categorical variables",
"plt.figure(figsize=(10, 8))\nsns.heatmap(df.corr(), cmap=sns.color_palette(\"RdBu\", 10), annot=True)\n\nplt.figure(figsize=(10, 8))\naggr = df.groupby([\"year\", \"cylinders\"])[\"mpg\"].agg(np.mean).unstack()\nsns.heatmap(aggr, cmap=sns.color_palette(\"Blues\", n_colors= 10), annot=True)",
"Classificaition plot",
"iris = pd.read_csv(\"https://raw.githubusercontent.com/abulbasar/data/master/iris.csv\")\niris.head()\n\nfig, ax = plt.subplots()\nx1, x2 = \"SepalLengthCm\", \"PetalLengthCm\"\ncmap = sns.color_palette(\"husl\", n_colors=3)\nfor i, c in enumerate(iris.Species.unique()):\n iris[iris.Species == c].plot.scatter(x1, x2, color = cmap[i], label = c, ax = ax)\nplt.legend()",
"QQ Plot for normality test",
"import scipy.stats as stats\np = stats.probplot(df[\"mpg\"], dist=\"norm\", plot=plt)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ADBI-george2/Spark-Tutorial | tutorial_spark.ipynb | apache-2.0 | [
"MLlib Tutorial\nIPython notebooks consist of multiple \"cells\", which can contain text (markdown) or code. You can run a single cell, multiple cells at a time, or every cellin the file. IPython also gives you the ability to export the notebook into different formats, such as a python file, a PDF file, an HTML file, etc.\nSpend a few minutes and make yourself familiar with the IPython interface, it should be pretty straight forward.\nThe Machine Learning Library contains common machine learning algorithms and utilties. A summary of these features can be found here.\nIn this tutorial, we will explore collaborative filtering on a small example problem. For your submission, you must turn in a PDF of this IPython notebook, but fully completed with any missing code.\nCollaborative Filtering Example\nWe will go through an exercise based on the example from the collaborative filtering page. The first step is to import the MLlib library and whatever functions we want with this module.",
"from pyspark.mllib.recommendation import ALS, Rating",
"Loading the Input Data\nNext, we must load the data. There are some example datasets that come with Spark by default. Example data related to machine learning in particular is located in the $SPARK_HOME/data/mllib directory. For this part, we will be working with the $SPARK_HOME/data/mllib/als/test.data file. This is a small dataset, so it is easy to see what is happening.",
"data = sc.textFile(\"/Users/george/Panzer/Softwares/spark-1.5.2-bin-hadoop2.6/data/mllib/als/test.data\")",
"Even though, we have the environment $SPARK_HOME defined, but it can't be used here. You must specify the full path, or the relative path based off where you initiated IPython.\nThe textFile command will create an RDD where each element is a line of the input file. In the below cell, write some code to (1) print the number of elements and (2) print the fifth element. Print your result in a single line with the format: \"There are X elements. The fifth element is: Y\".",
"rows = data.collect()\nx = len(rows)\ny = rows[4]\nprint(\"There are %d elements. The fifth element is : %s\"%(x,y))",
"Transforming the Input Data\nThis data isn't in a great format, since each element is in the RDD is currently a string. However, we will assume that the first column of the string represents a user ID, the second column represents a product ID, and the third column represents a user-specified rating of that product.\nIn the below cell, write a function that takes a string (that has the same format as lines in this file) as input and returns a tuple where the first and second elements are ints and the third element is a float. Call your function parser.\nWe will then use this function to transform the RDD.",
"def parser(line):\n splits = line.strip().split(\",\")\n return (int(splits[0]), int(splits[1]), float(splits[2]))\n\nratings = data.map(parser).map(lambda l: Rating(*l))\nratings.collect()",
"Your output should look like the following:\n[Rating(user=1, product=1, rating=5.0),\n Rating(user=1, product=2, rating=1.0),\n Rating(user=1, product=3, rating=5.0),\n Rating(user=1, product=4, rating=1.0),\n Rating(user=2, product=1, rating=5.0),\n Rating(user=2, product=2, rating=1.0),\n Rating(user=2, product=3, rating=5.0),\n Rating(user=2, product=4, rating=1.0),\n Rating(user=3, product=1, rating=1.0),\n Rating(user=3, product=2, rating=5.0),\n Rating(user=3, product=3, rating=1.0),\n Rating(user=3, product=4, rating=5.0),\n Rating(user=4, product=1, rating=1.0),\n Rating(user=4, product=2, rating=5.0),\n Rating(user=4, product=3, rating=1.0),\n Rating(user=4, product=4, rating=5.0)]\nIf it doesn't, then you did something wrong! If it does match, then you are ready to move to the next step.\nBuilding and Running the Model\nNow we are ready to build the actual recommendation model using the Alternating Least Squares algorithm. The documentation can be found here, and the papers the algorithm is based on are linked off the collaborative filtering page.",
"rank = 10\nnumIterations = 10\nmodel = ALS.train(ratings, rank, numIterations)\n\n# Let's define some test data\ntestdata = ratings.map(lambda p: (p[0], p[1]))\n\n# Running the model on all possible user->product predictions\npredictions = model.predictAll(testdata)\npredictions.collect()",
"Transforming the Model Output\nThis result is not really in a nice format. Write some code that will transform the RDD so that each element is a user ID and a dictionary of product->rating pairs. Note that for the a Ratings object (which is what the elements of the RDD are), you can access the different fields by via the .user, .product, and .rating variables. For example, predictions.take(1)[0].user. \nCall the new RDD userPredictions. It should look as follows (when using userPredictions.collect()):\n[(4,\n {1: 1.0011434289237737,\n 2: 4.996713610813412,\n 3: 1.0011434289237737,\n 4: 4.996713610813412}),\n (1,\n {1: 4.996411869659315,\n 2: 1.0012037253934976,\n 3: 4.996411869659315,\n 4: 1.0012037253934976}),\n (2,\n {1: 4.996411869659315,\n 2: 1.0012037253934976,\n 3: 4.996411869659315,\n 4: 1.0012037253934976}),\n (3,\n {1: 1.0011434289237737,\n 2: 4.996713610813412,\n 3: 1.0011434289237737,\n 4: 4.996713610813412})]",
"def format_ratings(lst):\n ratings = {}\n for rating in lst:\n ratings[rating.product] = rating.rating\n return ratings\nuserPredictions = predictions.groupBy(lambda r: r.user).mapValues(format_ratings)\nuserPredictions.collect()",
"Evaluating the Model\nNow, lets calculate the mean squared error.",
"userPredictions = predictions.map(lambda r: ((r[0],r[1]), r[2]))\nratesAndPreds = ratings.map(lambda r: ((r[0],r[1]), r[2])).join(predictions)\nMSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()\nprint(\"Mean Squared Error = \" + str(MSE))",
"Reflections\nWhy do you believe that this model achieved such a low error. Is there anything we did incorrectly for testing this model?\n-----------------\nThe model was tested on the same training dataset. Thus it yielded such a low error.\nFor testing the model approaches like\na) splitting the data into a training set to build the model and test set to verify the model can be used.\nb) k-fold cross validation can also be adopted, where k times we split the data into training and test set in the ratio k-1:1 and compute the MSE for each split."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jrg365/gpytorch | examples/01_Exact_GPs/GP_Regression_Fully_Bayesian.ipynb | mit | [
"Fully Bayesian GPs - Sampling Hyperparamters with NUTS\nIn this notebook, we'll demonstrate how to integrate GPyTorch and NUTS to sample GP hyperparameters and perform GP inference in a fully Bayesian way.\nThe high level overview of sampling in GPyTorch is as follows:\n\nDefine your model as normal, extending ExactGP and defining a forward method.\nFor each parameter your model defines, you'll need to register a GPyTorch prior with that parameter, or some function of the parameter. If you use something other than a default closure (e.g., by specifying a parameter or transformed parameter name), you'll need to also specify a setting_closure: see the docs for gpytorch.Module.register_prior.\nDefine a pyro model that has a sample site for each GP parameter, and then computes a loss. For your convenience, we define a pyro_sample_from_prior method on gpytorch.Module that does the former operation. For the latter operation, just call mll.pyro_factor(output, y) instead of mll(output, y) to get your loss.\nRun NUTS (or HMC etc) on the pyro model you just defined to generate samples. Note this can take quite a while or no time at all depending on the priors you've defined.\nLoad the samples in to the model, converting the model from a simple GP to a batch GP (see our example notebook on simple batch GPs), where each GP in the batch corresponds to a different hyperparameter sample.\nPass test data through the batch GP to get predictions for each hyperparameter sample.",
"import math\nimport torch\nimport gpytorch\nimport pyro\nfrom pyro.infer.mcmc import NUTS, MCMC\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\n# Training data is 11 points in [0,1] inclusive regularly spaced\ntrain_x = torch.linspace(0, 1, 6)\n# True function is sin(2*pi*x) with Gaussian noise\ntrain_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2\n\n# We will use the simplest form of GP model, exact inference\nclass ExactGPModel(gpytorch.models.ExactGP):\n def __init__(self, train_x, train_y, likelihood):\n super(ExactGPModel, self).__init__(train_x, train_y, likelihood)\n self.mean_module = gpytorch.means.ConstantMean()\n self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n \n def forward(self, x):\n mean_x = self.mean_module(x)\n covar_x = self.covar_module(x)\n return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)",
"Running Sampling\nThe next cell is the first piece of code that differs substantially from other work flows. In it, we create the model and likelihood as normal, and then register priors to each of the parameters of the model. Note that we directly can register priors to transformed parameters (e.g., \"lengthscale\") rather than raw ones (e.g., \"raw_lengthscale\"). This is useful, however you'll need to specify a prior whose support is fully contained in the domain of the parameter. For example, a lengthscale prior must have support only over the positive reals or a subset thereof.",
"# this is for running the notebook in our testing framework\nimport os\nsmoke_test = ('CI' in os.environ)\nnum_samples = 2 if smoke_test else 100\nwarmup_steps = 2 if smoke_test else 200\n\n\nfrom gpytorch.priors import LogNormalPrior, NormalPrior, UniformPrior\n# Use a positive constraint instead of usual GreaterThan(1e-4) so that LogNormal has support over full range.\nlikelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.Positive())\nmodel = ExactGPModel(train_x, train_y, likelihood)\n\nmodel.mean_module.register_prior(\"mean_prior\", UniformPrior(-1, 1), \"constant\")\nmodel.covar_module.base_kernel.register_prior(\"lengthscale_prior\", UniformPrior(0.01, 0.5), \"lengthscale\")\nmodel.covar_module.base_kernel.register_prior(\"period_length_prior\", UniformPrior(0.05, 2.5), \"period_length\")\nmodel.covar_module.register_prior(\"outputscale_prior\", UniformPrior(1, 2), \"outputscale\")\nlikelihood.register_prior(\"noise_prior\", UniformPrior(0.05, 0.3), \"noise\")\n\nmll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n\ndef pyro_model(x, y):\n model.pyro_sample_from_prior()\n output = model(x)\n loss = mll.pyro_factor(output, y)\n return y\n\nnuts_kernel = NUTS(pyro_model, adapt_step_size=True)\nmcmc_run = MCMC(nuts_kernel, num_samples=num_samples, warmup_steps=warmup_steps, disable_progbar=smoke_test)\nmcmc_run.run(train_x, train_y)",
"Loading Samples\nIn the next cell, we load the samples generated by NUTS in to the model. This converts model from a single GP to a batch of num_samples GPs, in this case 100.",
"model.pyro_load_from_samples(mcmc_run.get_samples())\n\nmodel.eval()\ntest_x = torch.linspace(0, 1, 101).unsqueeze(-1)\ntest_y = torch.sin(test_x * (2 * math.pi))\nexpanded_test_x = test_x.unsqueeze(0).repeat(num_samples, 1, 1)\noutput = model(expanded_test_x)",
"Plot Mean Functions\nIn the next cell, we plot the first 25 mean functions on the samep lot. This particular example has a fairly large amount of data for only 1 dimension, so the hyperparameter posterior is quite tight and there is relatively little variance.",
"with torch.no_grad():\n # Initialize plot\n f, ax = plt.subplots(1, 1, figsize=(4, 3))\n \n # Plot training data as black stars\n ax.plot(train_x.numpy(), train_y.numpy(), 'k*', zorder=10)\n \n for i in range(min(num_samples, 25)):\n # Plot predictive means as blue line\n ax.plot(test_x.numpy(), output.mean[i].detach().numpy(), 'b', linewidth=0.3)\n \n # Shade between the lower and upper confidence bounds\n # ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)\n ax.set_ylim([-3, 3])\n ax.legend(['Observed Data', 'Sampled Means'])",
"Simulate Loading Model from Disk\nLoading a fully Bayesian model from disk is slightly different from loading a standard model because the process of sampling changes the shapes of the model's parameters. To account for this, you'll need to call load_strict_shapes(False) on the model before loading the state dict. In the cell below, we demonstrate this by recreating the model and loading from the state dict.\nNote that without the load_strict_shapes call, this would fail.",
"state_dict = model.state_dict()\nmodel = ExactGPModel(train_x, train_y, likelihood)\n\n# Load parameters without standard shape checking.\nmodel.load_strict_shapes(False)\n\nmodel.load_state_dict(state_dict)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ebenolson/Recipes | examples/imagecaption/COCO Preprocessing.ipynb | mit | [
"Image Captioning with LSTM\nThis is a partial implementation of \"Show and Tell: A Neural Image Caption Generator\" (http://arxiv.org/abs/1411.4555), borrowing heavily from Andrej Karpathy's NeuralTalk (https://github.com/karpathy/neuraltalk)\nThis example consists of three parts:\n1. COCO Preprocessing - prepare the dataset by precomputing image representations using GoogLeNet\n2. RNN Training - train a network to predict image captions\n3. Caption Generation - use the trained network to caption new images\nOutput\nThis notebook prepares the dataset by extracting a vector representation of each image using the GoogLeNet CNN pretrained on ImageNet. A link to download the final result is given in the next notebook.\nPrerequisites\nTo run this notebook, you'll need to download the MSCOCO training and validation datasets, and unzip them into './coco/'.\nThe captions should be downloaded as well and unzipped into './captions/'",
"import sklearn\nimport numpy as np\nimport lasagne\nimport skimage.transform\n\nfrom lasagne.utils import floatX\n\nimport theano\nimport theano.tensor as T\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport json\nimport pickle",
"Functions for building the GoogLeNet model with Lasagne are defined in googlenet.py:",
"import googlenet",
"We need to download parameter values for the pretrained network",
"!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/blvc_googlenet.pkl",
"Build the model and select layers we need - the features are taken from the final network layer, before the softmax nonlinearity.",
"cnn_layers = googlenet.build_model()\ncnn_input_var = cnn_layers['input'].input_var\ncnn_feature_layer = cnn_layers['loss3/classifier']\ncnn_output_layer = cnn_layers['prob']\n\nget_cnn_features = theano.function([cnn_input_var], lasagne.layers.get_output(cnn_feature_layer))",
"Load the pretrained weights into the network",
"model_param_values = pickle.load(open('blvc_googlenet.pkl'))['param values']\nlasagne.layers.set_all_param_values(cnn_output_layer, model_param_values)",
"The images need some preprocessing before they can be fed to the CNN",
"MEAN_VALUES = np.array([104, 117, 123]).reshape((3,1,1))\n\ndef prep_image(im):\n if len(im.shape) == 2:\n im = im[:, :, np.newaxis]\n im = np.repeat(im, 3, axis=2)\n # Resize so smallest dim = 224, preserving aspect ratio\n h, w, _ = im.shape\n if h < w:\n im = skimage.transform.resize(im, (224, w*224/h), preserve_range=True)\n else:\n im = skimage.transform.resize(im, (h*224/w, 224), preserve_range=True)\n\n # Central crop to 224x224\n h, w, _ = im.shape\n im = im[h//2-112:h//2+112, w//2-112:w//2+112]\n \n rawim = np.copy(im).astype('uint8')\n \n # Shuffle axes to c01\n im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)\n \n # Convert to BGR\n im = im[::-1, :, :]\n\n im = im - MEAN_VALUES\n return rawim, floatX(im[np.newaxis])",
"Let's verify that GoogLeNet and our preprocessing are functioning properly",
"im = plt.imread('./coco/val2014/COCO_val2014_000000391895.jpg')\nplt.imshow(im)\n\nrawim, cnn_im = prep_image(im)\n\nplt.imshow(rawim)\n\np = get_cnn_features(cnn_im)\nCLASSES = pickle.load(open('blvc_googlenet.pkl'))['synset words']\nprint(CLASSES[p.argmax()])",
"Load the caption data",
"dataset = json.load(open('./captions/dataset_coco.json'))['images']",
"Iterate over the dataset and add a field 'cnn features' to each item. This will take quite a while.",
"def chunks(l, n):\n for i in xrange(0, len(l), n):\n yield l[i:i + n]\n\nfor chunk in chunks(dataset, 256):\n cnn_input = floatX(np.zeros((len(chunk), 3, 224, 224)))\n for i, image in enumerate(chunk):\n fn = './coco/{}/{}'.format(image['filepath'], image['filename'])\n try:\n im = plt.imread(fn)\n _, cnn_input[i] = prep_image(im)\n except IOError:\n continue\n features = get_cnn_features(cnn_input)\n for i, image in enumerate(chunk):\n image['cnn features'] = features[i]",
"Save the final product",
"pickle.dump(dataset, open('coco_with_cnn_features.pkl','w'), protocol=pickle.HIGHEST_PROTOCOL)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb | mit | [
"import numpy as np\nimport pandas as pd\nfrom scipy.optimize import minimize\n\nimport emcee\nimport george\nfrom george import kernels\nimport corner\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook",
"The Neverending Search for Periodicity: Techniques Beyond Lomb-Scargle\nVersion 0.1\n\nBy AA Miller 28 Apr 2018\nIn this lecture we will examine alternative methods to search for periodic signals in astronomical time series. The problems will provide a particular focus on a relatively new technique, which is to model the periodic behavior as a Gaussian Process, and then sample the posterior to identify the optimal period via Markov Chain Monte Carlo analysis. A lot of this work has been pioneered by previous DSFP lecturer Suzanne Aigrain.\nFor a refresher on GPs, see Suzanne's previous lectures: part 1 & part 2. For a refresher on MCMC, see Andy Connolly's previous lectures: part 1, part 2, & part 3.\nAn Incomplete Whirlwind Tour\nIn addition to LS, the following techniques are employed to search for periodic signals:\nString Length\nThe string length method (Dworetsky 1983), phase folds the data at trial periods and then minimizes the distance to connect the phase-ordered observations.\n<img style=\"display: block; margin-left: auto; margin-right: auto\" src=\"./images/StringLength.png\" align=\"middle\">\n<div align=\"right\"> <font size=\"-3\">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div>\n\nPhase Dispersion Minimization\nPhase Dispersion Minimization (PDM; Jurkevich 1971, Stellingwerth 1978), like LS, folds the data at a large number of trial frequencies $f$. \nThe phased data are then binned, and the variance is calculated in each bin, combined, and compared to the overall variance of the signal. No functional form of the signal is assumed, and thus, non-sinusoidal signals can be found.\nChallenge: how to select the number of bins?\n<img style=\"display: block; margin-left: auto; margin-right: auto\" src=\"./images/PDM.jpg\" align=\"middle\">\n<div align=\"right\"> <font size=\"-3\">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div>\n\nAnalysis of Variance\nAnalysis of Variance (AOV; Schwarzenberg-Czerny 1989) is similar to PDM. Optimal periods are defined via hypothesis testing, and these methods are found to perform best for certain types of astronomical signals.\nSupersmoother\nSupersmoother (Reimann) is a least-squares approach wherein a flexible, non-parametric model is fit to the folded observations at many trial frequncies. The use of this flexible model reduces aliasing issues relative to models that assume a sinusoidal shape, however, this comes at the cost of requiring considerable computational time. \nConditional Entropy\nConditional Entropy (CE; Graham et al. 2013), and other entropy based methods, aim to minimize the entropy in binned (normalized magnitude, phase) space. CE, in particular, is good at supressing signal due to the window function.\nWhen tested on real observations, CE outperforms most of the alternatives (e.g., LS, PDM, etc).\n<img style=\"display: block; margin-left: auto; margin-right: auto\" src=\"./images/CE.png\" align=\"middle\">\n<div align=\"right\"> <font size=\"-3\">(credit: Graham et al. 2013) </font></div>\n\nBayesian Methods\nThere have been some efforts to frame the period-finding problem in a Bayesian framework. Bretthorst 1988 developed Bayesian generalized LS models, while Gregory & Loredo 1992 applied Bayesian techniques to phase-binned models. \nMore recently, efforts to use Gaussian processes (GPs) to model and extract a period from the light curve have been developed (Wang et al. 2012). 
These methods have proved to be especially useful for detecting stellar rotation in Kepler light curves (Angus et al. 2018). \n[Think of Suzanne's lectures during session 4]\nFor this lecture we will focus on the use of GPs, combined with an MCMC analysis (and we will take some shortcuts in the interest of time), to identify periodic signals in astronomical data. \nProblem 1) Helper Functions\nWe are going to create a few helper functions, similar to the previous lecture, that will help minimize repetition for some common tasks in this notebook. \nProblem 1a\nAdjust the variable ncores to match the number of CPUs on your machine.",
"ncores = # adjust to number of CPUs on your machine\nnp.random.seed(23)",
"Problem 1b\nCreate a function gen_periodic_data that returns \n$$y = C + A\\cos\\left(\\frac{2\\pi x}{P}\\right) + \\sigma_y$$\nwhere $C$, $A$, and $P$ are constants, $x$ is input data and $\\sigma_y$ represents Gaussian noise.\nHint - this should only require a minor adjustment to your function from lecture 1.",
"def gen_periodic_data( # complete\n y = # complete\n return y",
"Problem 1c\nLater, we will be using MCMC. Execute the following cell which will plot the chains from emcee to follow the MCMC walkers.",
"def plot_chains(sampler, nburn, paramsNames):\n Nparams = len(paramsNames) # + 1\n fig, ax = plt.subplots(Nparams,1, figsize = (8,2*Nparams), sharex = True)\n fig.subplots_adjust(hspace = 0)\n ax[0].set_title('Chains')\n xplot = range(len(sampler.chain[0,:,0]))\n\n for i,p in enumerate(paramsNames):\n for w in range(sampler.chain.shape[0]):\n ax[i].plot(xplot[:nburn], sampler.chain[w,:nburn,i], color=\"0.5\", alpha = 0.4, lw = 0.7, zorder = 1)\n ax[i].plot(xplot[nburn:], sampler.chain[w,nburn:,i], color=\"k\", alpha = 0.4, lw = 0.7, zorder = 1)\n \n ax[i].set_ylabel(p)\n fig.tight_layout()\n return ax",
"Problem 1d \nUsing gen_periodic_data generate 250 observations taken at random times between 0 and 10, with $C = 10$, $A = 2$, $P = 0.4$, and variance of the noise = 0.1. Create an uncertainty array dy with the same length as y and each value equal to $\\sqrt{0.1}$.\nPlot the resulting data over the exact (noise-free) signal.",
"x = # complete\ny = # complete\ndy = # complete\n\n# complete\n\nfig, ax = plt.subplots()\nax.errorbar( # complete\nax.plot( # complete\n# complete\n# complete\nfig.tight_layout()",
"Problem 2) Maximum-Likelihood Optimization\nA common approach$^\\dagger$ in the literature for problems where there is good reason to place a strong prior on the signal (i.e. to only try and fit a single model) is maximum likelihood optimization [this is sometimes also called $\\chi^2$ minimization].\n$^\\dagger$The fact that this approach is commonly used, does not mean it should be commonly used.\nIn this case, where we are fitting for a known signal in simulated data, we are justified in assuming an extremely strong prior and fitting a sinusoidal model to the data.\nProblem 2a\nWrite a function, correct_model, that returns the expected signal for our data given input time $t$:\n$$f(t) = a + b\\cos\\left(\\frac{2\\pi t}{c}\\right)$$\nwhere $a, b, c$ are model parameters.\nHint - store the model parameters in a single variable (this will make things easier later).",
"def correct_model( # complete\n # complete\n return # complete",
"For these data the log likelihood of the data can be written as: \n$$\\ln \\mathcal{L} = -\\frac{1}{2} \\sum \\left(\\frac{y - f(t)}{\\sigma_y}\\right)^2$$\nUltimately, it is easier to minimize the negative log likelihood, so we will do that. \nProblem 2b\nWrite a function, lnlike1, that returns the log likelihood for the data given model parameters $\\theta$, and $t, y, \\sigma_y$.\nWrite a second function, nll, that returns the negative log likelihood.",
"def lnlike1( # complete\n return # complete\n\ndef nll( # complete\n return # complete",
"Problem 2c\nUse the minimize function from scipy.optimize to determine maximum likelihood estimates for the model parameters for the data simulated in problem 1d. What is the best fit period?\nThe optimization routine requires an initial guess for the model parameters, use 10 for the offset, 1 for the amplitude of variations, and 0.39 for the period.\nHint - as arguments, minimize takes the function, nll, the initial guess, and optional keyword args, which should be (x, y, dy) in this case.",
"initial_theta = # complete\nres = minimize( # complete\n\nprint(\"The maximum likelihood estimate for the period is: {:.5f}\".format( # complete",
"Problem 2d\nPlot the input model, the noisy data, and the maximum likelihood model.\nHow does the model fit look?",
"fig, ax = plt.subplots()\nax.errorbar( # complete\nax.plot( # complete\nax.plot( # complete\nax.set_xlabel('x')\nax.set_ylabel('y')\nfig.tight_layout()",
"Problem 2e \nRepeat the maximum likelihood optimization, but this time use an initial guess of 10 for the offset, 1 for the amplitude of variations, and 0.393 for the period.",
"initial_theta = # complete\nres = minimize( # complete\n\nprint(\"The ML estimate for a, b, c is: {:.5f}, {:.5f}, {:.5f}\".format( # complete",
"Given the lecture order this is a little late, but we have now identified the fundamental challenge in identifying periodic signals in astrophysical observations: \nperiodic models are highly non-linear!\nThis can easily be seen in the LS periodograms from the previous lecture: period estimates essentially need to be perfect to properly identify the signal. Take for instance the previous example, where we adjusted the initial guess for the period by less than 1% and it made the difference between correct estimates catastrophic errors. \nThis also means that classic optimization procedures (e.g., gradient decent) are helpless for this problem. If you guess the wrong period there is no obvious way to know whether the subsequent guess should use a larger or smaller period.\nProblem 3) Sampling Techniques\nGiven our lack of success with maximum likelihood techniques, we will now attempt a Bayesian approach. As a brief reminder, Bayes theorem tells us that:\n$$P(\\theta|X) \\propto P(X|\\theta) P(\\theta).$$\nIn words, the posterior probability is proportional to the likelihood multiplied by the prior. We will use sampling techniques, MCMC, to estimate the posterior.\nRemember - we already calculated the likelihood above.\nProblem 3a\nWrite a function lnprior1 to calculate the log of the prior on $\\theta$. Use a reasonable, wide and flat prior for all the model parameters.\nHint - for emcee the log prior should return 0 within the prior and $-\\infty$ otherwise.",
"def lnprior1( # complete\n a, b, c = # complete\n if # complete\n return 0.0\n return -np.inf",
"Problem 3b\nWrite a function lnprob1 to calculate the log of the posterior probability. This function should take $\\theta$ and x, y, dy as inputs.",
"def lnprob1( # complete\n lp = lnprior1(theta)\n if np.isfinite(lp):\n return # complete\n return -np.inf",
"Problem 3c\nInitialize the walkers for emcee, which we will use to draw samples from the posterior. Like before, we need to include an initial guess (the parameters of which don't matter much beyond the period). Start with a guess of 0.6 for the period.\nAs a quick reminder, emcee is a pure python implementation of Goodman & Weare's affine Invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler. emcee seeds several \"walkers\" which are members of the ensemble. You can think of each walker as its own Metropolis-Hastings chain, but the key detail is that the chains are not independent. Thus, the proposal distribution for each new step in the chain is dependent upon the position of all the other walkers in the chain. \nChoosing the initial position for each of the walkers does not significantly affect the final results (though it will affect the burn in time). Standard procedure is to create several walkers in a small ball around a reasonable guess [the samplers will quickly explore beyond the extent of the initial ball].",
"guess = [10, 1, 0.6]\nndim = len(guess)\nnwalkers = 100\n\np0 = [np.array(guess) + 1e-8 * np.random.randn(ndim)\n for i in range(nwalkers)]\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob1, args=(x, y, dy), threads = ncores)",
"Problem 3d\nRun the walkers through 1000 steps. \nHint - The run_mcmc method on the sampler object may be useful.",
"sampler.run_mcmc( # complete",
"Problem 3e\nUse the previous created plot_chains helper funtion to plot the chains from the MCMC sampling. Note - you may need to adjust nburn after examining the chains.\nHave your chains converged? Will extending the chains improve this?",
"params_names = # complete\nnburn = # complete\nplot_chains( # complete",
"Problem 3f\nMake a corner plot (use corner) to examine the post burn-in samples from the MCMC chains.",
"samples = sampler.chain[:, nburn:, :].reshape((-1, ndim))\nfig = # complete",
"As you can see - force feeding this problem into a Bayesian framework does not automatically generate more reasonable answers. While some of the chains appear to have identified periods close to the correct period most of them are suck in local minima. \nThere are sampling techniques designed to handle multimodal posteriors, but the non-linear nature of this problem makes it difficult for the various walkers to explore the full parameter space in the way that we would like. \nProblem 4) GPs and MCMC to identify a best-fit period\nWe will now attempt to model the data via a Gaussian Process (GP). As a very brief reminder, a GP is a collection of random variables, in which any finite subset has a multivariate gaussian distribution. \nA GP is fully specified by a mean function and a covariance matrix $K$. In this case, we wish to model the simulated data from problem 1. If we specify a cosine kernel for the covariance:\n$$K_{ij} = k(x_i - x_j) = \\cos\\left(\\frac{2\\pi \\left|x_i - x_j\\right|}{P}\\right)$$\nthen the mean function is simply the offset, b.\nProblem 4a\nWrite a function model2 that returns the mean function for the GP given input parameters $\\theta$.\nHint - no significant computation is required to complete this task.",
"def model2( # complete\n # complete\n return # complete",
"To model the GP in this problem we will use the george package (first introduced during session 4) written by Dan Foreman-Mackey. george is a fast and flexible tool for GP regression in python. It includes several built-in kernel functions, which we will take advantage of.\nProblem 4b\nWrite a function lnlike2 to calculate the likelihood for the GP model assuming a cosine kernel, and mean model defined by model2. \nNote - george takes $\\ln P$ as an argument and not $P$. We will see why this is useful later.\nHint - there isn't a lot you need to do for this one! But pay attention to the functional form of the model.",
"def lnlike2(theta, t, y, yerr):\n lnper, lna = theta[:2]\n gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper))\n gp.compute(t, yerr)\n return gp.lnlikelihood(y - model2(theta, t), quiet=True)",
"Problem 4c\nWrite a function lnprior2 to calculte $\\ln P(\\theta)$, the log prior for the model parameters. Use a wide flat prior for the parameters.\nNote - a flat prior in log space is not flat in the parameters.",
"def lnprior2( # complete\n # complete\n # complete\n # complete \n # complete\n # complete\n # complete",
"Problem 4d\nWrite a function lnprob2 to calculate the log posterior given the model parameters and data.",
"def lnprob2(# complete\n # complete\n # complete\n # complete\n # complete\n # complete\n # complete",
"Problem 4e\nIntialize 100 walkers in an emcee.EnsembleSampler variable called sampler. For you initial guess at the parameter values set $\\ln a = 1$, $\\ln P = 1$, and $b = 8$.\nNote - this is very similar to what you did previously.",
"initial = # complete\nndim = len(initial)\np0 = [np.array(initial) + 1e-4 * np.random.randn(ndim)\n for i in range(nwalkers)]\nsampler = emcee.EnsembleSampler( # complete",
"Problem 4f\nRun the chains for 200 steps.\nHint - you'll notice these are shorter chains than we previously used. That is because the computational time is longer, as will be the case for this and all the remaining problems.",
"p0, _, _ = sampler.run_mcmc( # complete",
"Problem 4g\nPlot the chains from the MCMC.",
"params_names = ['ln(P)', 'ln(a)', 'b']\nnburn = # complete\nplot_chains( # complete",
"It should be clear that the chains have not, in this case, converged. This will be true even if you were to continue to run them for a very long time. \nNevertheless, if we treat this entire run as a burn in, we can actually extract some useful information from this initial run. In particular, we will look at the posterior values for the different walkers at the end of their chains. From there we will re-initialize our walkers.\nWe are actually free to initialize the walkers at any location we choose, so this approach is not cheating. However, one thing that should make you a bit uneasy about the way in which we are re-initializing the walkers is that we have no guarantee that the initial run that we just performed found a global maximum for the posterior. Thus, it may be the case that our continued analysis in this case is not \"right.\"\nProblem 4h\nBelow you are given two arrays, chain_lnp_end and chain_lnprob_end, that contain the final $\\ln P$ and log posterior, respectively, for each of the walkers.\nPlot these two arrays against each other, to get a sense of what period is \"best.\"",
"chain_lnp_end = sampler.chain[:,-1,0]\nchain_lnprob_end = sampler.lnprobability[:,-1]\nfig, ax = plt.subplots()\nax.scatter( # complete\n# complete\n# complete\nfig.tight_layout()",
"Problem 4i\nReinitialize the walkers in a ball around the maximum log posterior value from the walkers in the previous burn in. Then run the MCMC sampler for 200 steps.\nHint - you'll want to run sampler.reset() prior to the running the MCMC, but after selecting the new starting point for the walkers.",
"p = # complete\nsampler.reset()\n\np0 = # complete\np0, _, _ = sampler.run_mcmc( # complete",
"Problem 4j\nPlot the chains. Have they converged?",
"paramsNames = ['ln(P)', 'ln(a)', 'b']\nnburn = # complete\nplot_chains( # complete",
"Problem 4k\nMake a corner plot of the samples. Does the marginalized distribution on $P$ make sense?",
"fig = ",
"If you run the cell below, you will see random samples from the posterior overplotted on the data. Do the posterior samples seem reasonable in this case?",
"fig, ax = plt.subplots()\nax.errorbar(x, y, dy, fmt='o')\nax.set_xlabel('x')\nax.set_ylabel('y')\n\nfor s in samples[np.random.randint(len(samples), size=5)]:\n # Set up the GP for this sample.\n lnper, lna = s[:2]\n gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper))\n gp.compute(x, dy)\n # Compute the prediction conditioned on the observations and plot it.\n m = gp.sample_conditional(y - model2(s, x), x_grid) + model2(s, x_grid)\n \n ax.plot(x_grid, m, color=\"0.2\", alpha=0.3)\nfig.tight_layout()",
"Problem 4l\nWhat is the marginalized best period estimate, including uncertainties?",
"# complete\n\nprint('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete\nprint('True period = 0.4, GP Period = {:.4f}'.format( # complete",
"In this way - it is possible to use GPs + MCMC to determine the period in noisy irregular data. Furthermore, unlike with LS, we actually have a direct estimate on the uncertainty for that period. \nAs I previously alluded to, however, the solution does depend on how we initialize the walkers. Because this is simulated data, we know that the correct period has been estimated in this case, but there's no guarantee of that once we start working with astronomical sources. This is something to keep in mind if you plan on using GPs to search for periodic signals...\nProblem 5) The Quasi-Periodic Kernel\nAs we saw in the first lecture, there are many sources with periodic light curves that are not strictly sinusoidal. Thus, the use of the cosine kernel (on its own) may not be sufficient to model the signal. As Suzanne told us during session, the quasi-period kernel: \n$$K_{ij} = k(x_i - x_j) = \\exp \\left(-\\Gamma \\sin^2\\left[\\frac{\\pi}{P} \\left|x_i - x_j\\right|\\right]\\right)$$\nis useful for non-sinusoidal signals. We will now use this kernel to model the variations in the simulated data.\nProblem 5a\nWrite a function lnprob3 to calculate log posterior given model parameters $\\theta$ and data x, y, dy.\nHint - it may be useful to write this out as multiple functions.",
"# complete\n\n# complete\n\n# complete\n\ndef lnprob3( # complete\n # complete\n # complete",
"Problem 5b \nInitialize 100 walkers around a reasonable starting point. Be sure that $\\ln P = 0$ in this initialization.\nRun the MCMC for 200 steps. \nHint - it may be helpful to run this second step in a separate cell.",
"# complete\n# complete\n# complete\n\nsampler = emcee.EnsembleSampler( # complete\n\np0, _, _ = sampler.run_mcmc( # complete",
"Problem 5c\nPlot the chains from the MCMC. Did the chains converge?",
"paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\\gamma)$']\nnburn = # complete\nplot_chains( # complete",
"Problem 5d\nPlot the final $\\ln P$ vs. log posterior for each of the walkers. Do you notice anything interesting?\nHint - recall that you are plotting the log posterior, and not the posterior.",
"# complete\n# complete\n# complete\n# complete\n# complete\n# complete",
"Problem 5e\nRe-initialize the walkers around the chain with the maximum log posterior value.\nRun the MCMC for 500 steps.",
"p = # complete\nsampler.reset()\n# complete\nsampler.run_mcmc( # complete",
"Problem 5f\nPlot the chains for the MCMC. \nHint - you may need to adjust the length of the burn in.",
"paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\\gamma)$']\nnburn = # complete\nplot_chains( # complete",
"Problem 5g\nMake a corner plot for the samples. \nIs the marginalized estimate for the period reasonable?",
"# complete\nfig = # complete",
"Problem 6) GPs + MCMC for actual astronomical data\nWe will now apply this model to the same light curve that we studied in the LS lecture.\nIn this case we do not know the actual period (that's only sorta true), so we will have to be even more careful about initializing the walkers and performing burn in than we were previously.\nProblem 6a \nRead in the data for the light curve stored in example_asas_lc.dat.",
"# complete",
"Problem 6b\nAdjust the prior from problem 5 to be appropriate for this data set.",
"def lnprior3( # complete\n # complete\n # complete\n # complete\n # complete\n # complete\n # complete\n # complete",
"Because we have no idea where to initialize our walkers in this case, we are going to use an ad hoc common sense + brute force approach. \nProblem 6c\nRun LombScarge on the data and determine the top three peaks in the periodogram. Set nterms = 2, and the maximum frequency to 5 (this is arbitrary but sufficient in this case).\nHint - you may need to search more than the top 3 periodogram values to find the 3 peaks.",
"from astropy.stats import LombScargle\n\nfrequency, power = # complete\n\nprint('Top LS period is {}'.format(# complete\nprint( # complete",
"Problem 6d\nInitialize one third of your 100 walkers around each of the periods identified in the previous problem (note - the total number of walkers must be an even number, so use 34 walkers around one of the top 3 frequency peaks). \nRun the MCMC for 500 steps following this initialization.",
"initial1 = # complete\n# complete\n# complete\n\ninitial2 = # complete\n# complete\n# complete\n\ninitial3 = # complete\n# complete\n# complete\n\n# complete\nsampler = emcee.EnsembleSampler( # complete\n\np0, _, _ = sampler.run_mcmc( # complete",
"Problem 6e\nPlot the chains.",
"paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\\gamma)$']\nnburn = # complete\nplot_chains( # complete",
"Problem 6f \nPlot $\\ln P$ vs. log posterior.",
"# complete\n# complete\n# complete\n# complete\n# complete\n# complete",
"Problem 6g\nReinitialize the walkers around the previous walker with the maximum posterior value. \nRun the MCMC for 500 steps. Plot the chains. Have they converged?",
"# complete\nsampler.reset()\n# complete\n# complete\nsampler.run_mcmc( # complete\n\nparamsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\\gamma)$']\nnburn = # complete\nplot_chains( # complete",
"Problem 6h\nMake a corner plot of the samples. What is the marginalized estimate for the period of this source? \nHow does this estimate compare to LS?",
"# complete\nfig = corner.corner( # complete\n\n# complete\n\nprint('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete\n\nprint('GP Period = {:.6f}'.format( # complete",
"The cell below shows marginalized samples overplotted on the actual data. How well does the model perform?",
"fig, ax = plt.subplots()\nax.errorbar(lc['hjd'], lc['mag'], lc['mag_unc'], fmt='o')\nax.set_xlabel('HJD (d)')\nax.set_ylabel('mag')\n\nhjd_grid = np.linspace(2800, 3000,3000)\n\nfor s in samples[np.random.randint(len(samples), size=5)]:\n # Set up the GP for this sample.\n lnper, lna, b, lngamma = s\n gp = george.GP(np.exp(lna) * kernels.ExpSine2Kernel(np.exp(lngamma), lnper))\n gp.compute(lc['hjd'], lc['mag_unc'])\n # Compute the prediction conditioned on the observations and plot it.\n m = gp.sample_conditional(lc['mag'] - model3(s, lc['hjd']), hjd_grid) + model3(s, hjd_grid)\n \n ax.plot(hjd_grid, m, color=\"0.2\", alpha=0.3)\nfig.tight_layout()",
"Now you have the tools to fit a GP to a light and get an estimate of the best fit period (and to get an estimate of the uncertainty on that period to boot!). \nAs previously noted, you should be a bit worried about \"burn in\" and how the walkers were initialized throughout. If you plan to use GPs to search for periods in your own work, I highly recommend you read Angus et al. 2018 on the GP periodogram. Angus et al. provide far more intelligent methods for initializing the MCMC than what is presented here."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
intel-analytics/BigDL | python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb | apache-2.0 | [
"Unsupervised Anomaly Detection\nAnomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection for 1-D data using Chronos's dbscan detector, autoencoder detector and threshold detector.\nFor demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href=\"https://github.com/alibaba/clusterdata/blob/master/cluster-trace-v2018/trace_2018.md\" target=\"_blank\">here</a>. In particular, we use machine usage data to demonstrate anomaly detection, you can download the separate data file directly with <a href=\"http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz\" target=\"_blank\">machine_usage</a>.\nDownload raw dataset and load into dataframe\nNow we download the dataset and load it into a pandas dataframe.Steps are as below:\n* First, download the raw data <a href=\"http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz\" target=\"_blank\">machine_usage</a>. Or run the script get_data.sh to download the raw data.It will download the resource usage of each machine from m_1932 to m_2085. \n* Second, run grep m_1932 machine_usage.csv > m_1932.csv to extract records of machine 1932. Or run extract_data.sh.We use machine 1932 as an example in this notebook.You can choose any machines in the similar way.\n* Finally, use pandas to load m_1932.csv into a dataframe as shown below.",
"import os \nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndf_1932 = pd.read_csv(\"m_1932.csv\", header=None, usecols=[1,2,3], names=[\"time_step\", \"cpu_usage\",\"mem_usage\"])",
"Below are some example records of the data",
"df_1932.head()\n\ndf_1932.sort_values(by=\"time_step\", inplace=True)\ndf_1932.reset_index(inplace=True)\ndf_1932.plot(y=\"cpu_usage\", x=\"time_step\", figsize=(16,6),title=\"cpu_usage of machine 1932\")",
"Data pre-processing\nNow we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset. \nFor the machine_usage data, the pre-processing contains 2 parts:\n1. Convert the time step in seconds to timestamp starting from 2018-01-01\n2. Generate a built-in TSDataset to resample the average of cpu_usage in minutes and impute missing data",
"df_1932[\"time_step\"] = pd.to_datetime(df_1932[\"time_step\"], unit='s', origin=pd.Timestamp('2018-01-01'))\n\nfrom bigdl.chronos.data import TSDataset\n\ntsdata = TSDataset.from_pandas(df_1932, dt_col=\"time_step\", target_col=\"cpu_usage\")\ndf = tsdata.resample(interval='1min', merge_mode=\"mean\")\\\n .impute(mode=\"last\")\\\n .to_pandas()\n\ndf['cpu_usage'].plot(figsize=(16,6))",
"Anomaly Detection by DBScan Detector\nDBScanDetector uses DBSCAN clustering for anomaly detection. The DBSCAN algorithm tries to cluster the points and label the points that do not belong to any clusters as -1. It thus detects outliers detection in the input time series. DBScanDetector assigns anomaly score 1 to anomaly samples, and 0 to normal samples.",
"from bigdl.chronos.detector.anomaly import DBScanDetector\n\nad = DBScanDetector(eps=0.1, min_samples=6)\nad.fit(df['cpu_usage'].to_numpy())\n\nanomaly_scores = ad.score()\nanomaly_indexes = ad.anomaly_indexes()\n\nprint(\"The anomaly scores are:\", anomaly_scores)\nprint(\"The anomaly indexes are:\", anomaly_indexes)",
"Draw anomalies in line chart.",
"plt.figure(figsize=(16,6))\nplt.plot(df.time_step, df.cpu_usage, label='cpu_usage')\nplt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value')\n\nplt.title('the anomalies value')\nplt.xlabel('datetime')\nplt.legend(loc='upper left')\nplt.show()",
"Anomaly Detection by AutoEncoder Detector\nAEDetector is unsupervised anomaly detector. It builds an autoencoder network, try to fit the model to the input data, and calcuates the reconstruction error. The samples with larger reconstruction errors are more likely the anomalies.",
"from bigdl.chronos.detector.anomaly import AEDetector\n\nad = AEDetector(roll_len=10, ratio=0.05)\nad.fit(df['cpu_usage'].to_numpy())\n\nanomaly_scores = ad.score()\nanomaly_indexes = ad.anomaly_indexes()\n\nprint(\"The anomaly scores are:\", anomaly_scores)\nprint(\"The anomaly indexes are:\", anomaly_indexes)",
"Draw anomalies in line chart.",
"plt.figure(figsize=(16,6))\nplt.plot(df.time_step, df.cpu_usage, label='cpu_usage')\nplt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value')\n\nplt.title('the anomalies value')\nplt.xlabel('datetime')\nplt.legend(loc='upper left')\nplt.show()",
"Anomaly Detection by Threshold Detector\nThresholdDetector is a simple anomaly detector that detectes anomalies based on threshold. The target value for anomaly testing can be either 1) the sample value itself or 2) the difference between the forecasted value and the actual value. In this notebook we demostrate the first type. The thresold can be set by user or esitmated from the train data accoring to anomaly ratio and statistical distributions.",
"from bigdl.chronos.detector.anomaly import ThresholdDetector\n\nthd=ThresholdDetector()\nthd.set_params(threshold=(20, 80))\nthd.fit(df['cpu_usage'].to_numpy())\n\nanomaly_scores = thd.score()\nanomaly_indexes = thd.anomaly_indexes()\n\nprint(\"The anomaly scores are:\", anomaly_scores)\nprint(\"The anomaly indexes are:\", anomaly_indexes)",
"Draw anomalies in line chart.",
"plt.figure(figsize=(16,6))\nplt.plot(df.time_step, df.cpu_usage, label='cpu_usage')\nplt.scatter(df.time_step[anomaly_indexes], df.cpu_usage[anomaly_indexes], color='red', label='anomalies value')\n\nplt.title('the anomalies value')\nplt.xlabel('datetime')\nplt.legend(loc='upper left')\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mikecharles/palettable | demo/Palettable Demo.ipynb | mit | [
"Palettable\nFind Palettable online:\n\nDocs: https://jiffyclub.github.io/palettable/\nGitHub: https://github.com/jiffyclub/palettable\nPyPI: https://pypi.python.org/pypi/palettable/",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Palettable API",
"from palettable.colorbrewer.qualitative import Set1_9\n\nSet1_9.name\n\nSet1_9.type\n\nSet1_9.number\n\nSet1_9.colors\n\nSet1_9.hex_colors\n\nSet1_9.mpl_colors\n\nSet1_9.mpl_colormap\n\n# requires ipythonblocks\nSet1_9.show_as_blocks()\n\nSet1_9.show_continuous_image()\n\nSet1_9.show_discrete_image()",
"Setting the matplotlib Color Cycle\nAdapted from the example at http://matplotlib.org/examples/color/color_cycle_demo.html.\nUse the .mpl_colors attribute to change the color cycle used by matplotlib\nwhen colors for plots are not specified.",
"from palettable.wesanderson import Aquatic1_5, Moonrise4_5\n\nx = np.linspace(0, 2 * np.pi)\noffsets = np.linspace(0, 2*np.pi, 4, endpoint=False)\n# Create array with shifted-sine curve along each column\nyy = np.transpose([np.sin(x + phi) for phi in offsets])\n\nplt.rc('lines', linewidth=4)\nplt.rc('axes', color_cycle=Aquatic1_5.mpl_colors)\n\nfig, (ax0, ax1) = plt.subplots(nrows=2)\n\nax0.plot(yy)\nax0.set_title('Set default color cycle to Aquatic1_5')\n\nax1.set_color_cycle(Moonrise4_5.mpl_colors)\nax1.plot(yy)\nax1.set_title('Set axes color cycle to Moonrise4_5')\n\n# Tweak spacing between subplots to prevent labels from overlapping\nplt.subplots_adjust(hspace=0.3)",
"Using a Continuous Palette\nAdapted from http://matplotlib.org/examples/pylab_examples/hist2d_log_demo.html.\nUse the .mpl_colormap attribute any place you need a matplotlib colormap.",
"from palettable.colorbrewer.sequential import YlGnBu_9\n\nfrom matplotlib.colors import LogNorm\n\n#normal distribution center at x=0 and y=5\nx = np.random.randn(100000)\ny = np.random.randn(100000)+5\n\nplt.hist2d(x, y, bins=40, norm=LogNorm(), cmap=YlGnBu_9.mpl_colormap)\nplt.colorbar()",
"Note that matplotlib already has colorbrewer palettes, as you can see at\nhttp://matplotlib.org/examples/color/colormaps_reference.html.\nAbove I could have used cmap=plt.cm.YlGnBu for the same affect."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dimonaks/siman | tutorials/surfaces.ipynb | gpl-2.0 | [
"Instruction\nThis tutorial explain how to build specific surfaces on the example of (111) surface in FCC copper.\n1. Read initial structure\n2. Choose new vectors to make required surface normal to one of the vectors\n3. Build supercell\n4. Build slab with surface\n5. Scale slab\nImport libraries",
"import sys\nfrom IPython.display import Image\n\nfrom siman import header\nfrom siman.calc_manage import smart_structure_read\nfrom siman.geo import create_supercell, create_surface2, supercell\n%matplotlib inline",
"1. Read structure",
"st = smart_structure_read('Cu/POSCARCU.vasp') # read required structure",
"2. Choose new vectors\nThe initial structure is FCC lattice in conventianal setting i.e. cubic unit cell.\nAs a first step we create orthogonal supercell with {111}cub surface on one side. \nBelow the directions orthogonal to {111} are shown.\nWe will choose [-1-1-1], [01-1] and [2-1-1].",
"Image(filename='figs/Thompson-tetrahedron-notation-for-FCC-slip-systems.png')",
"3. Build supercell with new vectors",
"# create supercell using chosen directions, the *mul* allows to choose one half of the third vector\nsc = create_supercell(st, [ [-1,-1,-1], [0,1,-1], [2,-1,-1]], mul = (1,1,0.5)) ",
"4. Build slab\nNow we need to create vacuum and rotate the cell. This can be done using create_surface2 function",
"# here we choose [100] normal in supercell, which is equivalent to [111]cub\n# combinations of *min_slab_size* and *cut_thickness* (small cut of slab from one side) allows create symmetrical slab\nst_suf = create_surface2(sc, [1, 0, 0], min_vacuum_size = 10, \n min_slab_size = 16, cut_thickness = 3, oxidation = {'Cu':'Cu0+' }, return_one = 1, surface_i = 0)",
"5. Scale slab\nAbove the slab with minimum surface area was obtained. If you need larger surface you can use supercell() function for which you need to provide required sizes in Angstrems",
"st_sufsc112 = supercell(st_suf, [10,10,32]) # make 2x2 slab\n\nst_sufsc112.write_poscar() # save file as POSCAR for VASP"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rishizek/deep-learning | language-translation/dlnd_language_translation.ipynb | mit | [
"Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n source_id_text = [[source_vocab_to_int.get(vocab, source_vocab_to_int['<UNK>']) for vocab in line.split()]\n for line in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int.get(vocab, target_vocab_to_int['<UNK>']) for vocab in line.split()]\n + [target_vocab_to_int['<EOS>']] for line in target_text.split('\\n')]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n # TODO: Implement Function\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None])\n learning_rate = tf.placeholder(tf.float32)\n keep_probability = tf.placeholder(tf.float32, name='keep_prob')\n target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')\n return input_data, targets, learning_rate, keep_probability, target_sequence_length, \\\n max_target_sequence_length, source_sequence_length\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.",
"def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)",
"Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()",
"from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # TODO: Implement Function\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n lstm = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n return drop\n \n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, \n sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # TODO: Implement Function\n # Training Decoder\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n encoder_state,\n output_layer) \n # Perform dynamic decoding using the decoder\n training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_summary_length)\n \n return training_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n # TODO: Implement Function\n # Reuses the same parameters trained by the training process\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], \n name='start_tokens')\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n end_of_sequence_id)\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n encoder_state,\n output_layer)\n # Perform dynamic decoding using the decoder\n inference_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n\n return inference_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n # 1. Decoder Embedding\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n lstm = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n return drop\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_target_sequence_length,\n output_layer, keep_prob)\n \n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,\n target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'], \n max_target_sequence_length, target_vocab_size,\n output_layer, batch_size, keep_prob) \n \n return training_decoder_output, inference_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n # Pass the input data through the encoder. We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size,\n enc_embedding_size)\n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state,\n target_sequence_length, \n max_target_sentence_length,\n rnn_size, num_layers, \n target_vocab_to_int, \n target_vocab_size, batch_size, \n keep_prob, dec_embedding_size)\n \n return training_decoder_output, inference_decoder_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement",
"# Number of Epochs\nepochs = 7\n# Batch Size\nbatch_size = 100\n# RNN Size\nrnn_size = 256\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 300\ndecoding_embedding_size = 300\n# Learning Rate\nlearning_rate = 0.001\n# Dropout Keep Probability\nkeep_probability = 1.0\ndisplay_step = 100",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n",
"Batch and pad the source and target sequences",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gsorianob/fiuba-python | Clase 04 - Excepciones, funciones lambda, búsquedas y ordenamientos.ipynb | apache-2.0 | [
"<!--\n27/10\nOrdenamientos y búsquedas.\nExcepciones. Funciones anónimas.(Pablo o Andres)\n-->\n\nOrdenamiento de listas\nLas listas se pueden ordenar fácilmente usando la función sorted:",
"lista_de_numeros = [1, 6, 3, 9, 5, 2]\nlista_ordenada = sorted(lista_de_numeros)\nprint lista_ordenada",
"Pero, ¿y cómo hacemos para ordenarla de mayor a menor?. <br>\nSimple, interrogamos un poco a la función:\n```Python\n\n\n\nprint sorted.doc\nsorted(iterable, cmp=None, key=None, reverse=False) --> new sorted list\n``\nEntonces, con sólo pasarle el parámetro de *reverse* enTrue` debería alcanzar:",
"lista_de_numeros = [1, 6, 3, 9, 5, 2]\nprint sorted(lista_de_numeros, reverse=True)",
"¿Y si lo que quiero ordenar es una lista de registros?. <br>\nPodemos pasarle una función que sepa cómo comparar esos registros o una que sepa devolver la información que necesita comparar.",
"import random\n\ndef crear_alumnos(cantidad_de_alumnos=5):\n nombres = ['Javier', 'Pablo', 'Ramiro', 'Lucas', 'Carlos']\n apellidos = ['Saviola', 'Aimar', 'Funes Mori', 'Alario', \n 'Sanchez']\n\n alumnos = []\n for i in range(cantidad_de_alumnos):\n a = {\n 'nombre': '{}, {}'.format(\n random.choice(apellidos), random.choice(nombres)),\n 'padron': random.randint(90000, 100000),\n 'nota': random.randint(4, 10)\n }\n alumnos.append(a)\n \n return alumnos\n\n\ndef imprimir_curso(lista):\n for idx, x in enumerate(lista, 1):\n print ' {pos:2}. {padron} - {nombre}: {nota}'.format(\n pos=idx, **x)\n\n\ndef obtener_padron(alumno):\n return alumno['padron']\n\n\ndef ordenar_por_padron(alumno1, alumno2):\n if alumno1['padron'] < alumno2['padron']:\n return -1\n elif alumno2['padron'] < alumno1['padron']:\n return 1\n else:\n return 0\n\ncurso = crear_alumnos()\nprint 'La lista tiene los alumnos:'\nimprimir_curso(curso)\n\nlista_ordenada = sorted(curso, key=obtener_padron)\nprint 'Y la lista ordenada por padrón:'\nimprimir_curso(lista_ordenada)\n\notra_lista_ordenada = sorted(curso, cmp=ordenar_por_padron)\nprint 'Y la lista ordenada por padrón:'\nimprimir_curso(otra_lista_ordenada)",
"Búsquedas en listas\nPara saber si un elemento se encuentra en una lista, alcanza con usar el operador in:",
"lista = [11, 4, 6, 1, 3, 5, 7]\n\nif 3 in lista:\n print '3 esta en la lista'\nelse:\n print '3 no esta en la lista'\n\nif 15 in lista:\n print '15 esta en la lista'\nelse:\n print '15 no esta en la lista'",
"También es muy fácil saber si un elemento no esta en la lista:",
"lista = [11, 4, 6, 1, 3, 5, 7]\n\nif 3 not in lista:\n print '3 NO esta en la lista'\nelse:\n print '3 SI esta en la lista'",
"En cambio, si lo que queremos es saber es dónde se encuentra el número 3 en la lista es:",
"lista = [11, 4, 6, 1, 3, 5, 7]\n\npos = lista.index(3)\nprint 'El 3 se encuentra en la posición', pos\n\npos = lista.index(15)\nprint 'El 15 se encuentra en la posición', pos",
"Funciones anónimas\nHasta ahora, a todas las funciones que creamos les poníamos un nombre al momento de crearlas, pero cuando tenemos que crear funciones que sólo tienen una línea y no se usan en una gran cantidad de lugares se pueden usar las funciones lambda:",
"help(\"lambda\")\n\nmi_funcion = lambda x, y: x+y\n\nresultado = mi_funcion(1,2)\nprint resultado",
"Si bien no son funciones que se usen todos los días, se suelen usar cuando una función recibe otra función como parámetro (las funciones son un tipo de dato, por lo que se las pueden asignar a variables, y por lo tanto, también pueden ser parámetros).\nPor ejemplo, para ordenar los alumnos por padrón podríamos usar:\nPython\nsorted(curso, key=lambda x: x['padron'])\nAhora, si quiero ordenar la lista anterior por nota decreciente y, en caso de igualdad, por padrón podríamos usar:",
"curso = crear_alumnos(15)\nprint 'Curso original'\nimprimir_curso(curso)\n\nlista_ordenada = sorted(curso, key=lambda x: (-x['nota'], x['padron']))\nprint 'Curso ordenado'\nimprimir_curso(lista_ordenada)",
"Otro ejemplo podría ser implementar una búsqueda binaria que permita buscar tanto en listas crecientes como decrecientes:",
"es_mayor = lambda n1, n2: n1 > n2\nes_menor = lambda n1, n2: n1 < n2\n\n\ndef binaria(cmp, lista, clave):\n \"\"\"Binaria es una función que busca en una lista la clave pasada.\n Es un requisito de la búsqueda binaria que la lista se encuentre \n ordenada, pero no si el orden es ascendente o descendente. Por \n este motivo es que también recibe una función que le indique en\n que sentido ir.\n Si la lista está ordenada en forma ascendente la función que se \n le pasa tiene que ser verdadera cuando el primer valor es mayor \n que la segundo; y falso en caso contrario.\n Si la lista está ordenada en forma descendente la función que se \n le pasa tiene que ser verdadera cuando el primer valor es menor \n que la segundo; y falso en caso contrario.\n \"\"\"\n min = 0\n max = len(lista) - 1\n centro = (min + max) / 2\n while (lista[centro] != clave) and (min < max):\n if cmp(lista[centro], clave):\n max = centro - 1\n else:\n min = centro + 1\n centro = (min + max) / 2\n if lista[centro] == clave:\n return centro\n else:\n return -1\n\nprint binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)\nprint binaria(es_menor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)\nprint binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 123)\n\nprint binaria(es_menor, [9, 8, 7, 6, 5, 4, 3, 2, 1], 6)\n",
"Excepciones\nUna excepción es la forma que tiene el intérprete de que indicarle al programador y/o usuario que ha ocurrido un error. Si la excepción no es controlada por el desarrollador ésta llega hasta el usuario y termina abruptamente la ejecución del sistema. <br>\nPor ejemplo:",
"print 1/0",
"Pero no hay que tenerle miedo a las excepciones, sólo hay que tenerlas en cuenta y controlarlas en el caso de que ocurran:",
"dividendo = 1\ndivisor = 0\nprint 'Intentare hacer la división de %d/%d' % (dividendo, divisor)\ntry:\n resultado = dividendo / divisor\n print resultado\nexcept ZeroDivisionError:\n print 'No se puede hacer la división ya que el divisor es 0.'",
"Pero supongamos que implementamos la regla de tres de la siguiente forma:",
"def dividir(x, y):\n return x/y\n\ndef regla_de_tres(x, y, z):\n return dividir(z*y, x)\n\n\n# Si de 28 alumnos, aprobaron 15, el porcentaje de aprobados es de...\nporcentaje_de_aprobados = regla_de_tres(28, 15, 100)\nprint 'Porcentaje de aprobados: %0.2f %%' % porcentaje_de_aprobados",
"En cambio, si le pasamos 0 en el lugar de x:",
"resultado = regla_de_tres(0, 13, 100)\nprint 'Porcentaje de aprobados: %0.2f %%' % resultado",
"Acá podemos ver todo el traceback o stacktrace, que son el cómo se fueron llamando las distintas funciones entre sí hasta que llegamos al error. <br>\nPero no es bueno que este tipo de excepciones las vea directamente el usuario, por lo que podemos controlarlas en distintos momentos. Se pueden controlar inmediatamente donde ocurre el error, como mostramos antes, o en cualquier parte de este stacktrace. <br>\nEn el caso de la regla_de_tres no nos conviene poner el try/except encerrando la línea x/y, ya que en ese punto no tenemos toda la información que necesitamos para informarle correctamente al usuario, por lo que podemos ponerla en:",
"def dividir(x, y):\n return x/y\n\ndef regla_de_tres(x, y, z):\n resultado = 0\n try:\n resultado = dividir(z*y, x)\n except ZeroDivisionError:\n print 'No se puede calcular la regla de tres ' \\\n 'porque el divisor es 0'\n \n return resultado\n \nprint regla_de_tres(0, 1, 2)",
"Pero en este caso igual muestra 0, por lo que si queremos, podemos poner los try/except incluso más arriba en el stacktrace:",
"def dividir(x, y):\n return x/y\n\ndef regla_de_tres(x, y, z):\n return dividir(z*y, x)\n \ntry:\n print regla_de_tres(0, 1, 2)\nexcept ZeroDivisionError:\n print 'No se puede calcular la regla de tres ' \\\n 'porque el divisor es 0'\n",
"Todos los casos son distintos y no hay UN lugar ideal dónde capturar la excepción; es cuestión del desarrollador decidir dónde conviene ponerlo para cada problema.\nCapturar múltiples excepciones\nUna única línea puede lanzar distintas excepciones, por lo que capturar un tipo de excepción en particular no me asegura que el programa no pueda lanzar un error en esa línea que supuestamente es segura:\nEn algunos casos tenemos en cuenta que el código puede lanzar una excepción como la de ZeroDivisionError, pero eso puede no ser suficiente:",
"def dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es: %s' % resultado\n except ZeroDivisionError:\n print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'\n\ndividir_numeros(1, 0)\ndividir_numeros(10, 2)\ndividir_numeros(\"10\", 2)",
"En esos casos podemos capturar más de una excepción de la siguiente forma:",
"def dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es: %s' % resultado\n except TypeError:\n print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'\n except ZeroDivisionError:\n print 'ERROR: Ha ocurrido un error de división por cero'\n except Exception:\n print 'ERROR: Ha ocurrido un error inesperado'\n\ndividir_numeros(1, 0)\ndividir_numeros(10, 2)\ndividir_numeros(\"10\", 2)",
"Incluso, si queremos que los dos errores muestren el mismo mensaje podemos capturar ambas excepciones juntas:",
"def dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es: %s' % resultado\n except (ZeroDivisionError, TypeError):\n print 'ERROR: No se puede calcular la división'\n\ndividir_numeros(1, 0)\ndividir_numeros(10, 2)\ndividir_numeros(\"10\", 2)",
"Jerarquía de excepciones\nExiste una <a href=\"https://docs.python.org/2/library/exceptions.html\">jerarquía de excepciones</a>, de forma que si se sabe que puede venir un tipo de error, pero no se sabe exactamente qué excepción puede ocurrir siempre se puede poner una excepción de mayor jerarquía:\n<img src=\"excepciones.png\"/>\nPor lo que el error de división por cero se puede evitar como:",
"try:\n print 1/0\nexcept ZeroDivisionError:\n print 'Ha ocurrido un error de división por cero'",
"Y también como:",
"try:\n print 1/0\nexcept Exception:\n print 'Ha ocurrido un error inesperado'",
"Si bien siempre se puede poner Exception en lugar del tipo de excepción que se espera, no es una buena práctica de programación ya que se pueden esconder errores indeseados. Por ejemplo, un error de sintaxis.\nAdemás, cuando se lanza una excepción en el bloque try, el intérprete comienza a buscar entre todas cláusulas except una que coincida con el error que se produjo, o que sea de mayor jerarquía. Por lo tanto, es recomendable poner siempre las excepciones más específicas al principio y las más generales al final:\nPython\ndef dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es: %s' % resultado\n except TypeError:\n print 'ERROR: Ha ocurrido un error por mezclar tipos de datos'\n except ZeroDivisionError:\n print 'ERROR: Ha ocurrido un error de división por cero'\n except Exception:\n print 'ERROR: Ha ocurrido un error inesperado'\nSi el error no es capturado por ninguna clausula se propaga de la misma forma que si no se hubiera puesto nada.\nOtras cláusulas para el manejo de excepciones\nAdemás de las cláusulas try y except existen otras relacionadas con las excepciones que nos permiten manejar de mejor manera el flujo del programa:\n* else: se usa para definir un bloque de código que se ejecutará sólo si no ocurrió ningún error.\n* finally: se usa para definir un bloque de código que se ejecutará siempre, independientemente de si se lanzó una excepción o no.",
"def dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es {}'.format(resultado)\n except ZeroDivisionError:\n print 'Error: División por cero'\n else:\n print 'Este mensaje se mostrará sólo si no ocurre ningún error'\n finally: \n print 'Este bloque de código se muestra siempre'\n\ndividir_numeros(1, 0)\nprint '-------------'\ndividir_numeros(10, 2)",
"Pero entonces, ¿por qué no poner ese código dentro del try-except?. Porque tal vez no queremos capturar con las cláusulas except lo que se ejecute en ese bloque de código:",
"def dividir_numeros(x, y):\n try:\n resultado = x/y\n print 'El resultado es {}'.format(resultado)\n except ZeroDivisionError:\n print 'Error: División por cero'\n else:\n print 'Ahora hago que ocurra una excepción'\n print 1/0\n finally: \n print 'Este bloque de código se muestra siempre'\n\ndividir_numeros(1, 0)\nprint '-------------'\ndividir_numeros(10, 2)",
"Lanzar excepciones\nHasta ahora vimos cómo capturar un error y trabajar con él sin que el programa termine abruptamente, pero en algunos casos somos nosotros mismos quienes van a querer lanzar una excepción. Y para eso, usaremos la palabra reservada raise:",
"def dividir_numeros(x, y):\n if y == 0:\n raise Exception('Error de división por cero')\n \n resultado = x/y\n print 'El resultado es {0}'.format(resultado)\n\ntry:\n dividir_numeros(1, 0)\nexcept ZeroDivisionError as e:\n print 'ERROR: División por cero'\nexcept Exception as e:\n print 'ERROR: ha ocurrido un error del tipo Exception'\n\nprint '----------'\ndividir_numeros(1, 0)\n",
"Crear excepciones\nPero así como podemos usar las excepciones estándares, también podemos crear nuestras propias excepciones:\n```Python\nclass MiPropiaExcepcion(Exception):\ndef __str__(self):\n return 'Mensaje del error'\n\n```\nPor ejemplo:",
"class ExcepcionDeDivisionPor2(Exception):\n \n def __str__(self):\n return 'ERROR: No se puede dividir por dos'\n \n\ndef dividir_numeros(x, y):\n if y == 2:\n raise ExcepcionDeDivisionPor2()\n \n resultado = x/y\n\ntry:\n dividir_numeros(1, 2)\nexcept ExcepcionDeDivisionPor2:\n print 'No se puede dividir por 2'\n\ndividir_numeros(1, 2)",
"Para más información, ingresar a https://docs.python.org/2/tutorial/errors.html\nEjercicios\n\nSe leen dos listas A y B, de N y M elementos respectivamente. Construir un algoritmo que halle las listas unión e intersección de A y B. Previamente habrá que ordenarlos.\nEscribir una función que reciba una lista desordenada y un elemento, que:\nBusque todos los elementos coincidan con el pasado por parámetro y devuelva la cantidad de coincidencias encontradas.\nBusque la primera coincidencia del elemento en la lista y devuelva su posición.\nEscribir una función que reciba una lista de números no ordenada, que:\nDevuelva el valor máximo.\nDevuelva una tupla que incluya el valor máximo y su posición.\n¿Qué sucede si los elementos son cadenas de caracteres? <br>\n Nota: no utilizar lista.sort() ni la función sorted.\nSe cuenta con una lista ordenada de productos, en la que uno consiste en una tupla de (identificador, descripción, precio), y una lista de los productos a facturar, en la que cada uno consiste en una tupla de (identificador, cantidad). <br>\nSe desea generar una factura que incluya la cantidad, la descripción, el precio unitario y el precio total de cada producto comprado, y al final imprima el total general. <br>\nEscribir una función que reciba ambas listas e imprima por pantalla la factura solicitada.\nLeer de teclado (usando la función raw_input) los datos de un listado de alumnos terminados con padrón 0. Para cada alumno deben leer:<br> # Padrón<br> # Nombre<br> # Apellido<br> # Nota del primer parcial<br> # Nota del primer recuperatorio (en caso de no haber aprobado el parcial)<br> # Nota del segundo recuperatorio (en caso de no haber aprobado en el primero)<br> # Nombre del grupo<br> # Nota del TP 1<br> # Nota del TP 2<br>\nSi el padrón es 0, no deben seguir pidiendo el resto de los campos.<br>\nTanto el padrón, como el nombre y apellido deben leerse como strings (existen padrones que comienzan con una letra b), pero debe validarse que se haya ingresado algo de por lo menos 2 caracteres. <br>\nTodas las notas serán números enteros entre 0 y 10, aunque puede ser que el usuario accidentalmente ingrese algo que no sea un número, por lo que deberán validar la entrada y volver a pedirle los datos al usuario hasta que ingrese algo válido. También deben validar que las notas pertenezcan al rango de 0 a 10. <br>\nSe asume que todos los alumnos se presentan a todos los parciales hasta aprobar o completar sus 3 chances. 
<br>\nAl terminar deben:\nimprimir por pantalla un listado de todos los alumnos en condiciones de rendir coloquio (último parcial aprobado y todos los TP aprobados) en el mismo orden en el que el usuario los ingreso.\nimprimir por pantalla un listado de todos los alumnos en condiciones de rendir coloquio (último parcial aprobado y todos los TP aprobados) ordenados por padrón en forma creciente.\nimprimir por pantalla un listado de todos los alumnos en condiciones de rendir coloquio (último parcial aprobado y todos los TP aprobados) ordenados por nota y, en caso de igualdad, por padrón (ambos en forma creciente).\nCalcular para cada alumno el promedio de sus notas del parcial y luego el promedio del curso como el promedio de todos los promedios.\nInformar cuál es la nota que más se repite entre todos los parciales (sin importar si es primer, segundo o tercer parcial) e indicar la cantidad de ocurrencias.\nlistar todas las notas que se sacaron los alumnos en el primer parcial y los padrones de quienes se sacaron esas notas con el siguiente formato:\n\nNota: 2\n * nnnn1\n * nnnn2\n * nnnn3\n * nnnn4\nNota: 4\n * nnnn1\n * nnnn2\n ...\nTener en cuenta que las notas pueden ser del 2 al 10 y puede ocurrir que nadie se haya sacado esa nota (y en dicho caso no esa nota no tiene que aparecer en el listado)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
psci2195/espresso-ffans | doc/tutorials/01-lennard_jones/01-lennard_jones.ipynb | gpl-3.0 | [
"Tutorial 1: Lennard-Jones Liquid\nTable of Contents\n\nIntroduction\nBackground\nThe Lennard-Jones Potential\nUnits\nFirst steps\nOverview of a simulation script\nSystem setup\nChoosing the thermodynamic ensemble, thermostat\nPlacing and accessing particles\nSetting up non-bonded interactions\nWarmup\nIntegrating equations of motion and taking measurements\nSimple Error Estimation on Time Series Data\n\n\nExercises\nBinary Lennard-Jones Liquid\n\n\nReferences\n\nIntroduction\nWelcome to the basic ESPResSo tutorial!\nIn this tutorial, you will learn, how to use the ESPResSo package for your \nresearch. We will cover the basics of ESPResSo, i.e., how to set up and modify a physical system, how to run a simulation, and how to load, save and analyze the produced simulation data.\nMore advanced features and algorithms available in the ESPResSo package are \ndescribed in additional tutorials.\nBackground\nToday's research on Soft Condensed Matter has brought the needs for having a flexible, extensible, reliable, and efficient (parallel) molecular simulation package. For this reason ESPResSo (Extensible Simulation Package for Research on Soft Matter Systems) [1] has been developed at the Max Planck Institute for Polymer Research, Mainz, and at the Institute for Computational Physics at the University of Stuttgart in the group of Prof. Dr. Christian Holm [2,3]. The ESPResSo package is probably the most flexible and extensible simulation package in the market. It is specifically developed for coarse-grained molecular dynamics (MD) simulation of polyelectrolytes but is not necessarily limited to this. For example, it could also be used to simulate granular media. ESPResSo has been nominated for the Heinz-Billing-Preis for Scientific Computing in 2003 [4].\nThe Lennard-Jones Potential\nA pair of neutral atoms or molecules is subject to two distinct forces in the limit of large separation and small separation: an attractive force at long ranges (van der Waals force, or dispersion force) and a repulsive force at short ranges (the result of overlapping electron orbitals, referred to as Pauli repulsion from the Pauli exclusion principle). The Lennard-Jones potential (also referred to as the L-J potential, 6-12 potential or, less commonly, 12-6 potential) is a simple mathematical model that represents this behavior. It was proposed in 1924 by John Lennard-Jones. The L-J potential is of the form\n\\begin{equation}\nV(r) = 4\\epsilon \\left[ \\left( \\dfrac{\\sigma}{r} \\right)^{12} - \\left( \\dfrac{\\sigma}{r} \\right)^{6} \\right]\n\\end{equation}\nwhere $\\epsilon$ is the depth of the potential well and $\\sigma$ is the (finite) distance at which the inter-particle potential is zero and $r$ is the distance between the particles. The $\\left(\\frac{1}{r}\\right)^{12}$ term describes repulsion and the $(\\frac{1}{r})^{6}$ term describes attraction. The Lennard-Jones potential is an\napproximation. 
The form of the repulsion term has no theoretical justification; the repulsion force should depend exponentially on the distance, but the repulsion term of the L-J formula is more convenient due to the ease and efficiency of computing $r^{12}$ as the square of $r^6$.\nIn practice, the L-J potential is cut off beyond a specified distance $r_{c}$ and the potential at the cutoff distance is zero.\n<figure>\n<img src='figures/lennard-jones-potential.png' alt='missing' style='width: 600px;'/>\n<center>\n<figcaption>Figure 1: Lennard-Jones potential</figcaption>\n</center>\n</figure>\n\nUnits\nNovice users must understand that ESPResSo has no fixed unit system. The unit \nsystem is set by the user. Conventionally, reduced units are employed, in other \nwords, LJ units.\nFirst steps\nWhat is ESPResSo? It is an extensible, efficient Molecular Dynamics package especially powerful for simulating charged systems. In-depth information about the package can be found in the relevant sources [1,4,2,3].\nESPResSo consists of two components. The simulation engine is written in C and C++ for the sake of computational efficiency. The steering or control\nlevel is interfaced to the kernel via an interpreter of the Python scripting language.\nThe kernel performs all computationally demanding tasks. Above all, this means the integration of Newton's equations of motion, including the calculation of energies and forces. It also takes care of the internal organization of data, storing the data about particles, and the communication between different processors or cells of the cell-system.\nThe scripting interface (Python) is used to set up the system (particles, boundary conditions, interactions etc.), control the simulation, run analysis, and store and load results. The user has at hand the full reliability and functionality of the scripting language. For instance, it is possible to use the SciPy package for analysis and PyPlot for plotting.\nWith a certain overhead in efficiency, it can also be used to reject/accept new configurations in combined MD/MC schemes. In principle, any parameter which is accessible from the scripting level can be changed at any moment during runtime. In this way methods like thermodynamic integration become readily accessible.\nNote: This tutorial assumes that you already have a working ESPResSo\ninstallation on your system. If this is not the case, please consult the first chapters of the user's guide for installation instructions.\nPython simulation scripts can be run conveniently:",
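"As a quick, self-contained illustration (plain NumPy, an added sketch independent of ESPResSo), the truncated Lennard-Jones potential defined above can be evaluated directly; the sample distances below are arbitrary:\n```python\nimport numpy as np\n\ndef lj_potential(r, eps=1.0, sig=1.0, r_cut=2.5):\n    # plain truncated (unshifted) Lennard-Jones potential\n    v = 4.0 * eps * ((sig / r)**12 - (sig / r)**6)\n    return np.where(r < r_cut, v, 0.0)\n\n# the minimum V = -eps is reached at r = 2**(1/6) * sigma\nprint(lj_potential(np.array([0.95, 2.0**(1.0 / 6.0), 2.0])))\n```",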
"import espressomd\nprint(espressomd.features())\nrequired_features = [\"LENNARD_JONES\"]\nespressomd.assert_features(required_features)",
"Overview of a simulation script\nTypically, a simulation script consists of the following parts:\n\nSystem setup (box geometry, thermodynamic ensemble, integrator parameters)\nPlacing the particles\nSetup of interactions between particles\nWarm up (bringing the system into a state suitable for measurements)\nIntegration loop (propagate the system in time and record measurements)\n\nSystem setup\nThe functionality of ESPResSo for python is provided via a python module called <tt>espressomd</tt>. At the beginning of the simulation script, it has to be imported.",
"# Importing other relevant python modules\nimport numpy as np\n# System parameters\nN_PART = 100\nDENSITY = 0.5\n\nBOX_L = np.power(N_PART / DENSITY, 1.0 / 3.0) * np.ones(3)",
"The next step would be to create an instance of the System class and to seed espresso. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.",
"system = espressomd.System(box_l=BOX_L)\nsystem.seed = 42",
"It can be used to manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>).",
"SKIN = 0.4\nTIME_STEP = 0.01\n\nTEMPERATURE = 0.728\nGAMMA=1.0\n\nsystem.time_step = TIME_STEP\nsystem.cell_system.skin = SKIN",
"Choosing the thermodynamic ensemble, thermostat\nSimulations can be carried out in different thermodynamic ensembles such as NVE (particle __N__umber, __V__olume, __E__nergy), NVT (particle __N__umber, __V__olume, __T__emperature) or NPT-isotropic (particle __N__umber, __P__ressure, __T__emperature).\nThe NVE ensemble is simulated without a thermostat. A previously enabled thermostat can be switched off as follows:",
"system.thermostat.turn_off()",
"The NVT and NPT ensembles require a thermostat. In this tutorial, we use the Langevin thermostat.\nIn ESPResSo, the thermostat is set as follows:",
"system.thermostat.set_langevin(kT=TEMPERATURE, gamma=GAMMA, seed=42)",
"Use a Langevin thermostat (NVT or NPT ensemble) with temperature set to temperature and damping coefficient to GAMMA.\nPlacing and accessing particles\nParticles in the simulation can be added and accessed via the <tt>part</tt> property of the System class. Individual particles are referred to by an integer id, e.g., <tt>system.part[0]</tt>. If <tt>id</tt> is unspecified, an unused particle id is automatically assigned. It is also possible to use common python iterators and slicing operations to add or access several particles at once.\nParticles can be grouped into several types, so that, e.g., a binary fluid can be simulated. Particle types are identified by integer ids, which are set via the particles' <tt>type</tt> attribute. If it is not specified, zero is implied.",
"# Add particles to the simulation box at random positions\nfor i in range(N_PART):\n system.part.add(type=0, pos=np.random.random(3) * system.box_l)\n\n# Access position of a single particle\nprint(\"position of particle with id 0:\", system.part[0].pos)\n\n# Iterate over the first five particles for the purpose of demonstration.\n# For accessing all particles, do not splice system.part\nfor i in range(5):\n print(\"id\", i ,\"position:\", system.part[i].pos)\n print(\"id\", i ,\"velocity:\", system.part[i].v)\n\n# Obtain all particle positions\ncur_pos = system.part[:].pos",
"Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function:",
"print(system.part[0])",
"Setting up non-bonded interactions\nNon-bonded interactions act between all particles of a given combination of particle types. In this tutorial, we use the Lennard-Jones non-bonded interaction. The interaction of two particles of type 0 can be setup as follows:",
"LJ_EPS = 1.0\nLJ_SIG = 1.0\nLJ_CUT = 2.5 * LJ_SIG\nLJ_CAP = 0.5\nsystem.non_bonded_inter[0, 0].lennard_jones.set_params(\n epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=LJ_CUT, shift='auto')\nsystem.force_cap = LJ_CAP",
"Warmup\nIn many cases, including this tutorial, particles are initially placed randomly in the simulation box. It is therefore possible that particles overlap, resulting in a huge repulsive force between them. In this case, integrating the equations of motion would not be numerically stable. Hence, it is necessary to remove this overlap. This is done by limiting the maximum force between two particles, integrating the equations of motion, and increasing the force limit step by step as follows:",
"WARM_STEPS = 100\nWARM_N_TIME = 2000\nMIN_DIST = 0.87\n\ni = 0\nact_min_dist = system.analysis.min_dist()\nwhile i < WARM_N_TIME and act_min_dist < MIN_DIST:\n system.integrator.run(WARM_STEPS)\n act_min_dist = system.analysis.min_dist()\n i += 1\n LJ_CAP += 1.0\n system.force_cap = LJ_CAP",
"Integrating equations of motion and taking measurements\nOnce warmup is done, the force capping is switched off by setting it to zero.",
"system.force_cap = 0",
"At this point, we have set the necessary environment and warmed up our system. Now, we integrate the equations of motion and take measurements. We first plot the radial distribution function which describes how the density varies as a function of distance from a tagged particle. The radial distribution function is averaged over several measurements to reduce noise.\nThe potential and kinetic energies can be monitored using the analysis method <tt>system.analysis.energy()</tt>.\n<tt>kinetic_temperature</tt> here refers to the measured temperature obtained from kinetic energy and the number\nof degrees of freedom in the system. It should fluctuate around the preset temperature of the thermostat.\nThe mean square displacement of particle $i$ is given by:\n\\begin{equation}\n\\mathrm{msd}_i(t) =\\langle (\\vec{x}_i(t_0+t) -\\vec{x}_i(t_0))^2\\rangle,\n\\end{equation}\nand can be calculated using \"observables and correlators\". An observable is an object which takes a measurement on the system. It can depend on parameters specified when the observable is instanced, such as the ids of the particles to be considered.",
"# Integration parameters\nsampling_interval = 100\nsampling_iterations = 100\n\nfrom espressomd.observables import ParticlePositions\nfrom espressomd.accumulators import Correlator\n# Pass the ids of the particles to be tracked to the observable.\npart_pos = ParticlePositions(ids=range(N_PART))\n# Initialize MSD correlator\nmsd_corr = Correlator(obs1=part_pos,\n tau_lin=10, delta_N=10,\n tau_max=1000 * TIME_STEP,\n corr_operation=\"square_distance_componentwise\")\n# Calculate results automatically during the integration\nsystem.auto_update_accumulators.add(msd_corr)\n\n# Set parameters for the radial distribution function\nr_bins = 70\nr_min = 0.0\nr_max = system.box_l[0] / 2.0\n\navg_rdf = np.zeros((r_bins,))\n\n# Take measurements\ntime = np.zeros(sampling_iterations)\ninstantaneous_temperature = np.zeros(sampling_iterations)\netotal = np.zeros(sampling_iterations)\n\nfor i in range(1, sampling_iterations + 1):\n system.integrator.run(sampling_interval)\n # Measure radial distribution function\n r, rdf = system.analysis.rdf(rdf_type=\"rdf\", type_list_a=[0], type_list_b=[0],\n r_min=r_min, r_max=r_max, r_bins=r_bins)\n avg_rdf += rdf / sampling_iterations\n\n # Measure energies\n energies = system.analysis.energy()\n kinetic_temperature = energies['kinetic'] / (1.5 * N_PART)\n etotal[i - 1] = energies['total']\n time[i - 1] = system.time\n instantaneous_temperature[i - 1] = kinetic_temperature\n\n# Finalize the correlator and obtain the results\nmsd_corr.finalize()\nmsd = msd_corr.result()",
"We now use the plotting library <tt>matplotlib</tt> available in Python to visualize the measurements.",
"import matplotlib.pyplot as plt\nplt.ion()\n\nfig1 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')\nfig1.set_tight_layout(False)\nplt.plot(r, avg_rdf, '-', color=\"#A60628\", linewidth=2, alpha=1)\nplt.xlabel('r $[\\sigma]$', fontsize=20)\nplt.ylabel('$g(r)$', fontsize=20)\nplt.show()\n\nfig2 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')\nfig2.set_tight_layout(False)\nplt.plot(time, instantaneous_temperature, '-', color=\"red\", linewidth=2,\n alpha=0.5, label='Instantaneous Temperature')\nplt.plot([min(time), max(time)], [TEMPERATURE] * 2, '-', color=\"#348ABD\",\n linewidth=2, alpha=1, label='Set Temperature')\nplt.xlabel(r'Time [$\\delta t$]', fontsize=20)\nplt.ylabel(r'$k_B$ Temperature [$k_B T$]', fontsize=20)\nplt.legend(fontsize=16, loc=0)\nplt.show()",
"Since the ensemble average $\\langle E_\\text{kin}\\rangle=3/2 N k_B T$ is related to the temperature,\nwe may compute the actual temperature of the system via $k_B T= 2/(3N) \\langle E_\\text{kin}\\rangle$.\nThe temperature is fixed and does not fluctuate in the NVT ensemble! The instantaneous temperature is\ncalculated via $2/(3N) E_\\text{kin}$ (without ensemble averaging), but it is not the temperature of the system.\nThe correlator output is stored in the array msd and has the following shape:",
"print(msd.shape)",
"The first column of this array contains the lag time in units of the time step.\nThe second column contains the number of values used to perform the averaging of the correlation.\nThe next three columns contain the x, y and z mean squared displacement of the msd of the first particle.\nThe next three columns then contain the x, y, z mean squared displacement of the next particle...",
"fig3 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')\nfig3.set_tight_layout(False)\nlag_time = msd[:, 0]\nfor i in range(0, N_PART, 30):\n msd_particle_i = msd[:, 2+i*3] + msd[:, 3+i*3] + msd[:, 4+i*3]\n plt.plot(lag_time, msd_particle_i,\n 'o-', linewidth=2, label=\"particle id =\" + str(i))\nplt.xlabel(r'Lag time $\\tau$ [$\\delta t$]', fontsize=20)\nplt.ylabel(r'Mean squared displacement [$\\sigma^2$]', fontsize=20)\nplt.xscale('log')\nplt.yscale('log')\nplt.legend()\nplt.show()",
"Simple Error Estimation on Time Series Data\nA simple way to estimate the error of an observable is to use the standard error of the mean (SE) for $N$\nuncorrelated samples:\n\\begin{equation}\n SE = \\sqrt{\\frac{\\sigma^2}{N}},\n\\end{equation}\nwhere $\\sigma^2$ is the variance\n\\begin{equation}\n \\sigma^2 = \\left\\langle x^2 - \\langle x\\rangle^2 \\right\\rangle\n\\end{equation}",
"# calculate the standard error of the mean of the total energy, assuming uncorrelatedness\nstandard_error_total_energy = np.sqrt(etotal.var()) / np.sqrt(sampling_iterations)\nprint(standard_error_total_energy)",
"Exercises\nBinary Lennard-Jones Liquid\nA two-component Lennard-Jones liquid can be simulated by placing particles of two types (0 and 1) into the system. Depending on the Lennard-Jones parameters, the two components either mix or separate.\n\nModify the code such that half of the particles are of <tt>type=1</tt>. Type 0 is implied for the remaining particles.\nSpecify Lennard-Jones interactions between type 0 particles with other type 0 particles, type 1 particles with other type 1 particles, and type 0 particles with type 1 particles (set parameters for <tt>system.non_bonded_inter[i,j].lennard_jones</tt> where <tt>{i,j}</tt> can be <tt>{0,0}</tt>, <tt>{1,1}</tt>, and <tt>{0,1}</tt>. Use the same Lennard-Jones parameters for interactions within a component, but use a different <tt>lj_cut_mixed</tt> parameter for the cutoff of the Lennard-Jones interaction between particles of type 0 and particles of type 1. Set this parameter to $2^{\\frac16}\\sigma$ to get de-mixing or to $2.5\\sigma$ to get mixing between the two components.\nRecord the radial distribution functions separately for particles of type 0 around particles of type 0, type 1 around particles of type 1, and type 0 around particles of type 1. This can be done by changing the <tt>type_list</tt> arguments of the <tt>system.analysis.rdf()</tt> command. You can record all three radial distribution functions in a single simulation. It is also possible to write them as several columns into a single file.\nPlot the radial distribution functions for all three combinations of particle types. The mixed case will differ significantly, depending on your choice of <tt>lj_cut_mixed</tt>. Explain these differences.\n\nReferences\n[1] <a href=\"http://espressomd.org\">http://espressomd.org</a>\n[2] HJ Limbach, A. Arnold, and B. Mann. ESPResSo; an extensible simulation package for research on soft matter systems. Computer Physics Communications, 174(9):704–727, 2006.\n[3] A. Arnold, O. Lenz, S. Kesselheim, R. Weeber, F. Fahrenberger, D. Rohm, P. Kosovan, and C. Holm. ESPResSo 3.1 — molecular dynamics software for coarse-grained models. In M. Griebel and M. A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations VI, volume 89 of Lecture Notes in Computational Science and Engineering, pages 1–23. Springer Berlin Heidelberg, 2013.\n[4] A. Arnold, BA Mann, HJ Limbach, and C. Holm. ESPResSo–An Extensible Simulation Package for Research on Soft Matter Systems. Forschung und wissenschaftliches Rechnen, 63:43–59, 2003."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayankjohri/LetsExplorePython | Section 2 - Advance Python/Chapter S2.01 - Functional Programming/06_Excercise.ipynb | gpl-3.0 | [
"Excercise - Functional Programming\nQ: Try rewriting the code below as a map. It takes a list of real names and replaces them with code names produced using a more robust strategy.",
"names = [\"Aalok\", \"Chandu\", \"Roshan\", \"Prashant\", \"Saurabh\"]\n\nfor i in range(len(names)):\n names[i] = hash(names[i])\nprint(names)",
"Ans:",
"secret_names = map(hash, names)\nprint(secret_names)",
"Write functions to do the follows\n\n\ngenerate_matrix which takes two arguments m and n and a keyword argument default which specifies the value for each position. It should use a nested list comprehension to generate a list of lists with the given dimensions. If default is provided, each position should have the given value, otherwise the matrix should be populated with zeroes.\n\n\ninitcap that replicates the functionality of the string.title method, except better. Given a string, it should split the string on whitespace, capitalize each element of the resulting list and join them back into a string. Your implementation should use a list comprehension.\n\n\nmake_mapping that takes two lists of equal length and returns a dictionary that maps the values in the first list to the values in the second. The function should also take an optional keyword argument called exclude, which expects a list. Values in the list passed as exclude should be omitted as keys in the resulting dictionary.\n\n\ncompress_keys that takes a dictionary with string keys and returns a new dictionary with the vowels removed from the keys. For instance, the dictionary {\"foo\": 1, \"bar\": 2} should be transformed into {\"f\": 1, \"br\": 2}. The function should use a list comprehension nested inside a dict comprehension.\n\n\ntoUpper that takes a list of names and returns a set of with the case normalized to uppercase. For instance, the list [\"mayank\", \"JohrI\", \"Tagore\", \"Arjun\"] should be transformed into the set {\"MAYANK\", \"JOHRI\", \"TAGORE\", \"ARJUN\"}."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
littlewizardLI/Udacity-ML-nanodegrees | CapstonePoject/capstone_report.ipynb | apache-2.0 | [
"Udacity--毕业项目报告\n报告作者:MingJun-Li\n\n文件说明\n代码主要包含4个开发用的ipython notebook\n* exploration[1]和processing[1]是我初次的2份代码,一个是可视化的,一个是做处理的,不过第一次我做处理的时候用的是较为简单的3个单模型,最后提交的结果比较一般\n\nexploration with xgb[2]和xgb[3]是我后来实验的2个版本,其中xgb效果显著,取得了不错的成绩。\n\n因为2后两个改进版本都是做的比较简短的处理,所以分析主要是以我第一版本的代码为主。\n一、定义\n1.1项目概览\n\n\n历史信息\nRossmann是欧洲的一家连锁药店,在7个欧洲国家拥有3,000家药店。 目前,罗斯曼店经理的任务是提前六周预测其日销量。 商店销售受到诸多因素的影响,包括促销,竞争,学校和国家假日,季节性和地点。 成千上万的个人经理根据其独特的情况预测销售量,结果的准确性可能会有很大的变化。\n\nRossmann希望你能通过给出的数据来预测德国各地1,115家店铺的6周销量。\n 在此次项目中,我将模拟自己参加kaggle比赛,用train.csv来建模,之后用test.csv来预测,通过提交到 Kaggle来评估模型表现。\n\n\n项目意义\n可靠的销售预测使商店经理能够创建有效的员工时间表,从而提高生产力和动力,比如更好的调整供应链和合理的促销策略与竞争策略。 通过帮助Rossmann创建一个强大的预测模型,您将帮助仓库管理人员专注于对他们最重要的内容:客户和团队!\n\n\n实现可能性\nRossmann给出的数据足够丰富,并且各个特征都和销售有关系。这是一个明显的监督学习的项目,有有标签的数据集。\n\n\n引用说明\n最后会用到xgboost这个现成的混合模型,不仅运算速度快,而且集成了一系列算法。因为源码涉及大量C语音因此没有太看懂,主要参见了几个大神的博客,了解了xgboost的运行原理以及此模型的参数设置与调整方案。\n\n\n主要用到了3个数据集,并有1个数据集作为提交数据集得参考样本:\n\ntrain.csv - 历史数据包括sales的数据\ntest.csv - 历史数据不包括sales的数据\n\nstore.csv - 提供各个店铺的具体信息的数据\n\n\nsample_submission.csv - 一个最后提交数据集的参考样本\n\n\n以上数据都由kaggle提供,没有引入其他的数据集,因为这个问题比较特定并且已经足够充分了,不像自然语言处理类需要额外的数据\n-数据集的使用方式:\nsample只用来参考自己得出的答案是否符合格式和是否存在过大的差距\n主要使用train.csv - test.csv - store.csv,前期会分别分析,之后会将store数据与train和test进行合并,用合并后的数据训练模型和验证。\n1.2问题说明\n问题描述\n总的来说,就是需要根据Rossmann药妆店的信息(比如促销,竞争对手,节假日)以及在过去的销售情况,来预测Rossmann未来的销售额。\n下面是具体要完成好此任务需要解决的一些问题\n\n\n数据缺省的问题\n无论是在train还是test还是store等数据集中大部分特征都不是全都数据完整的需要去合理补上这些特征的值\n\n\n数据存在异常值\n数据在记录的时候难免有出现记错或者发生特殊情况的时候,因此有些特征的数据并不可靠,噪音很大,会极大的影响整个模型的准确性,因此要去除这些异常值\n\n\n数据的分离问题:\ntrain和test数据集都与store产生了分离。而store中的特征又与预测结果息息相关。因此要做好一个完整的训练和测试集,必须用合理的方式把store的数据与train和test的合并起来\n\n\n特征的重编码问题:\n比如 Categorical Variabl数值化,而类似销售额等特征可以常识用log等等\n\n\n对各个特征的理解:\n只有理解各个特征的现实意义才能做出更合理的特征工程,而这个理解需要通过一些统计上的知识与实际生活中的常识\n\n\n特征选择\n通过挑选出最重要的 Feature,可以将它们之间进行各种运算和操作的结果作为新的 Feature,可能带来意外的提高。但是怎么选呢?\n\n\n众多模型的选择困难\n能运用的模型有很多种,选择哪一个更适合,是选一个还是混合多个模型,如果用混合模型那该用哪几个模型?\n\n\n降低 Overfitting 的风险的办法\n在提高分数的同时又降低 Overfitting 的风险,这个需要用到不少技巧,尤其是大多数单个模型很难实现test error的持续下降。\n\n\n如何进行有效的可视化\n可视化对我来说不是太熟练,需要补充大量的知识。\n\n\n问题的解决方案\n\n\n数据的缺省:\n\n\n根据不同的特征的情况,选择继承临近的数据或者取平均值来进行填充,具体方案要case by case。\n\n\n数据的分离问题\n找到一个用于作为合并数据集标杆的特征,比如店铺id然后将store的数据与train和test的合并起来\n\n\n数据存在异常值:\n用可视化的方法找出存在异常值的特征,然后进行dropout。\n\n\n特征的重编码问题:\n利用panda的一些小技巧将Categorical Variabl数值化,而类似销售额等特征可以用log再多构造一个特征等等\n\n\n对各个特征的理解\n利用好matplot和seaborn的绘图工具中的统计工具对数据可视化后,洞察出数据之间的相关性。然后辅以自己对于购物的一些常识而和逻辑推理进一步理解特征。从而为更好地进行特征工程作充分准备。\n\n\n特征选择\n经过足够的可视化分析和特征理解之后就能大致清楚了不同特征的重要性了。也就可以进行特征选择了,通过挑选出最重要的 Feature,可以将它们之间进行各种运算和操作的结果作为新的 Feature,可能带来意外的提高,比如两个特征合并等。当然还有去掉那些没意义的特征和数据点,比如商店关门的日子,销售额为零的日子等等\n\n\n模型的选择\n根据数据的具体情况(如:特征维度数量,任务类别:分类还是回归,泛化能力,过拟合的风险等)选择几个适合的模型进行尝试,根据评估指标决定最后的选择\n\n\n降低 Overfitting 的风险的办法\n特征工程是不可或缺的,好的特征对于模型的精确度至关重要,然后要在模型选择和Ensemble上下功夫了,可以试试bagging,boosting或者stacking\n\n\n如何进行有效的可视化\n看大神们的博客,并自己动手实践\n\n\n1.3指标\n现在我们知道这是一个回归问题。 为了测试每个模型的准确性,我们通过划分训练数据创建一个测试集。 然后我们使用均方根预测误差来确定我们预测的准确性。 均方根误差(RMSE)或RMSD是模型或估计器预测的值(测试值和列车值)与实际观测值之间的差异的常用量度。 RMSE表示预测值和实际值之间的差异的样本标准偏差。 因此0意味着完美的分数。\n最终的评估结果主要是Kaggle在此项目中的pravite_data与自己预测结果的“均根方差rmspe”\n自己训练的时候评估主要是看自己进行充分的特征工程后 合并后的数据集在经过训练后在train和test上的“均根方差rmspe”数值\n二、分析\n2.1数据研究及探索可视化\n方法:\nData Exploration,对数据进行探索性的分析,从而为之后的处理和建模提供必要的结论,用 pandas 来载入数据,并 matplotlib 和 seaborn 提供的绘图功能做一些简单的可视化来理解数据。对 Numerical Variable,可以用 Box Plot 来直观地查看它的分布。\n下面我会展示一些我的数据研究。详情可见explorlation [2],eplorlation[1]with simplemodel和eplorlation[3]with xgb的 分析没有放出,因为可视化不太好看。\n- 数据集中大多数数据字段是不言自明的,符合我们的常识,以下是不是这些的字段具体描述:\n\nId - 表示测试集中(存储,日期)副本的Id\nStore - 每个商店的独特Id\nSales - 任何一天的营业额(这是你预测的)\nCustomers - 
某一天的客户数量\nOpen - 商店是否打开的指示器:0 =关闭,1 =打开\nStateHoliday - 表示一个国家假期。通常所有商店,除了少数例外,在国营假期关闭。请注意,所有学校在公众假期和周末关闭。 a =公众假期,b =复活节假期,c =圣诞节,0 =无\nSchoolHoliday - 表示(商店,日期)是否受到公立学校关闭的影响\nStoreType - 区分4种不同的商店模式:a,b,c,d\nAssortment - 描述分类级别:a = basic,b = extra,c = extended\nCompetitionDistance - 距离最接近的竞争对手商店的距离\nCompetitionOpenSince[Month/Year] ] - 给出最近的竞争对手开放时间的大约年和月\nPromo - 指示商店是否在当天运行促销\nPromo2 - Promo2是一些持续和连续推广的一些商店:0 =商店不参与,1 =商店正在参与\nPromo2自[年/周] - 描述商店开始参与Promo2的年份和日历周\nPromoInterval - 描述了Promo2的连续间隔开始,命名新的促销活动的月份。例如。 “二月,五月,八月,十一月”是指每一轮在该店的任何一年的二月,五月,八月,十一月份开始",
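"(Added for illustration) The RMSPE metric mentioned in section 1.3 can be written in a few lines. The sketch below follows the usual Kaggle Rossmann definition, in which rows with zero actual sales are ignored.",
"# Illustration: Root Mean Square Percentage Error (RMSPE), ignoring zero-sales rows\nimport numpy as np\n\ndef rmspe(y_true, y_pred):\n    y_true = np.asarray(y_true, dtype=float)\n    y_pred = np.asarray(y_pred, dtype=float)\n    mask = y_true != 0\n    return np.sqrt(np.mean(((y_true[mask] - y_pred[mask]) / y_true[mask]) ** 2))\n\nprint(rmspe([100, 200, 0], [110, 190, 5]))",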
"import os\nfrom IPython.display import display, Image\nnames = [f for f in os.listdir('C:/Users/Administrator/Desktop/report/') if f.endswith('.png')]",
"如下图分别是训练数据\"train.csv\"和“store.csv”的大致结构,显示头5行的数据内容。从中我们大致了解了有哪些数据特征",
"for name in names[:2]:\n display(Image('C:/Users/Administrator/Desktop/report/' + name, width=800))",
"先探索一下销售额是否和处于一周的第几天有关,填补缺失值,默认周天以外的时间商店都是处于“open”状态,有如下图的情况,由此可以看出销售额与周几的一些关系",
"for name in names[2:3]:\n display(Image('C:/Users/Administrator/Desktop/report/' + name, width=1000))",
"通过将时间数据进行清理,再探索一下销售额与时间(月)的关系,如下图所示展示了平均销售额与月份的关系以及百分比改变状况。",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"4.png\", width=1000))",
"这个图就比较清晰地表明了销售额与月份有着密切地关系,月份对于销售额比较大地影响。这个特征需要重视。\n再比较年份的销售和每年到访的用户数量,如下图所示",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"5.png\", width=1000))",
"从图中可以得知年份用户数量有一定关系,但关系不是很大\n接下来用box-plot和折线图分析下月份和用户数量的关系",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"6.png\", width=1000))",
"由此可知用户数量和月份是紧密相关的,这个趋势图和月份与销售额的很像,这样就容易推理出,用户数量基本上决定了销售额。为了验证我们的想法,我们在更细小的时间段进行验证,下面我以星期为验证,看看用户和销售额的关系",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"7.png\", width=1000))",
"由上图能非常明显的看出,我们的推理得到了验证:“销售额的变化基本是和用户数量的变化是一致的,百分比变化几乎完全一样”\n现在来看看促销对销售额与顾客数量是否明显的影响",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"8.png\", width=1000))",
"如图可知,促销对于顾客数量以及销售额都有着显著的影响,程度上来说,对于销售额的影响大于对用户数量的影响,因此可以推断出促销一定程度上能提升客单价。\n接下来可以看看a、b、c三种state假日在总天数中的占比情况,以及有无state对于销售额和用户的影响",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"9.png\", width=1000))",
"接下来看看学校放假对于销售额和用户量的影响",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"11.png\", width=1000))",
"然后可以再分析下客户量和销售额的关系,下图基本能大致说清楚客户数量与客单价的联系了",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"12.png\", width=1000))",
"由此其实可以推断过度的促销可能使得客单价偏低,有点得不偿失,因此应该控制好平衡。\n之后看看将“store”的各种特征合并到“train”后的一些情况,比如店铺不同类型的占比数量,以及不同类型店铺对于销售额和用户量的影响",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"13.png\", width=1000))\n\ndisplay(Image('C:/Users/Administrator/Desktop/report/' + \"14.png\", width=1000))",
"长期促销销售额对于用户数量的影响,如下图",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"15.png\", width=1000))",
"然后再看一个比较关键的特征,一般竞争者们之间的距离和销售额的关系。形状比较像正态分布",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"16.png\", width=1000))",
"竞争开始时,商店的平均销售额在一段时间内发生了什么事? 我通过一个店铺进行了演示:store_id = 6的平均销售额自竞争开始以来急剧下降。",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"17.png\", width=1000))",
"2.2算法与方法\n先从特征开始讲起把:\n从上述部分可以看出,仅通过使用客户数量数据或仅仅是促销是不能能预测销售的。销售受到每个属性的影响。为了对销售进行预测,首先我们将摆脱“客户”功能和“销售”功能中的异常值,因为数字过高或过低可能会出现异常情况。\n这样的数据会影响模型的精度。处理异常值后,我们可以开始预处理数据。这包括摆脱Null值,encoding一些特征,如StoreType,StateHoliday等。\n处理Missing Data是根据特征的现实意义进行填补和丢弃的,比如除每个星期天外,我都默认商店是处于“open”状态的,这比较符合我们的生活常识\n如前所述,日期的转换对于预测而言是非常重要的,这也是视觉化中证明的,因为日期,月份和日期的销售变化很大。\n我们也可以处理竞争的细节,因为它肯定会影响到真正影响销售的客户数量。促销可能有助于让客户回来,这就是为什么我们需要对所有Promo列进行编码的原因。一旦我们对数据进行了预处理,我们就可以使用cross_validation.train_test_split方法进行拆分。\n该方法随机洗牌数据并返回两套训练和测试。可以定义测试的大小。然后,训练集用于训练几个模型 -\n模型上用到的相关方法:\n\n我实验了大概好几种,分开来看,分别是 DecisionTree回归 ,GradientBoost回归,KNeighborsClassifier我进行了分别的测验,然后根据情况又进行了Ensemable,但最后还是别人造的轮子 xgboost更好用\n\n-1. DecisionTree回归 - 该模型的目标是创建一个通过学习从数据特征推断的简单决策规则来预测目标变量的值的模型。 DecisionTree采用特征,并使用ifnd-else决策规则来获取目标变量的方法。\n-2. Kneighbour回归 - 最近邻方法的原理是找到与新点最近距离最近的预定义数量的训练样本,并从它们预测标签。 K邻居回归基于每个查询点的k个最近邻居实现学习,其中k是用户指定的整数值。 Kneighbour的回归实例 - \n-3. GradientBoost回归 - 渐变增强回归是推广到任意可微分损失函数的推广。\n梯度增强产生一个预测模型,它以弱预测模型(通常是决策树)的形式组成。它以像其他增强方法一样的阶段性方式构建模型,并且通过允许优化任意不同的损失函数来推广它们。\n由于以下属性,这些模型已被选择用于此问题 - •考虑到影响销售的功能,数据可以轻松地分解为使用if-then-else决策规则对输入特征进行决策。这需要可以轻松完成的数据准备,所以我们不需要担心这里的主要缺点。\n虽然决策树可能不稳定,但处理分类数据和数值也是非常好的。\n因此,我们的数据集中的多种类型的功能将不会成为问题。我们可以通过查看模型的测试分数来确定是否对我们的数据不稳定,然后决定是否可以使用。 \n•我们的数据集有大量的数据点。 Kneighbours使用强力执行快速计算,这有助于我们降低模型的成本。另一方面,仅在某些输入特征不是连续值的情况下才是有效的。我们可以通过测试分数找出影响模型准确性的程度。 \n•GradientBoost是一个缓慢的模型,因此模型的成本将会很高。另一方面,它通过优化任意不可分解的损失函数使用不同的方法。如果我们无法通过任何其他模型获得良好的成绩,我们可以通过这种方法获得良好的成绩。\n2.3基准\n考虑到我们将要尝试3个以上的模型,我期望一个模型具有良好的RMSE分数,注意在RMSE的情况下越低越好,0是完美的分数。 \n对销售有一个好主意,商店经理需要对模型的准确性有一定的信念。 \n可以接受一定的错误率,但如果误差几乎达到五分之一,那么该模型是不错的。 有经验的店经理将能够预测那么多错误率。 如果模型没有正确地预测至少五分之一的数据,则数据需要更多的处理,并且所选择的模型需要被优化或改变。 因此,RMSE的基准应为0.20\n三、方法\n首先我们首先处理异常值。 检测数据中的异常值在任何数据预处理步骤中都非常重要的分析。\n异常值的存在常常会使结果不好\n考虑到这些数据点。 有很多“规则“是什么构成数据集中的异常值?\n使用Tukey's方法来识别异常值:离群值是计算为四分位数范围(IQR)的1.5倍。 数据点具有超出IQR之外的异常值的功能该功能被认为是异常的。\n因此,我写了一个代码,用可视化的方法辅助,发现客户和中的异常值\n销售,然后看到哪些异常值在这两个特征中是常见的。然后常见的异常值被丢弃。 \n如前所述,数据需要适量的处理。直接使用一些具有数值的功能 - “存储”,“竞争力”,“促销”,“促销2”,“学校”。处理数据的初始步骤是用零填充所有的NaN值。\n这里我们假设列没有被填写,因为该特征的abcense。然后为了加快处理速度,我放弃了商店关闭的行,这就是开放设置为零的地方,因为我们只想在商店开放的日子里培训模型,因此有销售。那么具有分类值“StoreType”,“Assortment”和“StateHoliday”的特征具有可以在模型中使用的替代值的所有值。之后,我们转到日期。给出的日期格式是aribtrary,需要处理。\n因此,所有的日期都被分为“DayOfWeek”,“Month”,“Day”,“Year”,“WeekOfYear”的功能。然后处理以年或月为单位的竞赛特征的日期。我们将所有的值转换成几个月,以便有一个单位进行比较。\n在“PromoOpen”这一年以来,从星期以来给出了同样的步骤。最后,“IsPromoMonth”映射为月份值,并根据该值分配0和1。为了使模型创建更快一点,销售为0的所有行都将被删除,因为它可能是一个未填充的值,这只会对模型产生负面影响。\n3.2 实施\n实施首先,将步骤分为各种功能,以计算每个功能为检查成本所花费的时间。\n第一个函数列出它所用的分类器作为参数。它适用于拟合方法并报告时间。\n然后第二个函数运行训练集本身的预测,并返回均方根误差率。它还返回了对训练集进行预测所需的时间。然后第三个函数对测试集进行预测,并报告时间和分数。\n首先,将销售数据转换为日志值,以便更容易预测。然后,前面部分提到的每个模型都被调用并作为函数的参数传递,并为每个函数报告时间和分数。一旦报告了分数,则选择最有效的模型并计算特征重要性。功能重要性告诉我们哪些功能在做出预测时最相关。这可以与我们的分析进行比较,同时探索数据。然后在条形图上表示特征重要性。\n然后将整个数据集训练在所选择的模型上,最后的预测被做出并保存到测试文件中。\n选择细化决策回归。 最初,错误率很高。 因此,将模型训练的销售数据转换为日志以降低错误率。 测试组的均方误差值为0.1819,远高于预期值。 然后通过应用GridSearchCV,错误率降低到0.164。 GridSearchCV详尽地考虑了在参数网格中传递的所有参数组合。 在这种情况下,优化了决策树算法的叶样本和样本分割值,以获得最佳分数。\n四、结果\n4.1模型评估和验证\n决策树回归器使用392592的训练集大小来训练DecisionTreeRegressor。 。\n\n训练用去6.3615秒\n做出预测在0.6946秒。 \n训练集的mean_squared_error:0.0000。 \n在0.1140秒内做出预测。 \n测试集的mean_squared_error:0.1819。 \n\n使用训练集大小为392592的Kneighbours培训KNeighborsRegressor。 。\n\n训练用去在3.6165秒\n在23.1225秒内做出预测。 \n训练集的mean_squared_error:0.1927。 \n在5.8234秒内做出预测。 \n测试集的mean_squared_error:0.2470。\n\n对于GradientBoost回归器,使用训练集大小为392592的一个GradientBoostingRegressor进行训练。\n\n训练用去71.3005秒\n做出预测在1.1283秒。 \n训练集的mean_squared_error:0.3151。\n在0.2588秒内做出预测。 \n测试集的mean_squared_error:0.3181。\n\n4.2理由:\n分析\n\n\n在Kneighbour的情况下,模型的成本低于预期,错误率低于所确定的基准值。 
这是一个很好的模型,但是错误率相对高于DecisionTree模型,因此该模型不用于最终预测。\n\n\nGradientBoost回归器具有极高的训练成本,预期的模型也能提供最高的误码率。 该模型不被分类为最优模型。 DecisionTree回归是明确的赢家,它的错误率最低,训练时间不会太长。 即使训练时间高于Kneighbours,预测时间要低得多。\n\n\nRMSE的基准值为0.33,该模型的值为0.18,更好。 从而使DecisionTree成为最优的模型。 在通过网格搜索优化DecisionTree后,模型的误码率降至0.16。\n\n\n具体的理由\n在上一节中,错误率的基准值几乎为0.20。 由于DecisionTree回归,错误率几乎是预期值的三分之二。 如果可以预测销售额的错误率只有0.16,\n那么管理者很容易做出必要的改变,看看是什么增加或减少了销售。 \nDecisonTree回归创建了一个模型,通过学习从数据特征推断出的简单决策规则来预测销售额的价值。 错误是如此之低,因为每个功能都被应用,如果另外决定规则来预测销售,并且由于功能和数字以及分类的模型是完美契合。 \n因此,对整个数据集进行了最后的训练,并且预测了测试集的销售。 可以肯定地说,这个模型将精确地预测所需的值。 因此,这个项目的任务已经完成了!\n五,结论\n自由形式可视化最后,当最优模型被训练时,我们可以看到哪些特征是影响最重要性的特征,并评估我们在本项目前期的预测。 让我们看看重要性\n5.1特征重要性可视化分析",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"19.png\", width=1000))",
"如图所见,商店,DayOfWeek和Date是最重要的feature,似乎在销售上有最大的差异。 这个预测是正确的。 另一方面,假期和促销也似乎有很大的差异,但这些特征已经被降低了\n这里有我后来用xgboost实现了低于0.1的error的时候进行的特征的分析,和上面的结果有较显著的区别,如下图所示。尤其是排名3-10的特征,在xgboost运算后明显有了更大的权重,也就是说,xgboost更具有发现特征对于预测结果的能力,不像单模型那样过于简单,靠常识也能知道过于依赖某个特征可信度会有比较大风险。所以最后结果也自然有了区别",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"20.png\", width=1000))",
"5.2思考\n项目的分析最初是项目的一个有意义的部分,因为它能够告诉我们哪个功能会影响销售,几乎和DecisionTree回归函数的feature_importance属性一样。 \n由于数据未进行预处理,因此在数据可视化方面存在困难。 \n数据中有很多NaN值在几乎每个阶段都在降低产出的质量。 \n我预计最终的模型要花费更少的时间,但训练时间不是以小时计算的话根本不重要。 \n优化模型是一个挑战,因为处理的索引错误(如代码中所述)。 现在这个模型可以用于预测销售,即使有更多的商店或不同的业务来了,如果有类似的功能,那么这个模型可以用于适当的预测\n如何将模型应用在商业上\n理想的方法是(需要检查与业务,看看我们是否需要包括他们的任何一个实施以下方法的制约因素)\n\n与业务核实我们需要多长时间才能从生产中刷新该模型状态。\n开发端到端管道,采用来自所有1115家商店的综合销售数据做数据预处理,特征工程,然后训练模型(使用CV验证方法),并根据刷新频率输出预测\n管道应能够持续整合新数据(每天/每周)和帮助预测在包括新数据在内的训练模型时,尽可能准确。\n应将报告发送给每个店主,以了解他未来6周的具体店铺预测\n\n5.3优化\n项目的分析最初是项目的一个有意义的部分,因为它能够告诉我们哪个功能会影响销售,几乎和DecisionTree回归函数的feature_importance属性一样。\n优化模型是一个挑战,因为处理的索引错误(如代码中所述)。 现在这个模型可以用于预测销售,即使有更多的商店或不同的业务来了,如果有类似的功能,那么这个模型可以用于适当的预测\n可以参见我用xgboost做出的结果,因为主要是去调参和之前需要实验的一些特征工程,所以此处不会详细分析,具体内容可以去看看代码。此处我会贴一些最后的截图。第一次不怎么会调参,第二次看了一些博客后有了显著的提升。下面是最终结果",
"display(Image('C:/Users/Administrator/Desktop/report/' + \"21.png\", width=1000))\n\ndisplay(Image('C:/Users/Administrator/Desktop/report/' + \"22.png\", width=1000))",
"-Thanks for reading\n那个最后一版的xgboost算了大概有4-5个小时才弄完"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
abhi1509/deep-learning | seq2seq/sequence_to_sequence_implementation.ipynb | mit | [
"Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.",
"import numpy as np\nimport time\n\nimport helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)",
"Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.",
"source_sentences[:50].split('\\n')",
"target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line.",
"target_sentences[:50].split('\\n')",
"Preprocess\nTo do anything useful with it, we'll need to turn the each string into a list of characters: \n<img src=\"images/source_and_target_arrays.png\"/>\nThen convert the characters to their int values as declared in our vocabulary:",
"def extract_character_vocab(data):\n special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\\n')] \n\nprint(\"Example source sequence\")\nprint(source_sentences[:30])\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_sentences[:30])\nprint(target_letter_ids[:3])",
"This is the final shape we need them to be in. We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow",
"from distutils.version import LooseVersion\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))",
"Hyperparameters",
"# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 15\ndecoding_embedding_size = 15\n# Learning Rate\nlearning_rate = 0.001",
"Input",
"def get_model_inputs():\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length\n",
"Sequence to Sequence Model\nWe can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:\n2.1 Encoder\n - Embedding\n - Encoder cell\n2.2 Decoder\n 1- Process decoder inputs\n 2- Set up the decoder\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n2.3 Seq2seq model connecting the encoder and decoder\n2.4 Build the training graph hooking up the model with the \n optimizer\n\n2.1 Encoder\nThe first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.\n\n\nEmbed the input data using tf.contrib.layers.embed_sequence\n<img src=\"images/embed_sequence.png\" />\n\n\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.\n<img src=\"images/encoder.png\" />",
"def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state",
"2.2 Decoder\nThe decoder is probably the most involved part of this model. The following steps are needed to create it:\n1- Process decoder inputs\n2- Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\nProcess Decoder Input\nIn the training process, the target sequences will be used in two different places:\n\nUsing them to calculate the loss\nFeeding them to the decoder during training to make the model more robust.\n\nNow we need to address the second point. Let's assume our targets look like this in their letter/word form (we're doing this for readibility. At this point in the code, these sequences would be in int form):\n<img src=\"images/targets_1.png\"/>\nWe need to do a simple transformation on the tensor before feeding it to the decoder:\n1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. \nWe do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.\n<img src=\"images/strided_slice_1.png\"/>\n2- The first item in each sequence we feed to the decoder has to be GO symbol. So We'll add that to the beginning.\n<img src=\"images/targets_add_go.png\"/>\nNow the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):\n<img src=\"images/targets_after_processing_1.png\"/>",
"# Process the input we'll feed to the decoder\ndef process_decoder_input(target_data, vocab_to_int, batch_size):\n '''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n\n return dec_input",
"Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\n1- Embedding\nNow that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. \nWe'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:\n<img src=\"images/embeddings.png\" />\n2- Decoder Cell\nThen we declare our decoder cell. Just like the encoder, we'll use an tf.contrib.rnn.LSTMCell here as well.\nWe need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\n3- Dense output layer\nBefore we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.\n4- Training decoder\nEssentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the the target sequences as inputs to the training decoder at each time step to make it more robust.\nWe can think of the training decoder as looking like this (except that it works with sequences in batches):\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\n5- Inference decoder\nThe inference decoder is the one we'll use when we deploy our model to the wild.\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.",
"def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n\n target_vocab_size = len(target_letter_to_int)\n # 1. Decoder Embedding\n \n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n #dec_embed_input = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)\n\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n # https://www.tensorflow.org/api_docs/python/tf/layers/dense\n# dense(\n# inputs,\n# units,\n# activation=None,\n# use_bias=True,\n# kernel_initializer=None,\n# bias_initializer=tf.zeros_initializer(),\n# kernel_regularizer=None,\n# bias_regularizer=None,\n# activity_regularizer=None,\n# trainable=True,\n# name=None,\n# reuse=None)\n \n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n \n\n \n return training_decoder_output, inference_decoder_output",
"2.3 Seq2seq model\nLet's now go a step above, and hook up the encoder and decoder using the methods we just declared",
"\ndef seq2seq_model(input_data, targets, lr, target_sequence_length, \n max_target_sequence_length, source_sequence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, \n rnn_size, num_layers):\n \n # Pass the input data through the encoder. We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, \n rnn_size, \n num_layers, \n source_sequence_length,\n source_vocab_size, \n encoding_embedding_size)\n \n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, \n decoding_embedding_size, \n num_layers, \n rnn_size,\n target_sequence_length,\n max_target_sequence_length,\n enc_state, \n dec_input) \n \n return training_decoder_output, inference_decoder_output\n \n\n",
"Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:\n<img src=\"images/logits.png\"/>\nThe logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.",
"# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()\n \n # Create the training and inference logits\n training_decoder_output, inference_decoder_output = seq2seq_model(input_data, \n targets, \n lr, \n target_sequence_length, \n max_target_sequence_length, \n source_sequence_length,\n len(source_letter_to_int),\n len(target_letter_to_int),\n encoding_embedding_size, \n decoding_embedding_size, \n rnn_size, \n num_layers) \n \n # Create tensors for the training logits and inference logits\n# https://discussions.udacity.com/t/need-some-help-understanding-the-decoder-part-of-the-implementation-tf1-1/276860\n training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')\n inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')\n \n # Create the weights for sequence_loss\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n \n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n",
"Get Batches\nThere's little processing involved when we retreive the batches. This is a simple example assuming batch_size = 2\nSource sequences (it's actually in int form, we're showing the characters for clarity):\n<img src=\"images/source_batch.png\" />\nTarget sequences (also in int, but showing letters for clarity):\n<img src=\"images/target_batch.png\" />",
"def pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\ndef get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n \n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n \n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n \n yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths",
"Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.",
"# Split data to training and validation sets\ntrain_source = source_letter_ids[batch_size:]\ntrain_target = target_letter_ids[batch_size:]\nvalid_source = source_letter_ids[:batch_size]\nvalid_target = target_letter_ids[:batch_size]\n(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>']))\n\ndisplay_step = 20 # Check training loss after every 20 batches\n\ncheckpoint = \"best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n for epoch_i in range(1, epochs+1):\n for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(\n get_batches(train_target, train_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>'])):\n \n # Training step\n _, loss = sess.run(\n [train_op, cost],\n {input_data: sources_batch,\n targets: targets_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths})\n\n # Debug message updating us on the status of the training\n if batch_i % display_step == 0 and batch_i > 0:\n \n # Calculate validation cost\n validation_loss = sess.run(\n [cost],\n {input_data: valid_sources_batch,\n targets: valid_targets_batch,\n lr: learning_rate,\n target_sequence_length: valid_targets_lengths,\n source_sequence_length: valid_sources_lengths})\n \n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'\n .format(epoch_i,\n epochs, \n batch_i, \n len(train_source) // batch_size, \n loss, \n validation_loss[0]))\n\n \n \n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, checkpoint)\n print('Model Trained and Saved')",
"Prediction",
"def source_to_seq(text):\n '''Prepare the text for the model'''\n sequence_length = 7\n return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))\n\n\n\n\ninput_sentence = 'hello'\ntext = source_to_seq(input_sentence)\n\ncheckpoint = \"./best_model.ckpt\"\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n \n #Multiply by batch_size to match the model's input parameters\n answer_logits = sess.run(logits, {input_data: [text]*batch_size, \n target_sequence_length: [len(text)]*batch_size, \n source_sequence_length: [len(text)]*batch_size})[0] \n\n\npad = source_letter_to_int[\"<PAD>\"] \n\nprint('Original Text:', input_sentence)\n\nprint('\\nSource')\nprint(' Word Ids: {}'.format([i for i in text]))\nprint(' Input Words: {}'.format(\" \".join([source_int_to_letter[i] for i in text])))\n\nprint('\\nTarget')\nprint(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))\nprint(' Response Words: {}'.format(\" \".join([target_int_to_letter[i] for i in answer_logits if i != pad])))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mikolajsacha/tweetsclassification | Notebook.ipynb | mit | [
"Twitter classificator\nThe aim of this project is to test several Machine Learning classification methods in the context of textual data from Twitter. Being given a tweet, we want to be able to assign it to one of a few pre-defined categories. We will achieve this by using supervised Machine Learning techniques, so we will use a training set of a couple of hundreds of manually labeled tweets. \nAcquiring data\nFirst step is to acquire labeled tweets for training the model. At first, I tried searching on the Internet (I gathered three sample data sets, 'dataset1', '2' and '3'), but the data sets did not satisfy me, so I decided to gather and manually label Tweets myself.<br>\n<br>\nI defined six categories for tweets:<br>\nMusic, Sport, Technology, Weather, Movies, Politics<br>\nI also created a seventh, \"Other\" category for tweets which don't fit in any of the former six.<br>\n<br>\nI used Twitter streaming API for getting all tweets in English from around the world. Then I filtered them using a set of keywords for each category (I tried to include as many keywords as possible, ~200 per category). In the end, I manually labeled those filtered tweets. This way I gathered dataset with circa 1500 Tweets (~200 per each category and ~300 labeled \"Other\")<br>\nI decided not to put the original script for acquiring data in the repository, as I don't think it is useful after getting the data set. However, there is a script for expanding the training set, of which I will tell more later.\n<br>\nThe acquired data set can be found in data/gathered_dataset/external/training_set.txt file.<br>\nWord Embeddings\nNow having a set of labeled sentences (I use the term 'sentence' interchangeably with 'tweet', although it is not strictly correct) we would want to train one of popular classification models, e. g. Neural Network or an SVM classifier. Alas, all these methods require our data to be numerical, not textual. To employ them in our problem we need to firstly get a Word Embedding to represent words from our dataset as vectors of numbers.<br>\n<br>\nWe can find a few pre-trained Word Embeddings on the Internet. I have found a good list in this repository:<br>\nhttps://github.com/3Top/word2vec-api#where-to-get-a-pretrained-models<br>\n<br>\nBy default I decided to use two embeddings for comparison:<br>\n- Google News embedding (Trained by Google, using word2vec architecture)<br>\n- Twitter embedding (Trained by GloVe, using GloVe achitecture)<br>\nGoogle News embedding contains 3 million word vectors of 300 dimensions.<br>\nTwitter embedding was trained on actual tweets, so potentially it could be a really good fit. It has several versions in terms of dimensionality, I used the largest one with 200 dimensions. I do not have information about its vocabulary size, however.<br>\n<br>\nHaving at least one word embedding is required for further work. They have sizes of a couple of GB each, so they can take some time to download.<br>\nYou can also try other embeddings, but it is required that GloVe embeddings are in textual form (.txt) and word2vec in binary form (.bin).<br>\n<br>\nIn code, the default locations for word embeddings are:<br>\n'models/word_embeddings/google/GoogleNews-vectors-negative300.bin'<br>\n'models/word_embeddings/glove_twitter/glove.twitter.27B.200d.txt'<br>\n<br>\nYou can change them by editing src/common.py file (It contains some constants which are used in scripts for testing model parameters). 
To do this you need to edit the WORD_EMBEDDINGS list.<br>\n<br>\nMeaning of each entry in the list is:<br>\n(word embedding class (Word2VecEmbedding/GloveEmbedding), ['relative_path_to_file', number of dimensions])<br>\nFor scripts to work there needs to be at least one item in this list.<br>\n<br>\nCaution: These embeddings are loaded directly to memory. This means that they can crash on 32-bit version of Python. It is also required to have at least 8 GB RAM.<br>",
"# Word2Vec allows us to see the most similar words for a given word\nfrom src.features.word_embeddings.word2vec_embedding import Word2VecEmbedding, run_interactive_word_embedding_test\n\n# change this line for embedding location\nmodel = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300)\n\n# loading model to memory can take up to a few minutes\nmodel.build()\n\nrun_interactive_word_embedding_test(model)\n\n# Unfortunately, with a library for GloVe model that I used it is not so easy to find the nearest words.\n# In this script we can only see raw word vectors\nfrom src.features.word_embeddings.glove_embedding import GloveEmbedding, run_interactive_word_embedding_test\n\nmodel = GloveEmbedding('glove_twitter/glove.twitter.27B.200d.txt', 200)\nmodel.build()\nrun_interactive_word_embedding_test(model)\n\n# This scripts tries to visualize embedded words\n# Unfortunately, we see only 3 dimensions, so it uses PCA to reduce dimensionality of word embedding\n# PCA is trained on all words which occur in our dataset\n# Reducing number of dimensions from 300 to 3 may make our word embedding worthless\n# Maybe even in this small dimensionality we can see some patterns\n# Color of word point is based on its occurences counts in tweets from each category\n# If you run this outside the notebook in standalone python you can click on dots to see words they represent\n\nfrom src.visualization.visualize_word_embedding import visualize_word_embedding\n\n# You can also use GloVe embedding here\nword_emb = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300)\nword_emb.build()\n\nvisualize_word_embedding(word_emb)",
"Preprocessing tweets\nNext step is to look at tweets and see if we can preprocess them so they work better for our model. We want to convert each to tweet a list of words. As an input we have tweets which are unicode strings. I simply convert them to list of words by splitting on whitespaces. Then, I do these conversions:\n - convert all text to lowercase (I assume case does not affect meaning, also SoMe PeOPle WrItE LiKe tHiS)\n - remove hyperlinks\n - remove @UserNames (marks answer to another user's tweet)\n - remove stopwords (words like 'a', 'the' etc) - I use NLTK stopwords corpus\n - convert numbers from 0-20 to textual representation - I use 'inflect' library\n - remove words that contain non-alphanumeric characters\nWe will also ignore words which don't exist in our Word Embedding. This filtering will be done later, because I wanted the preprocessed dataset to be a single file, fit for all embeddings and models.",
"from src.data.dataset import run_interactive_processed_data_generation\n\n# When prompt for folder name, insert 'gathered_dataset'\nrun_interactive_processed_data_generation()\n\n# data will be saved in data/gathered_dataset/processed/training_set.txt (script shows a relative path)",
"Embedding tweets\nWe have aquired a Word Embedding which allows us to get a vector of numbers for each word. However, our training set consists of lists of words, so we need to specify how to represent a sentence (tweet) based on representations of its words. I test two different approaches:<br>\n<br>\nFirst is to simply concatanate word vectors, so for each sentence we get vector of size (number of words in sentence) * (size of a single word vector). To use machine learning algorithms each tweet vector must have the same size, so we need to specify a constant length of a sentence. For instance, we can decide that sentences longer than 30 words will be cut at the end and sentences shorter than 30 words will be filled with zeros.<br>\n<br>\nSecond approach is not to concatanate, but sum word vectors (element-wise). This means that the size of sentence vector will be the same as the size of word vector, so we don't have to artificially cut or lengthen tweets.<br>\n<br>\nConcatanation yields longer vectors, so models will obviously need more time to compute. On the other hand, we don't lose information about order of words. However, if we look at this example we can see that in training for finding categories order of words may not be so important:<br>\n<br>\nTweet 1: \"President Obama was great!\"<br>\nTweet 2: \"What a great president Obama was!\"<br>\n<br>\nBoth are obviously about politics and we know this whatever order of words is.<br>\nFurthermore, if we have a small training set, we may encounter only one of these tweets. In sum embedding it is not a problem, because both will have very similar representation. In concatanation embedding we may end up with a model which labels correctly only one of them. So it is not obvious which embedding is going to work better.<br>",
"# 3D visualization of sentence embeddings, similar to visualization of word embeddings.\n# It transforms sentence vectors to 3 dimensions using PCA trained on them.\n# Colors of sentences align with their category\n\nfrom src.visualization.visualize_sentence_embeddings import visualize_sentence_embeddings\nfrom src.features.sentence_embeddings.sentence_embeddings import ConcatenationEmbedding, SumEmbedding\nfrom src.features.word_embeddings.word2vec_embedding import Word2VecEmbedding\n\n# You can also use GloVe embedding here\n# To use GloVe embedding: from src.features.word_embeddings.glove_embedding import GloveEmbedding\nword_emb = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300)\nword_emb.build()\n\nsentence_embeddings = [\n ConcatenationEmbedding,\n SumEmbedding\n]\n\nvisualize_sentence_embeddings(word_emb, sentence_embeddings)",
"Finding the best model hyperparameters\nI chose four standard classifiers from sklearn package to test: \n\nSVC (SVM Classifier) \nRandomForestClassifier \nMLPClassifier (Multi-Layer Perceptron classifier - neural network) \nKNeighborsClassifier (classification by k-nearest neighbors). \n\nFor all these methods, I defined a set of hyperparameters to test: \nCLASSIFIERS_PARAMS = [<br>\n (SVC,<br>\n {\"C\": list(log_range(-1, 8)),<br>\n \"gamma\": list(log_range(-7, 1))}),<br>\n<br>\n (RandomForestClassifier,<br>\n {\"criterion\": [\"gini\", \"entropy\"],<br>\n \"min_samples_split\": [2, 5, 10, 15],<br>\n \"min_samples_leaf\": [1, 5, 10, 15],<br>\n \"max_features\": [None, \"sqrt\"]}),<br>\n<br>\n (MLPClassifier,<br>\n {\"alpha\": list(log_range(-5, -2)),<br>\n \"learning_rate\": [\"constant\", \"adaptive\"],<br>\n \"activation\": [\"identity\", \"logistic\", \"tanh\", \"relu\"],<br>\n \"hidden_layer_sizes\": [(100,), (100, 50)]}),<br>\n<br>\n (KNeighborsClassifier,<br>\n {\"n_neighbors\": [1, 2, 3, 4, 7, 10, 12, 15, 30, 50, 75, 100, 150],<br>\n \"weights\": ['uniform', 'distance']})<br>\n]<br>\nI test all possible combinations of these parameters, for all word embeddings' and sentence embeddings' combinations. This yields a lot of test cases. Fortunately, classifiers from sklearn package have similar interfaces, so I wrote a generic method for grid-search. This method looks more or less like this: \nfor word_embedding in word_embbedings: \n - build word embedding \n for sentence_embedding in sentence_embeddings: \n - build features using sentence embedding and word embedding \n for classifier_class, tested_parameters in CLASSIFIERS_PARAMS: \n - run multithreaded GridSearchCV for classifier_class and tested_parameters \n on traing set labels and built features \n - safe GridSearchCV results in a summary file in /summaries folder \nWith models from sklearn we can also use GridSearchCV method from this package, which allows to perform grid search for best parameters in parallel. For each combination of parameters it saves average score in folds. I use folds count equal to 5.<br>\n<br>\nTesting all models can take up to a couple of hours. Because of this, I save grid search result in summaries folder, which is included in repository. We can use these results to get the best possible model or do some interpretation of the results.<br>\n<br>\nBy viewing grid search results files we can see that the best scores are somewhere around 85%.<br>",
"# run grid search on chosen models\n# consider doing a backup of text summary files from /summary folder before running this script\n\nfrom src.common import DATA_FOLDER, FOLDS_COUNT, CLASSIFIERS_PARAMS, \\\n SENTENCE_EMBEDDINGS, CLASSIFIERS_WRAPPERS, WORD_EMBEDDINGS\nfrom src.models.model_testing.grid_search import grid_search\n\nclassifiers_to_check = []\nfor classifier_class, params in CLASSIFIERS_PARAMS:\n to_run = raw_input(\"Do you wish to test {0} with parameters {1} ? [y/n] \"\n .format(classifier_class.__name__, str(params)))\n if to_run.lower() == 'y' or to_run.lower() == 'yes':\n classifiers_to_check.append((classifier_class, params))\n\nprint(\"*\" * 20 + \"\\n\")\ngrid_search(DATA_FOLDER, FOLDS_COUNT,\n word_embeddings=WORD_EMBEDDINGS,\n sentence_embeddings=SENTENCE_EMBEDDINGS,\n classifiers=classifiers_to_check,\n n_jobs=-1) # uses as many threads as CPU cores\n\n# now that we have performed grid search, we can train model using the best parameters and test it interactively\n\nfrom src.common import choose_classifier, LABELS, SENTENCES\nfrom src.models.model_testing.grid_search import get_best_from_grid_search_results\nfrom src.features.build_features import FeatureBuilder\nfrom src.visualization.interactive import interactive_test\n\nclassifier = choose_classifier() # choose classifier model\nbest_parameters = get_best_from_grid_search_results(classifier) # get best parameters\n\nif best_parameters is None: exit(-1) # no grid search summary file for this model\n\nword_emb_class, word_emb_params, sen_emb_class, hyperparams = best_parameters\n\nprint (\"\\nEvaluating model for word embedding: {:s}({:s}), sentence embedding: {:s} \\nHyperparameters {:s}\\n\"\n .format(word_emb_class.__name__, ', '.join(map(str, word_emb_params)), sen_emb_class.__name__, str(hyperparams)))\nhyperparams[\"n_jobs\"] = -1 # uses as many threads as CPU cores\n\nprint (\"Building word embedding...\")\nword_emb = word_emb_class(*word_emb_params)\nword_emb.build()\n\nprint (\"Building sentence embedding...\")\nsen_emb = sen_emb_class()\nsen_emb.build(word_emb)\n\nprint (\"Building features...\")\n\n# this helper class builds a matrix of features with provided sentence embedding\nfb = FeatureBuilder()\nfb.build(sen_emb, LABELS, SENTENCES)\n\nprint (\"Building model...\")\nclf = classifier(sen_emb, probability=True, **hyperparams)\nclf.fit(fb.features, fb.labels)\n\nprint (\"Model evaluated!...\\n\")\ninteractive_test(clf)\n\n# This scripts visualizes how models work in 2D\n# It scatters training samples on a 2D space and colors backgrounds according to model prediction\n# It uses best possible parameters from grid search result file\n# I use PCA to reduce number of dimension to 2D\n# Models perform poorly in 2D, but we can see how boundaries between categories may look for different methods\n\nfrom src.visualization.visualize_2d import visualize_2d\nfrom src.features.word_embeddings.word2vec_embedding import Word2VecEmbedding\nfrom src.common import choose_multiple_classifiers\n\nclassifier_classes = choose_multiple_classifiers()\n\n# use same word embedding for all classifiers - we don't want to load more than one embedding to RAM\nword_emb = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300)\nword_emb.build()\n\nvisualize_2d(word_emb, classifier_classes)\n\n# Attention: There is probably a bug in matplotlib, which causes the color of the fifth category\n# not to be drawn in the background (cyan, 'Movies' category)",
"Applying PCA\nUntil now, I applied PCA several times to make plotting 2D or 3D figures possible. Maybe there is a chance that applying a well fit Principal Component Analysis to our data set will increase model accuracy. The possible option is to fit PCA to our sentece vectors. Then we can train model on sentence vectors transformed with PCA. With reduced number of dimensions, it is also possible that our model could work faster.<br>\n<br>\nThe script below allows user to see how applying PCA to a chosen model works. It cross-validates model built on PCA with dimensions reduced to numbers distributed linearly between 1 and the initial number of dimensions.\nThere is however a limit - we can't train PCA to number of dimensions which is higher than the number of training samples. So, if we have a dataset of 1500 tweets and we use 5 folds for cross-validation, that means that maximum dimensions after PCA are 1200 (4/5 * 1500). It can be an issue when we use Concatenation Embedding.<br>\n<br>\nThe script measures: cross-validation scores, model training time (including fitting PCA) and average sentence prediction time for a trained model.<br>\n<br>\nMy conclusion after seeing the results is that in our problem applying PCA probably wouldn't be advantegous, as there is a noticable decreasement in training speed and no apparent improvement in accuracy, as compared to model without PCA.",
"# training for PCA visualization may take a while, so I also use a summary file for storing the results\nfrom src.visualization.visualize_pca import visualize_pca\nfrom src.common import choose_classifier\n\nclassifier_class = choose_classifier()\nvisualize_pca(classifier_class, n_jobs=-1)",
"Expanding training set\nAfter having a way to choose the best parameters for a model I modified the script for mining tweets. In the beginning I used keywords to search for tweets possibly fit for the training set. This gave me quite clear tweets, but may seem my models not general, because they fit mostly to those keywords.<br>\n<br>\nThe approach that I use to expand my current training set is to use a trained classification model to test tweets incoming from Twitter. I declare a threshold, for instance 0.7 and I take into consideration only these tweets, which are assigned to one category with probability more than the threshold.<br>",
"from src.data.data_gathering.tweet_miner import mine_tweets\n\n# At first, we train the best possible model we have so far (searching in grid search results files)\n# Then, we run as many threads as CPU cores, or two if there is only one core.\n# First thread reads stream from Twitter API and puts all tweets in a synchronous queue.\n# Other threads read from the queue, test tweets on model\n# and if threshold requirement is met, put them in mined_tweets.txt file\n\n# by setting this flag to False we ignore tweets that are classified as \"Other\"\ninclude_unclassified = False\nthreshold = 0.7\n\nmine_tweets(threshold, include_unclassified)\n# this method can be stopped only by killing the process\n\n# when we have some tweets in mined_tweets.txt, we must label them manually.\nfrom src.data.data_gathering.tweet_selector import select_tweets\n\nselect_tweets()\n\n# Results will be stored in file selected_tweets.txt.\n# We can manually merge this file with already existing training set",
"Some analysis\nHaving performed grid search on several models, we can analyse how various hyperparameters perform. Use script below to see some plots showing cross-validation performance for different parameters.",
"# compare how different sentence embedding perform for all tested models\nfrom src.common import SENTENCE_EMBEDDINGS\nfrom src.visualization.compare_sentence_embeddings import get_grid_search_results_by_sentence_embeddings\nfrom src.visualization.compare_sentence_embeddings import compare_sentence_embeddings_bar_chart\n\nsen_embeddings = [sen_emb.__name__ for sen_emb in SENTENCE_EMBEDDINGS]\n\ngrid_search_results = get_grid_search_results_by_sentence_embeddings(sen_embeddings)\ncompare_sentence_embeddings_bar_chart(grid_search_results)\n\n# compare overall performance of all tested models\nfrom src.visualization.compare_models import get_available_grid_search_results\nfrom src.visualization.compare_models import compare_models_bar_chart\n\nbest_results_for_models = get_available_grid_search_results()\n\nfor classifier_class, parameters in best_results_for_models:\n word_emb_class, word_emb_params, sen_emb_class, params, best_result, avg_result = parameters\n\n print (\"\\n{0}: Best result: {1}%, Average result: {2}%\".\n format(classifier_class.__name__, best_result, avg_result))\n print (\"For embeddings: {0}({1}), {2}\".format(word_emb_class.__name__,\n ', '.join(map(str, word_emb_params)),\n sen_emb_class.__name__))\n print (\"And for parameters: {0}\".format(str(params)))\n\ncompare_models_bar_chart(best_results_for_models)\n\n# For a given model, visualize how it performs for a chosen parameter or pair of parameters\nfrom src.common import choose_classifier\nfrom src.visualization.visualize_parameters import get_all_grid_searched_parameters\nfrom src.visualization.visualize_parameters import choose_parameters_to_analyze\nfrom src.visualization.visualize_parameters import analyze_single_parameter\nfrom src.visualization.visualize_parameters import analyze_two_parameters\n\nclassifier_class = choose_classifier()\nparameters_list = get_all_grid_searched_parameters(classifier_class)\nif not parameters_list: # grid search results not found\n exit(-1)\ntested_parameters = list(parameters_list[0][0].iterkeys())\nparameters_to_analyze = choose_parameters_to_analyze(tested_parameters)\n\n# if we choose a single parameter, draw 1D plot\nif len(parameters_to_analyze) == 1:\n analyze_single_parameter(parameters_to_analyze[0], classifier_class, parameters_list)\n\n# if we choose two parameters, draw 2D plot\nelif len(parameters_to_analyze) == 2:\n analyze_two_parameters(parameters_to_analyze[0], parameters_to_analyze[1], classifier_class, parameters_list)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pysg/pyther | Envolvente.ipynb | mit | [
"Diagrama de fases de sustancias puras\nEn esta sección se presenta una forma de obtener las ecuaciones necesarias para realizar el cálculo del diagrama de fases de usa sustancia pura utilizando un algoritmo de continuación (Allgower & Georg, 2003, Cismondi & Michelsen, 2007, Cismondi et. al, 2008).\nLa implementación de este algoritmo de solución de las ecuaciones resultantes del equilibrio de fases son implementadas como un método de la librería pyther.\n9.1 Sistema de Ecuaciones\nSe parte de las ecuaciones que surgen de la condición de equilibrio de fases para una sustancia pura, sin embargo, el enfoque que se utiliza corresponde a tener como variables del sistema de ecuaciones al logaritmo natural de la temperatura $T$ y los volumenes de líquido $V_l$ y vapor $V_v$. Adicionalmente, se tiene una ecuación correspondiente a la especificación de un valor de alguna de las variables del sistema de ecuaciones danto lugar a un sistema de 3 ecuaciones con la forma que se muestra a continuación:\n$$ F = \n\\begin{bmatrix}\nln \\left( \\frac{P^l(T, V^l)} {P^v(T, V^v)} \\right)\\\nln f_l(T, V^l) - ln f_v(T, V^v)\\\nX_S - S\n\\end{bmatrix}\n$$\nPor tanto la solución del sistema de ecuaciones se puede obtener como:\n$$ Jx \n\\begin{bmatrix}\n\\Delta ln T\\\n\\Delta ln V^l\\\n\\Delta ln V^v\\\n\\end{bmatrix}\n+ F = 0$$\nsiendo \n$$ \\Lambda = \n\\begin{bmatrix}\n\\Delta ln T\\\n\\Delta ln V^l\\\n\\Delta ln V^v\\\n\\end{bmatrix}\n$$\nen donde cada elemento de la matriz $Jx$, salvo la última fila que son cero, tienen la siguiente forma:\n$$ Jx_{1,1} = T \\left( \\frac {\\left(\\frac{\\partial P_{x} }{\\partial T}\\right)} {P_l} - \\frac {\\left(\\frac{\\partial P_{y} }{\\partial T}\\right)} {P_v} \\right) $$\n$$ Jx_{1,2} = -V_l \\left( \\frac {\\left(\\frac{\\partial P }{\\partial V_{x}}\\right)} {P_l} \\right) $$\n$$ Jx_{1,3} = -V_v \\left( \\frac {\\left(\\frac{\\partial P }{\\partial V_{y}}\\right)} {P_v} \\right) $$\n$$ Jx_{2,1} = T \\left(\\left(\\frac{\\partial f^l } {\\partial T} \\right) - \\left(\\frac{\\partial f^v } {\\partial T} \\right) \\right) $$\n$$ Jx_{2,2} = V_l \\left(\\frac{\\partial f^l } {\\partial V_{l}} \\right) $$\n$$ Jx_{2,3} = - V_v \\left(\\frac{\\partial f^y } {\\partial V_{v}} \\right) $$\nMatriz de primeras derivadas parciales\n$$J_x = \\begin{bmatrix}\nT \\left( \\frac {\\left(\\frac{\\partial P_{x} }{\\partial T}\\right)} {P_l} - \\frac {\\left(\\frac{\\partial P_{y} }{\\partial T}\\right)} {P_v} \\right) & \n-V_l \\left( \\frac {\\left(\\frac{\\partial P_{x} }{\\partial V}\\right)} {P_l} \\right) & \n-V_v \\left( \\frac {\\left(\\frac{\\partial P_{y} }{\\partial V}\\right)} {P_y} \\right) \\\n T \\left(\\left(\\frac{\\partial f^l } {\\partial T} \\right) - \\left(\\frac{\\partial f^v } {\\partial T} \\right) \\right) & V_l \\left(\\frac{\\partial f^l } {\\partial V_{l}} \\right) & - V_v \\left(\\frac{\\partial f^y } {\\partial V_{v}} \\right) & \\\n 0 & 0 & 0 &\n\\end{bmatrix}$$\nuna vez que se obtiene la solución del sistema de ecuaciones planteado, se procede con un método de continuación para obtener un valor inicial de un siguiente punto partiendo de la solución previamente encontrada y de esta forma repetir el procedimiento, siguiendo la descripción que se muestra más adelante.\n9.2 Descripción del algoritmo\nLa descripción del algoritmo es tomada de Pisoni, Gerardo Oscar (2014):\n$$ Jx\\left(\\frac{d\\Lambda}{dS_{Spec}}\\right) + \\left(\\frac{dF}{dS_{Spec}}\\right) = 0 $$\nDonde $J_x$ es la matriz jacobiana de la función vectorial $F$, $\\Lambda$ es el vector de 
variables del sistema $F=0$, $S_{Spec}$ es el valor asignado a una de las variables del vector $\\Lambda$, $\\frac{d\\Lambda}{ dS_{Spec}}$ es la derivada, manteniendo la condición $F=0$, del vector de variables con respecto al parámetro $S_{spec}$. Observe que si $S_{spec}=\\Lambda_i$, entonces $\\frac{d\\Lambda_i} {dS_{Spec}} =1$. El vector $\\frac{d\\Lambda}{ dS_{Spec}}$ es llamado “vector de sensitividades”.\n$\\frac{\\partial F} {\\partial S_{Spec}}$ es la derivada parcial del vector de funciones $F$ con respecto la variable $S_{spec}$.\nLa matriz jacobiana $J_x$ debe ser valuada en un punto ya convergido que es solución del sistema de ecuaciones $F=0$. Observe en los distintos sistemas de ecuaciones presentados en el capítulo 3, que sólo una componente del vector $F$ depende explícitamente de $S_{spec}$. Por tanto, las componentes del vector $\\frac{\\partial F} {\\partial S_{Spec}}$ son todas iguales a cero, excepto la que depende de $S_{spec}$, en esta tesis el valor de dicha componente es siempre $“-1”$.\nConocidos $J_x$ y $\\frac{\\partial F} {\\partial S_{Spec}}$ es posible calcular todas las componentes del vector $\\frac{d\\Lambda}{ dS_{Spec}}$.\nCon $\\frac{d\\Lambda}{ dS_{Spec}}$ conocido es posible predecir los valores de todas las variables del vector $\\Lambda$ para el siguiente punto de la “hiper-línea\" que se está calculando, aplicando la siguiente ecuación:\n$$ \\Lambda_{next point}^0 = \\Lambda_{conve. pont} + \\left(\\frac{d\\Lambda}{dS_{Spec}}\\right) \\Delta S_{Spec} $$\nAquí $\\Lambda_{next point}^0$ corresponde al valor inicial del vector $\\Lambda$ para el próximo punto a ser calculado. $\\Lambda_{conve. pont}$ es el valor del vector $\\Lambda$ en el punto ya convergido.\nPor otra parte, el vector de sensitividades $\\frac{d\\Lambda}{ dS_{Spec}}$ provee información sobre la próxima variable que debe ser especificada en el próximo punto a ser calculado. La variable a especificar corresponderá a la componente del vector $\\frac{d\\Lambda}{dS_{Spec}}$ de mayor valor absoluto. Supongamos que la variable especificada para el punto convergido fue la presión $P$, es decir en el punto convergido $S_{spec} = P$.\n9.3 Implementación del Algoritmo\nA continuación se muestra la forma de utilizar la librería pyther para realizar el diagrama de fases de una sustancia pura.",
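"# Before moving to the pyther implementation, here is a small numerical sketch of the predictor step
# described in section 9.2. It is independent of pyther's internals and only illustrates the linear
# algebra: the sensitivity vector solves J_x (dLambda/dS_spec) = -dF/dS_spec, the next specified variable
# is the component of largest absolute sensitivity, and the initial guess for the next point follows the
# extrapolation formula above. It assumes the specification row of J_x carries a 1 in the column of the
# currently specified variable (so the system is solvable and dLambda_i/dS_spec = 1 for that variable).
import numpy as np

def continuation_step(J_x, lambda_converged, spec_index, delta_spec):
    # J_x: Jacobian evaluated at the converged point (last row = specification equation)
    # lambda_converged: converged vector [ln T, ln V_l, ln V_v]
    # spec_index: index of the currently specified variable; delta_spec: step in that variable
    J = np.array(J_x, dtype=float)
    J[-1, :] = 0.0
    J[-1, spec_index] = 1.0                  # d(X_S - S)/dX_S = 1 in the specification row
    dF_dS = np.zeros(len(lambda_converged))
    dF_dS[-1] = -1.0                         # only the specification equation depends on S_spec
    sensitivities = np.linalg.solve(J, -dF_dS)
    next_spec_index = np.argmax(np.abs(sensitivities))    # variable to specify at the next point
    lambda_next_guess = np.asarray(lambda_converged, dtype=float) + sensitivities * delta_spec
    return lambda_next_guess, next_spec_index",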
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline \nimport pyther as pt",
"Luego de hacer la importación de las librerías que se van a utilizar, en la función main_eos() definida por un usuario se realiza la especificación de la sustancia pura junto con el modelo de ecuación de estado y parámetros que se requieren en la función \"pt.function_elv(components, Vc, Tc, Pc, omega, k, d1)\" que realiza los cálculos del algoritmo que se describió previamente.",
"def main_eos():\n print(\"-\" * 79)\n components = [\"METHANE\"]\n MODEL = \"PR\"\n specification = \"constants\"\n component_eos = pt.parameters_eos_constans(components, MODEL, specification)\n #print(component_eos)\n #print('-' * 79)\n \n methane = component_eos[component_eos.index==components]\n #print(methane) \n methane_elv = methane[[\"Tc\", \"Pc\", \"k\", \"d1\"]]\n #print(methane_elv)\n \n Tc = np.array(methane[\"Tc\"])\n Pc = np.array(methane[\"Pc\"])\n Vc = np.array(methane[\"Vc\"])\n omega = np.array(methane[\"Omega\"])\n k = np.array(methane[\"k\"])\n d1 = np.array(methane[\"d1\"])\n \n punto_critico = np.array([Pc, Vc])\n \n print(\"Tc main = \", Tc)\n print(\"Pc main = \", Pc)\n print(\"punto critico = \", punto_critico)\n \n data_elv = pt.function_elv(components, Vc, Tc, Pc, omega, k, d1)\n #print(data_elv)\n \n return data_elv, Vc, Pc",
"9.4 Resultados\nSe obtiene el diagrama de fases líquido-vapor de una sustancia pura utilizando el método function_elv(components, Vc, Tc, Pc, omega, k, d1) de la librería pyther. Se observa que la función anterior main_eos() puede ser reemplazada por un bloque de widgets que simplifiquen la interfaz gráfica con los usuarios.",
"volumen = envolvente[0][0]\npresion = envolvente[0][1]\nVc, Pc = envolvente[1], envolvente[2]\n\nplt.plot(volumen,presion)\nplt.scatter(Vc, Pc)\n\nplt.xlabel('Volumen [=] $mol/cm^3$')\nplt.ylabel('Presión [=] bar')\nplt.grid(True)\nplt.text(Vc * 1.4, Pc * 1.01, \"Punto critico\")",
"9.5 Referencias\n[1] E.L. Allgower, K. Georg, Introduction to Numerical Continuation Methods, SIAM. Classics in Applied Mathematics, Philadelphia, 2003.\n[2] M. Cismondi, M.L. Michelsen, Global phase equilibrium calculations: Critical lines, critical end points and liquid-liquid-vapour equilibrium in binary mixtures, Journal of Supercritical Fluids, 39 (2007) 287-295.\n[3] M. Cismondi, M.L. Michelsen, M.S. Zabaloy, Automated generation of phase diagrams for binary systems with azeotropic behavior, Industrial and Engineering Chemistry Research, 47 (2008) 9728-9743.\n[4] Pisoni, Gerardo Oscar (2014). Mapas Característicos del Equilibrio entre Fases para Sistemas Ternarios (tesis doctoral). Universidad Nacional del Sur, Argentina."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zakandrewking/cobrapy | documentation_builder/phenotype_phase_plane.ipynb | lgpl-2.1 | [
"Production envelopes\nProduction envelopes (aka phenotype phase planes) will show distinct phases of optimal growth with different use of two different substrates. For more information, see Edwards et al.\nCobrapy supports calculating these production envelopes and they can easily be plotted using your favorite plotting package. Here, we will make one for the \"textbook\" E. coli core model and demonstrate plotting using matplotlib.",
"import cobra.test\nfrom cobra.flux_analysis import production_envelope\n\nmodel = cobra.test.create_test_model(\"textbook\")",
"We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen.",
"prod_env = production_envelope(model, [\"EX_glc__D_e\", \"EX_o2_e\"])\n\nprod_env.head()",
"If we specify the carbon source, we can also get the carbon and mass yield. For example, temporarily setting the objective to produce acetate instead we could get production envelope as follows and pandas to quickly plot the results.",
"prod_env = production_envelope(\n model, [\"EX_o2_e\"], objective=\"EX_ac_e\", c_source=\"EX_glc__D_e\")\n\nprod_env.head()\n\n%matplotlib inline\n\nprod_env[prod_env.direction == 'maximum'].plot(\n kind='line', x='EX_o2_e', y='carbon_yield')",
"Previous versions of cobrapy included more tailored plots for phase planes which have now been dropped in order to improve maintainability and enhance the focus of cobrapy. Plotting for cobra models is intended for another package."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/asl-ml-immersion | notebooks/text_models/solutions/custom_tf_hub_word_embedding.ipynb | apache-2.0 | [
"Custom TF-Hub Word Embedding with text2hub\nLearning Objectives:\n 1. Learn how to deploy AI Hub Kubeflow pipeline\n 1. Learn how to configure the run parameters for text2hub\n 1. Learn how to inspect text2hub generated artifacts and word embeddings in TensorBoard\n 1. Learn how to run TF 1.x generated hub module in TF 2.0\nIntroduction\nPre-trained text embeddings such as TF-Hub modules are a great tool for building machine learning models for text features, since they capture relationships between words. These embeddings are generally trained on vast but generic text corpora like Wikipedia or Google News, which means that they are usually very good at representing generic text, but not so much when the text comes from a very specialized domain with unique vocabulary, such as in the medical field.\nOne problem in particular that arises when applying a TF-Hub text module which was pre-trained on a generic corpus to specialized text is that all of the unique, domain-specific words will be mapped to the same “out-of-vocabulary” (OOV) vector. By doing so we lose a very valuable part of the text information, because for specialized texts the most informative words are often the words that are very specific to that special domain. Another issue is that of commonly misspelled words from text gathered from say, customer feedback. Applying a generic pre-trained embedding will send the misspelled word to the OOV vectors, losing precious information. However, by creating a TF-Hub module tailored to the texts coming from that customer feedback means that common misspellings present in your real customer data will be part of the embedding vocabulary and should be close by closeby to the original word in the embedding space.\nIn this notebook, we will learn how to generate a text TF-hub module specific to a particular domain using the text2hub Kubeflow pipeline available on Google AI Hub. This pipeline takes as input a corpus of text stored in a GCS bucket and outputs a TF-Hub module to a GCS bucket. The generated TF-Hub module can then be reused both in TF 1.x or in TF 2.0 code by referencing the output GCS bucket path when loading the module. \nOur first order of business will be to learn how to deploy a Kubeflow pipeline, namely text2hub, stored in AI Hub to a Kubeflow cluster. Then we will dig into the pipeline run parameter configuration and review the artifacts produced by the pipeline during its run. These artifacts are meant to help you assess how good the domain specific TF-hub module you generated is. In particular, we will explore the embedding space visually using TensorBoard projector, which provides a tool to list the nearest neighbors to a given word in the embedding space.\nAt last, we will explain how to run the generated module both in TF 1.x and TF 2.0. Running the module in TF 2.0 will necessite a small trick that’s useful to know in itself because it allows you to use all the TF 1.x modules in TF hub in TF 2.0 as a Keras layer.",
"import tensorflow as tf\nimport tensorflow_hub as hub",
"Replace by your GCP project and bucket:",
"PROJECT = !(gcloud config get-value core/project)\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n%env PROJECT = {PROJECT}\n%env BUCKET = {BUCKET}\n%env REGION = {REGION}",
"Loading the dataset in GCS\nThe corpus we chose is one of Project Gutenberg medical texts: A Manual of the Operations of Surgery by Joseph Bell, containing very specialized language. \nThe first thing to do is to upload the text into a GCS bucket:",
"%%bash\n\nURL=https://www.gutenberg.org/cache/epub/24564/pg24564.txt\nOUTDIR=gs://$BUCKET/custom_embedding\nCORPUS=surgery_manual.txt\n\ncurl $URL > $CORPUS\ngsutil cp $CORPUS $OUTDIR/$CORPUS",
"It has very specialized language such as \nOn the surface of the abdomen the position of this vessel would be \nindicated by a line drawn from about an inch on either side of the \numbilicus to the middle of the space between the symphysis pubis \nand the crest of the ilium.\nNow let's go over the steps involved in creating your own embedding from that corpus.\nStep 1: Download the text2hub pipeline from AI Hub (TODO 1)\nGo on AI Hub and search for the text2hub pipeline, or just follow this link.\nYou'll land onto a page describing text2hub. Click on the \"Download\" button on that page to download the Kubeflow pipeline and click Accept.\n\nThe text2hub pipeline is a KubeFlow pipeline that comprises three components; namely:\n\n\nThe text2cooc component that computes a word co-occurrence matrix\nfrom a corpus of text\n\n\nThe cooc2emb component that factorizes the\nco-occurrence matrix using Swivel into\nthe word embeddings exported as a tsv file\n\n\nThe emb2hub component that takes the word\nembedding file and generates a TF Hub module from it\n\n\nEach component is implemented as a Docker container image that's stored into Google Cloud Docker registry, gcr.io. The pipeline.tar.gz file that you downloaded is a yaml description of how these containers need to be composed as well as where to find the corresponding images. \nRemark: Each component can be run individually as a single component pipeline in exactly the same manner as the text2hub pipeline. On AI Hub, each component has a pipeline page describing it and from where you can download the associated single component pipeline:\n\ntext2cooc\ncooc2emb\nemb2hub\n\nStep 2: Upload the pipeline to the Kubeflow cluster (TODO 1)\nGo to your Kubeflow cluster dashboard or navigate to Navigation menu > AI Platform > Pipelines and click Open Pipelines Dashboard then click on the Pipelines tab to create a new pipeline. You'll be prompted to upload the pipeline file you have just downloaded, click Upload Pipeline. Rename the generated pipeline name to be text2hub to keep things nice and clean.\n\nStep 3: Create a pipeline run (TODO 1)\nAfter uploading the pipeline, you should see text2hub appear on the pipeline list. Click on it. This will bring you to a page describing the pipeline (explore!) and allowing you to create a run. You can inspect the input and output parameters of each of the pipeline components by clicking on the component node in the graph representing the pipeline. Click Create Run.\n\nStep 4: Enter the run parameters (TODO 2)\ntext2hub has the following run parameters you can configure:\nArgument | Description | Optional | Data Type | Accepted values | Default\n------------------------------------------------ | ------------------------------------------------------------------------------------- | -------- | --------- | --------------- | -------\ngcs-path-to-the-text-corpus | A Cloud Storage location pattern (i.e., glob) where the text corpus will be read from | False | String | gs://... | -\ngcs-directory-path-for-pipeline-output | A Cloud Storage directory path where the pipeline output will be exported | False | String | gs://... 
| -\nnumber-of-epochs | Number of epochs to train the embedding algorithm (Swivel) on | True | Integer | - | 40\nembedding-dimension | Number of components of the generated embedding vectors | True | Integer | - | 128\nco-occurrence-word-window-size | Size of the sliding word window where co-occurrences are extracted from | True | Integer | - | 10\nnumber-of-out-of-vocabulary-buckets | Number of out-of-vocabulary buckets | True | Integer | - | 1\nminimum-occurrences-for-a-token-to-be-considered | Minimum number of occurrences for a token to be included in the vocabulary | True | Integer | - | 5\nYou can leave most parameters with their default values except for\ngcs-path-to-the-text-corpus whose value should be set to",
"!echo gs://$BUCKET/custom_embedding/surgery_manual.txt",
"and for gcs-directory-path-for-pipeline-output which we will set to",
"!echo gs://$BUCKET/custom_embedding",
"Remark: gcs-path-to-the-test-corpus will accept a GCS pattern like gs://BUCKET/data/*.txt or simply a path like gs://BUCKET/data/ to a GCS directory. All the files that match the pattern or that are in that directory will be parsed to create the word embedding TF-Hub module. \n\nMake sure to choose experiment default. Once these values have been set, you can start the run by clicking on Start.\nStep 5: Inspect the run artifacts (TODO 3)\nOnce the run has started you can see its state by going to the Experiments tab and clicking on the name of the run (here \"text2hub-1\"). \n\nIt will show you the pipeline graph. The components in green have successfuly completed. You can then click on them and look at the artifacts that these components have produced.\nThe text2cooc components has \"co-occurrence extraction summary\" showing you the GCS path where the co-occurrence data has been saved. Their is a corresponding link that you can paste into your browser to inspect the co-occurrence data from the GCS browser. Some statistics about the vocabulary are given to you such as the most and least frequent tokens. You can also download the vocabulary file containing the token to be embedded. \n\nThe cooc2emb has three artifacts\n* An \"Embedding Extraction Summary\" providing the information as where the model chekpoints and the embedding tables are exported to on GCP\n* A similarity matrix from a random sample of words giving you an indication whether the model associates close-by vectors to similar words\n* An button to start TensorBoard from the UI to inspect the model and visualize the word embeddings\n\nWe can have a look at the word embedding visualization provided by TensorBoard. Select the TF version: TensorFlow 1.14.0. Start TensorBoard by clicking on Start Tensorboard and then Open Tensorboard buttons, and then select \"Projector\".\nRemark: The projector tab may take some time to appear. If it takes too long it may be that your Kubeflow cluster is running an incompatible version of TensorBoard (your TB version should be between 1.13 and 1.15). If that's the case, just run Tensorboard from CloudShell or locally by issuing the following command.",
"!echo tensorboard --port 8080 --logdir gs://$BUCKET/custom_embedding/embeddings",
"The projector view will present you with a representation of the word vectors in a 3 dimensional space (the dim is reduced through PCA) that you can interact with. Enter in the search tool a few words like \"ilium\" and points in the 3D space will light up. \n\nIf you click on a word vector, you'll see appear the n nearest neighbors of that word in the embedding space. The nearset neighbors are both visualized in the center panel and presented as a flat list on the right. \nExplore the nearest neighbors of a given word and see if they kind of make sense. This will give you a rough understanding of the embedding quality. If it nearest neighbors do not make sense after trying for a few key words, you may need rerun text2hub, but this time with either more epochs or more data. Reducing the embedding dimension may help as well as modifying the co-occurence window size (choose a size that make sense given how your corpus is split into lines.)\n\nThe emb2hub artifacts give you a snippet of TensorFlow 1.x code showing you how to re-use the generated TF-Hub module in your code. We will demonstrate how to use the TF-Hub module in TF 2.0 in the next section.\n\nStep 7: Using the generated TF-Hub module (TODO)\nLet's see now how to load the TF-Hub module generated by text2hub in TF 2.0.\nWe first store the GCS bucket path where the TF-Hub module has been exported into a variable:",
"MODULE = f\"gs://{BUCKET}/custom_embedding/hub-module\"\nMODULE",
"Now we are ready to create a KerasLayer out of our custom text embedding.",
"med_embed = hub.KerasLayer(MODULE)",
"That layer when called with a list of sentences will create a sentence vector for each sentence by averaging the word vectors of the sentence.",
"outputs = med_embed(tf.constant([\"ilium\", \"I have a fracture\", \"aneurism\"]))\noutputs",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ernestyalumni/MLgrabbag | LogReg-sklearn.ipynb | mit | [
"Logistic Regression\ncf. sklearn.linear_model.LogisticRegression documentation\nLet's take a look at the examples in the LogisticRegression documentation of sklearn. \nThe Logistic Regression 3-class Classifier¶ has been credited to \n\nCode source: Gaël Varoquaux\nModified for documentation by Jaques Grobler\nLicense: BSD 3 clause",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model, datasets\n\n# import some data to play with\niris = datasets.load_iris()\nX = iris.data[:, :2] # take the first two features. # EY : 20160503 type(X) is numpy.ndarray\nY = iris.target # EY : 20160503 type(Y) is numpy.ndarray\n\nh = .02 # step size in the mesh\n\nprint \"X shape: %s, Y shape: %s\" % X.shape, Y.shape\n\nlogreg = linear_model.LogisticRegression(C=1e5)\n\n# we create an instance of Neighbours Classifier and fit the data.\nlogreg.fit(X,Y)\n\n# Plot the decision boundary. For that, we will assign a color to each\n# point in the mest [x_min, x_max]x[y_min, y_max]\nx_min, x_max = X[:,0].min() - .5, X[:, 0].max() + .5\ny_min, y_max = X[:,1].min() - .5, X[:, 1].max() + .5\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\nZ = logreg.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1, figsize=(4,3))\nplt.pcolormesh(xx,yy,Z, cmap=plt.cm.Paired)\n\n# Plot also the training points\nplt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)\nplt.xlabel('Sepal length')\nplt.ylabel('Sepal width')\n\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.xticks(())\nplt.yticks(())\n\nplt.show()",
"Loading files and dealing with local I/O",
"import os\nprint os.getcwd()\nprint os.path.abspath(\"./\") # find out \"where you are\" and \"where Data folder is\" with these commands",
"Let's load the data for Exercise 2 of Machine Learning, taught by Andrew Ng, of Coursera.",
"ex2data1 = np.loadtxt(\"./Data/ex2data1.txt\",delimiter=',') # you, the user, may have to change this, if the directory that you're running this from is somewhere else\nex2data2 = np.loadtxt(\"./Data/ex2data2.txt\",delimiter=',')\n\nX_ex2data1 = ex2data1[:,0:2]\nY_ex2data1 = ex2data1[:,2]\nX_ex2data2 = ex2data2[:,:2]\nY_ex2data2 = ex2data2[:,2]\n\nlogreg.fit(X_ex2data1,Y_ex2data1)\n\ndef trainingdat2mesh(X,marginsize=.5, h=0.2):\n rows, features = X.shape\n ranges = []\n for feature in range(features):\n minrange = X[:,feature].min()-marginsize\n maxrange = X[:,feature].max()+marginsize\n ranges.append((minrange,maxrange))\n if len(ranges) == 2:\n xx, yy = np.meshgrid(np.arange(ranges[0][0], ranges[0][1], h), np.arange(ranges[1][0], ranges[1][1], h))\n return xx, yy\n else:\n return ranges\n\nxx_ex2data1, yy_ex2data1 = trainingdat2mesh(X_ex2data1,h=0.2)\n\nZ_ex2data1 = logreg.predict(np.c_[xx_ex2data1.ravel(),yy_ex2data1.ravel()])\n\nZ_ex2data1 = Z_ex2data1.reshape(xx_ex2data1.shape)\nplt.figure(2)\nplt.pcolormesh(xx_ex2data1,yy_ex2data1,Z_ex2data1)\nplt.scatter(X_ex2data1[:, 0], X_ex2data1[:, 1], c=Y_ex2data1, edgecolors='k')\nplt.show()",
"Get the probability estimates; say a student has an Exam 1 score of 45 and an Exam 2 score of 85.",
"logreg.predict_proba(np.array([[45,85]])).flatten()\nprint \"The student has a probability of no admission of %s and probability of admission of %s\" % tuple( logreg.predict_proba(np.array([[45,85]])).flatten() )",
"Let's change the \"regularization\" with the C parameter/option for LogisticRegression. Call this logreg2",
"logreg2 = linear_model.LogisticRegression()\n\nlogreg2.fit(X_ex2data2,Y_ex2data2)\n\nxx_ex2data2, yy_ex2data2 = trainingdat2mesh(X_ex2data2,h=0.02)\nZ_ex2data2 = logreg.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])\n\nZ_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)\nplt.figure(3)\nplt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)\nplt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')\nplt.show()",
"As one can see, the \"dataset cannot be separated into positive and negative examples by a straight-line through the plot.\" cf. ex2.pdf\nWe're going to need polynomial terms to map onto. \nUse this code: cf. Underfitting vs. Overfitting¶",
"from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import PolynomialFeatures\n\npolynomial_features = PolynomialFeatures(degree=6,include_bias=False)\n\npipeline = Pipeline([(\"polynomial_features\", polynomial_features),(\"logistic_regression\",logreg2)])\n\npipeline.fit(X_ex2data2,Y_ex2data2)\n\nZ_ex2data2 = pipeline.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])\n\nZ_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)\nplt.figure(3)\nplt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)\nplt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
massie/notebooks | VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb | apache-2.0 | [
"VHSE-Based Prediction of Proteasomal Cleavage Sites\nXie J, Xu Z, Zhou S, Pan X, Cai S, Yang L, et al. (2013) The VHSE-Based Prediction of Proteasomal Cleavage Sites. PLoS ONE 8(9): e74506. doi:10.1371/journal.pone.0074506\nAbstract: \"Prediction of proteasomal cleavage sites has been a focus of computational biology. Up to date, the predictive methods are mostly based on nonlinear classifiers and variables with little physicochemical meanings. In this paper, the physicochemical properties of 14 residues both upstream and downstream of a cleavage site are characterized by VHSE (principal component score vector of hydrophobic, steric, and electronic properties) descriptors. Then, the resulting VHSE descriptors are employed to construct prediction models by support vector machine (SVM). For both in vivo and in vitro datasets, the performance of VHSE-based method is comparatively better than that of the well-known PAProC, MAPPP, and NetChop methods. The results reveal that the hydrophobic property of 10 residues both upstream and downstream of the cleavage site is a dominant factor affecting in vivo and in vitro cleavage specificities, followed by residue’s electronic and steric properties. Furthermore, the difference in hydrophobic potential between residues flanking the cleavage site is proposed to favor substrate cleavages. Overall, the interpretable VHSE-based method provides a preferable way to predict proteasomal cleavage sites.\"\nNotes:\n\nDatabases used in this study to create training and test sets\nImmune Epitope Database and Analysis Resource\nNCBI's reference sequence (RefSeq) database\nAntiJen - a kinetic, thermodynamic and cellular database v2.0\nExPASy/SWISS-PROT\n\n\n\nPeptide Formation\n\nImage by GYassineMrabetTalk. (Own work) [Public domain], <a href=\"https://commons.wikimedia.org/wiki/File%3APeptidformationball.svg\">via Wikimedia Commons</a>\nHydrophobic, Steric, and Electronic Properties\n<img src=\"images/Amino_Acids.png\" align=\"left\" border=\"0\" height=\"500\" width=\"406\" alt=\"Amino Acids\"/>\nAmino acids grouped by electrically charged, polar uncharged, hydrophobic, and special case sidechains. \nEach amino acid has a single letter designation.\nImage by Dancojocari [<a href=\"http://creativecommons.org/licenses/by-sa/3.0\">CC BY-SA 3.0</a> or <a href=\"http://www.gnu.org/copyleft/fdl.html\">GFDL</a>], <a href=\"https://commons.wikimedia.org/wiki/File%3AAmino_Acids.svg\">via Wikimedia Commons</a>\nProtein Representation (FASTA)\nBradykinin is an inflammatory mediator. It is a peptide that causes blood vessels to dilate (enlarge), and therefore causes blood pressure to fall. 
A class of drugs called ACE inhibitors, which are used to lower blood pressure, increase bradykinin (by inhibiting its degradation) further lowering blood pressure.\n```\n\nsp|P01042|KNG1_HUMAN Kininogen-1 OS=Homo sapiens GN=KNG1 PE=1 SV=2\nMKLITILFLCSRLLLSLTQESQSEEIDCNDKDLFKAVDAALKKYNSQNQSNNQFVLYRIT\nEATKTVGSDTFYSFKYEIKEGDCPVQSGKTWQDCEYKDAAKAATGECTATVGKRSSTKFS\nVATQTCQITPAEGPVVTAQYDCLGCVHPISTQSPDLEPILRHGIQYFNNNTQHSSLFMLN\nEVKRAQRQVVAGLNFRITYSIVQTNCSKENFLFLTPDCKSLWNGDTGECTDNAYIDIQLR\nIASFSQNCDIYPGKDFVQPPTKICVGCPRDIPTNSPELEETLTHTITKLNAENNATFYFK\nIDNVKKARVQVVAGKKYFIDFVARETTCSKESNEELTESCETKKLGQSLDCNAEVYVVPW\nEKKIYPTVNCQPLGMISLMKRPPGFSPFRSSRIGEIKEETTVSPPHTSMAPAQDEERDSG\nKEQGHTRRHDWGHEKQRKHNLGHGHKHERDQGHGHQRGHGLGHGHEQQHGLGHGHKFKLD\nDDLEHQGGHVLDHGHKHKHGHGHGKHKNKGKKNGKHNGWKTEHLASSSEDSTTPSAQTQE\nKTEGPTPIPSLAKPGVTVTFSDFQDSDLIATMMPPISPAPIQSDDDWIPDIQIDPNGLSF\nNPISDFPDTTSPKCPGRPWKSVSEINPTTQMKESYYFDLTDGLS\n```\n\nBradykinin Structure\n\nBy Yikrazuul (Own work) [Public domain], <a href=\"https://commons.wikimedia.org/wiki/File%3ABradykinin_structure.svg\">via Wikimedia Commons</a>\nMHC Class I Processing\n<img src=\"images/MHC_Class_I_processing.png\" align=\"right\" border=\"0\"/>\nThe proteasome digests polypeptides into smaller peptides 5–25 amino acids in length and is the major protease responsible for generating peptide C termini.\nTransporter associated with Antigen Processing (TAP) binds to peptides of length 9-20 amino acids and transports them into the endoplasmic reticulum (ER).\nImage by <a href=\"//commons.wikimedia.org/wiki/User:Scray\" title=\"User:Scray\">Scray</a> - <span class=\"int-own-work\" lang=\"en\">Own work</span>, <a href=\"http://creativecommons.org/licenses/by-sa/3.0\" title=\"Creative Commons Attribution-Share Alike 3.0\">CC BY-SA 3.0</a>, <a href=\"https://commons.wikimedia.org/w/index.php?curid=6251017\">Link</a>\nText from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2913210/\nCytotoxic T lymphocytes (CTLs) are the effector cells of the adaptive immune response that deal with infected, or malfunctioning, cells. Whereas intracellular pathogens are shielded from antibodies, CTLs are endowed with the ability to recognize and destroy cells harbouring intracellular threats. This obviously requires that information on the intracellular protein metabolism (including that of any intracellular pathogen) be translocated to the outside of the cell, where the CTL reside. To this end, the immune system has created an elaborate system of antigen processing and presentation. During the initial phase of antigen processing, peptide antigens are generated from intracellular pathogens and translocated into the endoplasmic reticulum. In here, these peptide antigens are specifically sampled by major histocompatibility complex (MHC) class I molecules and then exported to the cell surface, where they are presented as stable peptide: MHC I complexes awaiting the arrival of scrutinizing T cells. Hence, identifying which peptides are able to induce CTLs is of general interest for our understanding of the immune system, and of particular interest for the development of vaccines and immunotherapy directed against infectious pathogens, as previously reviewed. Peptide binding to MHC molecules is the key feature in cell-mediated immunity, because it is the peptide–MHC class I complex that can be recognized by the T-cell receptor (TCR) and thereby initiate the immune response. The CTLs are CD8+ T cells, whose TCRs recognize foreign peptides in complex with MHC class I molecules. 
In addition to peptide binding to MHC molecules, several other events have to be considered to be able to explain why a given peptide is eventually presented at the cell surface. Generally, an immunogenic peptide is generated from proteins expressed within the presenting cell, and peptides originating from proteins with high expression rate will normally have a higher chance of being immunogenic, compared with peptides from proteins with a lower expression rate. There are, however, significant exceptions to this generalization, e.g. cross-presentation, but this will be ignored in the following. In the classical MHC class I presenting pathway (see image on right) proteins expressed within a cell will be degraded in the cytosol by the protease complex, named the proteasome. The proteasome digests polypeptides into smaller peptides 5–25 amino acids in length and is the major protease responsible for generating peptide C termini. Some of the peptides that survive further degradation by other cytosolic exopeptidases can be bound by the transporter associated with antigen presentation (TAP), reviewed by Schölz et al. This transporter molecule binds peptides of lengths 9–20 amino acids and transports the peptides into the endoplasmic reticulum, where partially folded MHC molecules [in humans called human leucocyte antigens (HLA)], will complete folding if the peptide is able to bind to the particular allelic MHC molecule. The latter step is furthermore facilitated by the endoplasmic-reticulum-hosted protein tapasin. Each of these steps has been characterized and their individual importance has been related to final presentation on the cell surface.",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import svm, metrics\nfrom sklearn.preprocessing import MinMaxScaler",
"The principal component score Vector of Hydrophobic, Steric, and Electronic properties (VHSE) is \na set of amino acid descriptors that come from A new set of amino acid descriptors and its application in peptide QSARs\n\nVHSE1 and VHSE2 are related to hydrophobic (H) properties, \nVHSE3 and VHSE4 to steric (S) properties, and \nVHSE5 to VHSE8 to electronic (E) properties.",
"# (3-letter, VHSE1, VHSE2, VHSE3, VHSE4, VHSE5, VHSE6, VHSE7, VHSE8)\nvhse = {\n\"A\": (\"Ala\", 0.15, -1.11, -1.35, -0.92, 0.02, -0.91, 0.36, -0.48),\n\"R\": (\"Arg\", -1.47, 1.45, 1.24, 1.27, 1.55, 1.47, 1.30, 0.83),\n\"N\": (\"Asn\", -0.99, 0.00, -0.37, 0.69, -0.55, 0.85, 0.73, -0.80),\n\"D\": (\"Asp\", -1.15, 0.67, -0.41, -0.01, -2.68, 1.31, 0.03, 0.56),\n\"C\": (\"Cys\", 0.18, -1.67, -0.46, -0.21, 0.00, 1.20, -1.61, -0.19),\n\"Q\": (\"Gln\", -0.96, 0.12, 0.18, 0.16, 0.09, 0.42, -0.20, -0.41),\n\"E\": (\"Glu\", -1.18, 0.40, 0.10, 0.36, -2.16, -0.17, 0.91, 0.02),\n\"G\": (\"Gly\", -0.20, -1.53, -2.63, 2.28, -0.53, -1.18, 2.01, -1.34),\n\"H\": (\"His\", -0.43, -0.25, 0.37, 0.19, 0.51, 1.28, 0.93, 0.65),\n\"I\": (\"Ile\", 1.27, -0.14, 0.30, -1.80, 0.30, -1.61, -0.16, -0.13),\n\"L\": (\"Leu\", 1.36, 0.07, 0.26, -0.80, 0.22, -1.37, 0.08, -0.62),\n\"K\": (\"Lys\", -1.17, 0.70, 0.70, 0.80, 1.64, 0.67, 1.63, 0.13),\n\"M\": (\"Met\", 1.01, -0.53, 0.43, 0.00, 0.23, 0.10, -0.86, -0.68),\n\"F\": (\"Phe\", 1.52, 0.61, 0.96, -0.16, 0.25, 0.28, -1.33, -0.20),\n\"P\": (\"Pro\", 0.22, -0.17, -0.50, 0.05, -0.01, -1.34, -0.19, 3.56),\n\"S\": (\"Ser\", -0.67, -0.86, -1.07, -0.41, -0.32, 0.27, -0.64, 0.11),\n\"T\": (\"Thr\", -0.34, -0.51, -0.55, -1.06, 0.01, -0.01, -0.79, 0.39),\n\"W\": (\"Trp\", 1.50, 2.06, 1.79, 0.75, 0.75, -0.13, -1.06, -0.85),\n\"Y\": (\"Tyr\", 0.61, 1.60, 1.17, 0.73, 0.53, 0.25, -0.96, -0.52),\n\"V\": (\"Val\", 0.76, -0.92, 0.17, -1.91, 0.22, -1.40, -0.24, -0.03)}\n",
"There were eight dataset used in this study. The reference datasets (s1, s3, s5, s7) were converted into the actual datasets used in the analysis (s2, s4, s6, s8) using the vhse vector. The s2 and s4 datasets were used for training the SVM model and the s6 and s8 were used for testing.",
"%ls data/proteasomal_cleavage\n\nfrom aa_props import seq_to_aa_props\n# Converts the raw input into our X matrix and y vector. The 'peptide_key'\n# and 'activity_key' parameters are the names of the column in the dataframe\n# for the peptide amino acid string and activity (not cleaved/cleaved) \n# respectively. The 'sequence_len' allows for varying the number of flanking\n# amino acids to cleavage site (which is at position 14 of 28 in each cleaved\n# sample.\ndef dataset_to_X_y(dataframe, peptide_key, activity_key, sequence_len = 28, use_vhse = True):\n raw_peptide_len = 28\n if (sequence_len % 2 or sequence_len > raw_peptide_len or sequence_len <= 0):\n raise ValueError(\"sequence_len needs to an even value (0,%d]\" % (raw_peptide_len))\n X = []\n y = []\n for (peptide, activity) in zip(dataframe[peptide_key], dataframe[activity_key]):\n if (len(peptide) != raw_peptide_len):\n # print \"Skipping peptide! len(%s)=%d. Should be len=%d\" \\\n # % (peptide, len(peptide), raw_peptide_len)\n continue\n y.append(activity)\n\n num_amino_acids_to_clip = (raw_peptide_len - sequence_len) / 2\n clipped_peptide = peptide if num_amino_acids_to_clip == 0 else \\\n peptide[num_amino_acids_to_clip:-num_amino_acids_to_clip]\n # There is a single peptide in dataset s6 with an \"'\" in the sequence.\n # The VHSE values used for it in the study match Proline (P).\n clipped_peptide = clipped_peptide.replace('\\'', 'P')\n row = []\n if use_vhse:\n for amino_acid in clipped_peptide:\n row.append(vhse[amino_acid][1]) # hydrophobic\n row.append(vhse[amino_acid][3]) # steric\n row.append(vhse[amino_acid][5]) # electric\n else:\n row = seq_to_aa_props(clipped_peptide)\n X.append(row)\n return (X, y)",
"Creating the In Vivo Data\nTo create the in vivo training set, the authors\n\nQueried the AntiJen database (7,324 MHC-I ligands)\nRemoved ligands with unknown source protein in ExPASy/SWISS-PROT (6036 MHC-I ligands)\nRemoved duplicate ligands (3,148 ligands)\nRemoved the 231 ligands used for test samples by Saxova et al, (2,917 ligands)\nRemoved sequences less than 28 residues (2,607 ligands) to create the cleavage sample set\nAssigned non-cleavage sites, removed sequences with less than 28 resides (2,480 ligands) to create the non-cleavage sample set\n\nThis process created 5,087 training samples: 2,607 cleavage and 2,480 non-cleavage samples.\nCreating Samples from Ligands and Proteins\nThe C-terminus of the ligand is assumed to be a cleavage site and the midpoint between the N-terminus and C-terminus is assumed to not be a cleavage site. \nBoth the cleavage and non-cleavage sites are at the center position of each sample.\n<img src=\"images/creating_samples_from_ligands.png\"/>\nFormat of Training Data\n\nEach Sequence is 28 residues long, however the authors found the best performance using 20 residues. \nThe Activity is 1 for cleavage and -1 for no cleavage. \nThere are 28 * 8 = 224 features in the raw training set.",
"training_set = pd.DataFrame.from_csv(\"data/proteasomal_cleavage/s2_in_vivo_mhc_1_antijen_swiss_prot_dataset.csv\")\nprint training_set.head(3)",
"Creating the Linear SVM Model\nThe authors measured linear, polynomial, radial basis, and sigmoid kernel and found no significant difference in performance. The linear kernel was chosen for its simplicity and interpretability. The authors did not provide the C value used in their linear model, so I used GridSearchCV to find the best value.",
"from sklearn.model_selection import GridSearchCV\nfrom sklearn.feature_selection import RFECV\n\ndef create_linear_svc_model(parameters, sequence_len = 28, use_vhse = True):\n scaler = MinMaxScaler()\n (X_train_unscaled, y_train) = dataset_to_X_y(training_set, \\\n \"Sequence\", \"Activity\", \\\n sequence_len = sequence_len, \\\n use_vhse = use_vhse)\n X_train = pd.DataFrame(scaler.fit_transform(X_train_unscaled))\n parameters={'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]}\n svc = svm.LinearSVC()\n rfe = RFECV(estimator=svc, step=.1, cv=2, scoring='accuracy', n_jobs=8)\n clf = GridSearchCV(rfe, parameters, scoring='accuracy', n_jobs=8, cv=2, verbose=1)\n clf.fit(X_train, y_train)\n\n # summarize results\n print(\"Best: %f using %s\" % (clf.best_score_, clf.best_params_))\n means = clf.cv_results_['mean_test_score']\n stds = clf.cv_results_['std_test_score']\n params = clf.cv_results_['params']\n for mean, stdev, param in zip(means, stds, params):\n print(\"%f (%f) with: %r\" % (mean, stdev, param))\n #\n #svr = svm.LinearSVC()\n #clf = GridSearchCV(svr, parameters, cv=10, scoring='accuracy', n_jobs=1)\n #clf.fit(X_train, y_train)\n #print(\"The best parameters are %s with a score of %0.2f\" \\\n # % (clf.best_params_, clf.best_score_))\n return (scaler, clf)\n\n(vhse_scaler, vhse_model) = create_linear_svc_model(\n parameters = {'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]},\n use_vhse = False)",
"Testing In Vivo SVM Model",
"def test_linear_svc_model(scaler, model, sequence_len = 28, use_vhse = True):\n testing_set = pd.DataFrame.from_csv(\"data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv\")\n (X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \\\n \"Sequences\", \"Activity\", \\\n sequence_len = sequence_len,\\\n use_vhse = use_vhse)\n X_test = pd.DataFrame(scaler.transform(X_test_prescaled))\n y_predicted = model.predict(X_test)\n accuracy = 100.0 * metrics.accuracy_score(y_test, y_predicted)\n ((tn, fp), (fn, tp)) = metrics.confusion_matrix(y_test, y_predicted, labels=[-1, 1])\n sensitivity = 100.0 * tp/(tp + fn)\n specificity = 100.0 * tn/(tn + fp)\n mcc = metrics.matthews_corrcoef(y_test, y_predicted)\n print \"Authors reported performance\"\n print \"Acc: 73.5, Sen: 82.3, Spe: 64.8, MCC: 0.48\"\n print \"Notebook performance (sequence_len=%d, use_vhse=%s)\" % (sequence_len, use_vhse)\n print \"Acc: %.1f, Sen: %.1f, Spe: %.1f, MCC: %.2f\" \\\n %(accuracy, sensitivity, specificity, mcc)\n \ntest_linear_svc_model(vhse_scaler, vhse_model, use_vhse = False)\n\ntesting_set = pd.DataFrame.from_csv(\"data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv\")\n(X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \\\n \"Sequences\", \"Activity\", \\\n sequence_len = 28,\\\n use_vhse = False)\nX_test = pd.DataFrame(vhse_scaler.transform(X_test_prescaled))\nposlabels = [\"-%02d\" % (i) for i in range(14, 0, -1)] + [\"+%02d\" % (i) for i in range(1,15)]\n# 18 H 17 S 15 E\nproplables = [\"H%02d\" % (i) for i in range(18)] + [\"S%02d\" % (i) for i in range(17)] + [\"E%02d\" % (i) for i in range(15)]\n\ncols = []\nfor poslabel in poslabels:\n for proplable in proplables:\n cols.append(\"%s%s\" % (poslabel, proplable))\n\nX_test.columns = cols\n\nfor col in X_test.columns[vhse_model.best_estimator_.get_support()]:\n print col",
"Comparing Linear SVM to PAProC, FragPredict, and NetChop\n\nInterpreting Model Weights\n<img src=\"images/journal.pone.0074506.g002.png\" align=\"left\" border=\"0\"/>\nThe VHSE1 variable at the P1 position has the largest positive weight coefficient (10.49) in line with research showing that C-terminal residues are usually hydrophobic to aid in ER transfer and binding to the MHC molecule. \nThere is a mostly positive and mostly negative coefficents upstream and downstream of the cleavage site respectively. This potential difference appears to be conducive to cleavage.",
"#h = svr.coef_[:, 0::3]\n#s = svr.coef_[:, 1::3]\n#e = svr.coef_[:, 2::3]\n\n#%matplotlib notebook\n\n#n_groups = h.shape[1]\n\n#fig, ax = plt.subplots(figsize=(12,9))\n\n#index = np.arange(n_groups)\n#bar_width = 0.25\n\n#ax1 = ax.bar(index + bar_width, h.T, bar_width, label=\"Hydrophobic\", color='b')\n#ax2 = ax.bar(index, s.T, bar_width, label=\"Steric\", color='r')\n#ax3 = ax.bar(index - bar_width, e.T, bar_width, label=\"Electronic\", color='g')\n\n#ax.set_xlim(-bar_width,len(index)+bar_width)\n\n#plt.xlabel('Amino Acid Position')\n#plt.ylabel('SVM Coefficient Value')\n#plt.title('Hydrophobic, Steric, and Electronic Effect by Amino Acid Position')\n#plt.xticks(index, range (n_groups/2, 0, -1) + [str(i)+\"'\" for i in range (1, n_groups/2+1)])\n#plt.legend()\n\n#plt.tight_layout()\n#plt.show()",
"PCA vs. full matrices\n\nPrincipal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components Source.\nMei et al, creators of the VHSE, applied PCA on 18 hydophobic, 17 steric, and 15 electronic properties. The first 2, 2, and 4 principle components account for 74.33, 78.68, and 77.9% of variability in original matrices.\nThe authors of this paper only used the first principle component from the hydrophobic, steric, and electronic matrices.\nWhat performance would the authors have found if used the full matrices instead of PCA features?\n\n| Matrix | Features | Sensitivity | Specificity | MCC |\n|--------|----------|-------------|-------------|------|\n| VHSE | 3x20=60 | 82.2 | 63.2 | 0.46 | \n| Full | 50x20=1000 | 81.2 | 64.1 | 0.46 |",
"# Performance with no VHSE\n(no_vhse_scaler, no_vhse_model) = create_linear_svc_model(\n parameters = {'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]},\n use_vhse = False)\ntest_linear_svc_model(no_vhse_scaler, no_vhse_model, use_vhse = False)\n\n# Performance with more flanking residues and no VHSE\n(full_flank_scaler, full_flank_model) = create_linear_svc_model(\n parameters = {'C': [0.0001, 0.003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]},\n use_vhse = False, sequence_len = 28)\ntest_linear_svc_model(full_flank_scaler, full_flank_model, use_vhse = False, sequence_len=28)",
"Chipper Implementation\nChipper notebook auto-generates training sets, e.g. v0.3.0 trained with 26,166 samples vs. 5,087 used in this study\n<img src=\"images/kl_heatmap.png\"/>\nChipper Performance\nChipper uses full 50 properties per residue instead of VHSE parameters\n| Method | Sensitivity | Specificity | MCC | Notes |\n|--------|-------------|-------------|--------|-------|\n|PAProC| 45.6 |30.0 | -0.25 | |\n|FragPredict | 83.5 | 16.5 | 0.00 | |\n|NetChop 1.0 | 39.8 | 46.3 | -0.14 | |\n|NetChop 2.0 | 73.6 | 42.4 | 0.16 | |\n|NetChop 3.0 | 81.0 | 48.0 | 0.31 | |\n|VHSE-based SVC | 82.3 | 64.8 | 0.48 | 3x20=60 features |\n|chipper VHSE (SVC) | 79.3 | 72.6 | 0.520 | 3x20=60 features |\n|chipper VHSE (LR) | 83.2 | 69.2 | 0.529 | 3x20=60 features |\n|chipper (SVC) | 79.3 | 77.9 | 0.572 | 50x20=1000 features |\n|chipper (LR) | 87.0 | 74.5 | 0.620 | 50x20=1000 features |\n|xgboost | 87.0 | 74.5 | 0.620 | 50x20=1000 features |\n|FANN | 82.7 | 78.8 | 0.616 | 50x20=1000 features |\nChipper LR Performance\n\nThank You\nSource notebook: https://github.com/massie/notebooks/blob/master/VHSE-Based%20Prediction%20of%20Proteasomal%20Cleavage%20Sites.ipynb\nChipper repo: https://github.com/massie/chipper"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/examples | natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb | apache-2.0 | [
"Basic Intro\nIn this competition, you’re challenged to build a machine learning model that predicts which Tweets are about real disasters and which one’s aren’t.\n\nWhat's in this kernel?\n\nBasic EDA\nData Cleaning\nBaseline Model\n\nUnzipping the file\nImporting required Libraries.",
"pip install -r requirements.txt",
"Importing Libraries",
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nfrom nltk.corpus import stopwords\nfrom nltk.util import ngrams\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom collections import defaultdict\nfrom collections import Counter\nplt.style.use('ggplot')\nstop=set(stopwords.words('english'))\nimport re\nfrom nltk.tokenize import word_tokenize\nimport gensim\nimport string\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\nfrom tqdm import tqdm\nfrom keras.models import Sequential\nfrom keras.layers import Embedding,LSTM,Dense,SpatialDropout1D\nfrom keras.initializers import Constant\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras.optimizers import Adam\n\n\n\nimport os\n#os.listdir('../input/glove-global-vectors-for-word-representation/glove.6B.100d.txt')",
"Load data",
"tweet= pd.read_csv('./data/train.csv')\ntest=pd.read_csv('./data/test.csv')\ntweet.head(3)\n\nprint('There are {} rows and {} columns in train'.format(tweet.shape[0],tweet.shape[1]))\nprint('There are {} rows and {} columns in train'.format(test.shape[0],test.shape[1]))",
"Class distribution\nBefore we begin with anything else,let's check the class distribution.There are only two classes 0 and 1.",
"x=tweet.target.value_counts()\nsns.barplot(x.index,x)\nplt.gca().set_ylabel('samples')",
"ohh,as expected ! There is a class distribution.There are more tweets with class 0 ( No disaster) than class 1 ( disaster tweets)\nExploratory Data Analysis of tweets\nFirst,we will do very basic analysis,that is character level,word level and sentence level analysis.\nNumber of characters in tweets",
"fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))\ntweet_len=tweet[tweet['target']==1]['text'].str.len()\nax1.hist(tweet_len,color='red')\nax1.set_title('disaster tweets')\ntweet_len=tweet[tweet['target']==0]['text'].str.len()\nax2.hist(tweet_len,color='green')\nax2.set_title('Not disaster tweets')\nfig.suptitle('Characters in tweets')\nplt.show()\n",
"The distribution of both seems to be almost same.120 t0 140 characters in a tweet are the most common among both.\nNumber of words in a tweet",
"fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))\ntweet_len=tweet[tweet['target']==1]['text'].str.split().map(lambda x: len(x))\nax1.hist(tweet_len,color='red')\nax1.set_title('disaster tweets')\ntweet_len=tweet[tweet['target']==0]['text'].str.split().map(lambda x: len(x))\nax2.hist(tweet_len,color='green')\nax2.set_title('Not disaster tweets')\nfig.suptitle('Words in a tweet')\nplt.show()\n",
"Average word length in a tweet",
"fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))\nword=tweet[tweet['target']==1]['text'].str.split().apply(lambda x : [len(i) for i in x])\nsns.distplot(word.map(lambda x: np.mean(x)),ax=ax1,color='red')\nax1.set_title('disaster')\nword=tweet[tweet['target']==0]['text'].str.split().apply(lambda x : [len(i) for i in x])\nsns.distplot(word.map(lambda x: np.mean(x)),ax=ax2,color='green')\nax2.set_title('Not disaster')\nfig.suptitle('Average word length in each tweet')\n\ndef create_corpus(target):\n corpus=[]\n \n for x in tweet[tweet['target']==target]['text'].str.split():\n for i in x:\n corpus.append(i)\n return corpus",
"Common stopwords in tweets\nFirst we will analyze tweets with class 0.",
"corpus=create_corpus(0)\n\ndic=defaultdict(int)\nfor word in corpus:\n if word in stop:\n dic[word]+=1\n \ntop=sorted(dic.items(), key=lambda x:x[1],reverse=True)[:10] \n\n\nx,y=zip(*top)\nplt.bar(x,y)",
"Now,we will analyze tweets with class 1.",
"\n\ncorpus=create_corpus(1)\n\ndic=defaultdict(int)\nfor word in corpus:\n if word in stop:\n dic[word]+=1\n\ntop=sorted(dic.items(), key=lambda x:x[1],reverse=True)[:10] \n \n\n\nx,y=zip(*top)\nplt.bar(x,y)",
"In both of them,\"the\" dominates which is followed by \"a\" in class 0 and \"in\" in class 1.\nAnalyzing punctuations.\nFirst let's check tweets indicating real disaster.",
"plt.figure(figsize=(10,5))\ncorpus=create_corpus(1)\n\ndic=defaultdict(int)\nimport string\nspecial = string.punctuation\nfor i in (corpus):\n if i in special:\n dic[i]+=1\n \nx,y=zip(*dic.items())\nplt.bar(x,y)",
"Now,we will move on to class 0.",
"plt.figure(figsize=(10,5))\ncorpus=create_corpus(0)\n\ndic=defaultdict(int)\nimport string\nspecial = string.punctuation\nfor i in (corpus):\n if i in special:\n dic[i]+=1\n \nx,y=zip(*dic.items())\nplt.bar(x,y,color='green')",
"Common words ?",
"\ncounter=Counter(corpus)\nmost=counter.most_common()\nx=[]\ny=[]\nfor word,count in most[:40]:\n if (word not in stop) :\n x.append(word)\n y.append(count)\n\nsns.barplot(x=y,y=x)",
"Lot of cleaning needed !\nNgram analysis\nwe will do a bigram (n=2) analysis over the tweets.Let's check the most common bigrams in tweets.",
"def get_top_tweet_bigrams(corpus, n=None):\n vec = CountVectorizer(ngram_range=(2, 2)).fit(corpus)\n bag_of_words = vec.transform(corpus)\n sum_words = bag_of_words.sum(axis=0) \n words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]\n words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)\n return words_freq[:n]\n\nplt.figure(figsize=(10,5))\ntop_tweet_bigrams=get_top_tweet_bigrams(tweet['text'])[:10]\nx,y=map(list,zip(*top_tweet_bigrams))\nsns.barplot(x=y,y=x)",
"We will need lot of cleaning here..\nData Cleaning\nAs we know,twitter tweets always have to be cleaned before we go onto modelling.So we will do some basic cleaning such as spelling correction,removing punctuations,removing html tags and emojis etc.So let's start.",
"df=pd.concat([tweet,test])\ndf.shape",
"Removing urls",
"def remove_URL(text):\n url = re.compile(r'https?://\\S+|www\\.\\S+')\n return url.sub(r'',text)\n\ndf['text']=df['text'].apply(lambda x : remove_URL(x))",
"Removing HTML tags",
"def remove_html(text):\n html=re.compile(r'<.*?>')\n return html.sub(r'',text)\ndf['text']=df['text'].apply(lambda x : remove_html(x))",
"Removing Emojis",
"# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b\ndef remove_emoji(text):\n emoji_pattern = re.compile(\"[\"\n u\"\\U0001F600-\\U0001F64F\" # emoticons\n u\"\\U0001F300-\\U0001F5FF\" # symbols & pictographs\n u\"\\U0001F680-\\U0001F6FF\" # transport & map symbols\n u\"\\U0001F1E0-\\U0001F1FF\" # flags (iOS)\n u\"\\U00002702-\\U000027B0\"\n u\"\\U000024C2-\\U0001F251\"\n \"]+\", flags=re.UNICODE)\n return emoji_pattern.sub(r'', text)\n\ndf['text']=df['text'].apply(lambda x: remove_emoji(x))\n",
"Removing punctuations",
"def remove_punct(text):\n table=str.maketrans('','',string.punctuation)\n return text.translate(table)\n\ndf['text']=df['text'].apply(lambda x : remove_punct(x))",
"Spelling Correction\nEven if I'm not good at spelling I can correct it with python :) I will use pyspellcheker to do that.\nCorpus Creation",
"def create_corpus(df):\n corpus=[]\n for tweet in tqdm(df['text']):\n words=[word.lower() for word in word_tokenize(tweet) if((word.isalpha()==1) & (word not in stop))]\n corpus.append(words)\n return corpus\ncorpus=create_corpus(df)",
"Download Glove",
"# download files\nimport wget\nimport zipfile\nwget.download(\"http://nlp.stanford.edu/data/glove.6B.zip\", './glove.6B.zip')\n \nwith zipfile.ZipFile(\"glove.6B.zip\", 'r') as zip_ref:\n zip_ref.extractall(\"./\")",
"Embedding Step",
"embedding_dict={}\nwith open(\"./glove.6B.100d.txt\",'r') as f:\n for line in f:\n values=line.split()\n word=values[0]\n vectors=np.asarray(values[1:],'float32')\n embedding_dict[word]=vectors\nf.close()\n\nMAX_LEN=50\ntokenizer_obj=Tokenizer()\ntokenizer_obj.fit_on_texts(corpus)\nsequences=tokenizer_obj.texts_to_sequences(corpus)\n\ntweet_pad=pad_sequences(sequences,maxlen=MAX_LEN,truncating='post',padding='post')\n\nword_index=tokenizer_obj.word_index\nprint('Number of unique words:',len(word_index))\n\nnum_words=len(word_index)+1\nembedding_matrix=np.zeros((num_words,100))\n\nfor word,i in tqdm(word_index.items()):\n if i > num_words:\n continue\n \n emb_vec=embedding_dict.get(word)\n if emb_vec is not None:\n embedding_matrix[i]=emb_vec\n ",
"Baseline Model",
"model=Sequential()\n\nembedding=Embedding(num_words,100,embeddings_initializer=Constant(embedding_matrix),\n input_length=MAX_LEN,trainable=False)\n\nmodel.add(embedding)\nmodel.add(SpatialDropout1D(0.2))\nmodel.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))\nmodel.add(Dense(1, activation='sigmoid'))\n\n\noptimzer=Adam(learning_rate=1e-5)\n\nmodel.compile(loss='binary_crossentropy',optimizer=optimzer,metrics=['accuracy'])\n\n\n\nmodel.summary()\n\ntrain=tweet_pad[:tweet.shape[0]]\nfinal_test=tweet_pad[tweet.shape[0]:]\n\nX_train,X_test,y_train,y_test=train_test_split(train,tweet['target'].values,test_size=0.15)\nprint('Shape of train',X_train.shape)\nprint(\"Shape of Validation \",X_test.shape)",
"Training Model",
"history=model.fit(X_train,y_train,batch_size=4,epochs=5,validation_data=(X_test,y_test),verbose=2)",
"Making our submission",
"sample_sub=pd.read_csv('./data/sample_submission.csv')\n\ny_pre=model.predict(final_test)\ny_pre=np.round(y_pre).astype(int).reshape(3263)\nsub=pd.DataFrame({'id':sample_sub['id'].values.tolist(),'target':y_pre})\nsub.to_csv('submission.csv',index=False)\n\n\nsub.head()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
explosion/thinc | examples/03_textcat_basic_neural_bow.ipynb | mit | [
"Basic neural bag-of-words text classifier with Thinc\nThis notebook shows how to implement a simple neural text classification model in Thinc. Last tested with thinc==8.0.13.",
"!pip install thinc syntok \"ml_datasets>=0.2.0\" tqdm",
"For simple and standalone tokenization, we'll use the syntok package and the following function:",
"from syntok.tokenizer import Tokenizer\n\ndef tokenize_texts(texts):\n tok = Tokenizer()\n return [[token.value for token in tok.tokenize(text)] for text in texts]",
"Setting up the data\nThe load_data function loads the DBPedia Ontology dataset, converts and tokenizes the data and generates a simple vocabulary mapping. Instead of ml_datasets.dbpedia you can also try ml_datasets.imdb for the IMDB review dataset.",
"import ml_datasets\nimport numpy\n\ndef load_data():\n train_data, dev_data = ml_datasets.dbpedia(train_limit=2000, dev_limit=2000)\n train_texts, train_cats = zip(*train_data)\n dev_texts, dev_cats = zip(*dev_data)\n unique_cats = list(numpy.unique(numpy.concatenate((train_cats, dev_cats))))\n nr_class = len(unique_cats)\n print(f\"{len(train_data)} training / {len(dev_data)} dev\\n{nr_class} classes\")\n\n train_y = numpy.zeros((len(train_cats), nr_class), dtype=\"f\")\n for i, cat in enumerate(train_cats):\n train_y[i][unique_cats.index(cat)] = 1\n dev_y = numpy.zeros((len(dev_cats), nr_class), dtype=\"f\")\n for i, cat in enumerate(dev_cats):\n dev_y[i][unique_cats.index(cat)] = 1\n\n train_tokenized = tokenize_texts(train_texts)\n dev_tokenized = tokenize_texts(dev_texts)\n # Generate simple vocab mapping, <unk> is 0\n vocab = {}\n count_id = 1\n for text in train_tokenized:\n for token in text:\n if token not in vocab:\n vocab[token] = count_id\n count_id += 1\n # Map texts using vocab\n train_X = []\n for text in train_tokenized:\n train_X.append(numpy.array([vocab.get(t, 0) for t in text]))\n dev_X = []\n for text in dev_tokenized:\n dev_X.append(numpy.array([vocab.get(t, 0) for t in text]))\n return (train_X, train_y), (dev_X, dev_y), vocab",
"Defining the model and config\nThe model takes a list of 2-dimensional arrays (the tokenized texts mapped to vocab IDs) and outputs a 2d array. Because the embed layer's nV dimension (the number of entries in the lookup table) depends on the vocab and the training data, it's passed in as an argument and registered as a reference. This makes it easy to retrieve it later on by calling model.get_ref(\"embed\"), so we can set its nV dimension.",
"from typing import List\nimport thinc\nfrom thinc.api import Model, chain, list2ragged, with_array, reduce_mean, Softmax\nfrom thinc.types import Array2d\n\[email protected](\"EmbedPoolTextcat.v1\")\ndef EmbedPoolTextcat(embed: Model[Array2d, Array2d]) -> Model[List[Array2d], Array2d]:\n with Model.define_operators({\">>\": chain}):\n model = with_array(embed) >> list2ragged() >> reduce_mean() >> Softmax()\n model.set_ref(\"embed\", embed)\n return model",
"The config defines the top-level model using the registered EmbedPoolTextcat function, and the embed argument, referencing the Embed layer.",
"CONFIG = \"\"\"\n[hyper_params]\nwidth = 64\n\n[model]\n@layers = \"EmbedPoolTextcat.v1\"\n\n[model.embed]\n@layers = \"Embed.v1\"\nnO = ${hyper_params:width}\n\n[optimizer]\n@optimizers = \"Adam.v1\"\nlearn_rate = 0.001\n\n[training]\nbatch_size = 8\nn_iter = 10\n\"\"\"",
"Training setup\nWhen the config is loaded, it's first parsed as a dictionary and all references to values from other sections, e.g. ${hyper_params:width} are replaced. The result is a nested dictionary describing the objects defined in the config. registry.resolve then creates the objects and calls the functions bottom-up.",
"from thinc.api import registry, Config\n\nC = registry.resolve(Config().from_str(CONFIG))\nC",
"Once the data is loaded, we'll know the vocabulary size and can set the dimension on the embedding layer. model.get_ref(\"embed\") returns the layer defined as the ref \"embed\" and the set_dim method lets you set a value for a dimension. To fill in the other missing shapes, we can call model.initialize with some input and output data.",
"(train_X, train_y), (dev_X, dev_y), vocab = load_data()\n\nbatch_size = C[\"training\"][\"batch_size\"]\noptimizer = C[\"optimizer\"]\nmodel = C[\"model\"]\nmodel.get_ref(\"embed\").set_dim(\"nV\", len(vocab) + 1)\n\nmodel.initialize(X=train_X, Y=train_y)\n\ndef evaluate_model(model, dev_X, dev_Y, batch_size):\n correct = 0.0\n total = 0.0\n for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y):\n Yh = model.predict(X)\n for j in range(len(Yh)):\n correct += Yh[j].argmax(axis=0) == Y[j].argmax(axis=0)\n total += len(Y)\n return float(correct / total)",
"Training the model",
"from thinc.api import fix_random_seed\nfrom tqdm.notebook import tqdm\n\nfix_random_seed(0)\nfor n in range(C[\"training\"][\"n_iter\"]):\n loss = 0.0\n batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True)\n for X, Y in tqdm(batches, leave=False):\n Yh, backprop = model.begin_update(X)\n d_loss = []\n for i in range(len(Yh)):\n d_loss.append(Yh[i] - Y[i])\n loss += ((Yh[i] - Y[i]) ** 2).sum()\n backprop(numpy.array(d_loss))\n model.finish_update(optimizer)\n score = evaluate_model(model, dev_X, dev_y, batch_size)\n print(f\"{n}\\t{loss:.2f}\\t{score:.3f}\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sympy/scipy-2017-codegen-tutorial | notebooks/40-chemical-kinetics-cython.ipynb | bsd-3-clause | [
"In this notebook we will look how we can use Cython to generate a faster callback and hopefully shave off some running time from our integration.",
"import json\nimport numpy as np\nfrom scipy2017codegen.odesys import ODEsys\nfrom scipy2017codegen.chem import mk_rsys",
"The ODEsys class and convenience functions from previous notebook (35) has been put in two modules for easy importing. Recapping what we did last:",
"watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))\nwatrad = mk_rsys(ODEsys, **watrad_data)\ntout = np.logspace(-6, 3, 200) # close to one hour of operation\nc0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}\ny0 = [c0.get(symb.name, 0) for symb in watrad.y]\n\n%timeit yout, info = watrad.integrate_odeint(tout, y0)",
"so that is the benchmark to beat, we will export our expressions as Cython code. We then subclass ODEsys to have it render, compile and import the code:",
"# %load ../scipy2017codegen/odesys_cython.py\nimport uuid\nimport numpy as np\nimport sympy as sym\nimport setuptools\nimport pyximport\nfrom scipy2017codegen import templates\nfrom scipy2017codegen.odesys import ODEsys\n\npyximport.install()\n\ncython_template = \"\"\"\ncimport numpy as cnp\nimport numpy as np\n\ndef f(cnp.ndarray[cnp.float64_t, ndim=1] y, double t, %(args)s):\n cdef cnp.ndarray[cnp.float64_t, ndim=1] out = np.empty(y.size)\n %(f_exprs)s\n return out\n\ndef j(cnp.ndarray[cnp.float64_t, ndim=1] y, double t, %(args)s):\n cdef cnp.ndarray[cnp.float64_t, ndim=2] out = np.empty((y.size, y.size))\n %(j_exprs)s\n return out\n\n\"\"\"\n\nclass CythonODEsys(ODEsys):\n\n def setup(self):\n self.mod_name = 'ode_cython_%s' % uuid.uuid4().hex[:10]\n idxs = list(range(len(self.f)))\n subs = {s: sym.Symbol('y[%d]' % i) for i, s in enumerate(self.y)}\n f_exprs = ['out[%d] = %s' % (i, str(self.f[i].xreplace(subs))) for i in idxs]\n j_exprs = ['out[%d, %d] = %s' % (ri, ci, self.j[ri, ci].xreplace(subs)) for ri in idxs for ci in idxs]\n ctx = dict(\n args=', '.join(map(str, self.p)),\n f_exprs = '\\n '.join(f_exprs),\n j_exprs = '\\n '.join(j_exprs),\n )\n open('%s.pyx' % self.mod_name, 'wt').write(cython_template % ctx)\n open('%s.pyxbld' % self.mod_name, 'wt').write(templates.pyxbld % dict(\n sources=[], include_dirs=[np.get_include()],\n library_dirs=[], libraries=[], extra_compile_args=[], extra_link_args=[]\n ))\n mod = __import__(self.mod_name)\n self.f_eval = mod.f\n self.j_eval = mod.j\n\n\ncython_sys = mk_rsys(CythonODEsys, **watrad_data)\n\n%timeit cython_sys.integrate(tout, y0)",
"That is a considerable speed up from before. But the solver still has to\nallocate memory for creating new arrays at each call, and each evaluation\nhas to pass the python layer which is now the bottleneck for the integration.\nIn order to speed up integration further we need to make sure the solver can evaluate the function and Jacobian without calling into Python.",
"import matplotlib.pyplot as plt\n%matplotlib inline",
"Just to see that everything looks alright:",
"fig, ax = plt.subplots(1, 1, figsize=(14, 6))\ncython_sys.plot_result(tout, *cython_sys.integrate_odeint(tout, y0), ax=ax)\nax.set_xscale('log')\nax.set_yscale('log')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jcmgray/quijy | docs/basics.ipynb | mit | [
"from quimb import *\ndata = [1, 2j, -3]",
"Kets are column vectors, i.e. with shape (d, 1):",
"qu(data, qtype='ket')",
"The normalized=True option can be used to ensure a normalized output.\nBras are row vectors, i.e. with shape (1, d):",
"qu(data, qtype='bra') # also conjugates the data",
"And operators are square matrices, i.e. have shape (d, d):",
"qu(data, qtype='dop')",
"Which can also be sparse:",
"qu(data, qtype='dop', sparse=True)\n\npsi = 1.0j * bell_state('psi-')\npsi\n\npsi.H\n\npsi = up()\npsi\n\npsi.H @ psi # inner product\n\nX = pauli('X')\nX @ psi # act as gate\n\npsi.H @ X @ psi # operator expectation\n\nexpec(psi, psi)\n\nexpec(psi, X)",
"Here's an example for a much larger (20 qubit), sparse operator expecation,\nwhich will be automatically parallelized:",
"psi = rand_ket(2**20)\nA = rand_herm(2**20, sparse=True) + speye(2**20)\nA\n\nexpec(A, psi) # should be ~ 1\n\n%%timeit\nexpec(A, psi)\n\ndims = [2] * 10 # overall space of 10 qubits\nX = pauli('X')\nIIIXXIIIII = ikron(X, dims, inds=[3, 4]) # act on 4th and 5th spin only\nIIIXXIIIII.shape\n\ndims = [2] * 3\nXZ = pauli('X') & pauli('Z')\nZIX = pkron(XZ, dims, inds=[2, 0])\nZIX.real.astype(int)\n\ndims = [2] * 10\nD = prod(dims)\npsi = rand_ket(D)\nrho_ab = ptr(psi, dims, [0, 9])\nrho_ab.round(3) # probably pretty close to identity"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MingChen0919/learning-apache-spark | notebooks/02-data-manipulation/2.7.1-column-expression.ipynb | mit | [
"# create entry points to spark\ntry:\n sc.stop()\nexcept:\n pass\nfrom pyspark import SparkContext, SparkConf\nfrom pyspark.sql import SparkSession\nsc=SparkContext()\nspark = SparkSession(sparkContext=sc)",
"Column expression\nA Spark column instance is NOT a column of values from the DataFrame: when you crate a column instance, it does not give you the actual values of that column in the DataFrame. I found it makes more sense to me if I consider a column instance as a column of expressions. These expressions are evaluated by other methods (e.g., the select(), groupby(), and orderby() from pyspark.sql.DataFrame)\nExample data",
"mtcars = spark.read.csv('../../data/mtcars.csv', inferSchema=True, header=True)\nmtcars = mtcars.withColumnRenamed('_c0', 'model')\nmtcars.show(5)",
"Use dot (.) to select column from DataFrame",
"mpg_col = mtcars.mpg\nmpg_col",
"Modify a column to generate a new column",
"mpg_col + 1\n\nmtcars.select(mpg_col * 100).show(5)",
"The pyspark.sql.Column has many methods that acts on a column and returns a column instance.",
"mtcars.select(mtcars.gear.isin([2,3])).show(5)\n\nmtcars.mpg.asc()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bwalrond/explore-notebooks | jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb | mit | [
"<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.\n\nClick-Through Rate Prediction Lab\nThis lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.\n This lab will cover: \n\n\nPart 1: Featurize categorical data using one-hot-encoding (OHE)\n\n\nPart 2: Construct an OHE dictionary\n\n\nPart 3: Parse CTR data and generate OHE features\n\n\nVisualization 1: Feature frequency\n\n\nPart 4: CTR prediction and logloss evaluation\n\n\nVisualization 2: ROC curve\n\n\nPart 5: Reduce feature dimension via feature hashing\n\nVisualization 3: Hyperparameter heat map\n\n\nNote that, for reference, you can look up the details of:\n* the relevant Spark methods in Spark's Python API\n* the relevant NumPy methods in the NumPy Reference",
"labVersion = 'cs190.1x-lab4-1.0.4'\nprint labVersion",
"Part 1: Featurize categorical data using one-hot-encoding \n (1a) One-hot-encoding \nWe would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon).\nIn a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category.\nLater in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.",
"# Data for manual OHE\n# Note: the first data point does not include any value for the optional third feature\nsampleOne = [(0, 'mouse'), (1, 'black')]\nsampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\nsampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\nsampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])\nprint sampleDataRDD.count()\n# print sampleDataRDD.take(5)\n\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleOHEDictManual = {}\nsampleOHEDictManual[(0,'bear')] = 0\nsampleOHEDictManual[(0,'cat')] = 1\nsampleOHEDictManual[(0,'mouse')] = 2\nsampleOHEDictManual[(1, 'black')] = 3\nsampleOHEDictManual[(1, 'tabby')] = 4\nsampleOHEDictManual[(2, 'mouse')] = 5\nsampleOHEDictManual[(2, 'salmon')] = 6\nprint len(sampleOHEDictManual)",
"WARNING: If test_helper, required in the cell below, is not installed, follow the instructions here.",
"# TEST One-hot-encoding (1a)\nfrom test_helper import Test\n\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],\n 'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',\n \"incorrect value for sampleOHEDictManual[(0,'bear')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],\n '356a192b7913b04c54574d18c28d46e6395428ab',\n \"incorrect value for sampleOHEDictManual[(0,'cat')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],\n 'da4b9237bacccdf19c0760cab7aec4a8359010b0',\n \"incorrect value for sampleOHEDictManual[(0,'mouse')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(1,'black')],\n '77de68daecd823babbb58edb1c8e14d7106e83bb',\n \"incorrect value for sampleOHEDictManual[(1,'black')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],\n '1b6453892473a467d07372d45eb05abc2031647a',\n \"incorrect value for sampleOHEDictManual[(1,'tabby')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],\n 'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',\n \"incorrect value for sampleOHEDictManual[(2,'mouse')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],\n 'c1dfd96eea8cc2b62785275bca38ac261256e278',\n \"incorrect value for sampleOHEDictManual[(2,'salmon')]\")\nTest.assertEquals(len(sampleOHEDictManual.keys()), 7,\n 'incorrect number of keys in sampleOHEDictManual')",
"(1b) Sparse vectors \nData points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).\nUse SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.",
"import numpy as np\nfrom pyspark.mllib.linalg import SparseVector\n\n# TODO: Replace <FILL IN> with appropriate code\naDense = np.array([0., 3., 0., 4.])\naSparse = SparseVector(len(aDense), range(0,len(aDense)), aDense)\n\nbDense = np.array([0., 0., 0., 1.])\nbSparse = SparseVector(len(bDense), range(0,len(bDense)), bDense)\n\nw = np.array([0.4, 3.1, -1.4, -.5])\nprint aDense.dot(w)\nprint aSparse.dot(w)\nprint bDense.dot(w)\nprint bSparse.dot(w)\nprint aDense\nprint bDense\nprint aSparse\nprint bSparse\n\n# TEST Sparse Vectors (1b)\nTest.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')\nTest.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')\nTest.assertTrue(aDense.dot(w) == aSparse.dot(w),\n 'dot product of aDense and w should equal dot product of aSparse and w')\nTest.assertTrue(bDense.dot(w) == bSparse.dot(w),\n 'dot product of bDense and w should equal dot product of bSparse and w')",
"(1c) OHE features as sparse vectors \nNow let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].",
"# Reminder of the sample features\n# sampleOne = [(0, 'mouse'), (1, 'black')]\n# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\n# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleOneOHEFeatManual = SparseVector(7, [2,3], np.array([1.0,1.0]))\nsampleTwoOHEFeatManual = SparseVector(7, [1,4,5], np.array([1.0,1.0,1.0]))\nsampleThreeOHEFeatManual = SparseVector(7, [0,3,6], np.array([1.0,1.0,1.0]))\nprint sampleOneOHEFeatManual\nprint sampleTwoOHEFeatManual\nprint sampleThreeOHEFeatManual\n\n# TEST OHE Features as sparse vectors (1c)\nTest.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),\n 'sampleOneOHEFeatManual needs to be a SparseVector')\nTest.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),\n 'sampleTwoOHEFeatManual needs to be a SparseVector')\nTest.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),\n 'sampleThreeOHEFeatManual needs to be a SparseVector')\nTest.assertEqualsHashed(sampleOneOHEFeatManual,\n 'ecc00223d141b7bd0913d52377cee2cf5783abd6',\n 'incorrect value for sampleOneOHEFeatManual')\nTest.assertEqualsHashed(sampleTwoOHEFeatManual,\n '26b023f4109e3b8ab32241938e2e9b9e9d62720a',\n 'incorrect value for sampleTwoOHEFeatManual')\nTest.assertEqualsHashed(sampleThreeOHEFeatManual,\n 'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',\n 'incorrect value for sampleThreeOHEFeatManual')",
"(1d) Define a OHE function \nNext we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).",
" # TODO: Replace <FILL IN> with appropriate code\ndef oneHotEncoding_old(rawFeats, OHEDict, numOHEFeats):\n \"\"\"Produce a one-hot-encoding from a list of features and an OHE dictionary.\n\n Note:\n You should ensure that the indices used to create a SparseVector are sorted.\n\n Args:\n rawFeats (list of (int, str)): The features corresponding to a single observation. Each\n feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)\n OHEDict (dict): A mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The total number of unique OHE features (combinations of featureID and\n value).\n\n Returns:\n SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique\n identifiers for the (featureID, value) combinations that occur in the observation and\n with values equal to 1.0.\n \"\"\"\n \n newFeats = []\n idx = []\n for k,i in sorted(OHEDict.items(), key=lambda x: x[1]):\n if k in rawFeats:\n newFeats += [1.0]\n idx += [i]\n return SparseVector(numOHEFeats, idx, np.array(newFeats))\n\n # TODO: Replace <FILL IN> with appropriate code\ndef oneHotEncoding(rawFeats, OHEDict, numOHEFeats):\n \"\"\"Produce a one-hot-encoding from a list of features and an OHE dictionary.\n\n Note:\n You should ensure that the indices used to create a SparseVector are sorted.\n\n Args:\n rawFeats (list of (int, str)): The features corresponding to a single observation. Each\n feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)\n OHEDict (dict): A mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The total number of unique OHE features (combinations of featureID and\n value).\n\n Returns:\n SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique\n identifiers for the (featureID, value) combinations that occur in the observation and\n with values equal to 1.0.\n \"\"\"\n \n newFeats = []\n idx = []\n for f in rawFeats:\n if f in OHEDict:\n newFeats += [1.0]\n idx += [OHEDict[f]]\n \n return SparseVector(numOHEFeats, sorted(idx), np.array(newFeats))\n\n# Calculate the number of features in sampleOHEDictManual\nnumSampleOHEFeats = len(sampleOHEDictManual)\n\n# Run oneHotEnoding on sampleOne\nsampleOneOHEFeat = oneHotEncoding(sampleOne,sampleOHEDictManual,numSampleOHEFeats)\n\nprint sampleOneOHEFeat\n\n# TEST Define an OHE Function (1d)\nTest.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,\n 'sampleOneOHEFeat should equal sampleOneOHEFeatManual')\nTest.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),\n 'incorrect value for sampleOneOHEFeat')\nTest.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,\n numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),\n 'incorrect definition for oneHotEncoding')",
"(1e) Apply OHE to a dataset \nFinally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.",
"# TODO: Replace <FILL IN> with appropriate code\ndef toOHE(row):\n return oneHotEncoding(row,sampleOHEDictManual,numSampleOHEFeats)\n\nsampleOHEData = sampleDataRDD.map(toOHE)\nprint sampleOHEData.collect()\n\n# TEST Apply OHE to a dataset (1e)\nsampleOHEDataValues = sampleOHEData.collect()\nTest.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')\nTest.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),\n 'incorrect OHE for first sample')\nTest.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),\n 'incorrect OHE for second sample')\nTest.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),\n 'incorrect OHE for third sample')",
"Part 2: Construct an OHE dictionary \n(2a) Pair RDD of (featureID, category) \nTo start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.",
"flat = sampleDataRDD.flatMap(lambda r: r).distinct()\nprint flat.count()\nfor i in flat.take(8):\n print i\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleDistinctFeats = (sampleDataRDD.flatMap(lambda r: r).distinct())\n\n# TEST Pair RDD of (featureID, category) (2a)\nTest.assertEquals(sorted(sampleDistinctFeats.collect()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'incorrect value for sampleDistinctFeats')",
"(2b) OHE Dictionary from distinct features \nNext, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.\nIn our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.",
"# TODO: Replace <FILL IN> with appropriate code\nsampleOHEDict = sampleDistinctFeats.zipWithIndex().collectAsMap()\nprint sampleOHEDict\n\n# TEST OHE Dictionary from distinct features (2b)\nTest.assertEquals(sorted(sampleOHEDict.keys()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'sampleOHEDict has unexpected keys')\nTest.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')",
"(2c) Automated creation of an OHE dictionary \nNow use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).",
"# TODO: Replace <FILL IN> with appropriate code\ndef createOneHotDict(inputData):\n \"\"\"Creates a one-hot-encoder dictionary based on the input data.\n\n Args:\n inputData (RDD of lists of (int, str)): An RDD of observations where each observation is\n made up of a list of (featureID, value) tuples.\n\n Returns:\n dict: A dictionary where the keys are (featureID, value) tuples and map to values that are\n unique integers.\n \"\"\"\n flat = inputData.flatMap(lambda r: r).distinct()\n return flat.zipWithIndex().collectAsMap()\n\nsampleOHEDictAuto = createOneHotDict(sampleDataRDD)\nprint sampleOHEDictAuto\n\n# TEST Automated creation of an OHE dictionary (2c)\nTest.assertEquals(sorted(sampleOHEDictAuto.keys()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'sampleOHEDictAuto has unexpected keys')\nTest.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),\n 'sampleOHEDictAuto has unexpected values')",
"Part 3: Parse CTR data and generate OHE features\nBefore we can proceed, you'll first need to obtain the data from Criteo. Here is the link to Criteo's data sharing agreement:http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/. After you accept the agreement, you can obtain the download URL by right-clicking on the \"Download Sample\" button and clicking \"Copy link address\" or \"Copy Link Location\", depending on your browser. Paste the URL into the # TODO cell below. The script below will download the file and make the sample dataset's contents available in the rawData variable.\nNote that the download should complete within 30 seconds.",
"import os.path\nbaseDir = os.path.join('/Users/bill.walrond/Documents/dsprj/data')\ninputPath = os.path.join('CS190_Mod4', 'dac_sample.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nif os.path.isfile(fileName):\n rawData = (sc\n .textFile(fileName, 2)\n .map(lambda x: x.replace('\\t', ','))) # work with either ',' or '\\t' separated data\n print rawData.take(1)\n print rawData.count()\nelse:\n print 'Couldn\\'t find filename: %s' % fileName\n\n# TODO: Replace <FILL IN> with appropriate code\nimport glob\nfrom io import BytesIO\nimport os.path\nimport tarfile\nimport urllib\nimport urlparse\n\n# Paste in url, url should end with: dac_sample.tar.gz\nurl = '<FILL IN>'\n\nurl = url.strip()\n\nif 'rawData' in locals():\n print 'rawData already loaded. Nothing to do.'\nelif not url.endswith('dac_sample.tar.gz'):\n print 'Check your download url. Are you downloading the Sample dataset?'\nelse:\n try:\n tmp = BytesIO()\n urlHandle = urllib.urlopen(url)\n tmp.write(urlHandle.read())\n tmp.seek(0)\n tarFile = tarfile.open(fileobj=tmp)\n\n dacSample = tarFile.extractfile('dac_sample.txt')\n dacSample = [unicode(x.replace('\\n', '').replace('\\t', ',')) for x in dacSample]\n rawData = (sc\n .parallelize(dacSample, 1) # Create an RDD\n .zipWithIndex() # Enumerate lines\n .map(lambda (v, i): (i, v)) # Use line index as key\n .partitionBy(2, lambda i: not (i < 50026)) # Match sc.textFile partitioning\n .map(lambda (i, v): v)) # Remove index\n print 'rawData loaded from url'\n print rawData.take(1)\n except IOError:\n print 'Unable to unpack: {0}'.format(url)\n",
"(3a) Loading and splitting the data \nWe are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.",
"# TODO: Replace <FILL IN> with appropriate code\nweights = [.8, .1, .1]\nseed = 42\n# Use randomSplit with weights and seed\nrawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)\n# Cache the data\nrawTrainData.cache()\nrawValidationData.cache()\nrawTestData.cache()\n\nnTrain = rawTrainData.count()\nnVal = rawValidationData.count()\nnTest = rawTestData.count()\nprint nTrain, nVal, nTest, nTrain + nVal + nTest\nprint rawTrainData.take(1)\n\n# TEST Loading and splitting the data (3a)\nTest.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),\n 'you must cache the split data')\nTest.assertEquals(nTrain, 79911, 'incorrect value for nTrain')\nTest.assertEquals(nVal, 10075, 'incorrect value for nVal')\nTest.assertEquals(nTest, 10014, 'incorrect value for nTest')",
"(3b) Extract features \nWe will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implemention of the parsePoint function.",
"# TODO: Replace <FILL IN> with appropriate code\ndef parsePoint(point):\n \"\"\"Converts a comma separated string into a list of (featureID, value) tuples.\n\n Note:\n featureIDs should start at 0 and increase to the number of features - 1.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest\n are features.\n\n Returns:\n list: A list of (featureID, value) tuples.\n \"\"\"\n # make a list of (featureID, value) tuples, skipping the first element (the label)\n return [(k,v) for k,v in enumerate(point[2:].split(','))]\n \n\nparsedTrainFeat = rawTrainData.map(parsePoint)\nprint parsedTrainFeat.count()\nnumCategories = (parsedTrainFeat\n .flatMap(lambda x: x)\n .distinct()\n .map(lambda x: (x[0], 1))\n .reduceByKey(lambda x, y: x + y)\n .sortByKey()\n .collect())\n\nprint numCategories[2][1]\n\n# TEST Extract features (3b)\nTest.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')\nTest.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')",
"(3c) Create an OHE dictionary from the dataset \nNote that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.",
"# TODO: Replace <FILL IN> with appropriate code\nctrOHEDict = createOneHotDict(parsedTrainFeat)\nprint 'Len of ctrOHEDict: {0}'.format(len(ctrOHEDict))\nnumCtrOHEFeats = len(ctrOHEDict.keys())\nprint numCtrOHEFeats\nprint ctrOHEDict.has_key((0, ''))\n\ntheItems = ctrOHEDict.items()\nfor i in range(0,9):\n print theItems[i] \n\n# TEST Create an OHE dictionary from the dataset (3c)\nTest.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')\nTest.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')",
"(3d) Apply OHE to the dataset \nNow let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).",
"from pyspark.mllib.regression import LabeledPoint\nprint rawTrainData.count()\nr = rawTrainData.first()\nl = parsePoint(r)\nprint 'Length of parsed list: %d' % len(l)\nprint 'Here\\'s the list ...'\nprint l\nsv = oneHotEncoding(l, ctrOHEDict, numCtrOHEFeats)\nprint 'Here\\'s the sparsevector ...'\nprint sv\nlp = LabeledPoint(float(r[:1]), sv)\nprint 'Here\\'s the labeledpoint ...'\nprint lp\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parseOHEPoint(point, OHEDict, numOHEFeats):\n \"\"\"Obtain the label and feature vector for this raw observation.\n\n Note:\n You must use the function `oneHotEncoding` in this implementation or later portions\n of this lab may not function as expected.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest\n are features.\n OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The number of unique features in the training dataset.\n\n Returns:\n LabeledPoint: Contains the label for the observation and the one-hot-encoding of the\n raw features based on the provided OHE dictionary.\n \"\"\"\n # first, get the label\n label = float(point[:1])\n parsed = parsePoint(point)\n features = oneHotEncoding(parsed, OHEDict, numOHEFeats)\n # return parsed\n return LabeledPoint(label,features)\n \ndef toOHEPoint(point):\n return parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats)\n\nsc.setLogLevel(\"INFO\")\nrawTrainData = rawTrainData.repartition(8)\nrawTrainData.cache()\nOHETrainData = rawTrainData.map(toOHEPoint)\nOHETrainData.cache()\nprint OHETrainData.take(1)\n\n# Check that oneHotEncoding function was used in parseOHEPoint\nbackupOneHot = oneHotEncoding\noneHotEncoding = None\nwithOneHot = False\ntry: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)\nexcept TypeError: withOneHot = True\noneHotEncoding = backupOneHot\n\n# TEST Apply OHE to the dataset (3d)\nnumNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))\nnumNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))\nTest.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')\nTest.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')",
"Visualization 1: Feature frequency \nWe will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( \\( \\scriptsize 2^0 \\) ), the second to features that appear twice ( \\( \\scriptsize 2^1 \\) ), the third to features that occur between three and four ( \\( \\scriptsize 2^2 \\) ) times, the fifth bucket is five to eight ( \\( \\scriptsize 2^3 \\) ) times and so on. The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets.",
"x = sc.parallelize([(\"a\", 1), (\"b\", 1), (\"a\", 1), (\"a\", 1),(\"b\", 1), (\"b\", 1), (\"b\", 1), (\"b\", 1)], 3)\ny = x.reduceByKey(lambda accum, n: accum + n)\ny.collect()\n\ndef bucketFeatByCount(featCount):\n \"\"\"Bucket the counts by powers of two.\"\"\"\n for i in range(11):\n size = 2 ** i\n if featCount <= size:\n return size\n return -1\n\nfeatCounts = (OHETrainData\n .flatMap(lambda lp: lp.features.indices)\n .map(lambda x: (x, 1))\n .reduceByKey(lambda x, y: x + y))\nfeatCountsBuckets = (featCounts\n .map(lambda x: (bucketFeatByCount(x[1]), 1))\n .filter(lambda (k, v): k != -1)\n .reduceByKey(lambda x, y: x + y)\n .collect())\nprint featCountsBuckets\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nx, y = zip(*featCountsBuckets)\nx, y = np.log(x), np.log(y)\n\ndef preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',\n gridWidth=1.0):\n \"\"\"Template for generating the plot layout.\"\"\"\n plt.close()\n fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n axis.set_ticks_position('none')\n axis.set_ticks(ticks)\n axis.label.set_color('#999999')\n if hideLabels: axis.set_ticklabels([])\n plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n return fig, ax\n\n# generate layout and plot data\nfig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))\nax.set_xlabel(r'$\\log_e(bucketSize)$'), ax.set_ylabel(r'$\\log_e(countInBucket)$')\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\n# display(fig) \nplt.show()\npass",
"(3e) Handling unseen features \nWe naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.",
"# TODO: Replace <FILL IN> with appropriate code\ndef oneHotEncoding(rawFeats, OHEDict, numOHEFeats):\n \"\"\"Produce a one-hot-encoding from a list of features and an OHE dictionary.\n\n Note:\n If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be\n ignored.\n\n Args:\n rawFeats (list of (int, str)): The features corresponding to a single observation. Each\n feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)\n OHEDict (dict): A mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The total number of unique OHE features (combinations of featureID and\n value).\n\n Returns:\n SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique\n identifiers for the (featureID, value) combinations that occur in the observation and\n with values equal to 1.0.\n \"\"\"\n newFeats = []\n idx = []\n for f in rawFeats:\n if f in OHEDict:\n newFeats += [1.0]\n idx += [OHEDict[f]]\n \n return SparseVector(numOHEFeats, sorted(idx), np.array(newFeats))\n\nOHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))\nOHEValidationData.cache()\nprint OHEValidationData.take(1)\n\n# TEST Handling unseen features (3e)\nnumNZVal = (OHEValidationData\n .map(lambda lp: len(lp.features.indices))\n .sum())\nTest.assertEquals(numNZVal, 372080, 'incorrect number of features')",
"Part 4: CTR prediction and logloss evaluation \n (4a) Logistic regression \nWe are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful.\nFirst use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. Note that these are the names of the object's attributes and should be called using a syntax like model.weights for a given model.",
"from pyspark.mllib.classification import LogisticRegressionWithSGD\n\n# fixed hyperparameters\nnumIters = 50\nstepSize = 10.\nregParam = 1e-6\nregType = 'l2'\nincludeIntercept = True\n\n# TODO: Replace <FILL IN> with appropriate code\nmodel0 = LogisticRegressionWithSGD.train(OHETrainData, \n iterations=numIters, \n step=stepSize, \n regParam=regParam,\n regType=regType,\n intercept=includeIntercept)\nsortedWeights = sorted(model0.weights)\nprint sortedWeights[:5], model0.intercept\n\n# TEST Logistic regression (4a)\nTest.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')\nTest.assertTrue(np.allclose(sortedWeights[0:5],\n [-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,\n -0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')",
"(4b) Log loss \nThroughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: \\[ \\scriptsize \\ell_{log}(p, y) = \\begin{cases} -\\log (p) & \\text{if } y = 1 \\\\ -\\log(1-p) & \\text{if } y = 0 \\end{cases} \\] where \\( \\scriptsize p\\) is a probability between 0 and 1 and \\( \\scriptsize y\\) is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition).\nWrite a function to compute log loss, and evaluate it on some sample inputs.",
"# TODO: Replace <FILL IN> with appropriate code\nfrom math import log\n\ndef computeLogLoss(p, y):\n \"\"\"Calculates the value of log loss for a given probabilty and label.\n\n Note:\n log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it\n and when p is 1 we need to subtract a small value (epsilon) from it.\n\n Args:\n p (float): A probabilty between 0 and 1.\n y (int): A label. Takes on the values 0 and 1.\n\n Returns:\n float: The log loss value.\n \"\"\"\n epsilon = 10e-12\n if p not in [0.0,1.0]:\n logeval = p\n elif p == 0:\n logeval = p+epsilon\n else:\n logeval = p-epsilon\n if y == 1:\n return (-log(logeval))\n elif y == 0:\n return (-log(1-logeval))\n\nprint computeLogLoss(.5, 1)\nprint computeLogLoss(.5, 0)\nprint computeLogLoss(.99, 1)\nprint computeLogLoss(.99, 0)\nprint computeLogLoss(.01, 1)\nprint computeLogLoss(.01, 0)\nprint computeLogLoss(0, 1)\nprint computeLogLoss(1, 1)\nprint computeLogLoss(1, 0)\n\n# TEST Log loss (4b)\nTest.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],\n [0.69314718056, 0.0100503358535, 4.60517018599]),\n 'computeLogLoss is not correct')\nTest.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],\n [25.3284360229, 1.00000008275e-11, 25.3284360229]),\n 'computeLogLoss needs to bound p away from 0 and 1 by epsilon')",
"(4c) Baseline log loss \nNext we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.",
"# TODO: Replace <FILL IN> with appropriate code\n# Note that our dataset has a very high click-through rate by design\n# In practice click-through rate can be one to two orders of magnitude lower\nclassOneFracTrain = OHETrainData.map(lambda p: p.label).mean()\nprint classOneFracTrain\n\nlogLossTrBase = OHETrainData.map(lambda p: computeLogLoss(classOneFracTrain,p.label) ).mean()\nprint 'Baseline Train Logloss = {0:.6f}\\n'.format(logLossTrBase)\n\n# TEST Baseline log loss (4c)\nTest.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')\nTest.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')",
"(4d) Predicted probability \nIn order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function \\( \\scriptsize \\sigma(t) = (1+ e^{-t})^{-1} \\) to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.\nNote that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.",
"# TODO: Replace <FILL IN> with appropriate code\nfrom math import exp # exp(-t) = e^-t\n\ndef getP(x, w, intercept):\n \"\"\"Calculate the probability for an observation given a set of weights and intercept.\n\n Note:\n We'll bound our raw prediction between 20 and -20 for numerical purposes.\n\n Args:\n x (SparseVector): A vector with values of 1.0 for features that exist in this\n observation and 0.0 otherwise.\n w (DenseVector): A vector of weights (betas) for the model.\n intercept (float): The model's intercept.\n\n Returns:\n float: A probability between 0 and 1.\n \"\"\"\n rawPrediction = w.dot(x) + intercept\n\n # Bound the raw prediction value\n rawPrediction = min(rawPrediction, 20)\n rawPrediction = max(rawPrediction, -20)\n return ( 1 / (1 + exp(-1*rawPrediction)) )\n\ntrainingPredictions = OHETrainData.map(lambda p: getP(p.features,model0.weights, model0.intercept))\n\nprint trainingPredictions.take(5)\n\n# TEST Predicted probability (4d)\nTest.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),\n 'incorrect value for trainingPredictions')",
"(4e) Evaluate the model \nWe are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.",
"a = OHETrainData.map(lambda p: (getP(p.features, model0.weights, model0.intercept), p.label))\nprint a.count()\nprint a.take(5)\nb = a.map(lambda lp: computeLogLoss(lp[0],lp[1]))\nprint b.count()\nprint b.take(5)\n\n# TODO: Replace <FILL IN> with appropriate code\ndef evaluateResults(model, data):\n \"\"\"Calculates the log loss for the data given the model.\n\n Args:\n model (LogisticRegressionModel): A trained logistic regression model.\n data (RDD of LabeledPoint): Labels and features for each observation.\n\n Returns:\n float: Log loss for the data.\n \"\"\"\n # Run a map to create an RDD of (prediction, label) tuples\n preds_labels = data.map(lambda p: (getP(p.features, model.weights, model.intercept), p.label))\n return preds_labels.map(lambda lp: computeLogLoss(lp[0], lp[1])).mean()\n\nlogLossTrLR0 = evaluateResults(model0, OHETrainData)\nprint ('OHE Features Train Logloss:\\n\\tBaseline = {0:.3f}\\n\\tLogReg = {1:.6f}'\n .format(logLossTrBase, logLossTrLR0))\n\n# TEST Evaluate the model (4e)\nTest.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')",
"(4f) Validation log loss \nNext, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.",
"# TODO: Replace <FILL IN> with appropriate code\nlogLossValBase = OHEValidationData.map(lambda p: computeLogLoss(classOneFracTrain, p.label)).mean()\n\nlogLossValLR0 = evaluateResults(model0, OHEValidationData)\nprint ('OHE Features Validation Logloss:\\n\\tBaseline = {0:.3f}\\n\\tLogReg = {1:.6f}'\n .format(logLossValBase, logLossValLR0))\n\n# TEST Validation log loss (4f)\nTest.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')\nTest.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')",
"Visualization 2: ROC curve \nWe will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.",
"labelsAndScores = OHEValidationData.map(lambda lp:\n (lp.label, getP(lp.features, model0.weights, model0.intercept)))\nlabelsAndWeights = labelsAndScores.collect()\nlabelsAndWeights.sort(key=lambda (k, v): v, reverse=True)\nlabelsByWeight = np.array([k for (k, v) in labelsAndWeights])\n\nlength = labelsByWeight.size\ntruePositives = labelsByWeight.cumsum()\nnumPositive = truePositives[-1]\nfalsePositives = np.arange(1.0, length + 1, 1.) - truePositives\n\ntruePositiveRate = truePositives / numPositive\nfalsePositiveRate = falsePositives / (length - numPositive)\n\n# Generate layout and plot data\nfig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))\nax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)\nax.set_ylabel('True Positive Rate (Sensitivity)')\nax.set_xlabel('False Positive Rate (1 - Specificity)')\nplt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)\nplt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model\n# display(fig)\nplt.show()\npass",
"Part 5: Reduce feature dimension via feature hashing\n (5a) Hash function \nAs we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing.\nBelow is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries.",
"from collections import defaultdict\nimport hashlib\n\ndef hashFunction(numBuckets, rawFeats, printMapping=False):\n \"\"\"Calculate a feature dictionary for an observation's features based on hashing.\n\n Note:\n Use printMapping=True for debug purposes and to better understand how the hashing works.\n\n Args:\n numBuckets (int): Number of buckets to use as features.\n rawFeats (list of (int, str)): A list of features for an observation. Represented as\n (featureID, value) tuples.\n printMapping (bool, optional): If true, the mappings of featureString to index will be\n printed.\n\n Returns:\n dict of int to float: The keys will be integers which represent the buckets that the\n features have been hashed to. The value for a given key will contain the count of the\n (featureID, value) tuples that have hashed to that key.\n \"\"\"\n mapping = {}\n for ind, category in rawFeats:\n featureString = category + str(ind)\n mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)\n if(printMapping): print mapping\n sparseFeatures = defaultdict(float)\n for bucket in mapping.values():\n sparseFeatures[bucket] += 1.0\n return dict(sparseFeatures)\n\n# Reminder of the sample values:\n# sampleOne = [(0, 'mouse'), (1, 'black')]\n# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\n# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\n\n# TODO: Replace <FILL IN> with appropriate code\n# Use four buckets\nsampOneFourBuckets = hashFunction(4, sampleOne, True)\nsampTwoFourBuckets = hashFunction(4, sampleTwo, True)\nsampThreeFourBuckets = hashFunction(4, sampleThree, True)\n\n# Use one hundred buckets\nsampOneHundredBuckets = hashFunction(100, sampleOne, True)\nsampTwoHundredBuckets = hashFunction(100, sampleTwo, True)\nsampThreeHundredBuckets = hashFunction(100, sampleThree, True)\n\nprint '\\t\\t 4 Buckets \\t\\t\\t 100 Buckets'\nprint 'SampleOne:\\t {0}\\t\\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)\nprint 'SampleTwo:\\t {0}\\t\\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)\nprint 'SampleThree:\\t {0}\\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)\n\n# TEST Hash function (5a)\nTest.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')\nTest.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},\n 'incorrect value for sampThreeHundredBuckets')",
"(5b) Creating hashed features \nNext we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = \\( \\scriptsize 2^{15} \\approx 33K \\) to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parsedHashPoint is similar to parseOHEPoint from Part (3d).",
"feats = [(k,v) for k,v in enumerate(rawTrainData.take(1)[0][2:].split(','))]\nprint feats\nhashDict = hashFunction(2 ** 15, feats)\nprint hashDict\nprint len(hashDict)\nprint 2**15\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parseHashPoint(point, numBuckets):\n \"\"\"Create a LabeledPoint for this observation using hashing.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest are\n features.\n numBuckets: The number of buckets to hash to.\n\n Returns:\n LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed\n features.\n \"\"\"\n label = float(point[:1])\n rawFeats = [(k,v) for k,v in enumerate(point[2:].split(','))]\n hashDict = hashFunction(numBuckets, rawFeats)\n return LabeledPoint(label,SparseVector(len(hashDict), sorted(hashDict.keys()), hashDict.values()))\n\nnumBucketsCTR = 2 ** 15\nhashTrainData = rawTrainData.map(lambda r: parseHashPoint(r,numBucketsCTR))\nhashTrainData.cache()\nhashValidationData = rawValidationData.map(lambda r: parseHashPoint(r,numBucketsCTR))\nhashValidationData.cache()\nhashTestData = rawTestData.map(lambda r: parseHashPoint(r,numBucketsCTR))\nhashTestData.cache()\n\na = hashTrainData.take(1)\nprint a\n\n# TEST Creating hashed features (5b)\nhashTrainDataFeatureSum = sum(hashTrainData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nprint hashTrainDataFeatureSum\nhashTrainDataLabelSum = sum(hashTrainData\n .map(lambda lp: lp.label)\n .take(100))\nprint hashTrainDataLabelSum\nhashValidationDataFeatureSum = sum(hashValidationData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nhashValidationDataLabelSum = sum(hashValidationData\n .map(lambda lp: lp.label)\n .take(100))\nhashTestDataFeatureSum = sum(hashTestData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nhashTestDataLabelSum = sum(hashTestData\n .map(lambda lp: lp.label)\n .take(100))\n\nTest.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')\nTest.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')\nTest.assertEquals(hashValidationDataFeatureSum, 776,\n 'incorrect number of features in hashValidationData')\nTest.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')\nTest.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')\nTest.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')",
"(5c) Sparsity \nSince we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.\nNote that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.",
"s = sum(hashTrainData.map(lambda lp: len(lp.features.indices) / float(numBucketsCTR) ).collect()) / nTrain\n# ratios.count()\ns\n\n# TODO: Replace <FILL IN> with appropriate code\ndef computeSparsity(data, d, n):\n \"\"\"Calculates the average sparsity for the features in an RDD of LabeledPoints.\n\n Args:\n data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.\n d (int): The total number of features.\n n (int): The number of observations in the RDD.\n\n Returns:\n float: The average of the ratio of features in a point to total features.\n \"\"\"\n return sum(hashTrainData.map(lambda lp: len(lp.features.indices) / float(d) ).collect()) / n\n \n\naverageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)\naverageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)\n\nprint 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)\nprint 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)\n\n# TEST Sparsity (5c)\nTest.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),\n 'incorrect value for averageSparsityOHE')\nTest.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),\n 'incorrect value for averageSparsityHash')\n\nsc.stop()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ecell/ecell4-notebooks | en/tests/Reversible_Diffusion_limited.ipynb | gpl-2.0 | [
"Reversible (Diffusion-limited)\nThis is for an integrated test of E-Cell4. Here, we test a simple reversible association/dissociation model in volume.",
"%matplotlib inline\nfrom ecell4.prelude import *",
"Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and collision rate defined as ka andkD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.",
"D = 1\nradius = 0.005\nN_A = 60\nU = 0.5\nka_factor = 10 # 10 is for diffusion-limited\n\nN = 20 # a number of samples",
"Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.",
"import numpy\nkD = 4 * numpy.pi * (radius * 2) * (D * 2)\nka = kD * ka_factor\nkd = ka * N_A * U * U / (1 - U)\nkon = ka * kD / (ka + kD)\nkoff = kd * kon / ka",
"Start with no C molecules, and simulate 3 seconds.",
"y0 = {'A': N_A, 'B': N_A}\nduration = 0.35\nopt_kwargs = {'legend': True}",
"Make a model with effective rates. This model is for macroscopic simulation algorithms.",
"with species_attributes():\n A | B | C | {'radius': radius, 'D': D}\n\nwith reaction_rules():\n A + B == C | (kon, koff)\n\nm = get_model()",
"Save a result with ode as obs, and plot it:",
"ret1 = run_simulation(duration, y0=y0, model=m)\nret1.plot(**opt_kwargs)",
"Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.",
"with species_attributes():\n A | B | C | {'radius': radius, 'D': D}\n\nwith reaction_rules():\n A + B == C | (ka, kd)\n\nm = get_model()",
"Simulating with spatiocyte. voxel_radius is given as radius. Use alpha enough less than 1.0 for a diffusion-limited case (Bars represent standard error of the mean):",
"# alpha = 0.03\nret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)",
"Simulating with egfrd:",
"ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zomansud/coursera | ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb | mit | [
"Implementing binary decision trees\nThe goal of this notebook is to implement your own binary decision tree classifier. You will:\n\nUse SFrames to do some feature engineering.\nTransform categorical variables into binary variables.\nWrite a function to compute the number of misclassified examples in an intermediate node.\nWrite a function to find the best feature to split on.\nBuild a binary decision tree from scratch.\nMake predictions using the decision tree.\nEvaluate the accuracy of the decision tree.\nVisualize the decision at the root node.\n\nImportant Note: In this assignment, we will focus on building decision trees where the data contain only binary (0 or 1) features. This allows us to avoid dealing with:\n* Multiple intermediate nodes in a split\n* The thresholding issues of real-valued features.\nThis assignment may be challenging, so brace yourself :)\nFire up Graphlab Create\nMake sure you have the latest version of GraphLab Create.",
"import graphlab",
"Load the lending club dataset\nWe will be using the same LendingClub dataset as in the previous assignment.",
"loans = graphlab.SFrame('lending-club-data.gl/')\n\nloans.head()",
"Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.",
"loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)\nloans = loans.remove_column('bad_loans')",
"Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical\nfeatures: \n\ngrade of the loan \nthe length of the loan term\nthe home ownership status: own, mortgage, rent\nnumber of years of employment.\n\nSince we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding.",
"features = ['grade', # grade of the loan\n 'term', # the term of the loan\n 'home_ownership', # home_ownership status: own, mortgage or rent\n 'emp_length', # number of years of employment\n ]\ntarget = 'safe_loans'\nloans = loans[features + [target]]",
"Let's explore what the dataset looks like.",
"loans",
"Subsample dataset to make sure classes are balanced\nJust as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.",
"safe_loans_raw = loans[loans[target] == 1]\nrisky_loans_raw = loans[loans[target] == -1]\n\n# Since there are less risky loans than safe loans, find the ratio of the sizes\n# and use that percentage to undersample the safe loans.\npercentage = len(risky_loans_raw)/float(len(safe_loans_raw))\nsafe_loans = safe_loans_raw.sample(percentage, seed = 1)\nrisky_loans = risky_loans_raw\nloans_data = risky_loans.append(safe_loans)\n\nprint \"Percentage of safe loans :\", len(safe_loans) / float(len(loans_data))\nprint \"Percentage of risky loans :\", len(risky_loans) / float(len(loans_data))\nprint \"Total number of loans in our new dataset :\", len(loans_data)",
"Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in \"Learning from Imbalanced Data\" by Haibo He and Edwardo A. Garcia, IEEE Transactions on Knowledge and Data Engineering 21(9) (June 26, 2009), p. 1263–1284. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.\nTransform categorical data into binary features\nIn this assignment, we will implement binary decision trees (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features. \nFor instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature \n{'home_ownership': 'RENT'}\nwe want to turn this into three features: \n{ \n 'home_ownership = OWN' : 0, \n 'home_ownership = MORTGAGE' : 0, \n 'home_ownership = RENT' : 1\n }\nSince this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.",
"loans_data = risky_loans.append(safe_loans)\nfor feature in features:\n loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1}) \n loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)\n \n # Change None's to 0's\n for column in loans_data_unpacked.column_names():\n loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)\n\n loans_data.remove_column(feature)\n loans_data.add_columns(loans_data_unpacked)",
"Let's see what the feature columns look like now:",
"features = loans_data.column_names()\nfeatures.remove('safe_loans') # Remove the response variable\nfeatures\n\nprint \"Number of features (after binarizing categorical variables) = %s\" % len(features)",
"Let's explore what one of these columns looks like:",
"loans_data['grade.A']",
"This column is set to 1 if the loan grade is A and 0 otherwise.\nCheckpoint: Make sure the following answers match up.",
"print \"Total number of grade.A loans : %s\" % loans_data['grade.A'].sum()\nprint \"Expexted answer : 6422\"",
"Train-test split\nWe split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.",
"train_data, test_data = loans_data.random_split(.8, seed=1)",
"Decision tree implementation\nIn this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.\nFunction to count number of mistakes while predicting majority class\nRecall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.\nNow, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.\nNote: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node. \n Steps to follow :\n* Step 1: Calculate the number of safe loans and risky loans.\n* Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes.\n* Step 3: Return the number of mistakes.\nNow, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.",
"def intermediate_node_num_mistakes(labels_in_node):\n # Corner case: If labels_in_node is empty, return 0\n if len(labels_in_node) == 0:\n return 0\n \n # Count the number of 1's (safe loans)\n safe_loans = (labels_in_node == 1).sum()\n \n # Count the number of -1's (risky loans)\n risky_loans = (labels_in_node == -1).sum()\n \n # Return the number of mistakes that the majority classifier makes.\n return risky_loans if safe_loans >= risky_loans else safe_loans\n ",
"Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.",
"# Test case 1\nexample_labels = graphlab.SArray([-1, -1, 1, 1, 1])\nif intermediate_node_num_mistakes(example_labels) == 2:\n print 'Test passed!'\nelse:\n print 'Test 1 failed... try again!'\n\n# Test case 2\nexample_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])\nif intermediate_node_num_mistakes(example_labels) == 2:\n print 'Test passed!'\nelse:\n print 'Test 2 failed... try again!'\n \n# Test case 3\nexample_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])\nif intermediate_node_num_mistakes(example_labels) == 2:\n print 'Test passed!'\nelse:\n print 'Test 3 failed... try again!'",
"Function to pick best feature to split on\nThe function best_splitting_feature takes 3 arguments: \n1. The data (SFrame of data which includes all of the feature columns and label column)\n2. The features to consider for splits (a list of strings of column names to consider for splits)\n3. The name of the target/label column (string)\nThe function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on.\nRecall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# total examples}}\n$$\nFollow these steps: \n* Step 1: Loop over each feature in the feature list\n* Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process.\n* Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error.\n* Step 4: If the computed error is smaller than the best error found so far, store this feature and its error.\nThis may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly.\nNote: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier.\nFill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in.",
"def best_splitting_feature(data, features, target):\n \n best_feature = None # Keep track of the best feature \n best_error = 10 # Keep track of the best error so far \n # Note: Since error is always <= 1, we should intialize it with something larger than 1.\n\n # Convert to float to make sure error gets computed correctly.\n num_data_points = float(len(data)) \n \n # Loop through each feature to consider splitting on that feature\n for feature in features:\n \n # The left split will have all data points where the feature value is 0\n left_split = data[data[feature] == 0]\n \n # The right split will have all data points where the feature value is 1\n right_split = data[data[feature] == 1]\n \n # Calculate the number of misclassified examples in the left split.\n # Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)\n left_mistakes = intermediate_node_num_mistakes(left_split[target])\n\n # Calculate the number of misclassified examples in the right split.\n right_mistakes = intermediate_node_num_mistakes(right_split[target])\n \n # Compute the classification error of this split.\n # Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)\n error = float(left_mistakes + right_mistakes) / num_data_points\n\n # If this is the best error we have found so far, store the feature as best_feature and the error as best_error\n if error < best_error:\n best_error = error\n best_feature = feature\n \n return best_feature # Return the best feature we found",
"To test your best_splitting_feature function, run the following code:",
"if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'",
"Building the tree\nWith the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:\n{ \n 'is_leaf' : True/False.\n 'prediction' : Prediction at the leaf node.\n 'left' : (dictionary corresponding to the left tree).\n 'right' : (dictionary corresponding to the right tree).\n 'splitting_feature' : The feature that this node splits on.\n}\n\nFirst, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.",
"def create_leaf(target_values):\n \n # Create a leaf node\n leaf = {'splitting_feature' : None,\n 'left' : None,\n 'right' : None,\n 'is_leaf': True }\n \n # Count the number of data points that are +1 and -1 in this node.\n num_ones = len(target_values[target_values == +1])\n num_minus_ones = len(target_values[target_values == -1])\n \n # For the leaf node, set the prediction to be the majority class.\n # Store the predicted class (1 or -1) in leaf['prediction']\n if num_ones > num_minus_ones:\n leaf['prediction'] = +1\n else:\n leaf['prediction'] = -1\n \n # Return the leaf node \n return leaf ",
"We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:\n1. Stopping condition 1: All data points in a node are from the same class.\n2. Stopping condition 2: No more features to split on.\n3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process. \nNow, we will write down the skeleton of the learning algorithm. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.",
"def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):\n remaining_features = features[:] # Make a copy of the features.\n \n target_values = data[target]\n print \"--------------------------------------------------------------------\"\n print \"Subtree, depth = %s (%s data points).\" % (current_depth, len(target_values))\n \n\n # Stopping condition 1\n # (Check if there are mistakes at current node.\n # Recall you wrote a function intermediate_node_num_mistakes to compute this.)\n if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE\n print \"Stopping condition 1 reached.\" \n # If not mistakes at current node, make current node a leaf node\n return create_leaf(target_values)\n \n # Stopping condition 2 (check if there are remaining features to consider splitting on)\n if len(remaining_features) == 0: ## YOUR CODE HERE\n print \"Stopping condition 2 reached.\" \n # If there are no remaining features to consider, make current node a leaf node\n return create_leaf(target_values) \n \n # Additional stopping condition (limit tree depth)\n if current_depth >= max_depth: ## YOUR CODE HERE\n print \"Reached maximum depth. Stopping for now.\"\n # If the max tree depth has been reached, make current node a leaf node\n return create_leaf(target_values)\n\n # Find the best splitting feature (recall the function best_splitting_feature implemented above)\n splitting_feature = best_splitting_feature(data, remaining_features, target)\n \n # Split on the best feature that we found. \n left_split = data[data[splitting_feature] == 0]\n right_split = data[data[splitting_feature] == 1]\n remaining_features.remove(splitting_feature)\n print \"Split on feature %s. (%s, %s)\" % (\\\n splitting_feature, len(left_split), len(right_split))\n \n # Create a leaf node if the split is \"perfect\"\n if len(left_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(left_split[target])\n if len(right_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(right_split[target])\n\n \n # Repeat (recurse) on left and right subtrees\n left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth) \n ## YOUR CODE HERE\n right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)\n\n return {'is_leaf' : False, \n 'prediction' : None,\n 'splitting_feature': splitting_feature,\n 'left' : left_tree, \n 'right' : right_tree}",
"Here is a recursive function to count the nodes in your tree:",
"def count_nodes(tree):\n if tree['is_leaf']:\n return 1\n return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])",
"Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.",
"small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)\nif count_nodes(small_data_decision_tree) == 13:\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'\n print 'Number of nodes found :', count_nodes(small_data_decision_tree)\n print 'Number of nodes that should be there : 13' ",
"Build the tree!\nNow that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree. \nWarning: This code block may take 1-2 minutes to learn.",
"# Make sure to cap the depth at 6 by using max_depth = 6\nmy_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)",
"Making predictions with a decision tree\nAs discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.\nFill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.",
"def classify(tree, x, annotate = False): \n # if the node is a leaf node.\n if tree['is_leaf']:\n if annotate: \n print \"At leaf, predicting %s\" % tree['prediction']\n return tree['prediction'] \n else:\n # split on feature.\n split_feature_value = x[tree['splitting_feature']]\n if annotate: \n print \"Split on %s = %s\" % (tree['splitting_feature'], split_feature_value)\n if split_feature_value == 0:\n return classify(tree['left'], x, annotate)\n else:\n return classify(tree['right'], x, annotate)",
"Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.",
"test_data[0]\n\nprint 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])",
"Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class:",
"classify(my_decision_tree, test_data[0], annotate=True)",
"Quiz question: What was the feature that my_decision_tree first split on while making the prediction for test_data[0]?\n Quiz question: What was the first feature that lead to a right split of test_data[0]?\n Quiz question: What was the last feature split on before reaching a leaf node for test_data[0]?\nEvaluating your decision tree\nNow, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.\nAgain, recall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# total examples}}\n$$\nNow, write a function called evaluate_classification_error that takes in as input:\n1. tree (as described above)\n2. data (an SFrame)\n3. target (a string - the name of the target/label column)\nThis function should calculate a prediction (class label) for each row in data using the decision tree and return the classification error computed using the above formula. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.",
"def evaluate_classification_error(tree, data, target):\n # Apply the classify(tree, x) to each row in your data\n prediction = data.apply(lambda x: classify(tree, x))\n \n # Once you've made the predictions, calculate the classification error and return it\n accuracy = (prediction == data[target]).sum()\n error = 1 - float(accuracy) / len(data[target])\n \n return error\n ",
"Now, let's use this function to evaluate the classification error on the test set.",
"round(evaluate_classification_error(my_decision_tree, test_data, target), 2)",
"Quiz Question: Rounded to 2nd decimal point, what is the classification error of my_decision_tree on the test_data?\nPrinting out a decision stump\nAs discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).",
"def print_stump(tree, name = 'root'):\n split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'\n if split_name is None:\n print \"(leaf, label: %s)\" % tree['prediction']\n return None\n split_feature, split_value = split_name.split('.')\n print ' %s' % name\n print ' |---------------|----------------|'\n print ' | |'\n print ' | |'\n print ' | |'\n print ' [{0} == 0] [{0} == 1] '.format(split_name)\n print ' | |'\n print ' | |'\n print ' | |'\n print ' (%s) (%s)' \\\n % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),\n ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))\n\nprint_stump(my_decision_tree)",
"Quiz Question: What is the feature that is used for the split at the root node?\nExploring the intermediate left subtree\nThe tree is a recursive dictionary, so we do have access to all the nodes! We can use\n* my_decision_tree['left'] to go left\n* my_decision_tree['right'] to go right",
"print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])",
"Exploring the left subtree of the left subtree",
"print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])",
"Quiz question: What is the path of the first 3 feature splits considered along the left-most branch of my_decision_tree?\nQuiz question: What is the path of the first 3 feature splits considered along the right-most branch of my_decision_tree?",
"print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])\n\nprint_stump(my_decision_tree['right']['right'], my_decision_tree['left']['splitting_feature'])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
davidthomas5412/PanglossNotebooks | GroupMeeting_11_16.ipynb | mit | [
"The PGM\nFor more an introduction to PGMS see Daphne Koller's Probabilistic Graphical Models. Below is the PGM that we will explore in this notebook.",
"%matplotlib inline\n\nfrom matplotlib import rc\nrc(\"font\", family=\"serif\", size=14)\nrc(\"text\", usetex=True)\n\nimport daft\n\npgm = daft.PGM([7, 6], origin=[0, 0])\n\n#background nodes\npgm.add_plate(daft.Plate([0.5, 3.0, 5, 2], label=r\"foreground galaxy $i$\",\n shift=-0.1))\npgm.add_node(daft.Node(\"theta\", r\"$\\theta$\", 3.5, 5.5, fixed=True))\npgm.add_node(daft.Node(\"alpha\", r\"$\\alpha$\", 1.5, 5.5, fixed=True))\npgm.add_node(daft.Node(\"halo_mass\", r\"$M_i$\", 3.5, 4, scale=2))\npgm.add_node(daft.Node(\"background_z\", r\"$z_i$\", 2, 4, fixed=True))\npgm.add_node(daft.Node(\"concentration\", r\"$c_i$\", 1.5, 3.5, fixed=True))\npgm.add_node(daft.Node(\"background_x\", r\"$x_i$\", 1.0, 3.5, fixed=True))\n\n#foreground nodes\npgm.add_plate(daft.Plate([0.5, 0.5, 5, 2], label=r\"background galaxy $j$\",\n shift=-0.1))\npgm.add_node(daft.Node(\"reduced_shear\", r\"$g_j$\", 2.0, 1.5, fixed=True))\npgm.add_node(daft.Node(\"reduced_shear\", r\"$g_j$\", 2.0, 1.5, fixed=True))\npgm.add_node(daft.Node(\"foreground_z\", r\"$z_j$\", 1.0, 1.5, fixed=True))\npgm.add_node(daft.Node(\"foreground_x\", r\"$x_j$\", 1.0, 1.0, fixed=True))\npgm.add_node(daft.Node(\"ellipticities\", r\"$\\epsilon_j^{obs}$\", 4.5, 1.5, observed=True, scale=2))\n\n#outer nodes\npgm.add_node(daft.Node(\"sigma_obs\", r\"$\\sigma_{\\epsilon_j}^{obs}$\", 3.0, 2.0, fixed=True))\npgm.add_node(daft.Node(\"sigma_int\", r\"$\\sigma_{\\epsilon}^{int}$\", 6.0, 1.5, fixed=True))\n\n#edges\npgm.add_edge(\"foreground_z\", \"reduced_shear\")\npgm.add_edge(\"foreground_x\", \"reduced_shear\")\npgm.add_edge(\"reduced_shear\", \"ellipticities\")\npgm.add_edge(\"sigma_obs\", \"ellipticities\")\npgm.add_edge(\"sigma_int\", \"ellipticities\")\npgm.add_edge(\"concentration\", \"reduced_shear\")\npgm.add_edge(\"halo_mass\", \"concentration\")\npgm.add_edge(\"background_z\", \"concentration\")\npgm.add_edge(\"background_x\", \"reduced_shear\")\npgm.add_edge(\"alpha\", \"concentration\")\npgm.add_edge(\"theta\", \"halo_mass\")\npgm.render()",
"We have sets of foregrounds and backgrounds along with the variables\n\n$\\alpha$: parameters in the concentration function (which is a function of $z_i,M_i$)\n$\\theta$: prior distribution of halo masses\n$z_i$: foreground galaxy redshift\n$x_i$: foreground galaxy angular coordinates\n$z_j$: background galaxy redshift\n$x_j$: background galaxy angular coordinates\n$g_j$: reduced shear\n$\\sigma_{\\epsilon_j}^{obs}$: noise from our ellipticity measurement process\n$\\sigma_{\\epsilon}^{int}$: intrinsic variance in ellipticities\n$\\epsilon_j^{obs}$: intrinsic variance in ellipticities\n\nStellar Mass Threshold",
"from pandas import read_table\nfrom pangloss import GUO_FILE\n\nm_h = 'M_Subhalo[M_sol/h]'\nm_s = 'M_Stellar[M_sol/h]'\n\nguo_data = read_table(GUO_FILE)\nnonzero_guo_data= guo_data[guo_data[m_h] > 0]\n\nimport matplotlib.pyplot as plt\n\nstellar_mass_threshold = 5.883920e+10\nplt.scatter(nonzero_guo_data[m_h], nonzero_guo_data[m_s], alpha=0.05)\nplt.axhline(y=stellar_mass_threshold, color='red')\nplt.xlabel('Halo Mass')\nplt.ylabel('Stellar Mass')\nplt.title('SMHM Scatter')\nplt.xscale('log')\nplt.yscale('log')\n\nfrom math import log\nimport numpy as np\n\nstart = log(nonzero_guo_data[m_s].min(), 10)\nstop = log(nonzero_guo_data[m_s].max(), 10)\n\nm_logspace = np.logspace(start, stop, num=20, base=10)[:-1]\n\nm_corrs = []\nthin_data = nonzero_guo_data[[m_s, m_h]]\nfor cutoff in m_logspace:\n tmp = thin_data[nonzero_guo_data[m_s] > cutoff]\n m_corrs.append(tmp.corr()[m_s][1])\n \nplt.plot(m_logspace, m_corrs, label='correlation')\nplt.axvline(x=stellar_mass_threshold, color='red', label='threshold')\nplt.xscale('log')\nplt.legend(loc=2)\nplt.xlabel('Stellar Mass')\nplt.ylabel('Stellar Mass - Halo Mass Correlation')\nplt.title('SMHM Correlation')\n\nplt.rcParams['figure.figsize'] = (10, 6)\n# plt.plot(hist[1][:-1], hist[0], label='correlation')\nplt.hist(nonzero_guo_data[m_s], bins=m_logspace, alpha=0.4, normed=False, label='dataset')\nplt.axvline(x=stellar_mass_threshold, color='red', label='threshold')\nplt.xscale('log')\nplt.legend(loc=2)\nplt.xlabel('Stellar Mass')\nplt.ylabel('Number of Samples')\nplt.title('Stellar Mass Distribution')",
"Results",
"from pandas import read_csv\n\nres = read_csv('data3.csv')\ntru = read_csv('true3.csv')\n\nstart = min([res[res[c] > 0][c].min() for c in res.columns[1:-1]])\nstop = res.max().max()\n\nbase = 10\nstart = log(start, base)\nend = log(stop, base)\nres_logspace = np.logspace(start, end, num=10, base=base)\n\nplt.rcParams['figure.figsize'] = (20, 12)\n\nfor i,val in enumerate(tru.columns[1:]):\n plt.subplot(int('91' + str(i+1)))\n x = res[val][res[val] > 0]\n weights = np.exp(res['log-likelihood'][res[val] > 0])\n t = tru[val].loc[0]\n plt.hist(x, bins=res_logspace, alpha=0.4, normed=True, label='prior')\n plt.hist(x, bins=res_logspace, weights=weights, alpha=0.4, normed=True, label='posterior')\n plt.axvline(x=t, color='red', label='truth', linewidth=1)\n plt.xscale('log')\n plt.legend()\n plt.ylabel('PDF')\n plt.xlabel('Halo Mass (log-scale)')\n plt.title('Halo ID ' + val)\n \nplt.show()\n\nres.columns\n\nres[['112009306000027', 'log-likelihood']].sort('log-likelihood')",
"Next Steps\nMove forward with major software upgrades\n\nFlying blind, guessing what will be important in future\nAdd importance sampling\nReduce duplication\nParallelize\nSlowly transition code base to better software practices\nEliminate prototype code that is not core to module\nAt some point it becomes cheaper to rewrite code with high parallelization and good software practices instead of hacking in extra functinality to code that is poorly suited to our goal\n\nPivot to new question/milestone\n\nShrink foreground\nSee if results improve with 'unphysical', dense background\nHit the literature to see what questions have been answered and get ideas for cool project directions\nThanks Warren for pointing me to resources for learning weak lensing"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/starthinker | colabs/dbm_to_storage.ipynb | apache-2.0 | [
"DV360 Report To Storage\nMove existing DV360 report into a Storage bucket.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.",
"!pip install git+https://github.com/google/starthinker\n",
"2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.",
"from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n",
"3. Enter DV360 Report To Storage Recipe Parameters\n\nSpecify either report name or report id to move a report.\nThe most recent valid file will be moved to the bucket.\nModify the values below for your use case, can be done multiple times, then click play.",
"FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'dbm_report_id':'', # DV360 report ID given in UI, not needed if name used.\n 'auth_write':'service', # Credentials used for writing data.\n 'dbm_report_name':'', # Name of report, not needed if ID used.\n 'dbm_bucket':'', # Google cloud bucket.\n 'dbm_path':'', # Path and filename to write to.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"4. Execute DV360 Report To Storage\nThis does NOT need to be modified unless you are changing the recipe, click play.",
"from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dbm':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'report':{\n 'report_id':{'field':{'name':'dbm_report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},\n 'name':{'field':{'name':'dbm_report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}}\n },\n 'out':{\n 'storage':{\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'bucket':{'field':{'name':'dbm_bucket','kind':'string','order':3,'default':'','description':'Google cloud bucket.'}},\n 'path':{'field':{'name':'dbm_path','kind':'string','order':4,'default':'','description':'Path and filename to write to.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajdawson/python_for_climate_scientists | course_content/notebooks/matplotlib_intro.ipynb | gpl-3.0 | [
"import matplotlib.pyplot as plt\nplt.rcParams['image.cmap'] = 'viridis'",
"An introduction to matplotlib\nMatplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (Graphical User Interface) toolkits.\nMatplotlib comes with a convenience sub-package called pyplot. It is a general convention to import this module as plt:",
"import matplotlib.pyplot as plt",
"The matplotlib figure\nAt the heart of every matplotlib plot is the \"Figure\" object. The \"Figure\" object is the top level concept that can be drawn to one of the many output formats, or simply just to screen. Any object that can be drawn in this way is known as an \"Artist\" in matplotlib.\nLet's create our first artist using pyplot, and then show it:",
"fig = plt.figure()\nplt.show()",
"On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).\nBy far the most useful artist in matplotlib is the \"Axes\" artist. The Axes artist represents the \"data space\" of a typical plot. A rectangular axes (the most common axes, but not the only axes, e.g. polar plots) will have two Axis Artists with tick labels and tick marks.\nThere is no limit on the number of Axes artists that can exist on a Figure artist. Let's go ahead and create a figure with a single Axes Artist, and show it using pyplot:",
"ax = plt.axes()\nplt.show()",
"Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.\nUnder the hood matplotlib still had to create a Figure artist; we just didn't need to capture it into a variable. We can access the created object with the \"state\" functions found in pyplot: plt.gcf() and plt.gca().\nExercise 1\nGo to matplotlib.org and search for what these strangely named functions do.\nHint: you will find multiple results so remember we are looking for the pyplot versions of these functions.\nWorking with the axes\nMost of your time building a graphic in matplotlib will be spent on the Axes artist. Whilst the matplotlib documentation for the Axes artist is very detailed, it is also rather difficult to navigate (though this is an area of ongoing improvement).\nAs a result, it is often easier to find new plot types by looking at the pyplot module's documentation.\nThe first and most common Axes method is plot. Go ahead and look at the plot documentation from the following sources:\n\nhttp://matplotlib.org/api/pyplot_summary.html\nhttp://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot\nhttp://matplotlib.org/api/axes_api.html?#matplotlib.axes.Axes.plot\n\nPlot can be used to draw one or more lines in axes data space:",
"ax = plt.axes()\nline1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])\nplt.show()",
"Notice how the axes view limits (ax.viewLim) have been updated to include the whole of the line.\nShould we want to add some spacing around the edges of our axes we could set the axes margin using the Axes artist's margins method. Alternatively, we could manually set the limits with the Axes artist's set_xlim and set_ylim methods.\nExercise 2\nModify the previous example to produce three different figures that control the limits of the axes.\n1. Manually set the x and y limits to [0.5, 2] and [1, 5] respectively.\n2. Define a margin such that there is 10% whitespace inside the axes around the drawn line (Hint: numbers to margins are normalised such that 0% is 0.0 and 100% is 1.0).\n3. Set a 10% margin on the axes with the lower y limit set to 0. (Note: order is important here)\nThe previous example can be simplified to be even shorter. We are not using the line artist returned by ax.plot() so we don't need to store it in a variable. In addition, in exactly the same way that we didn't need to manually create a Figure artist when using the pyplot.axes method, we can remove the plt.axes if we use the plot function from pyplot. Our simple line example then becomes:",
"plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])\nplt.show()",
"The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.\nExercise 3\nBy calling plot multiple times, create a single axes showing the line plots of $y=sin(x)$ and $y=cos(x)$ in the interval $[0, 2\\pi]$ with 200 linearly spaced $x$ samples.\nMultiple axes on the same figure (aka subplot)\nMatplotlib makes it relatively easy to add more than one Axes artist to a figure. The add_subplot method on a Figure artist, which is wrapped by the subplot function in pyplot, adds an Axes artist in the grid position specified. To compute the position, we must tell matplotlib the number of rows and columns to separate the figure into, and which number the axes to be created is (1 based).\nFor example, to create axes at the top right and bottom left of a $3 x 2$ notional grid of Axes artists the grid specifications would be 2, 3, 3 and 2, 3, 4 respectively:",
"top_right_ax = plt.subplot(2, 3, 3)\nbottom_left_ax = plt.subplot(2, 3, 4)\n\nplt.show()",
"Exercise 3 continued: Copy the answer from the previous task (plotting $y=sin(x)$ and $y=cos(x)$) and add the appropriate plt.subplot calls to create a figure with two rows of Axes artists, one showing $y=sin(x)$ and the other showing $y=cos(x)$.\nFurther plot types\nMatplotlib comes with a huge variety of different plot types. Here is a quick demonstration of the more common ones.",
"import numpy as np\n\nx = np.linspace(-180, 180, 60)\ny = np.linspace(-90, 90, 30)\nx2d, y2d = np.meshgrid(x, y)\ndata = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))\n\nplt.contourf(x, y, data)\nplt.show()\n\nplt.imshow(data, extent=[-180, 180, -90, 90],\n interpolation='nearest', origin='lower')\nplt.show()\n\nplt.pcolormesh(x, y, data)\nplt.show()\n\nplt.scatter(x2d, y2d, c=data, s=15)\nplt.show()\n\nplt.bar(x, data.sum(axis=0), width=np.diff(x)[0])\nplt.show()\n\nplt.plot(x, data.sum(axis=0), linestyle='--',\n marker='d', markersize=10, color='red', alpha=0.5)\nplt.show()",
"Titles, Legends, colorbars and annotations\nMatplotlib has convenience functions for the addition of plot elements such as titles, legends, colorbars and text based annotation.\nThe suptitle pyplot function allows us to set the title of a figure, and the set_title method on an Axes artist allows us to set the title of an individual axes. Additionally Axes artists have methods named set_xlabel and set_ylabel to label the respective x and y Axis artists (that's Axis, not Axes). Finally, we can add text, located by data coordinates, with the text method on an Axes artist.",
"fig = plt.figure()\nax = plt.axes()\n# Adjust the created axes so its topmost extent is 0.8 of the figure.\nfig.subplots_adjust(top=0.8)\nfig.suptitle('Figure title', fontsize=18, fontweight='bold')\nax.set_title('Axes title', fontsize=16)\nax.set_xlabel('The X axis')\nax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)\nax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\\nin data coordinates.',\n horizontalalignment='center', fontsize=14)\nplt.show()",
"The creation of a legend is as simple as adding a \"label\" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend:",
"x = np.linspace(-3, 7, 200)\nplt.plot(x, 0.5 * x ** 3 - 3 * x ** 2, linewidth=2,\n label='$f(x)=0.5x^3-3x^2$')\nplt.plot(x, 1.5 * x ** 2 - 6 * x, linewidth=2, linestyle='--',\n label='Gradient of $f(x)$', )\nplt.legend(loc='lower right')\nplt.grid()\nplt.show()",
"Colorbars are created with the plt.colorbar function:",
"x = np.linspace(-180, 180, 60)\ny = np.linspace(-90, 90, 30)\nx2d, y2d = np.meshgrid(x, y)\ndata = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))\n\nplt.contourf(x, y, data)\nplt.colorbar(orientation='horizontal')\nplt.show()",
"Matplotlib comes with powerful annotation capabilities, which are described in detail at http://matplotlib.org/users/annotations_intro.html.\nThe annotation's power can mean that the syntax is a little harder to read, which is demonstrated by one of the simplest examples of using annotate.",
"x = np.linspace(-3, 7, 200)\nplt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)\nplt.annotate('Local minimum',\n xy=(4, -18),\n xytext=(-2, -40), fontsize=15,\n arrowprops={'facecolor': 'black', 'headlength': 10})\nplt.grid()\nplt.show()",
"Saving your plots\nYou can save a figure using plt.savefig. This function accepts a filename as input, and saves the current figure to the given file. The format of the file is inferred from the file extension:",
"plt.plot(range(10))\nplt.savefig('my_plot.png')\n\nfrom IPython.display import Image\nImage(filename='my_plot.png') ",
"Matplotlib supports many output file formats, including most commonly used ones. You can see a list of the supported file formats including the filename extensions they are recognised by with:",
"plt.gcf().canvas.get_supported_filetypes_grouped()",
"Further steps\nMatplotlib has extremely comprehensive documentation at http://matplotlib.org/. Particularly useful parts for beginners are the pyplot summary and the example gallery:\n\npyplot summary: http://matplotlib.org/api/pyplot_summary.html\nexample gallery: http://matplotlib.org/examples/index.html\n\nExercise 4: random walks\nThis exercise requires the use of many of the elements we've discussed (and a few extra ones too, remember the documentation for matplotlib is comprehensive!). We'll start by defining a random walk and some statistical population data for us to plot:",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nnp.random.seed(1234)\n\nn_steps = 500\nt = np.arange(n_steps)\n\n# Probability distribution:\nmu = 0.002 # Mean\nsigma = 0.01 # Standard deviation\n\n# Generate a random walk, with position X as a function of time:\nS = mu + sigma * np.random.randn(n_steps)\nX = S.cumsum()\n\n# Calculate the 1 sigma upper and lower analytic population bounds:\nlower_bound = mu * t - sigma * np.sqrt(t)\nupper_bound = mu * t + sigma * np.sqrt(t)",
"1. Plot the walker position X against time (t) using a solid blue line of width 2 and give it a label so that it will appear in a legend as \"walker position\".\n2. Plot the population mean (mu*t) against time (t) using a black dashed line of width 1 and give it a label so that it will appear in a legend as \"population mean\".\n3. Fill the space between the variables upper_bound and lower_bound using yellow with alpha (transparency) of 0.5, label this so that it will appear in a legend as \"1 sigma range\" (hint: see the fill_between method of an axes or pyplot.fill_between).\n4. Draw a legend in the upper left corner of the axes (hint: you should have already set the labels for each line when you created them).\n5. Label the x-axis \"num steps\" and the y-axis \"position\", and draw gridlines on the axes (hint: ax.grid toggles the state of the grid).\n6. (harder) Fill the area under the walker position curve that is above the upper bound of the population mean using blue with alpha 0.5 (hint: fill_between can take a keyword argument called where that allows you to limit where filling is drawn)."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
reece/ga4gh-examples | nb/Exploring SO terms.ipynb | apache-2.0 | [
"Experiment with searching for SO terms\nBackground\nA variant annotation record has a json structure like the following:\n{u'createDateTime': u'2015-11-18T00:00:00Z',\n u'id': u'YnJjYTE6T1I0Rjp2YXJpYW50YW5ub3RhdGlvbnM6MTo2NDExNTo4NzFkNGM4OWE1Mzc0NjQwNjA2NDM0OTkzYWVmNGFmZQ',\n u'info': {},\n u'transcriptEffects': [{u'CDSLocation': None,\n u'alternateBases': u'A',\n u'analysisResults': [],\n u'cDNALocation': None,\n u'effects': [{u'id': u'SO:0001631',\n u'sourceName': None,\n u'sourceVersion': None,\n u'term': u'upstream_gene_variant'}],\n u'featureId': u'NM_001005484.1',\n u'hgvsAnnotation': {u'genomic': u'1:g.64116C>A',\n u'protein': u'',\n u'transcript': u'NM_001005484.1:c.-4975C>A'},\n u'id': u'2053be57055a40663aa02b2cdc9c7351',\n u'proteinLocation': None},\n {u'CDSLocation': None,\n u'alternateBases': u'A',\n u'analysisResults': [],\n u'cDNALocation': None,\n u'effects': [{u'id': u'SO:0000605',\n u'sourceName': None,\n u'sourceVersion': None,\n u'term': u'intergenic_region'}],\n u'featureId': u'FAM138A-OR4F5',\n u'hgvsAnnotation': {u'genomic': u'1:g.64116C>A',\n u'protein': u'',\n u'transcript': u'n.64116C>A'},\n u'id': u'6e6a547b0bdb446a78a3819bfcd6e06c',\n u'proteinLocation': None}],\n u'variantAnnotationSetId': u'YnJjYTE6T1I0Rjp2YXJpYW50YW5ub3RhdGlvbnM',\n u'variantId': u'YnJjYTE6T1I0RjoxOjY0MTE1OmU4Y2MyOTg2MGJmOTJjZGVmOTEwY2IyMzllYWVkZDI0'}\n\nThat is: variant annotation —⪪ transcript effects —⪪ effects\nIn the sample data, there are many variants with multiple transcript effects, but all transcriptEffects have exactly one effect (see below for data).\nThe Question\nsearchVariantAnnotations accepts an list of json-formatted array of effect filters. It is unclear (to me) how this should behave when multiple filters are provided. Specifically, given a set F of SO ids provided as a filter filtering (e.g., {SO:1,SO:2}), and a set S of SO ids associated with all transcriptEffects of a variant annotation, does a variant annotation VA with S \"match\" F if:\n* F ⋂ S ≠ {} -- at least one f ∈ F is in S\n* F ⊂ S -- all f ∈ F are also in S\n* S ⊂ F -- all s ∈ S are also in F\n* F = S -- sets are identical (⇒ all of the above are true)\n\nIt's hard to know what we want without use cases. However, it seems clear that users are likely to have one of two expectations:\n\nSO filter terms are ANDed; that is, a VA matches if all f ∈ F are in S (i.e., F ⊆ S)\nSO filter terms are ORed; that is, a VA matches if any f ∈ F is in S\n\nLet's test.\nSetup",
"import itertools\nimport pandas as pd\nfrom pivottablejs import pivot_ui\n\nimport ga4gh.client\nprint(ga4gh.__version__)\n\ngc = ga4gh.client.HttpClient(\"http://localhost:8000\")\n\nregion_constraints = dict(referenceName=\"1\", start=0, end=int(1e10))\nvariant_set_id = 'YnJjYTE6T1I0Rg'\nvariant_annotation_sets = list(gc.searchVariantAnnotationSets(variant_set_id))\nvariant_annotation_set = variant_annotation_sets[0]\nprint(\"Using first variant annotation set (of {n} total) for variant set {vs_id}\\nvas_id={vas.id}\".format(\n n=len(variant_annotation_sets), vs_id=variant_set_id, vas=variant_annotation_set))",
"Characterizing sample data",
"variant_annotations = [{\n 'va':va,\n 'n_te': len(list(va.transcriptEffects)),\n 'n_ef': len(list(ef for te in va.transcriptEffects for ef in te.effects)),\n 'sos': \";\".join(sorted(set(\"{ef.id}:{ef.term}\".format(ef=ef)\n for te in va.transcriptEffects\n for ef in te.effects)))\n }\n for va in gc.searchVariantAnnotations(variant_annotation_set.id, **region_constraints)\n ]\nvariant_annotations_df = pd.DataFrame(variant_annotations)",
"The following is an inline graphic image. See instructions below it for reproducing it. \n\nTo regenerate this data: \n\nEval the next cell\nSelect Bar Chart from Table menu\nDrag-drop \"sos\" to left column under Count pulldown\nDrag-drop n_te, then n_ef to row to right of Count pulldown",
"pivot_ui(variant_annotations_df)",
"The searches\nUsing the data above, we can search for single and multiple terms and compare to expectations.\nWe'll be using this function:\nSignature: gc.searchVariantAnnotations(variantAnnotationSetId, referenceName=None, referenceId=None, \n start=None, end=None, featureIds=[], effects=[])\nDocstring:\nReturns an iterator over the Annotations fulfilling the specified conditions from the specified\nAnnotationSet.\n\nThe JSON string for an effect term must be specified on the command line : \n`--effects '{\"term\": \"exon_variant\"}'`.",
"def _mk_effect_filter(so_ids=[]):\n \"\"\"return list of so_id effect filters for the given list of so_ids\n\n >>> print(_mk_effect_filter(so_ids=\"SO:1 SO:2 SO:3\".split()))\n ['{\"id\":\"SO:1\"}', '{\"id\":\"SO:2\"}', '{\"id\":\"SO:3\"}']\n \"\"\"\n return [{\"id\": so_id} for so_id in so_ids]\n\ndef _fetch_variant_annotations(gc, so_ids=[], **args):\n return gc.searchVariantAnnotations(variant_annotation_set.id,\n effects=_mk_effect_filter(so_ids),\n **args)\n\n# expected:\n#so_terms\n#SO:0000605:intergenic_region 697\n#SO:0000605:intergenic_region;SO:0001631:upstream_gene_variant 63\n#SO:0000605:intergenic_region;SO:0001632:downstream_gene_variant 56\n#SO:0001583:missense_variant 16\n#SO:0001587:stop_gained 1\n#SO:0001819:synonymous_variant 7\n \n[(so_set,\n len(list(_fetch_variant_annotations(gc, so_ids=so_set.split(), **region_constraints))))\n for so_set in [\"SO:0001819\", \"SO:0001632\", \"SO:0000605\", \n \"SO:0000605 SO:0001632\", \"SO:0001632 SO:0000605\",\n \"SO:9999999\", \"SO:0000605 SO:999999\"]\n ]",
"Conclusion\nSearching uses disjunctive OR. That is, searching with a filter containing multiple terms returns the union of annotations that match at least one term. That's good because it means that conjuction (AND) may be applied on the return set."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Cushychicken/cushychicken.github.io | assets/lockin_amp_simulation.ipynb | mit | [
"I've been quietly obsessing over lock-in amplifiers ever since I read about them in Chapter 8 of The Art of Electronics. Now that I've had a few months to process the concept as a background task, I decided to whip up a Python model of a lock-in amp, just for the hell of it. \nBackground\nLock-in amplifiers are a type of lab equipment used for pulling really weak signals out of overpowering noise. Horowitz and Hill deem it \"a method of considerable subtlety\", which is refined way of saying \"a cool trick of applied mathematics\". A lock-in amplifier relies on the fact that, over a large time interval (much greater than any single period), the average DC value of any given sine wave is zero.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\ndef sine_wave(freq, phase=0, Fs=10000):\n ph_rad = (phase/360.0)*(2.0*np.pi)\n return np.array([np.sin(((2 * np.pi * freq * a) / Fs) + ph_rad) for a in range(Fs)])\n\nsine = sine_wave(100)\nmean = np.array([np.mean(sine)] * len(sine))\ndf = pd.DataFrame({'sine':sine_wave(100), \n 'mean':mean})\ndf[:1000].plot()",
"However, this ceases to be true when two sinusoids of equal frequency and phase are multiplied together. In this case, instead of averaging out to zero, the product of the two waves have a nonzero mean value.",
"df['sin_mixed'] = np.multiply(df.sine, df.sine)\ndf['mean_mixed'] = np.mean(df.sin_mixed)\ndf[['sin_mixed','mean_mixed']][:1000].plot()",
"This DC voltage produced by the product of the two waves is very sensitive to changes in frequency. The plots below show that a 101Hz signal has a mean value of zero when multiplied by a 100Hz signal.",
"df['sin_mixed_101'] = np.multiply(df.sine, sine_wave(101))\ndf['mean_mixed_101'] = np.mean(df.sin_mixed_101)\ndf[['sin_mixed_101','mean_mixed_101']].plot()",
"This is really useful in situations where you have a signal of a known frequency. With the proper equipment, you can \"lock in\" to your known-frequency signal, and track changes to the amplitude and phase of that signal - even in the presence of overwhelming noise. \nYou can show this pretty easily by just scaling down one of the waves in our prior example, and burying it in noise. (This signal is about 20dB below the noise floor in this case.)",
"noise_fl = np.array([(2 * np.random.random() - 1) for a in range(10000)])\ndf['sine_noisy'] = np.add(noise_fl, 0.1*df['sine'])\n\ndf['sin_noisy_mixed'] = np.multiply(df.sine_noisy, df.sine)\ndf['mean_noisy_mixed'] = df['sin_noisy_mixed'].mean()\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(12,4)\n\ndf['sine_noisy'].plot(ax=axes[0])\ndf[['sin_noisy_mixed', 'mean_noisy_mixed']].plot(ax=axes[1])",
"It doesn't look like much at the prior altitude, but it's definitely the signal we're looking for. That's because the lock-in output scales with the amplitude of both the input signal and the reference waveform:\n$$U_{out}=\\frac{1}{2}V_{sig}V_{ref}cos(\\theta)$$\nAs a result, the lock-in amp has a small (but meaningful) amplitude:",
"df['mean_noisy_mixed'].plot()",
"Great! We can pull really weak signals out of seemingly endless noise. So, why haven't we used this technology to revolutionize all communications with infite signal-to-noise ratio? \nLike all real systems, there's a tradeoff, and for a lock-in amplifier, that tradeoff is time. Lock-in amps rely on a persistent periodic signal - without one, there isn't anything to lock on to! That's the catch of multiplying two signals of identical frequencies together: it takes time for that DC offset component to form. \nA second tradeoff of the averaging method becomes obvious when you consider how to implement the averaging in a practical manner. Since we're talking about this in the context of electronics: one of the simplest ways to average, electronically, is to just filter by frequency, and it doesn't get much simpler than a single pole lowpass filter for a nice gentle average. The result looks pretty good when applied to the product of two sine waves:",
"def lowpass(x, alpha=0.001):\n data = [x[0]]\n for a in x[1:]:\n data.append(data[-1] + (alpha*(a-data[-1])))\n return np.array(data)\n\ndf['sin_mixed_lp'] = lowpass(df.sin_mixed)\ndf['sin_mixed_lp'].plot()",
"...but it starts to break down when you filter the noisy signals, which can contain large fluctuations that aren't necessarily real:",
"df['sin_noisy_mixed_lp'] = lowpass(df.sin_noisy_mixed)\ndf['sin_noisy_mixed_lp'].plot()",
"We can clean get rid of some of that statistical noise junk by rerunning the filter, of course, but that takes time, and also robs the lock-in of a bit of responsiveness.",
"df['sin_noisy_mixed_lp2'] = lowpass(df.sin_noisy_mixed_lp)\ndf['sin_noisy_mixed_lp2'].plot()",
"On top of all this, lock-in amps are highly sensitive to phase differences between reference and signal tones. Take a look at the plots below, where our noisy signal is mixed with waves 45 and 90 degrees offset from it.",
"df['sin_phase45_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=45))\ndf['sin_phase90_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=90))\ndf['sin_phase45_mixed_lp'] = lowpass(df['sin_phase45_mixed'])\ndf['sin_phase90_mixed_lp'] = lowpass(df['sin_phase90_mixed'])\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(12,4)\n\ndf[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']].plot(ax=axes[0])\ndf[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']][6000:].plot(ax=axes[1])",
"These plots illustrate that there's a component of phase sensitivity. As the phase of signal moves farther and farther out of phase with the reference, the lock-in output starts to trend downwards, closer to zero. You can see, too, why lock-ins require time to settle out to a final value - the left plot shows how signals that are greatly out of phase with one another can produce an initial signal value where none should exist! The right plot, however, shows how the filtered, 90-degree offset signal (green trace) declines over time to the correct average value of approximately zero. \nQuadrature Output\nLike all lab equipment, lock-in amplifiers were originally analog devices. Analog lock-ins required a bit of tedious work to get optimum performance from amplifier - typically adjusting the phase of the reference so as to be in-phase with the target signal. This could prove time consuming given the time delay required for the output to stabilize! However, advances in digital technology have since yielded some nice improvements for lock-in amplifiers: \n\ndigitally generated, near-perfect reference signals,\nsimultaneous sine and cosine mixing,\nDSP based output filter - easily and accurately change filter order and corner!\n\nThis easy access to both sine-mixed and cosine-mixed signals allow us to plot the output of a digital lock-in amplifier as a quadrature modulated signal, which shows changes in both the magnitude and phase of the lock-in vector:",
"def cosine_wave(freq, phase=0, Fs=10000):\n ph_rad = (phase/360.0)*(2.0*np.pi)\n return np.array([np.cos(((2 * np.pi * freq * a) / Fs) + ph_rad) for a in range(Fs)])\n\ndf['cos_noisy_mixed'] = np.multiply(df.sine_noisy, cosine_wave(100))\ndf['cos_noisy_mixed_lp'] = lowpass(df['cos_noisy_mixed'])\n\ndf['noisy_quad_mag'] = np.sqrt(np.add(np.square(df['cos_noisy_mixed_lp']),\n np.square(df['sin_noisy_mixed_lp'])))\ndf['noisy_quad_pha'] = np.arctan2(df['cos_noisy_mixed_lp'], df['sin_noisy_mixed_lp'])\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(12,4)\naxes[0].set_title('Magnitude')\naxes[1].set_title('Phase (radians)')\ndf['noisy_quad_mag'][8000:].plot(ax=axes[0])\ndf['noisy_quad_pha'][8000:].plot(ax=axes[1])",
"Bonus Material\nIf you've got a copy of The Art of Electronics, 3rd Edition handy, you can read Horowitz and Hill's description of lock-in amplifiers in section 8.14.1. AoE's description focuses more heavily on analog lock-in techniques, and touches a bit on the phase detectors, which form the circuit basis for analog lock-in amplifiers. It starts on page 575. \nZurich Instruments, a prominent vendor of lock-in amplifiers, has a great overview of the state of the art on their website. They also have some great, mathematically formal descriptions of what's happening in the time and frequency domains during the process of lock-in amplification. (With nice pretty pictures, too!) \nAlso - this is my first time writing a blog post as a Jupyter notebook. If you want to pull down a copy to tool around with, grab it here!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
csaladenes/csaladenes.github.io | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | mit | [
"<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\n<!--NAVIGATION-->\n< In Depth: Naive Bayes Classification | Contents | In-Depth: Support Vector Machines >\n<a href=\"https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.06-Linear-Regression.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>\nIn Depth: Linear Regression\nJust as naive Bayes (discussed earlier in In Depth: Naive Bayes Classification) is a good starting point for classification tasks, linear regression models are a good starting point for regression tasks.\nSuch models are popular because they can be fit very quickly, and are very interpretable.\nYou are probably familiar with the simplest form of a linear regression model (i.e., fitting a straight line to data) but such models can be extended to model more complicated data behavior.\nIn this section we will start with a quick intuitive walk-through of the mathematics behind this well-known problem, before seeing how before moving on to see how linear models can be generalized to account for more complicated patterns in data.\nWe begin with the standard imports:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport numpy as np",
"Simple Linear Regression\nWe will start with the most familiar linear regression, a straight-line fit to data.\nA straight-line fit is a model of the form\n$$\ny = ax + b\n$$\nwhere $a$ is commonly known as the slope, and $b$ is commonly known as the intercept.\nConsider the following data, which is scattered about a line with a slope of 2 and an intercept of -5:",
"rng = np.random.RandomState(1)\nx = 10 * rng.rand(50)\ny = 2 * x - 5 + rng.randn(50)\nplt.scatter(x, y);",
"We can use Scikit-Learn's LinearRegression estimator to fit this data and construct the best-fit line:",
"from sklearn.linear_model import LinearRegression\nmodel = LinearRegression(fit_intercept=True)\n\nmodel.fit(x[:, np.newaxis], y)\n\nxfit = np.linspace(0, 10, 1000)\nyfit = model.predict(xfit[:, np.newaxis])\n\nplt.scatter(x, y)\nplt.plot(xfit, yfit);",
"The slope and intercept of the data are contained in the model's fit parameters, which in Scikit-Learn are always marked by a trailing underscore.\nHere the relevant parameters are coef_ and intercept_:",
"print(\"Model slope: \", model.coef_[0])\nprint(\"Model intercept:\", model.intercept_)",
"We see that the results are very close to the inputs, as we might hope.\nThe LinearRegression estimator is much more capable than this, however—in addition to simple straight-line fits, it can also handle multidimensional linear models of the form\n$$\ny = a_0 + a_1 x_1 + a_2 x_2 + \\cdots\n$$\nwhere there are multiple $x$ values.\nGeometrically, this is akin to fitting a plane to points in three dimensions, or fitting a hyper-plane to points in higher dimensions.\nThe multidimensional nature of such regressions makes them more difficult to visualize, but we can see one of these fits in action by building some example data, using NumPy's matrix multiplication operator:",
"rng = np.random.RandomState(1)\nX = 10 * rng.rand(100, 3)\ny = 0.5 + np.dot(X, [1.5, -2., 1.])\n\nmodel.fit(X, y)\nprint(model.intercept_)\nprint(model.coef_)",
"Here the $y$ data is constructed from three random $x$ values, and the linear regression recovers the coefficients used to construct the data.\nIn this way, we can use the single LinearRegression estimator to fit lines, planes, or hyperplanes to our data.\nIt still appears that this approach would be limited to strictly linear relationships between variables, but it turns out we can relax this as well.\nBasis Function Regression\nOne trick you can use to adapt linear regression to nonlinear relationships between variables is to transform the data according to basis functions.\nWe have seen one version of this before, in the PolynomialRegression pipeline used in Hyperparameters and Model Validation and Feature Engineering.\nThe idea is to take our multidimensional linear model:\n$$\ny = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + \\cdots\n$$\nand build the $x_1, x_2, x_3,$ and so on, from our single-dimensional input $x$.\nThat is, we let $x_n = f_n(x)$, where $f_n()$ is some function that transforms our data.\nFor example, if $f_n(x) = x^n$, our model becomes a polynomial regression:\n$$\ny = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \\cdots\n$$\nNotice that this is still a linear model—the linearity refers to the fact that the coefficients $a_n$ never multiply or divide each other.\nWhat we have effectively done is taken our one-dimensional $x$ values and projected them into a higher dimension, so that a linear fit can fit more complicated relationships between $x$ and $y$.\nPolynomial basis functions\nThis polynomial projection is useful enough that it is built into Scikit-Learn, using the PolynomialFeatures transformer:",
"from sklearn.preprocessing import PolynomialFeatures\nx = np.array([2, 3, 4])\npoly = PolynomialFeatures(3, include_bias=False)\npoly.fit_transform(x[:, None])",
"We see here that the transformer has converted our one-dimensional array into a three-dimensional array by taking the exponent of each value.\nThis new, higher-dimensional data representation can then be plugged into a linear regression.\nAs we saw in Feature Engineering, the cleanest way to accomplish this is to use a pipeline.\nLet's make a 7th-degree polynomial model in this way:",
"from sklearn.pipeline import make_pipeline\npoly_model = make_pipeline(PolynomialFeatures(7),\n LinearRegression())",
"With this transform in place, we can use the linear model to fit much more complicated relationships between $x$ and $y$. \nFor example, here is a sine wave with noise:",
"rng = np.random.RandomState(1)\nx = 10 * rng.rand(50)\ny = np.sin(x) + 0.1 * rng.randn(50)\n\npoly_model.fit(x[:, np.newaxis], y)\nyfit = poly_model.predict(xfit[:, np.newaxis])\n\nplt.scatter(x, y)\nplt.plot(xfit, yfit);",
"Our linear model, through the use of 7th-order polynomial basis functions, can provide an excellent fit to this non-linear data!\nGaussian basis functions\nOf course, other basis functions are possible.\nFor example, one useful pattern is to fit a model that is not a sum of polynomial bases, but a sum of Gaussian bases.\nThe result might look something like the following figure:\n\nfigure source in Appendix\nThe shaded regions in the plot are the scaled basis functions, and when added together they reproduce the smooth curve through the data.\nThese Gaussian basis functions are not built into Scikit-Learn, but we can write a custom transformer that will create them, as shown here and illustrated in the following figure (Scikit-Learn transformers are implemented as Python classes; reading Scikit-Learn's source is a good way to see how they can be created):",
"from sklearn.base import BaseEstimator, TransformerMixin\n\nclass GaussianFeatures(BaseEstimator, TransformerMixin):\n \"\"\"Uniformly spaced Gaussian features for one-dimensional input\"\"\"\n \n def __init__(self, N, width_factor=2.0):\n self.N = N\n self.width_factor = width_factor\n \n @staticmethod\n def _gauss_basis(x, y, width, axis=None):\n arg = (x - y) / width\n return np.exp(-0.5 * np.sum(arg ** 2, axis))\n \n def fit(self, X, y=None):\n # create N centers spread along the data range\n self.centers_ = np.linspace(X.min(), X.max(), self.N)\n self.width_ = self.width_factor * (self.centers_[1] - self.centers_[0])\n return self\n \n def transform(self, X):\n return self._gauss_basis(X[:, :, np.newaxis], self.centers_,\n self.width_, axis=1)\n \ngauss_model = make_pipeline(GaussianFeatures(20),\n LinearRegression())\ngauss_model.fit(x[:, np.newaxis], y)\nyfit = gauss_model.predict(xfit[:, np.newaxis])\n\nplt.scatter(x, y)\nplt.plot(xfit, yfit)\nplt.xlim(0, 10);",
"We put this example here just to make clear that there is nothing magic about polynomial basis functions: if you have some sort of intuition into the generating process of your data that makes you think one basis or another might be appropriate, you can use them as well.\nRegularization\nThe introduction of basis functions into our linear regression makes the model much more flexible, but it also can very quickly lead to over-fitting (refer back to Hyperparameters and Model Validation for a discussion of this).\nFor example, if we choose too many Gaussian basis functions, we end up with results that don't look so good:",
"model = make_pipeline(GaussianFeatures(30),\n LinearRegression())\nmodel.fit(x[:, np.newaxis], y)\n\nplt.scatter(x, y)\nplt.plot(xfit, model.predict(xfit[:, np.newaxis]))\n\nplt.xlim(0, 10)\nplt.ylim(-1.5, 1.5);",
"With the data projected to the 30-dimensional basis, the model has far too much flexibility and goes to extreme values between locations where it is constrained by data.\nWe can see the reason for this if we plot the coefficients of the Gaussian bases with respect to their locations:",
"def basis_plot(model, title=None):\n fig, ax = plt.subplots(2, sharex=True)\n model.fit(x[:, np.newaxis], y)\n ax[0].scatter(x, y)\n ax[0].plot(xfit, model.predict(xfit[:, np.newaxis]))\n ax[0].set(xlabel='x', ylabel='y', ylim=(-1.5, 1.5))\n \n if title:\n ax[0].set_title(title)\n\n ax[1].plot(model.steps[0][1].centers_,\n model.steps[1][1].coef_)\n ax[1].set(xlabel='basis location',\n ylabel='coefficient',\n xlim=(0, 10))\n \nmodel = make_pipeline(GaussianFeatures(30), LinearRegression())\nbasis_plot(model)",
"The lower panel of this figure shows the amplitude of the basis function at each location.\nThis is typical over-fitting behavior when basis functions overlap: the coefficients of adjacent basis functions blow up and cancel each other out.\nWe know that such behavior is problematic, and it would be nice if we could limit such spikes expliticly in the model by penalizing large values of the model parameters.\nSuch a penalty is known as regularization, and comes in several forms.\nRidge regression ($L_2$ Regularization)\nPerhaps the most common form of regularization is known as ridge regression or $L_2$ regularization, sometimes also called Tikhonov regularization.\nThis proceeds by penalizing the sum of squares (2-norms) of the model coefficients; in this case, the penalty on the model fit would be \n$$\nP = \\alpha\\sum_{n=1}^N \\theta_n^2\n$$\nwhere $\\alpha$ is a free parameter that controls the strength of the penalty.\nThis type of penalized model is built into Scikit-Learn with the Ridge estimator:",
"from sklearn.linear_model import Ridge\nmodel = make_pipeline(GaussianFeatures(30), Ridge(alpha=0.1))\nbasis_plot(model, title='Ridge Regression')",
"The $\\alpha$ parameter is essentially a knob controlling the complexity of the resulting model.\nIn the limit $\\alpha \\to 0$, we recover the standard linear regression result; in the limit $\\alpha \\to \\infty$, all model responses will be suppressed.\nOne advantage of ridge regression in particular is that it can be computed very efficiently—at hardly more computational cost than the original linear regression model.\nLasso regression ($L_1$ regularization)\nAnother very common type of regularization is known as lasso, and involves penalizing the sum of absolute values (1-norms) of regression coefficients:\n$$\nP = \\alpha\\sum_{n=1}^N |\\theta_n|\n$$\nThough this is conceptually very similar to ridge regression, the results can differ surprisingly: for example, due to geometric reasons lasso regression tends to favor sparse models where possible: that is, it preferentially sets model coefficients to exactly zero.\nWe can see this behavior in duplicating the ridge regression figure, but using L1-normalized coefficients:",
"from sklearn.linear_model import Lasso\nmodel = make_pipeline(GaussianFeatures(30), Lasso(alpha=0.001))\nbasis_plot(model, title='Lasso Regression')",
"With the lasso regression penalty, the majority of the coefficients are exactly zero, with the functional behavior being modeled by a small subset of the available basis functions.\nAs with ridge regularization, the $\\alpha$ parameter tunes the strength of the penalty, and should be determined via, for example, cross-validation (refer back to Hyperparameters and Model Validation for a discussion of this).\nExample: Predicting Bicycle Traffic\nAs an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.\nWe have seen this data already in Working With Time Series.\nIn this section, we will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor.\nFortunately, the NOAA makes available their daily weather station data (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.\nWe will perform a simple linear regression to relate weather and other information to bicycle counts, in order to estimate how a change in any one of these parameters affects the number of riders on a given day.\nIn particular, this is an example of how the tools of Scikit-Learn can be used in a statistical modeling framework, in which the parameters of the model are assumed to have interpretable meaning.\nAs discussed previously, this is not a standard approach within machine learning, but such interpretation is possible for some models.\nLet's start by loading the two datasets, indexing by date:",
"!sudo apt-get update\n\n!apt-get -y install curl\n\n!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD\n# !wget -o FremontBridge.csv \"https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD\"\n\nimport pandas as pd\ncounts = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)\nweather = pd.read_csv('data/BicycleWeather.csv', index_col='DATE', parse_dates=True)",
"Next we will compute the total daily bicycle traffic, and put this in its own dataframe:",
"daily = counts.resample('d').sum()\ndaily['Total'] = daily.sum(axis=1)\ndaily = daily[['Total']] # remove other columns",
"We saw previously that the patterns of use generally vary from day to day; let's account for this in our data by adding binary columns that indicate the day of the week:",
"days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']\nfor i in range(7):\n daily[days[i]] = (daily.index.dayofweek == i).astype(float)",
"Similarly, we might expect riders to behave differently on holidays; let's add an indicator of this as well:",
"from pandas.tseries.holiday import USFederalHolidayCalendar\ncal = USFederalHolidayCalendar()\nholidays = cal.holidays('2012', '2016')\ndaily = daily.join(pd.Series(1, index=holidays, name='holiday'))\ndaily['holiday'].fillna(0, inplace=True)",
"We also might suspect that the hours of daylight would affect how many people ride; let's use the standard astronomical calculation to add this information:",
"from datetime import datetime\n\ndef hours_of_daylight(date, axis=23.44, latitude=47.61):\n \"\"\"Compute the hours of daylight for the given date\"\"\"\n days = (date - datetime(2000, 12, 21)).days\n m = (1. - np.tan(np.radians(latitude))\n * np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))\n return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.\n\ndaily['daylight_hrs'] = list(map(hours_of_daylight, daily.index))\ndaily[['daylight_hrs']].plot()\nplt.ylim(8, 17)",
"We can also add the average temperature and total precipitation to the data.\nIn addition to the inches of precipitation, let's add a flag that indicates whether a day is dry (has zero precipitation):",
"# temperatures are in 1/10 deg C; convert to C\nweather['TMIN'] /= 10\nweather['TMAX'] /= 10\nweather['Temp (C)'] = 0.5 * (weather['TMIN'] + weather['TMAX'])\n\n# precip is in 1/10 mm; convert to inches\nweather['PRCP'] /= 254\nweather['dry day'] = (weather['PRCP'] == 0).astype(int)\n\ndaily = daily.join(weather[['PRCP', 'Temp (C)', 'dry day']],rsuffix='0')",
"Finally, let's add a counter that increases from day 1, and measures how many years have passed.\nThis will let us measure any observed annual increase or decrease in daily crossings:",
"daily['annual'] = (daily.index - daily.index[0]).days / 365.",
"Now our data is in order, and we can take a look at it:",
"daily.head()",
"With this in place, we can choose the columns to use, and fit a linear regression model to our data.\nWe will set fit_intercept = False, because the daily flags essentially operate as their own day-specific intercepts:",
"# Drop any rows with null values\ndaily.dropna(axis=0, how='any', inplace=True)\n\ncolumn_names = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', 'holiday',\n 'daylight_hrs', 'PRCP', 'dry day', 'Temp (C)', 'annual']\nX = daily[column_names]\ny = daily['Total']\n\nmodel = LinearRegression(fit_intercept=False)\nmodel.fit(X, y)\ndaily['predicted'] = model.predict(X)",
"Finally, we can compare the total and predicted bicycle traffic visually:",
"daily[['Total', 'predicted']].plot(alpha=0.5);",
"It is evident that we have missed some key features, especially during the summer time.\nEither our features are not complete (i.e., people decide whether to ride to work based on more than just these) or there are some nonlinear relationships that we have failed to take into account (e.g., perhaps people ride less at both high and low temperatures).\nNevertheless, our rough approximation is enough to give us some insights, and we can take a look at the coefficients of the linear model to estimate how much each feature contributes to the daily bicycle count:",
"params = pd.Series(model.coef_, index=X.columns)\nparams",
"These numbers are difficult to interpret without some measure of their uncertainty.\nWe can compute these uncertainties quickly using bootstrap resamplings of the data:",
"from sklearn.utils import resample\nnp.random.seed(1)\nerr = np.std([model.fit(*resample(X, y)).coef_\n for i in range(1000)], 0)",
"With these errors estimated, let's again look at the results:",
"print(pd.DataFrame({'effect': params.round(0),\n 'error': err.round(0)}))",
"We first see that there is a relatively stable trend in the weekly baseline: there are many more riders on weekdays than on weekends and holidays.\nWe see that for each additional hour of daylight, 129 ± 9 more people choose to ride; a temperature increase of one degree Celsius encourages 65 ± 4 people to grab their bicycle; a dry day means an average of 548 ± 33 more riders, and each inch of precipitation means 665 ± 62 more people leave their bike at home.\nOnce all these effects are accounted for, we see a modest increase of 27 ± 18 new daily riders each year.\nOur model is almost certainly missing some relevant information. For example, nonlinear effects (such as effects of precipitation and cold temperature) and nonlinear trends within each variable (such as disinclination to ride at very cold and very hot temperatures) cannot be accounted for in this model.\nAdditionally, we have thrown away some of the finer-grained information (such as the difference between a rainy morning and a rainy afternoon), and we have ignored correlations between days (such as the possible effect of a rainy Tuesday on Wednesday's numbers, or the effect of an unexpected sunny day after a streak of rainy days).\nThese are all potentially interesting effects, and you now have the tools to begin exploring them if you wish!\n<!--NAVIGATION-->\n< In Depth: Naive Bayes Classification | Contents | In-Depth: Support Vector Machines >\n<a href=\"https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.06-Linear-Regression.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs | site/en/r1/tutorials/non-ml/pdes.ipynb | apache-2.0 | [
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Partial Differential Equations\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\n\nNote: This is an archived TF1 notebook. These are configured\nto run in TF2's \ncompatibility mode\nbut will run in TF1 as well. To use TF1 in Colab, use the\n%tensorflow_version 1.x\nmagic.\n\nTensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a partial differential equation. You'll simulate the surface of a square pond as a few raindrops land on it.\nBasic setup\nA few imports you'll need.",
"#Import libraries for simulation\nimport tensorflow.compat.v1 as tf\n\nimport numpy as np\n\n#Imports for visualization\nimport PIL.Image\nfrom io import BytesIO\nfrom IPython.display import clear_output, Image, display\n",
"A function for displaying the state of the pond's surface as an image.",
"def DisplayArray(a, fmt='jpeg', rng=[0,1]):\n \"\"\"Display an array as a picture.\"\"\"\n a = (a - rng[0])/float(rng[1] - rng[0])*255\n a = np.uint8(np.clip(a, 0, 255))\n f = BytesIO()\n PIL.Image.fromarray(a).save(f, fmt)\n clear_output(wait = True)\n display(Image(data=f.getvalue()))",
"Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.",
"sess = tf.InteractiveSession()",
"Computational convenience functions",
"def make_kernel(a):\n \"\"\"Transform a 2D array into a convolution kernel\"\"\"\n a = np.asarray(a)\n a = a.reshape(list(a.shape) + [1,1])\n return tf.constant(a, dtype=1)\n\ndef simple_conv(x, k):\n \"\"\"A simplified 2D convolution operation\"\"\"\n x = tf.expand_dims(tf.expand_dims(x, 0), -1)\n y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')\n return y[0, :, :, 0]\n\ndef laplace(x):\n \"\"\"Compute the 2D laplacian of an array\"\"\"\n laplace_k = make_kernel([[0.5, 1.0, 0.5],\n [1.0, -6., 1.0],\n [0.5, 1.0, 0.5]])\n return simple_conv(x, laplace_k)",
"Define the PDE\nOur pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.",
"N = 500",
"Here you create a pond and hit it with some rain drops.",
"# Initial Conditions -- some rain drops hit a pond\n\n# Set everything to zero\nu_init = np.zeros([N, N], dtype=np.float32)\nut_init = np.zeros([N, N], dtype=np.float32)\n\n# Some rain drops hit a pond at random points\nfor n in range(40):\n a,b = np.random.randint(0, N, 2)\n u_init[a,b] = np.random.uniform()\n\nDisplayArray(u_init, rng=[-0.1, 0.1])",
"Now you specify the details of the differential equation.",
"# Parameters:\n# eps -- time resolution\n# damping -- wave damping\neps = tf.placeholder(tf.float32, shape=())\ndamping = tf.placeholder(tf.float32, shape=())\n\n# Create variables for simulation state\nU = tf.Variable(u_init)\nUt = tf.Variable(ut_init)\n\n# Discretized PDE update rules\nU_ = U + eps * Ut\nUt_ = Ut + eps * (laplace(U) - damping * Ut)\n\n# Operation to update the state\nstep = tf.group(\n U.assign(U_),\n Ut.assign(Ut_))",
"Run the simulation\nThis is where it gets fun -- running time forward with a simple for loop.",
"# Initialize state to initial conditions\ntf.global_variables_initializer().run()\n\n# Run 1000 steps of PDE\nfor i in range(1000):\n # Step simulation\n step.run({eps: 0.03, damping: 0.04})\n\n# Show final image\nDisplayArray(U.eval(), rng=[-0.1, 0.1])",
"Look! Ripples!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WNoxchi/Kaukasos | FACLA/SVD-NMF-review.ipynb | mit | [
"SVD Practice.\n2018/2/12 - WNixalo\nFastai Computational Linear Algebra (2017) §2: Topic Modeling w NMF & SVD\nfacebook research: Fast Randomized SVD\n\n1. Singular-Value Decomposition\nSVD is a factorization of a real or complex matrix. It factorizes a matrix $A$ into one with orthogonal columns $V^T$, one with orthogonal rows $U$, and a diagonal matrix of singular values $Σ$ (aka $S$ or $s$ or $σ$) which contains the relative importance of each factor.",
"from scipy.stats import ortho_group\nimport numpy as np\n\nQ = ortho_group.rvs(dim=3)\nB = np.random.randint(0,10,size=(3,3))\nA = Q@[email protected]\n\nU,S,V = np.linalg.svd(A, full_matrices=False)\n\nU\n\nS\n\nV\n\nfor i in range(3):\n print(U[i] @ U[(i+1) % len(U)])\n # wraps around\n # U[0] @ U[1]\n # U[1] @ U[2]\n # U[2] @ U[0]\n\nfor i in range(len(U)):\n print(U[:,i] @ U[:, (i+1)%len(U[0])])",
"Wait so.. the rows of a matrix $A$ are orthogonal iff $AA^T$ is diagonal? Hmm. Math.StackEx Link",
"np.isclose(np.eye(len(U)), U @ U.T)\n\nnp.isclose(np.eye(len(V)), V.T @ V)",
"Wait but that also gives True for $VV^T$. Hmmm.\n2. Truncated SVD\nOkay, so SVD is an exact decomposition of a matrix and allows us to pull out distinct topics from data (due to their orthonormality (orthogonality?)).\nBut doing so for a large data corpus is ... bad. Especially if most of the data's meaning / information relevant to us is captured by a small prominent subset. IE: prevalence of articles like a and the are likely poor indicators of any particular meaning in a piece of text since they're everywhere in English. Likewise for other types of data.\nHmm, so, if I understood correctly, the Σ/S/s/σ matrix is ordered by value max$\\rightarrow$min.. but computing the SVD of a large dataset $A$ is exactly what we want to avoid using T-SVD. Okay so how?\n$\\rightarrow$Full SVD we're calculating the full dimension of topics -- but its handy to limit to the most important ones -- this is how SVD is used in compression.\nAha. This is where I was confused. Truncation is used with Randomization in R-SVD. The Truncated section was just introducing the concept. Got it.\nSo that's where, in R-SVD, we use a buffer in addition to the portion of the dataset we take for SVD.\nAnd yay scikit-learn has R-SVD built in.",
"from sklearn import decomposition\n\n# ofc this is just dummy data to test it works\ndatavectors = np.random.randint(-1000,1000,size=(10,50))\nU,S,V = decomposition.randomized_svd(datavectors, n_components=5)\n\nU.shape, S.shape, V.shape",
"The idea of T-SVD is that we want to compute an approximation to the range of $A$. The range of $A$ is the space covered by the column basis.\nie: Range(A) = {y: Ax = y}\nthat is: all $y$ you can achieve by multiplying $x$ with $A$.\nDepending on your space, the bases are vectors that you can take linear combinations of to get any value in your space.\n3. Details of Randomized SVD (Truncated)\nOur goal is to have an algorithm to perform Truncated SVD using Randomized values from the dataset matrix. We want to use randomization to calculate the topics we're interested in, instead of calculating all of them.\nAha. So.. the way to do that, using randomization, is to have a special kind of randomization. Find a matrix $Q$ with some special properties that will allow us to pull a matrix that is a near match to our dataset matrix $A$ in the ways we want it to be. Ie: It'll have the same singular values, meaning the same importance-ordered topics.\nWow mathematics is really.. somethin.\nThat process:\n\nCompute an approximation to the range of $A$. ie: we want $Q$ with $r$ orthonormal columns st: \n\n$$A \\approx QQ^TA$$\n\n\nConstruct $B = Q^TA,$, which is small $(r \\times n)$\n\n\nCompute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$), $B = SΣV^T$\n\n\nSince: $$A \\approx QQ^TA = Q(SΣV^T)$$ if we set $U = QS$, then we have a low-rank approximation of $A \\approx UΣV^T$.\n\n\n-- okay so.. confusion here. What is $S$ and $Σ$? Because I see them elsewhere taken to mean the same thing on this subject, but all of a sudden they seem to be totally different things.\n-- oh, so apparently $S$ here is actually something different. $Σ$ is what's been interchangeably referred to in Hellenic/Latin letters throughout the notebook.\nNOTE that $A: m \\times n$ while $Q: m \\times r$, so $Q$ is generally a tall, skinny matrix and therefore much smaller & easier to compute with than $A$.\nAlso, because $S$ & $Q$ are both orthonormal, setting $R = QS$ makes $R$ orthonormal as well.\nHow do we find Q (in step 1)?\nGeneral Idea: we find this special $Q$, then we do SVD on this smaller matrix $Q^TA$, and we plug that back in to have our Truncated-SVD for $A$.\nAnd HERE is where the Random part of Randomized SVD comes in! How do we find $Q$?:\nWe just take a bunch of random vectors $w_i$ and look at / evaluate the subspace formed by $Aw_i$. We form a matrix $W$ with the $w_i$'s as its columns. Then we take the QR Decomposition of $AW = QR$. Then the colunms of $Q$ form an orthonormal basis for $AW$, which is the range of $A$.\nBasically a QR Decomposition exists for any matrix, and is an orthonormal matrix $\\times$ an upper triangular matrix.\nSo basically: we take $AW$, $W$ is random, get the $QR$ -- and a property of the QR-Decomposition is that $Q$ forms an orthonormal basis for $AW$ -- and $AW$ gives the range of $A$.\nSince $AW$ has far more rows than columns, it turns out in practice that these columns are approximately orthonormal. It's very unlikely you'll get linearly-dependent columns when you choose random values.\nAand apparently the QR-Decomp is v.foundational to Numerical Linear Algebra.\nHow do we choose r?\nWe chose $Q$ to have $r$ orthonormal columns, and $r$ gives us the dimension of $B$.\nWe choose $r$ to be the number of topics we want to retrieve $+$ some buffer.\nSee the lesson notebook and accompanying lecture time for an implementatinon of Randomized SVD. NOTE that Scikit-Learn's implementation is more powerful; the example is for example purposes.\n\n4. 
Non-negative Matrix Factorization\nWiki\n\nNMF is a group of algorithms in multivariate analysis and linear algebra where a matrix $V$ is factorized into (usually) two matrices $W$ & $H$, with the property that all three matrices have no negative elements.\n\nLecture 2 40:32\nThe key thing in SVD is orthogonality -- basically everything is orthogonal to each other -- the key idea in NMF is that nothing is negative. The lower-bound is zero-clamped.\nNOTE your original dataset should be nonnegative if you use NMF, or else you won't be able to reconstruct it.\nIdea\n\nRather than constraining our factors to be orthogonal, another idea would be to constrain them to be non-negative. NMF is a factorization of a non-negative dataset $V$: $$V=WH$$ into non-negative matrices $W$, $H$. Often positive factors will be more easily interpretable (and this is the reason behind NMF's popularity).\n\nhuh.. really now.?..\nFor example if your dataset is a matrix of faces $V$, where each column holds a vectorized face, then $W$ would be a matrix of column facial features, and $H$ a matrix of column relative importance of features in each image.\nApplications of NMF / Sklearn\nNMF is a 'difficult' problem because it is non-convex and NP-hard\nNMF looks smth like this in schematic form:\n     Documents            Topics    Topic Importance Indicators\nW  -------------          -----     ----------------------------\no  | | | | | | |          | | |     |  |  |  |  |  |  |  |  |  |\nr  | | | | | | |     ≈    | | |     ----------------------------\nd  | | | | | | |          | | |\ns  -------------          -----\n         V                  W                     H",
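"Before moving on to NMF, here is a minimal NumPy sketch of the randomized-SVD recipe described above (my own condensed version, not the lesson's implementation; scikit-learn's randomized_svd is what you'd use in practice):",
"def rsvd_sketch(A, n_topics, buffer=5):\n    # follow the steps from the section above\n    r = n_topics + buffer\n    W = np.random.randn(A.shape[1], r)           # random vectors w_i as columns\n    Q, _ = np.linalg.qr(A @ W)                   # orthonormal basis for range(AW) ~ range(A)\n    B = Q.T @ A                                  # small (r x n) matrix\n    S, sigma, Vt = np.linalg.svd(B, full_matrices=False)\n    U = Q @ S                                    # so A ~ U diag(sigma) Vt\n    return U[:, :n_topics], sigma[:n_topics], Vt[:n_topics]\n\nU_r, S_r, V_r = rsvd_sketch(datavectors, n_topics=5)\nU_r.shape, S_r.shape, V_r.shape",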
"# workflow w NMF is something like this\nV = np.random.randint(0, 20, size=(10,10))\n\nm,n = V.shape\nd = 5 # num_topics\n\nclsf = decomposition.NMF(n_components=d, random_state=1)\n\nW1 = clsf.fit_transform(V)\nH1 = clsf.components_",
"NOTE: NMF is non-exact. You'll get something close to the original matrix back.\nNMF Summary:\nBenefits: fast and easy to use.\nDownsides: took years of research and expertise to create\nNOTES:\n* For NMF, matrix needs to be at least as tall as it is wide, or we get an error with fit_transform\n* Can use df_min in CountVectorizer to only look at workds that were in at least k of the split texts.\nWNx: Okay, I'm not going to go through and implement NMF in NumPy & PyTorch using SGD today. Maybe later. -- 19:44\nLecture 2 @ 51:09"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IanHawke/msc-or-week0 | examples.ipynb | mit | [
"Bubble sort\nIn pseudo-code the bubble sort algorithm can be written as:\n\nStart with an unsorted list list of length n.\nFor each element i in the list from the first to the last:\nFor each element j in the list from element i+1 to the last:\nIf element i is bigger than element j then swap them\n\n\n\n\nAfter this loop, the list list is now sorted.\n\nWe can do a direct translation of this into Python:",
"def bubblesort(unsorted):\n \"\"\"\n Sorts an array using bubble sort algorithm\n \n Paramters\n ---------\n \n unsorted : list\n The unsorted list\n \n Returns\n \n sorted : list\n The sorted list (in place)\n \"\"\"\n \n last = len(unsorted)\n # All Python lists start from 0\n for i in range(last):\n for j in range(i+1, last):\n if unsorted[i] > unsorted[j]:\n temp = unsorted[j]\n unsorted[j] = unsorted[i]\n unsorted[i] = temp\n return unsorted\n\nunsorted = [2, 4, 6, 0, 1, 3, 5]\nprint(bubblesort(unsorted))",
"We can see the essential features of Python used:\n\nPython does not declare the type of the variables; \nThere is nothing special about lists or arrays as variables when passed as arguments;\nTo define functions the keyword is def;\nTo define the start of a block (the body of a function, or a loop, or a conditional) a colon : is used;\nTo define the block itself, indentation is used. The block ends when the code indentation ends;\nComments are either enclosed in quotes \" as for the docstring, or using #;\nThe return value(s) from a function use the keyword return;\nAccessing arrays uses square brackets;\nThe function range produces a range of integers, usually used to loop over.\n\nNote: there are in-built Python functions to sort lists which should be used in general:",
"unsorted = [2, 4, 6, 0, 1, 3, 5]\nprint(sorted(unsorted))",
"Note: there is a \"more Pythonic\" way of writing the bubble sort function, taking advantage of the feature that Python can assign to multiple things at once. Compare the internals of the loop:",
"def bubblesort(unsorted):\n \"\"\"\n Sorts an array using bubble sort algorithm\n \n Paramters\n ---------\n \n unsorted : list\n The unsorted list\n \n Returns\n \n sorted : list\n The sorted list (in place)\n \"\"\"\n \n last = len(unsorted)\n # All Python lists start from 0\n for i in range(last):\n for j in range(i+1, last):\n if unsorted[i] > unsorted[j]:\n unsorted[j], unsorted[i] = unsorted[i], unsorted[j]\n return unsorted\n\nunsorted = [2, 4, 6, 0, 1, 3, 5]\nprint(bubblesort(unsorted))",
"This gets rid of the need for a temporary variable.\nExercise\nHere is a pseudo-code for the counting sort algorithm:\n\nStart with an unsorted list list of length n.\nFind the minimum value min_value and maximum value max_value of the list.\nCreate a list counts that will count the number of entries in the list with value between min_value and max_value inclusive, and set its entries to zero\nFor each element i in list from the first to the last:\nAdd one to the counts list entry whose index matches the value of this element\n\n\nFor each element i in the counts list from the first to the last:\nSet the next j entries of list equal to i\n\n\nAfter this loop, the list list is now sorted.\n\nTranslate this into Python. Note that the in-built Python min and max functions can be used on lists. To create a list of the correct size you can use\npython\n counts = list(range(min_value, max_value+1))\nbut this list will not contain zeros so must be reset.",
"def countingsort(unsorted):\n \"\"\"\n Sorts an array using counting sort algorithm\n \n Paramters\n ---------\n \n unsorted : list\n The unsorted list\n \n Returns\n \n sorted : list\n The sorted list (in place)\n \"\"\"\n # Allocate the counts array\n min_value = min(unsorted)\n max_value = max(unsorted)\n # This creates a list of the right length, but the entries are not zero, so reset\n counts = list(range(min_value, max_value+1))\n for i in range(len(counts)):\n counts[i] = 0\n # Count the values\n last = len(unsorted)\n for i in range(last):\n counts[unsorted[i]] += 1\n # Write the items back into the list array\n next_index = 0\n for i in range(min_value, max_value+1):\n for j in range(counts[i]):\n unsorted[next_index] = i\n next_index += 1\n \n return unsorted\n\nunsorted = [2, 4, 6, 0, 1, 3, 5]\nprint(countingsort(unsorted))",
"Simplex Method\nFor the linear programming problem\n$$\n\\begin{aligned}\n \\max x_1 + x_2 &= z \\\n 2 x_1 + x_2 & \\le 4 \\\n x_1 + 2 x_2 & \\le 3\n\\end{aligned}\n$$\nwhere $x_1, x_2 \\ge 0$, one standard approach is the simplex method.\nIntroducing slack variables $s_1, s_2 \\ge 0$ the standard tableau form becomes\n$$\n\\begin{pmatrix}\n 1 & -1 & -1 & 0 & 0 \\\n 0 & 2 & 1 & 1 & 0 \\\n 0 & 1 & 2 & 0 & 1 \n\\end{pmatrix}\n\\begin{pmatrix}\n z & x_1 & x_2 & s_1 & s_2\n\\end{pmatrix}^T = \\begin{pmatrix} 0 \\ 4 \\ 3 \\end{pmatrix}.\n$$\nThe simplex method performs row operations to remove all negative numbers from the top row, at each stage choosing the smallest (in magnitude) pivot.\nAssume the tableau is given in this standard form. We can use numpy to implement the problem.",
"import numpy\n\ntableau = numpy.array([ [1, -1, -1, 0, 0, 0], \n [0, 2, 1, 1, 0, 4],\n [0, 1, 2, 0, 1, 3] ], dtype=numpy.float64)\nprint(tableau)",
"To access an entry we use square brackets as with lists:",
"print(tableau[0, 0])\nprint(tableau[1, 2])\nrow = 2\ncolumn = 5\nprint(tableau[row, column])",
"To access a complete row or column, we use slicing notation:",
"print(tableau[row, :])\nprint(tableau[:, column])",
"To apply the simplex method, we have to remove the negative entries in row 0. These appear in columns 1 and 2. For column 1 the pivot in row 1 has magnitude $|-1/2| = 1/2$ and the pivot in row 2 has magnitude $|-1/1|=1$. So we choose row 1.\nTo perform the row operation we want to eliminate all entries in column 1 except for the diagonal, which is set to $1$:",
"column = 1\npivot_row = 1\n# Rescale pivot row\ntableau[pivot_row, :] /= tableau[pivot_row, column]\n# Remove all entries in columns except the pivot\npivot0 = tableau[0, column] / tableau[pivot_row, column]\ntableau[0, :] -= pivot0 * tableau[pivot_row, :]\npivot2 = tableau[2, column] / tableau[pivot_row, column]\ntableau[2, :] -= pivot2 * tableau[pivot_row, :]\n\nprint(tableau)",
"Now we repeat this on column 2, noting that we can only pivot on row 2:",
"column = 2\npivot_row = 2\n# Rescale pivot row\ntableau[pivot_row, :] /= tableau[pivot_row, column]\n# Remove all entries in columns except the pivot\npivot0 = tableau[0, column] / tableau[pivot_row, column]\ntableau[0, :] -= pivot0 * tableau[pivot_row, :]\npivot1 = tableau[1, column] / tableau[pivot_row, column]\ntableau[1, :] -= pivot1 * tableau[pivot_row, :]\n\nprint(tableau)",
"We read off the solution (noting that floating point representations mean we need care interpreting the results): $z = 7/3$ when $x_1 = 5/3$ and $x_2 = 2/3$:",
"print(\"z =\", tableau[0, -1])\nprint(\"x_1 =\", tableau[1, -1])\nprint(\"x_2 =\", tableau[2, -1])",
"Let's turn that into a function.",
"def simplex(tableau):\n \"\"\"\n Assuming a standard form tableau, find the solution\n \"\"\"\n nvars = tableau.shape[1] - tableau.shape[0] - 1\n for column in range(1, nvars+2):\n if tableau[0, column] < 0:\n pivot_row = numpy.argmin(numpy.abs(tableau[0, column] / tableau[1:, column])) + 1\n # Rescale pivot row\n tableau[pivot_row, :] /= tableau[pivot_row, column]\n # Remove all entries in columns except the pivot\n for row in range(0, pivot_row):\n pivot = tableau[row, column] / tableau[pivot_row, column]\n tableau[row, :] -= pivot * tableau[pivot_row, :]\n for row in range(pivot_row+1, tableau.shape[0]):\n pivot = tableau[row, column] / tableau[pivot_row, column]\n tableau[row, :] -= pivot * tableau[pivot_row, :]\n z = tableau[0, -1]\n x = tableau[1:nvars+1, -1]\n return z, x\n\ntableau = numpy.array([ [1, -1, -1, 0, 0, 0], \n [0, 2, 1, 1, 0, 4],\n [0, 1, 2, 0, 1, 3] ], dtype=numpy.float64)\nz, x = simplex(tableau)\nprint(\"z =\", z)\nprint(\"x =\", x)",
"Building the tableau\nOnce the problem is phrased in the tableau form the short simplex function solves it without problem. However, for large problems, we don't want to type in the matrix by hand. Instead we want a way of keeping track of the objective function to maximize, and the constraints, and make the computer do all the work.\nTo do that we'll introduce classes. In VBA a class is a special module, and you access its variables and methods using dot notation. For example, if Student is a class, which has a variable Name, and s1 is a Student object, then s1.Name is the name associated with that particular instance of student.\nThe same approach is used in Python:",
"class Student(object):\n \n def __init__(self, name):\n self.name = name\n \n def print_name(self):\n print(\"Hello\", self.name)\n\ns1 = Student(\"Christine Carpenter\")\nprint(s1.name)\ns2 = Student(\"Jörg Fliege\")\ns2.print_name()",
"See how this compares to VBA.\n\nThe class keyword is used to start the definition of the class.\nThe name of the class (Student) is given. It follows similar rules and conventions to variables, but typically is capitalized.\nThe name in brackets (object) is what the class inherits from. Here we use the default (object).\nThe colon and indentation denotes the class definition, in the same way as we've seen for functions and loops.\nFunctions defined inside the class are methods. The first argument will always be an instance of the class, and by convention is called self. Methods are called using <instance>.<method>.\nWhen an instance is created (eg, by s1 = Student(...)) the __init__ method is called if it exists. We can use this to set up the instance.\n\nThere are a number of special methods that can be defined that work with Python operations. For example, suppose we printed the instances above:",
"print(s1)\nprint(s2)",
"This isn't very informative. However, we can define the string representation of our class using the __repr__ method:",
"class Student(object):\n \n def __init__(self, name):\n self.name = name\n \n def __repr__(self):\n return self.name\n\ns1 = Student(\"Christine Carpenter\")\ns2 = Student(\"Jörg Fliege\")\nprint(s1)\nprint(s2)",
"We can also define what it means to add two instances of our class:",
"class Student(object):\n \n def __init__(self, name):\n self.name = name\n \n def __repr__(self):\n return self.name\n \n def __add__(self, other):\n return Student(self.name + \" and \" + other.name)\n\ns1 = Student(\"Christine Carpenter\")\ns2 = Student(\"Jörg Fliege\")\nprint(s1 + s2)",
"Going back to the simplex method, we want to define a class that contains the objective function and the constraints, a method to solve the problem, and a representation of the problem and solution.",
"class Constraint(object):\n def __init__(self, coefficients, value):\n self.coefficients = numpy.array(coefficients)\n self.value = value\n \n def __repr__(self):\n string = \"\"\n for i in range(len(self.coefficients)-1):\n string += str(self.coefficients[i]) + \" x_{}\".format(i+1) + \" + \"\n string += str(self.coefficients[-1]) + \" x_{}\".format(len(self.coefficients))\n string += \" \\le \"\n string += str(self.value)\n return string\n\nc1 = Constraint([2, 1], 4)\nc2 = Constraint([1, 2], 3)\nprint(c1)\nprint(c2)\n\nclass Linearprog(object):\n \n def __init__(self, objective, constraints):\n self.objective = numpy.array(objective)\n self.nvars = len(self.objective)\n self.constraints = constraints\n self.nconstraints = len(self.constraints)\n self.tableau = numpy.zeros((1+self.nconstraints, 2+self.nvars+self.nconstraints))\n self.tableau[0, 0] = 1.0\n self.tableau[0, 1:1+self.nvars] = -self.objective\n for nc, c in enumerate(self.constraints):\n self.tableau[1+nc, 1:1+self.nvars] = c.coefficients\n self.tableau[1+nc, 1+self.nvars+nc] = 1.0\n self.tableau[1+nc, -1] = c.value\n self.z, self.x = self.simplex()\n \n def simplex(self):\n for column in range(1, self.nvars+2):\n if self.tableau[0, column] < 0:\n pivot_row = numpy.argmin(numpy.abs(self.tableau[0, column] / self.tableau[1:, column])) + 1\n # Rescale pivot row\n self.tableau[pivot_row, :] /= self.tableau[pivot_row, column]\n # Remove all entries in columns except the pivot\n for row in range(0, pivot_row):\n pivot = self.tableau[row, column] / self.tableau[pivot_row, column]\n self.tableau[row, :] -= pivot * self.tableau[pivot_row, :]\n for row in range(pivot_row+1, self.tableau.shape[0]):\n pivot = self.tableau[row, column] / self.tableau[pivot_row, column]\n self.tableau[row, :] -= pivot * self.tableau[pivot_row, :]\n z = self.tableau[0, -1]\n x = self.tableau[1:self.nvars+1, -1]\n return z, x\n\n def __repr__(self):\n string = \"max \"\n for i in range(len(self.objective)-1):\n string += str(self.objective[i]) + \" x_{}\".format(i+1) + \" + \"\n string += str(self.objective[-1]) + \" x_{}\".format(len(self.objective))\n string += \"\\n\\nwith constraints\\n\"\n for c in self.constraints:\n string += \"\\n\"\n string += c.__repr__()\n string += \"\\n\\n\"\n string += \"Solution has objective function maximum of \" + str(self.z)\n string += \"\\n\\n\"\n string += \"at location x = \" + str(self.x)\n return string\n\nproblem = Linearprog([1, 1], [c1, c2])\nprint(problem)",
"Using libraries - pulp\nThe main advantage of using Python is the range of libraries there are that you can use to more efficiently solve your problems. For linear programming there is the pulp library, which is a Python wrapper to efficient low level libraries such as GLPK. It's worth noting that pulp provides high-level access to leading proprietary libraries like CPLEX, but doesn't provide the binaries or the licences. By default pulp uses CBC which is considerably slower: consult your supervisor as to what's suitable when for your work.\nThere are a range of examples that you can look at, but we'll quickly revisit the example above. The approach is to use a lot of pulp defined classes, which are hopefully fairly transparent:",
"import pulp\n\nproblem = pulp.LpProblem(\"Simple problem\", pulp.LpMaximize)",
"This gives a \"meaningful\" title to the problem and says if we're going to maximize or minimize.",
"x1 = pulp.LpVariable(\"x_1\", lowBound=0, upBound=None, cat='continuous')\nx2 = pulp.LpVariable(\"x_2\", lowBound=0, upBound=None, cat='continuous')",
"Defining the variables again gives them \"meaningful\" names, and specifies their lower and upper bounds, and whether the variable type is continuous or integer. We could ignore the latter two definitions as they take their default values.\nThe first thing to do now is to define the objective function by \"adding\" it to the problem:",
"objective = x1 + x2, \"Objective function to maximize\"\nproblem += objective",
"Again we have given a \"meaningful\" name to the objective function we're maximizing. \nNext we can create constraints and add them to the problem.",
"c1 = 2 * x1 + x2 <= 4, \"First constraint\"\nc2 = x1 + 2 * x2 <= 3, \"Second constraint\"\nproblem += c1\nproblem += c2",
"If you want to save the problem at this stage, you can use problem.writeLP(<filename>), where the .lp extension is normally used.\nTo solve the problem, we just call",
"problem.solve()",
"The 1 just means it did it: it does not say whether it succeeded! We need to print the status:",
"print(\"Status:\", pulp.LpStatus[problem.status])",
"As it's found a solution, we can print the objective function and the variables:",
"print(\"Maximized objective function = \", pulp.value(problem.objective))\nfor v in problem.variables():\n print(v.name, \"=\", v.varValue)",
"Using pulp is far easier and robust than coding our own, and will cover a much wider range of problems.\nExercise\nTry using pulp to implement the following optimisation problem.\nWhiskas want to make their cat food out of just two ingredients: chicken and beef. These ingredients must be blended such that they meet the nutritional requirements for the food whilst minimising costs. The costs of chicken and beef are \\$0.013 and \\$0.008 per gram, and their nutritional contributions per gram are:\nStuff | Protein | Fat | Fibre | Salt\n:-- | :-: | --:\nChicken | 0.100 | 0.080 | 0.001 | 0.002\nBeef | 0.200 | 0.100 | 0.005 | 0.005\nLet's define our decision variables:\n$$x_1 = \\text{percentage of chicken in can of cat food} $$\n$$x_2 = \\text{percentage of beef in can of cat food}$$\nAs these are percentages, we know that both must be $0 \\leq x \\leq 100$ and that they must sum to 100. The objective function to minimise costs is \n$$\\min 0.013 x_1 + 0.008 x_2$$\nThe constraints (that the variables must sum to 100 and that the nutritional requirements are met) are:\n$$1.000 x_1 + 1.000 x_2 = 100.0$$\n$$0.100 x_1 + 0.200 x_2 \\ge 8.0$$\n$$0.080 x_1 + 0.100 x_2 \\ge 6.0$$\n$$0.001 x_1 + 0.005 x_2 \\le 2.0$$\n$$0.002 x_1 + 0.005 x_2 \\le 0.4$$\nThis problem was taken from the pulp documentation - you can find the solution there.\nFurther reading\nThere's a number of projects using pulp out there - one for people interested in scheduling is Conference Scheduler which works out when to put talks on, given constraints.\nMonte Carlo\nOne type of optimization problem deals with queues. As an example problem we'll take the Unilink bus service U1C from the Airport into the centre and ask: at busy times, how many people will not be able to get on the bus, and at what stops?\nIf we use a fixed set of customers and want to simulate the events in time, this is an example of a discrete event model. An example Python Discrete Event simulator is ciw, which has a detailed set of tutorials.\nIf we want to provide a random set of customers, to see what range of problems we may have, this is an example of Monte Carlo simulation. This can be done using standard Python random number generators, built into (for example) numpy and scipy.\nWe will only consider the main stops:",
"bus_stops = [\"Airport Parkway Station\",\n \"Wessex Lane\",\n \"Highfield Interchange\",\n \"Portswood Broadway\",\n \"The Avenue Archers Road\",\n \"Civic Centre\",\n \"Central Station\",\n \"West Quay\",\n \"Town Quay\",\n \"NOCS\"]",
"We will assume that the bus capacity is $85$ people, that $250$ people want to travel, that they are distributed at the $10$ stops following a discrete random distribution, and each wants to travel a number of stops that also follows a discrete random distribution (distributed between $1$ and the maximum number of stops they could travel).\nThere are smarter ways of doing it than this, I'm sure:",
"import numpy\n\ncapacity = 85\nn_people = 250\ntotal_stops = len(bus_stops)\ninitial_stops = numpy.random.randint(0, total_stops-1, n_people)\nn_stops = numpy.zeros_like(initial_stops)\nn_onboard = numpy.zeros((total_stops,), dtype=numpy.int)\nn_left_behind = numpy.zeros_like(n_onboard)\nfor i in range(total_stops):\n if i == total_stops - 1: # Can only take one stop\n n_stops[initial_stops == i] = 1\n else:\n n_people_at_stop = len(initial_stops[initial_stops == i])\n n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop)\nfor i in range(total_stops):\n n_people_at_stop = len(initial_stops[initial_stops == i])\n n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])])\n n_left_behind[i] = max([n_people_at_stop - n_people_getting_on, 0])\n for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]:\n n_onboard[i:i+fill_stops] += 1\n\nprint(n_left_behind)\n\nprint(n_onboard)",
"And now that we know how to do it once, we can do it many times:",
"def mc_unilink(n_people, n_runs = 10000):\n \"\"\"\n Given n_people wanting to ride the U1, use Monte Carlo to see how many are left behind on average at each stop.\n \n Parameters\n ----------\n \n n_people : int\n Total number of people wanting to use the bus\n n_runs : int\n Number of realizations\n \n Returns\n -------\n \n n_left_behind_average : array of float\n Average number of people left behind at each stop\n \"\"\"\n \n bus_stops = [\"Airport Parkway Station\",\n \"Wessex Lane\",\n \"Highfield Interchange\",\n \"Portswood Broadway\",\n \"The Avenue Archers Road\",\n \"Civic Centre\",\n \"Central Station\",\n \"West Quay\",\n \"Town Quay\",\n \"NOCS\"]\n total_stops = len(bus_stops)\n capacity = 85\n \n n_left_behind = numpy.zeros((total_stops, n_runs), dtype = numpy.int)\n \n for run in range(n_runs):\n initial_stops = numpy.random.randint(0, total_stops-1, n_people)\n n_stops = numpy.zeros_like(initial_stops)\n n_onboard = numpy.zeros((total_stops,), dtype=numpy.int)\n for i in range(total_stops):\n if i == total_stops - 1: # Can only take one stop\n n_stops[initial_stops == i] = 1\n else:\n n_people_at_stop = len(initial_stops[initial_stops == i])\n n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop)\n for i in range(total_stops):\n n_people_at_stop = len(initial_stops[initial_stops == i])\n n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])])\n n_left_behind[i, run] = max([n_people_at_stop - n_people_getting_on, 0])\n for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]:\n n_onboard[i:i+fill_stops] += 1\n \n return numpy.mean(n_left_behind, axis=1)\n\nn_left_behind_average = mc_unilink(250, 10000)\n\nn_left_behind_average",
"We see that, as expected, it's the stops in the middle that fare worst. We can easily plot this:",
"%matplotlib inline\nfrom matplotlib import pyplot\nx = list(range(len(n_left_behind_average)))\npyplot.bar(x, n_left_behind_average)\npyplot.xticks(x, bus_stops, rotation='vertical')\npyplot.ylabel(\"Average # passengers unable to board\")\npyplot.show()",
"Exercise\nTake a look at the discrete distributions in scipy.stats and work out how to improve this model.\nMachine Learning\nIf you want to get a computer to classify a large dataset for you, or to \"learn\", then packages for Machine Learning, or Neural Networks, or Deep Learning, are the place to go. This field is very much a moving target, but the Python scikit-learn library has a lot of very useful tools that can be used as a starting point.\nIn this example we'll focus on classification: given a dataset that's known to fall into fixed groups, develop a model that predicts from the data what group a new data point falls within.\nAs a concrete example we'll use the standard Iris data set again, which we can get from GitHub. We used this with pandas, and we can use that route to get the data in.",
"import numpy\nimport pandas\nimport sklearn\n\niris = pandas.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv')",
"A quick reminder of what the dataset contains:",
"iris.head()",
"There are different types of iris, classified by the Name. Each individual flower observed has four measurements, given by the data. We want to use some of the data (the Sepal Length and Width, and the Petal Length and Width) to construct a model. The model will take an observation - these four numbers - and predict which flower type we have. We'll use the rest of the data to check how accurate our model is.\nFirst let's look at how many different types of flower there are:",
"iris['Name'].unique()",
"So we're trying to choose one of three types.\nThe range of values can be summarized:",
"iris.describe()",
"There's 150 observations, with a reasonable range of values.\nSo, let's split the dataframe into its data and its labels. What we're wanting to do here is predict the label (the type, or Name, of the Iris observed) from the data (the measurements of the sepal and petal).",
"labels = iris['Name']\ndata = iris.drop('Name', axis=1)",
"We then want to split our data, and associated labels, into a training set (where we tell the classifier what the answer is) and a testing set (to check the accuracy of the model):",
"from sklearn.model_selection import train_test_split\n\ndata_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.5)",
"Here we have split the data set in two: 50% is in the training set, and 50% in the testing set.\nWe can now use a classification algorithm. To start, we will use a decision tree algorithm:",
"from sklearn import tree\n\nclassifier = tree.DecisionTreeClassifier()\nclassifier.fit(data_train, labels_train)",
"We now have a model: given data, it will return its prediction for the label. We use the testing data to check the model:",
"print(labels_test)\nprint(classifier.predict(data_test))",
"We see from the first couple of entries that it's done ok, but that there are errors. As estimating the accuracy of a classification by comparing to test data is so standard, there's a function for that:",
"from sklearn.metrics import accuracy_score\n\naccuracy = accuracy_score(labels_test, classifier.predict(data_test))\nprint(\"Decision Tree Accuracy with 50/50: {}\".format(accuracy))",
"So the result is very accurate on this simple dataset.\nExercises\n\nVary the size of the training / testing split to see how it affects the accuracy.\nTry a different classifier - the KNeighborsClassifier for example.\nTry a different dataset: \nhave a go at creating a classifier for the music data we used in the data handling session. See how the accuracy changes when you exclude the pop dataset - can you think why this may be so? Does the same thing happen when we exclude other genres?\nlook at sklearn.datasets for possibilities, such as the digits dataset which does handwriting recognition.\n\n\n\nHere's a worked solution for the music data classifier. We'll start by importing the libraries and data we need.",
"from sklearn.model_selection import train_test_split\nfrom sklearn import tree\nfrom sklearn.metrics import accuracy_score\n\ndfs = {'indie': pandas.read_csv('spotify_data/indie.csv'), 'pop': pandas.read_csv('spotify_data/pop.csv'), \n 'country': pandas.read_csv('spotify_data/country.csv'), 'metal': pandas.read_csv('spotify_data/metal.csv'), \n 'house': pandas.read_csv('spotify_data/house.csv'), 'rap': pandas.read_csv('spotify_data/rap.csv')}\n\nfor genre, df in dfs.items():\n df['genre'] = genre\n\ndat = pandas.concat(dfs.values())\n\n# define a list of the fields we want to use to train our classifier\ncolumns = ['duration_ms', 'explicit', 'popularity', 'acousticness', 'danceability', \n 'energy', 'instrumentalness', 'key', 'liveness', 'loudness',\n 'mode', 'speechiness', 'tempo', 'time_signature', 'valence', 'genre']\n\n# define data as all columns but the genre column\ndata = dat[columns].drop('genre', axis=1)\n\n# define labels as the genre column\nlabels = dat[columns].genre\n\n# split the data into a training set and a testing set\ndata_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.3)\n\n# create the classifier\nclassifier = tree.DecisionTreeClassifier()\n\n# train the classifier using the training data\nclassifier.fit(data_train, labels_train)\n\n# calculate the accuracy of the classifier using the testing data\naccuracy = accuracy_score(labels_test, classifier.predict(data_test))\nprint(\"Decision Tree Accuracy with 50/50: {}\".format(accuracy))",
"The accuracy of this classifier is not great. As the train_test_split function randomly selects its training and test data, the accuracy will change every time you run it, but it tends to be 60-70%. Let's try excluding the pop data.",
"nopop_dat = dat[dat.genre != 'pop']\n\n# define data as all columns but the genre column\ndata = nopop_dat[columns].drop('genre', axis=1)\n\n# define labels as the genre column\nlabels = nopop_dat[columns].genre\n\ndata_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.1)\n\nclassifier = tree.DecisionTreeClassifier()\nclassifier.fit(data_train, labels_train)\n\naccuracy = accuracy_score(labels_test, classifier.predict(data_test))\nprint(\"Decision Tree Accuracy with 50/50: {}\".format(accuracy))",
"This classifier is now quite a bit more accurate - 75-85%. Generally, the accuracy of your classifier should improve the fewer categories it has to classify. \nFurther reading\nThe scikit-learn documentation is very detailed, provided you have some level of understanding of Machine Learning to start. Sebastian Raschke has a book specifically on Python and Machine Learning. There are other tutorials online that you may find useful."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lhcb/opendata-project | Example-Analysis.ipynb | gpl-2.0 | [
"Analysis of Nobel prize winners\nWelcome to the programming example page. This page shows an example analysis of Nobel prize winners. The coding commands and techniques that are demonstrated in this analysis are similar to those that are needed for your particle physics analysis.\nIMPORTANT: For every code box with code already in it, like the one below you must click in and press shift+enter to run the code. This is how you also run your own code. \nIf the In [x]: to the left of a codebox changes to In [*]: that means the code in that box is currently running\nIf you ever want more space to display output of code you can press the + button in the toolbar to the right of the save button to create another input box.\nFor the sliders in the example histograms to work you will have to run all the codeboxes in this notebook. You can either do this as you read and try changing the code to see what happens, or select cell in the toolbar at the top and select run all.\nFirst we load in the libraries we require and read in the file that contains the data.",
"from __future__ import print_function\nfrom __future__ import division\n\n%pylab inline\nexecfile('Data/setup_example.py')",
"Lets now view the first few lines of the data table. The rows of the data table are each of the Nobel prizes awarded and the columns are the information about who won the prize. \nWe have put the data into a pandas DataFrame we can now use all the functions associated with DataFrames. A useful function is .head(), this prints out the first few lines of the data table.",
"data.head(5) # Displaying some of the data so you can see what form it takes in the DataFrame",
"Plotting a histogram\nLets learn how to plot histograms. We will plot the number of prizes awarded per year. Nobel prizes can be awarded for up to three people per category. As each winner is recorded as an individual entry the histogram will tell us if there has been a trend of increasing or decreasing multiple prize winners in one year.\nHowever before we plot the histogram we should find information out about the data so that we can check the range of the data we want to plot.",
"# print the earliest year in the data\nprint(data.Year.min())\n\n# print the latest year in the data\nprint(data.Year.max())",
"The data set also contains entries for economics. Economics was not one of the original Nobel prizes and has only been given out since 1969. If we want to do a proper comparison we will need to filter this data out. We can do this with a pandas query.\nWe can then check there are no economics prizes left by finding the length of the data after applying a query to only select economics prizes. This will be used in the main analysis to count the number of $B^+$ and $B^-$ mesons.",
"# filter out the Economics prizes from the data\ndata_without_economics = data.query(\"Category != 'economics'\")\nprint('Number of economics prizes in \"data_without_economics\":')\nprint(len(data_without_economics.query(\"Category == 'economics'\")))",
"We can now plot the histogram over a sensible range using the hist function from matplotlib. You will use this throughout the main analysis.",
"# plot the histogram of number of winners against year\nH_WinnersPerYear = data_without_economics.Year.hist(bins=11, range=[1900, 2010]) \nxlabel('Year')\nylabel('Number of Winners')",
"From the histogram we can see that there has been a recent trend of more multiple prize winners in the same year. However there is a drop in the range 1940 - 1950, this was due to prizes being awarded intermittently during World War II. To isolate this gap we can change the bin size (by changing the number of bins variable) to contain this range. Try changing the slider below (you will have to click in code box and press shift+enter to activate it) and see how the number of bins affects the look of the histogram.",
"def plot_hist(bins): \n changingBins = data_without_economics.Year.hist(bins=bins, range=[1900,2010])\n xlabel('Year')\n ylabel('Number of People Given Prizes')\n BinSize = round(60/bins, 2)\n print(BinSize)\n\ninteract(plot_hist, bins=[2, 50, 1])",
"As you can see by varying the slider - changing the bin size really does change how the data looks! There is discussion on what is the appropiate bin size to use in the main notebook.\nPreselections\nWe now want to select our data. This is the same process as with filtering out economics prizes before but we'll go into more detail. This time lets filter out everything except Physics. We could do so by building a new dataset from the old one with loops and if statements, but the inbuilt pandas function .query() provides a quicker way. By passing a conditional statement, formatted into a string, we can create a new dataframe which is filled with only data that made the conditional statement true. A few examples are given below but only filtering out all but physics is used.",
"modernPhysics = \"(Category == 'physics' && Year > 2005)\" # Integer values don't go inside quotes\nphysicsOnly = \"(Category == 'physics')\"\n# apply the physicsOnly query\nphysicsOnlyDataFrame = data.query(physicsOnly)",
"Lets check the new DataFrames to see if this has worked!",
"physicsOnlyDataFrame.head(5)",
"Brilliant! You will find this technique useful to select kaons in the main analysis. Lets now plot the number of winners per year just for physics.",
"H_PhysicsWinnersPerYear = physicsOnlyDataFrame.Year.hist(bins=15, range=[1920,2010])\nxlabel('Year') #Plot an x label\nylabel('Number of Winners in Physics') #Plot a y label",
"We have now successfully plotted the histogram of just the physics prizes after applying our pre-selection.\nCalculations, Scatter Plots and 2D Histogram\nAdding New Data to a Data Frame\nYou will find this section useful for when it comes to creating a Dalitz plot in the particle physics analysis.\nWe want to see what ages people have been awarded Nobel prizes and measure the spread in the ages.\nThen we'll consider if over time people have been getting awarded Nobel prizes earlier or later in their life. \nFirst we'll need to calculate the age or the winners at the time the prize was awarded based on the Year and Birthdate columns. We create an AgeAwarded variable and add this to the data.",
"# Create new variable in the dataframe\nphysicsOnlyDataFrame['AgeAwarded'] = physicsOnlyDataFrame.Year - physicsOnlyDataFrame.BirthYear\nphysicsOnlyDataFrame.head(5)",
"Lets make a plot of the age of the winners at the time they were awarded the prize",
"# plot a histogram of the laureates ages\nH_AgeAwarded = physicsOnlyDataFrame.AgeAwarded.hist(bins=15)",
"Making Calculations\nLets calculate a measure of the spread in ages of the laureates. We will calculate the standard deviation of the distribution.",
"# count number of entries\nNumEntries = len(physicsOnlyDataFrame)\n# calculate square of ages\nphysicsOnlyDataFrame['AgeAwardedSquared'] = physicsOnlyDataFrame.AgeAwarded**2\n# calculate sum of square of ages, and sum of ages\nAgeSqSum = physicsOnlyDataFrame['AgeAwardedSquared'].sum()\nAgeSum = physicsOnlyDataFrame['AgeAwarded'].sum()\n# calculate std and print it\nstd = sqrt((AgeSqSum-(AgeSum**2/NumEntries)) / NumEntries)\nprint(std)",
"There is actually a function that would calculate the rms for you, but we wanted to teach you how to manipulate data to make calculations!",
"# calculate standard deviation (rms) of distribution\nprint(physicsOnlyDataFrame['AgeAwarded'].std())",
"Scatter Plot\nNow lets plot a scatter plot of Age vs Date awarded",
"scatter(physicsOnlyDataFrame['Year'], physicsOnlyDataFrame['AgeAwarded'])\nplt.xlim(1900, 2010) # change the x axis range\nplt.ylim(20, 100) # change the y axis range\nxlabel('Year Awarded')\nylabel('Age Awarded')",
"2D Histogram\nWe can also plot a 2D histogram and bin the results. The number of entries in the data set is relatively low so we will need to use reasonably large bins to have acceptable statistics in each bin. We have given you the ability to change the number of bins so you can see how the plot changes. Note that the number of total bins is the value of the slider squared. This is because the value of bins given in the hist2d function is the number of bins on one axis.",
"hist2d(physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded, bins=10)\ncolorbar() # Add a colour legend\nxlabel('Year Awarded')\nylabel('Age Awarded')",
"Alternatively you can use interact to add a slider to vary the number of bins",
"def plot_histogram(bins):\n hist2d(physicsOnlyDataFrame['Year'].values,physicsOnlyDataFrame['AgeAwarded'].values, bins=bins)\n colorbar() #Set a colour legend\n xlabel('Year Awarded')\n ylabel('Age Awarded')\n \ninteract(plot_histogram, bins=[1, 20, 1]) # Creates the slider",
"Playing with the slider will show you the effect of changing bthe in size in a 2D histogram. The darker bins in the top right corner show that there does appear to be a trend of Nobel prizes being won at an older age in more recent years.\nManipulating 2D histograms\nThis section is advanced and only required for the final section of the main analysis.\nAs the main analysis requires the calculation of an asymmetry, we now provide a contrived example of how to do this using the nobel prize dataset, we recommed only reading this section after reaching the \"Searching for local matter anti-matter differences\" section of the main analysis.\nFirst calculate the number of entries in each bin of the 2D histogram and store these values in physics_counts as a 2D array.\nxedges and yedges are 1D arrays containing the values of the bin edges along each axis.",
"physics_counts, xedges, yedges, Image = hist2d(\n physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded,\n bins=10, range=[(1900, 2010), (20, 100)]\n)\ncolorbar() # Add a colour legend\nxlabel('Year Awarded')\nylabel('Age Awarded')",
"Repeat the procedure used for physics to get the 2D histgram of age against year awarded for chemistry nobel prizes.",
"# Make the \"chemistryOnlyDataFrame\" dataset\nchemistryOnlyDataFrame = data.query(\"(Category == 'chemistry')\")\nchemistryOnlyDataFrame['AgeAwarded'] = chemistryOnlyDataFrame.Year - chemistryOnlyDataFrame.BirthYear\n\n# Plot the histogram\nchemistry_counts, xedges, yedges, Image = hist2d(\n chemistryOnlyDataFrame.Year, chemistryOnlyDataFrame.AgeAwarded,\n bins=10, range=[(1900, 2010), (20, 100)]\n)\ncolorbar() # Add a colour legend\nxlabel('Year Awarded')\nylabel('Age Awarded')",
"Subtract the chemistry_counts from the physics_counts and normalise by their sum. This is known as an asymmetry.",
"counts = (physics_counts - chemistry_counts) / (physics_counts + chemistry_counts)",
"Where there are no nobel prize winners for either subject counts will contain an error value (nan) as the number was divided by zero. Here we replace these error values with 0.",
"counts[np.isnan(counts)] = 0",
"Finally plot the asymmetry using the pcolor function. As positive and negative values each have a different meaning we use the seismic colormap, see here for a full list of all available colormaps.",
"pcolor(xedges, yedges, counts, cmap='seismic')\ncolorbar()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OpenWeavers/openanalysis | doc/OpenAnalysis/05 - Data Structures.ipynb | gpl-3.0 | [
"Data Structures\nData structures are a concrete implementation of the specification provided by one or more particular abstract data types (ADT), which specify the operations that can be performed on a data structure and the computational complexity of those operations.\nDifferent kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.\nUsually, efficient data structures are key to designing efficient algorithms.\nStandard import statement",
"from openanalysis.data_structures import DataStructureBase, DataStructureVisualization\nimport gi.repository.Gtk as gtk # for displaying GUI dialogs",
"DataStructureBase is the base class for implementing data structures\nDataStructureVisualization is the class that visualizes data structures in GUI\nDataStructureBase class\nAny data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class:\nData Members\n\nname - Name of the DS\nfile_path - Path to store output of DS operations\n\nMember Functions\n\n__init__(self, name, file_path) - Initializes DS with a name and a file_path to store the output\ninsert(self, item) - Inserts item into the DS\ndelete(Self, item) - Deletes item from the DS, <br/>            if item is not present in the DS, throws a ValueError \nfind(self, item) - Finds the item in the DS\n<br/>          returns True if found, else returns False<br/>          similar to __contains__(self, item)\nget_root(self) - Returns the root (for graph and tree DS)\nget_graph(self, rt) - Gets the dict representation between the parent and children (for graph and tree DS)\ndraw(self, nth=None) - Draws the output to visualize the operations performed on the DS<br/>             nth is used to pass an item to visualize a find operation\n\nDataStructureVisualization class\nThis class is used for visualizing data structures in a GUI (using GTK+ 3). Now we shall see data members and member functions of this class:\nData Members\n\nds - Any DS, which is an instance of DataStructureBase\n\nMember Functions\n\n__init__(self, ds) - Initializes ds with an instance of DS that is to be visualized\nrun(self) - Opens a GUI window to visualize the DS operations\n\nAn example ..... Binary Search Tree\nNow we shall implement the class BinarySearchTree",
"class BinarySearchTree(DataStructureBase): # Derived from DataStructureBase\n \n class Node: # Class for creating a node\n def __init__(self, data):\n self.left = None\n self.right = None\n self.data = data\n\n def __str__(self):\n return str(self.data)\n\n def __init__(self):\n DataStructureBase.__init__(self, \"Binary Search Tree\", \"t.png\") # Initializing with name and path\n self.root = None\n self.count = 0\n\n def get_root(self): # Returns root node of the tree\n return self.root\n\n def insert(self, item): # Inserts item into the tree\n newNode = BinarySearchTree.Node(item)\n insNode = self.root\n parent = None\n while insNode is not None:\n parent = insNode\n if insNode.data > newNode.data:\n insNode = insNode.left\n else:\n insNode = insNode.right\n if parent is None:\n self.root = newNode\n else:\n if parent.data > newNode.data:\n parent.left = newNode\n else:\n parent.right = newNode\n self.count += 1\n\n def find(self, item): # Finds if item is present in tree or not\n node = self.root\n while node is not None:\n if item < node.data:\n node = node.left\n elif item > node.data:\n node = node.right\n else:\n return True\n return False\n \n def min_value_node(self): # Returns the minimum value node\n current = self.root\n while current.left is not None:\n current = current.left\n return current\n\n def delete(self, item): # Deletes item from tree if present\n # else shows Value Error\n if item not in self:\n dialog = gtk.MessageDialog(None, 0, gtk.MessageType.ERROR,\n gtk.ButtonsType.CANCEL, \"Value not found ERROR\")\n dialog.format_secondary_text(\n \"Element not found in the %s\" % self.name)\n dialog.run()\n dialog.destroy()\n else:\n self.count -= 1\n if self.root.data == item and (self.root.left is None or self.root.right is None):\n if self.root.left is None and self.root.right is None:\n self.root = None\n elif self.root.data == item and self.root.left is None:\n self.root = self.root.right\n elif self.root.data == item and self.root.right is None:\n self.root = self.root.left\n return self.root\n if item < self.root.data:\n temp = self.root\n self.root = self.root.left\n temp.left = self.delete(item)\n self.root = temp\n elif item > self.root.data:\n temp = self.root\n self.root = self.root.right\n temp.right = self.delete(item)\n self.root = temp\n else:\n if self.root.left is None:\n return self.root.right\n elif self.root.right is None:\n return self.root.left\n temp = self.root\n self.root = self.root.right\n min_node = self.min_value_node()\n temp.data = min_node.data\n temp.right = self.delete(min_node.data)\n self.root = temp\n return self.root\n\n def get_graph(self, rt): # Populates self.graph with elements depending\n # upon the parent-children relation\n if rt is None:\n return\n self.graph[rt.data] = {}\n if rt.left is not None:\n self.graph[rt.data][rt.left.data] = {'child_status': 'left'}\n self.get_graph(rt.left)\n if rt.right is not None:\n self.graph[rt.data][rt.right.data] = {'child_status': 'right'}\n self.get_graph(rt.right)",
"Now, this program can be executed as follows:",
"DataStructureVisualization(BinarySearchTree).run()\n\nimport io\nimport base64\nfrom IPython.display import HTML\n\nvideo = io.open('../res/bst.mp4', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" width=\"500\" height=\"350\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))",
"Example File\nYou can see more examples at Github"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
neeasthana/ML-SQL | Clustering/Iris/Iris.ipynb | gpl-3.0 | [
"Iris dataset (Clustering)\nAuthors\nWritten by: Neeraj Asthana (under Professor Robert Brunner)\nUniversity of Illinois at Urbana-Champaign\nSummer 2016\nAcknowledgements\nDataset found on UCI Machine Learning repository at: https://archive.ics.uci.edu/ml/datasets/Iris\nDataset Information\nThis data set tries to cluster iris species using 4 different continous predcitors.\nA description of the dataset can be found at: https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names\nPredictors:\n\nsepal length in cm\nsepal width in cm\npetal length in cm\npetal width in cm\n\nImports",
"#Libraries and Imports\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.cluster import KMeans\nfrom sklearn import preprocessing",
"Reading Data",
"#Names of all of the columns\nnames = [\n 'sep_length'\n , 'sep_width'\n , 'petal_length'\n , 'petal_width'\n , 'species'\n]\n\n#Import dataset\ndata = pd.read_csv('iris.data', sep = ',', header = None, names = names)\n\ndata.head(10)\n\ndata.shape",
"Separate Data",
"#Select Predictor columns\nX = data.ix[:,:-1]\n\n#Scale X so that all columns have the same mean and variance\nX_scaled = preprocessing.scale(X)\n\n#Select target column\ny = data['species']\n\ny.value_counts()",
"Scatter Plot Matrix",
"# Visualize dataset with scatterplot matrix\n%matplotlib inline\n\ng = sns.PairGrid(data, hue=\"species\")\ng.map_diag(plt.hist)\ng.map_offdiag(plt.scatter)",
"K Means (3 clusters)",
"#train a k-nearest neighbor algorithm\nfit = KMeans(n_clusters=3).fit(X_scaled)\nfit.labels_\n\n#remake labels so that they properly matchup with the classes \nlabels = fit.labels_[:]\nfor index,val in enumerate(labels):\n if val == 1:\n labels[index] = 1\n elif val == 2:\n labels[index] = 3\n else:\n labels[index] = 2\n\nlabels\n\nconf_mat = np.zeros((3,3))\n\ntrue = np.array([0]*50 + [1]*50 + [2]*50)\n\nfor i,val in enumerate(true):\n conf_mat[val,labels[i]-1] += 1\n\n#true vs. predicted\nprint(pd.DataFrame(conf_mat))",
"Data Tasks\n\n\nRead in file\n\nDifferent types of separators (',',' ', '\\t', '\\s', etc.)\nSpecify whether there is a header or not\nName different columns\nEditting values to matchup with columns\n\n\n\nSelect columns for the regression tasks\n\nSelect columns I want to use as predictors\nSelect which column I am looking to target and predict\n\n\n\nTransform columns or variables\n\nscaling columns so that their means and variances are equal\n\n\n\nCluster using K-Means\n\nspecify number of clusters\nspecify initial cluster locations\nclustering type (avg, max, min, etc.)\n\n\n\nPerform diagnostics on the model\n\nSee cluster centers\nconfusion matrix\n\n\n\nEdit columns of labels to match up with species names\n\n\nVisualizations\n\nVisualize dataset as a whole (scatter plot matrix)\nSee diagnostic plots (t-squared, ccc)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bMzi/ML_in_Finance | 0207_kNN.ipynb | mit | [
"$k$ Nearest Neighbors\nIllustration of Nearest Neighbors\nArguably one of the simplest classification algorithm is $k$ nearest neighbors (KNN). Below figure portrays the algorithm's simplicity. We see a training set of five red and five blue dots, representing some label 1 and 0, respectively. The two axes represent two features, e.g. income and credit card balance. If we now add a new test data point $x_0$ (green dot), KNN will label this test data according to its $k$ closest neighbors. In below figure we set $k=3$. The three closest neighbors are: one blue dot and two red dots, resulting in estimated probabilities of 2/3 for the red class and 1/3 for the blue class. Hence KNN will predict that the new data point will belong to the red class. More so, based on the training set the algorithm is able to draw a decision boundary. (Note: In a classification settings with $j$ classes a decision boundary is a hyperplane that partitions the underlying vector space into $j$ sets, one for each class.). This is shown with the jagged line that separates background colors cyan (blue label) and light blue (red label). Given any possible pair of feature values, KNN labels the response along the drawn decision boundary. With $k=1$ the boundary line is very jagged. Increasing the number of $k$ will smoothen the decision boundary. This tells us that small values of $k$ will produce large variance but low bias, meaning that each new added training point might change the decision boundary line significantly but the decision boundary separates the training set (almost) correctly. As $k$ increases, variance decreases but bias increases. This is a manifestation of the Bias-Variance Trade-Off as discussed in the script (Fortmann-Roe (2012)) and highlights the importance of selecting an adequate value of $k$ - a topic we will pick up in a future chapter.\n<img src=\"Graphics/0207_KNN.png\" alt=\"LinearRegClassification\" style=\"width: 1000px;\"/>\nMathematical Description of KNN\nHaving introduced KNN illustratively, let us now define this in mathematical terms. Let our data set be ${(x_1, y_1), (x_2, y_2), \\ldots (x_n, y_n)}$ with $x_i \\in \\mathbb{R}^p$ and $y_i \\in {0, 1}\\; \\forall i \\in {1, 2, \\ldots, n}$. Based on the $k$ neighbors, KNN estimates the conditional probability for class $j$ as the fraction of points in $\\mathcal{N}(k, x_0)$ (the set of the $k$ closest neighbors of $x_0$) whose response values equals $j$ (Russell and Norvig (2009)):\n$$\\begin{equation}\n\\Pr(Y = j | X = x_0) = \\frac{1}{k} \\sum_{i \\in \\mathcal{N}(k, x_0)} \\mathbb{I}(y_i = j).\n\\end{equation}$$\nOnce the probability for each class $j$ is calculated, the KNN classifier predicts a class label $\\hat{y}_0$ for the new data point $x_0$ by maximizing the conditional probability (Batista and Silva (2009)).\n$$\\begin{equation}\n\\hat{y}0 = \\arg \\max{j} \\frac{1}{k} \\sum_{i \\in \\mathcal{N}(k,x_0)} \\mathbb{I}(y_i = j)\n\\end{equation}$$\nSelecting a new data point $x_0$'s nearest neighbors requires some notion of distance measure. Most researchers chose Minkowski's distance, which is often referred to as $L^m$ norm (Guggenbuehler (2015)). The distance between points $x_a$ and $x_b$ in $\\mathbb{R}^p$ is then defined as follows (Russell and Norvig (2009)):\n$$\\begin{equation}\nL^m (x_a, x_b) = \\left(\\sum_{i=1}^p |x_{a, i} - x_{b, i}|^m \\right)^{1/m}\n\\end{equation}$$\nUsing $m=2$, above equation simplifies to the well known Euclidean distance and $m=1$ yields the Manhattten distance. 
Python's sklearn package, short for scikit-learn, offers several other options that we will not discuss. \nBeyond the pure distance measure, it is also possible to weight training data points relative to their distance from a certain point. In the figure above, distance is weighted uniformly. Alternatively, one could weight points by the inverse of their distance (closer neighbors of a query point will have a greater influence than neighbors which are further away) or any other user-defined weighting function. For further details check sklearn's documentation.\nApplication: Predicting Share Price Movement\nLoading Data\nThe application of KNN is shown using simple stock market data. The idea is to predict a stock's movement based on simple features such as:\n* Lag1, Lag2: log returns of the two previous trading days \n* SMI: SMI log return of the previous day\nThe response is a binary variable: if a stock closed above the previous day's closing price it equals 1, and 0 if it fell. We start by loading the necessary packages and stock data from a csv - a procedure we are well acquainted with by now. Thus comments are held short.",
"%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nplt.rcParams['font.size'] = 14\n\n# Import daily shares prices and select window\nshsPr = pd.read_csv('Data/SMIDataDaily.csv', sep=',',\n parse_dates=['Date'], dayfirst=True,\n index_col=['Date'])\nshsPr = shsPr['2012-06-01':'2017-06-30']\nshsPr = shsPr.sort_index()\nshsPr.tail()",
"Having the data in a proper dataframe, we are now in a position to create the features and response values.",
"# Calculate log-returns and label responses: \n# 'direction' equals 1 if stock closed above \n# previous day and 0 if it fell.\ntoday = np.log(shsPr / shsPr.shift(1))\ndirection = np.where(today >= 0, 1, 0)\n\n# Convert 'direction' to dataframe\ndirection = pd.DataFrame(direction, index=today.index, columns=today.columns)\n\n# Lag1, 2: t-1 and t-2 returns; excl. smi (in last column)\nLag1 = np.log(shsPr.iloc[:, :-1].shift(1) / shsPr.iloc[:, :-1].shift(2))\nLag2 = np.log(shsPr.iloc[:, :-1].shift(2) / shsPr.iloc[:, :-1].shift(3))\n\n# Previous day return for SMI index\nsmi = np.log(shsPr.iloc[:, -1].shift(1) / shsPr.iloc[:, -1].shift(2))",
"KNN Algorithm Applied\nNow comes the difficult part. What we want to achieve is to run the KNN algorithm for every stock and for different hyperparameter $k$ and see how it performs. For this we do the following steps:\n\nCreate a feature matrix X containing Lag1, Lag2 and SMI data for share i\nCreate a response vector y with binary direction values\nSplit data to training (before 2016-06-30) and test set (after 2016-06-30)\nRun KNN for different values of $k$ (loop)\nWrite test score for given $k$ to matrix scr\nOnce we have run through all $k$'s we proceed with step 1. with share i+1\n\nThis means we need two loops. The first corresponds to the share (e.g. ABB, Adecco, etc.), the second runs the KNN algorithm for different values of $k$. \nThe reason for this approach is that we are interested in finding any pattern/structure that would provide a successful trading strategy. There is obviously no free lunch. Predicting share price direction is by no means an easy task and we must be well aware that we are in for a difficult job here. If it were simple, neither one of us would be sitting here but run his own fund. But nonetheless, let us see how KNN performs and how homogeneous (or heterogeneous) the results are.\nAs usual our first step is to prepare the ground by loading the necessary package and defining some auxiliary variables. The KNN function we will be using is available through the sklearn (short for Scikit-learn) package. We only load the neighbor sublibrary which contains the needed KNN function called KNeigborsClassifier(). KNN is applied with the default distance metric: Euclidean distance (Minkowski's distance with $m=2$). If we would prefer another distance metric we would have to specify it (see documentation).",
"# Import relevant functions\nfrom sklearn import neighbors\n\n# k = {1, 3, ..., 200}\nk = np.arange(1, 200, 2)\n\n# Array to store results in. Dimension is [k x m] \n# with m=20 for the 20 companies (excl. SMI)\nscr = np.empty(shape=(len(k), len(shsPr.columns)-1))\n\nfor i in range(len(shsPr.columns)-1):\n \n # 1) Create matrix with feature values of stock i\n X = pd.concat([Lag1.iloc[:, i], Lag2.iloc[:, i], smi], axis=1)\n X = X[3:] # Drop first three rows with NaN (due to lag)\n \n # 2) Remove first three rows of response dataframe\n # to have equal no. of rows for features and response\n y = direction.iloc[:, i]\n y = y[3:]\n \n # 3) Split data into training set...\n X_train = X[:'2016-06-30']\n y_train = y[:'2016-06-30']\n # ...and test set.\n X_test = X['2016-07-01':]\n y_test = y['2016-07-01':]\n \n # Convert responses to 1xN array (with .ravel() function)\n y_train = y_train.values.ravel()\n y_test = y_test.values.ravel()\n \n for j in range(len(k)):\n \n # 4) Run KNN\n # Instantiate KNN class\n knn = neighbors.KNeighborsClassifier(n_neighbors=k[j])\n # Fit KNN classifier using training set\n knn = knn.fit(X_train, y_train)\n \n # 5) Extract test score for k[j]\n scr[j, i] = knn.score(X_test, y_test)\n\n# Convert data to pandas dataframe\ntickers = shsPr.columns\nscr = pd.DataFrame(scr, index=k, columns=tickers[:-1])\n\nscr.head()",
"Results & Analysis\nNow let's see the results in an overview.",
"scr.describe()\n\nscr.max().nlargest(5)",
"Following finance theory, returns should be distributed symmetrically. Thus the simplest guess would be to expect a share price to increase on 50% of the days and to decrease on the remaining 50%. Similar to guessing a coin flip, if we would guess an 'up' movement for every day, we obviously would - in the long run - be correct 50% of the times. This would make for a score of 50%. \nLooking in that light at the above summary, we see some very interesting results. For 10 out of 20 stocks KNN produces test scores of > 50% for even the 0.25th percentile. Let's plot the ones with the highest test-scores (ABBN, ZURN, NOVN, SIK) to see at what value of $k$ the best test-score is achieved.",
"nms = ['ABBN', 'ZURN', 'NOVN', 'SIK']\n\nplt.figure(figsize=(12, 8))\nfor col in nms:\n scr[col].plot(legend=True)\nplt.axhline(0.50, c='k', ls='--');",
"For Zurich the peak is early (max. score around $k=60$) while the others peak later, i.e. with higher values of $k$. Furthermore, it seems interesting that for $k > 40$ test scores remained (barely) above 50%. If this is indeed a pattern we would have found a trading strategy, wouldn't we? \nTo further assess our results we look into KNN's prediction of Givaudan's stock movements. For this we rerun our KNN classifier algorithm for ABBN as before.",
"# 1) Create matrix with feature values of stock i\nX = pd.concat([Lag1['ABBN'], Lag2['ABBN'], smi], axis=1)\nX = X[3:] # Drop first three rows with NaN (due to lag)\n\n# 2) Remove first three rows of response dataframe\n# to have equal no. of rows for features and response\ny = direction['ABBN']\ny = y[3:]\n\n# 3) Split data into training set...\nX_train = X[:'2016-06-30']\ny_train = y[:'2016-06-30']\n# ...and test set.\nX_test = X['2016-07-01':]\ny_test = y['2016-07-01':]\n\n# Convert responses to 1xN array (with .ravel() function)\ny_train = y_train.values.ravel()\ny_test = y_test.values.ravel()",
"For GIVN the maximum score is reached where $k=145$. You can check this with the scr['ABBN'].idxmax() command, which provides the index of the maximum value of the selected column. In our case, the index is equivalent to the value of $k$. Thus we run KNN with $k=145$.",
"scr['ABBN'].idxmax()\n\n# 4) Run KNN\n# Instantiate KNN class for GIVN with k=145\nknn = neighbors.KNeighborsClassifier(n_neighbors=145)\n# Fit KNN classifier using training set\nknn = knn.fit(X_train, y_train)\n\n# 5) Extract test score for ABB\nscr_ABBN = knn.score(X_test, y_test)\nscr_ABBN",
"The score of 59.68% is the very same as what we have seen in above summary statistic. Nothing new so far. (Recall that the score is the total of correctly predicted outcomes.)\nHowever, the alert reader should by now raise some questions regarding our assumption that 50% of the returns should have been positive. In the long run, this might be true. But our training sample contained only 1'017 records and of these 534 were positive.",
"# Percentage of 'up' days in training set\ny_train.sum() / y_train.size",
"Therefore, if we would guess 'up' for every day of our test set and given the distribution of classes in the test set is exactly as in our training set, then we would predict the correct movement in 52.51% of the cases. So in that light, the predictive power of our KNN algorithm has to be put in perspective to the 52.51%.\nIn summary, our KNN algorithm has a score of 59.68%. Our best guess (based on the training set) would yield a score of 52.51%. This still displays that overall our KNN algorithm outperforms our best guess. Nonetheless, the margin is smaller than initially thought. \nConfusion Matrix\nThere are more tools to assess the accuracy of an algorithm. We postpone the discussion of these tools to a later chapter and at this stage restrict ourselves to the discussion of a tool called \"confusion matrix\". \nA confusion matrix is a convenient way of displaying how our classifier performs. In binary classification (with e.g. response $y \\in {0, 1}$) there are four prediction categories possible (Ting (2011)):\n\nTrue positive: True response value is 1, predicted value is 1 (\"hit\")\nTrue negative: True response value is 0, predicted value is 0 (\"correct rejection\")\nFalse positive: True response value is 0, predicted value is 1 (\"False alarm\", Type 1 error)\nFalse negative: True response value is 1, predicted value is 0 (\"Miss\", Type 2 error)\n\nThese information help us to understand how our (KNN) algorithm performed. There are different two ways of arranging confusion matrix. James et al. (2013) follow the convention that column labels indicate the true class label and rows the predicted response class. Others have it transposed such that column labels indicate predicted classes and row labels show true values. We will use the latter approach as it is more common.\n<img src=\"Graphics/0207_ConfusionMatrixExplained.png\" alt=\"ConfusionMatrixExplained\" style=\"width: 800px;\"/>\nTo run this in Python, we first predict the response value for each data entry in our test matrix X_test. Then we arrange the data in a suitable manner.",
"# Predict 'up' (=1) or 'down' (=0) for test set\npred = knn.predict(X_test)\n\n# Store data in DataFrame\ncfm = pd.DataFrame({'True direction': y_test,\n 'Predicted direction': pred})\ncfm.replace(to_replace={0:'Down', 1:'Up'}, inplace=True)\n\n# Arrange data to confusion matrix\nprint(cfm.groupby(['Predicted direction','True direction']) \\\n .size().unstack('Predicted direction'))",
"As mentioned before, rows represent the true outcome and columns show what class KNN predicted. In 31 cases, the test set's true response was 'down' (in our case represented by 0) and KNN correctly predicted 'down'. 120 times KNN was correct in predicting an 'up' (=1) movement. 19 returns in the test set were positive but KNN predicted a negative return. And in 83 out of 253 cases KNN predicted an 'up' movement whereas in reality the stock price decreased. The KNN score of 59.68% for ABB is the sum of true positive and negative (31 + 120) in relation to the total number of predictions (253 = 31 + 19 + 83 + 120). The error rate is 1 - score or (19 + 83)/253.\nClass-specific performance is also helpful to better understand results. The related terms are sensitivity and specifity. In the above case, sensitivity is the percentage of true 'up' movements that are identified. A good 86.3% (= 120 / (19 + 120)). The specifity is the percentage of 'down' movements that are correctly identified, here a poor 27.2% (= 31 / (31 + 83)). More on this in the next chapter.\nBecause confusion matrices are important to analyze results, Scikit-learn has its own command to generate it. It is part of the metrics sublibrary. The difficulty is that in contrast to above (manually generated) table, the function's output provides no labels. Therefore one must be sure to know which value are where. Here's the code to generate the confusion matrix.",
"from sklearn.metrics import confusion_matrix\n\n# Confusion matrix\nconfusion_matrix(y_test, pred)",
"In general it is tremendously helpful to visualize results. At the time of writing in February 2022, Anaconda shipped with Sklearn version 0.24.2. This Sklearn version uses a confusion matrix-plotting function that is deprecated in versioun 1.0. Therefore, be aware that below plotting function only works for versions < 1.0. Details see here.",
"from sklearn.metrics import plot_confusion_matrix\n\n# To plot the normalized figures, add normalize = 'true', 'pred', or 'all'\nplot_confusion_matrix(estimator = knn, \n X = X_test, y_true = y_test,\n display_labels = ['Down', 'Up'],\n values_format = '.0f');\n\n# To plot the normalized figures, add normalize = 'true', 'pred', or 'all'\n# This time with different color scheme\nplot_confusion_matrix(estimator = knn, \n X = X_test, y_true = y_test,\n cmap = plt.cm.Blues,\n normalize = 'all',\n display_labels = ['Down', 'Up'],\n values_format = '.2f');",
"Further Ressources\nIn writing this notebook, many ressources were consulted. For internet ressources the links are provided within the textflow above and will therefore not be listed again. Beyond these links, the following ressources were consulted and are recommended as further reading on the discussed topics:\n\nBatista, Gustavo, and Diego Furtado Silva, 2009, How k-nearest neighbor parameters affect its performance, in Argentine Symposium on Artificial Intelligence, 1–12, sn.\nFortmann-Roe, Scott, 2012, Understanding the Bias-Variance Tradeoff from website, http://scott.fortmann-roe.com/docs/BiasVariance.html, 08/15/17.\nGuggenbuehler, Jan P., 2015, Predicting net new money using machine learning algorithms and newspaper articles, Technical report, University of Zurich, Zurich.\nJames, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, An Introduction to Statistical Learning: With Applications in R (Springer Science & Business Media, New York, NY).\nMüller, Andreas C., and Sarah Guido, 2017, Introduction to Machine Learning with Python (O’Reilly Media, Sebastopol, CA).\nRussell, Stuart, and Peter Norvig, 2009, Artificial Intelligence: A Modern Approach (Prentice Hall Press, Upper Saddle River, NJ).\nTing, Kai Ming, 2011, Confusion matrix, in Claude Sammut, and Geoffrey I. Webb, eds., Encyclopedia of Machine Learning (Springer Science & Business Media, New York, NY).",
"import sklearn\nsklearn.show_versions()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/niwa/cmip6/models/sandbox-3/ocnbgchem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NIWA\nSource ID: SANDBOX-3\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:30\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
starbro/BeastMode | IMDB_reviews.ipynb | apache-2.0 | [
"CS109: IMDB review data set\nWe gathered data from Andrew L. Maas from Standford University, based on IMDB movies review (describe this more). The data was automatically split into test and train sets, with each set containing polarized movie reviews (each review was a text file) in subdirectories. Since the creators of the original data set were not interested in predicted Box Office Scores, they didn't bother to save the names of the movies, only the URLs that the reviews were scraped from on IMDB.com. Thus we had to go back into all those URLs and scrape the movie names from the top of the pages.",
"%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport time\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\nfrom bs4 import BeautifulSoup\nimport requests\nimport csv\nimport os\nimport random\nimport sys\nimport json\nsys.path.insert(0, '/aclImdb/')",
"Below we write a function to scrape an IMDB url and return a movie name.",
"# function to get name of movie from each URL\ndef get_movie(url):\n '''\n Scrapes a given URL from IMDB.com. The URL's page contains many reviews for one particular movie.\n This function returns the name of that movie. \n '''\n pageText = requests.get(url)\n # Keep asking for the page until you get it. Sleep if necessary.\n while (pageText==None):\n time.sleep(5)\n pageText = requests.get(url)\n soup = BeautifulSoup(pageText.text,\"html.parser\")\n # Some of our URL's are expired! Return None if so.\n if soup == None or soup.find(\"div\",attrs={\"id\":\"tn15title\"}) == None:\n return None\n return soup.find(\"div\",attrs={\"id\":\"tn15title\"}).find(\"a\").get_text()",
"Now let's get the list of URLs for each of our data sets: both positive and negative for train and test.",
"# get all urls for train and test, neg and pos\nwith open('aclImdb/train/urls_pos.txt','r') as f:\n train_pos_urls = f.readlines()\n \nwith open('aclImdb/train/urls_neg.txt','r') as f:\n train_neg_urls = f.readlines()\n\nwith open('aclImdb/test/urls_pos.txt','r') as f:\n test_pos_urls = f.readlines()\n \nwith open('aclImdb/test/urls_neg.txt','r') as f:\n test_neg_urls = f.readlines()",
"Let's see how long each list is.",
"print len(train_pos_urls), len(train_neg_urls), len(test_pos_urls), len(test_neg_urls)",
"There are 12500 reviews in each sub data set. Each review has a corresponding URL. However, the URL lists have duplicates, as two reviews can be for the same movie and thus be found on the same IMDB webpage.\nWe would like to save the URLs and their associated movies into a dictionary for later use. This way we can do all the scraping up front. Let's define a function which does this scraping for a given set of URLs.",
"def make_url_dict(url_list):\n '''\n Input: List of URLs.\n Output: Dictionary of URL: movie based on scraped movie title.\n '''\n url_dict = dict(zip(url_list, [None]*len(url_list)))\n index = 0\n for url in url_list:\n if url_dict[url] == None:\n url_dict[url] = get_movie(url)\n # Every once in awhile, let us know how many URLs we have digested out of 12,500 total.\n if random.random() < 0.001:\n print index\n index += 1\n time.sleep(0.001)\n",
"Let's make a dictionary of stored movie names for each subdata set, saving into a JSON file so we only have to do this once.",
"%time\ntrain_pos_dict = make_url_dict(train_pos_urls)\nfp = open(\"url_movie_train_pos.json\",\"w\")\njson.dump(train_pos_dict, fp)\nfp.close()",
"If we did this right for training positives, the length of the dictionary keys should be equal to the number of unique URLs in its URL list.",
"print len(train_pos_dict.keys()), len(list(set(list(train_pos_urls))))\n\n%time\ntrain_neg_dict = make_url_dict(train_neg_urls)\nfp = open(\"url_movie_train_neg.json\",\"w\")\njson.dump(train_neg_dict, fp)\nfp.close()\n\n%time\ntest_pos_dict = make_url_dict(test_pos_urls)\nfp = open(\"url_movie_test_pos.json\",\"w\")\njson.dump(test_pos_dict, fp)\nfp.close()\n\n%time\ntest_neg_dict = make_url_dict(test_neg_urls)\nfp = open(\"url_movie_test_neg.json\",\"w\")\njson.dump(test_neg_dict, fp)\nfp.close()\n\n# Reload\nwith open(\"url_movie_tr_pos.json\", \"r\") as fd:\n train_pos_dict = json.load(fd)\nwith open(\"url_movie_train_neg.json\", \"r\") as fd:\n train_neg_dict = json.load(fd)\nwith open(\"url_movie_test_pos.json\", \"r\") as fd:\n test_pos_dict = json.load(fd)\nwith open(\"url_movie_test_neg.json\", \"r\") as fd:\n test_neg_dict = json.load(fd)",
"Now that we have saved movie names associated with each URL, we can finally create our data table of reviews. We will define a function data_collect which iterates over our directories, making a pandas dataframe out of all the reviews in a particular category (e.g. Test Set, Positive Reviews).",
"def data_collect(directory, pos, url_dict, url_list):\n '''\n Inputs: \n directory: Directory to collect reviews from. ex) 'aclImdb/train/pos/'\n Pos: True or False, depending on whether the reviews are labelled positive or not.\n url_dict: the relevant URL-Movie dictionary (created above) for the particular category\n url_list: the list of URLs for that particular category\n '''\n # Column names for the data frame\n review_df = pd.DataFrame(columns=['movie_id', 'stars', 'positive', 'text', 'url', 'movie_name'])\n # Crawl over the directory, attaining relevant data for each of the .txt review files.\n train_pos_names = list(os.walk(directory))[0][2]\n for review in train_pos_names:\n # Andrew L. Maas's stanford group encoded the reviewID and number of stars for a review in the file's name.\n # For example, \"0_10.txt\" means reviewID 0 received 10 stars. The reviews are in the same order as the URLs,\n # so the reviewID is precisely the location of that movie's URL in the respective URL list.\n stars = int(review.split(\"_\")[1].split(\".\")[0])\n movieID = int(review.split(\"_\")[0]) #everything before the underscore\n fp = open('%(dir)s%(review)s' % {'dir': directory, 'review': review}, 'r')\n text = fp.read()\n url = url_list[movieID]\n movie_name = url_dict[url]\n reviewDict = {'movie_id': [movieID], 'stars': [stars], 'positive': [pos], 'text': [text], 'url': [url], 'movie_name': [movie_name]}\n review_df = review_df.append(pd.DataFrame(reviewDict))\n return review_df",
"Data Collection\nNow we are ready to collect all our data. Let's first collect the training data into a DataFrame.",
"# First get the positive reviews for the train_df.\ntrain_df = data_collect('aclImdb/train/pos/', True, train_pos_dict, train_pos_urls)\n# Then append the negative reviews\ntrain_df = train_df.append(data_collect('aclImdb/train/neg/', False, train_neg_dict, train_neg_urls))",
"Now we'll create a testing data frame.",
"# First get the positive reviews for the train_df.\ntest_df = data_collect('aclImdb/test/pos/', True, test_pos_dict, test_pos_urls)\n# Then append the negative reviews\ntest_df = test_df.append(data_collect('aclImdb/test/neg/', False, test_neg_dict, test_neg_urls))",
"Let's create a dictionary out of each dataframe so that we can save each in JSON format.",
"train_df_dict = {feature: train_df[feature].values.tolist() for feature in train_df.columns.values}\ntest_df_dict = {feature: test_df[feature].values.tolist() for feature in test_df.columns.values}\n# Train\nfp = open(\"train_df_dict.json\",\"w\")\njson.dump(train_df_dict, fp)\nfp.close()\n# Test\nfp = open(\"test_df_dict.json\",\"w\")\njson.dump(test_df_dict, fp)\nfp.close()",
"Let's reopen.",
"with open(\"train_df_dict.json\", \"r\") as fd:\n train_df_dict = json.load(fd)\nwith open(\"test_df_dict.json\", \"r\") as fd:\n test_df_dict = json.load(fd)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
HeyIamJames/bikeshare | BikeShareStep2.ipynb | mit | [
"Compute the average temperature by season ('season_desc'). (The temperatures are numbers between 0 and 1, but don't worry about that. Let's say that's the Shellman temperature scale.)",
"import pandas as pd\nimport numpy as np\nfrom pandas import Series, DataFrame\n\n\nweather = pd.read_table('daily_weather.tsv')\n\n\nweather.groupby('season_desc').agg({'temp': np.mean})\n\n\nfix = weather.replace(\"Fall\", \"Summer_\").replace(\"Summer\", \"Spring_\").replace(\"Winter\", \"Fall_\").replace(\"Spring\", \"Winter_\")\n\nweather.groupby('season_desc').agg({'temp': np.mean})",
"Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.",
"weather['months'] = pd.DatetimeIndex(weather.date).month\n\n\nweather.groupby('months').agg({'total_riders': np.sum})\n",
"weather[['total_riders', 'temp']].corr()\n3.Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?",
"weather[['total_riders', 'temp', 'months']].groupby('months').corr()\n",
"weather[['total_riders', 'temp', 'season_desc']].groupby('season_desc').corr()",
"weather[['no_casual_riders', 'no_reg_riders', 'temp']].corr()\n",
"4.There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?",
"weather[['no_casual_riders', 'no_reg_riders']].corr()\n\n\nweather[['is_holiday', 'total_riders']].sum()\n\n\nweather[['is_holiday', 'total_riders']].corr()",
"Part 2",
"import matplotlib.pyplot as plt\n\n\n%matplotlib inline",
"Plot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month.",
"plt.plot(weather['months'], weather['temp'])\nplt.xlabel(\"This is just an x-axis\")\nplt.ylabel(\"This is just a y-axis\")\nplt.show()\n\nx = weather.groupby('months').agg({\"humidity\":np.mean})\n\nplt.bar([n for n in range(1, 13)], x['humidity'])\nplt.title(\"weather and humidity by months\")\nplt.show()",
"Use a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season.",
"xs = range(10)\nplt.scatter(xs, 5 * np.random.rand(10) + xs, color='r', marker='*', label='series1')\nplt.scatter(xs, 5 * np.random.rand(10) + xs, color='g', marker='o', label='series2')\nplt.title(\"A scatterplot with two series\")\nplt.legend(loc=9)\nplt.show()\n\nw = weather[['season_desc', 'temp', 'total_riders']]\nfall = w.loc[w['season_desc'] == 'Fall']\nwinter = w.loc[w['season_desc'] == 'Winter']\nspring = w.loc[w['season_desc'] == 'Spring']\nsummer = w.loc[w['season_desc'] == 'Summer']\n\nplt.scatter(fall['temp'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)\nplt.scatter(winter['temp'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)\nplt.scatter(spring['temp'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)\nplt.scatter(summer['temp'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)\n\nplt.legend(loc='lower right')\nplt.xlabel('temperature')\nplt.ylabel('rental volume')\nplt.show()",
"Create another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season.",
"w = weather[['season_desc', 'windspeed', 'total_riders']]\nfall = w.loc[w['season_desc'] == 'Fall']\nwinter = w.loc[w['season_desc'] == 'Winter']\nspring = w.loc[w['season_desc'] == 'Spring']\nsummer = w.loc[w['season_desc'] == 'Summer']\n\nplt.scatter(fall['windspeed'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)\nplt.scatter(winter['windspeed'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)\nplt.scatter(spring['windspeed'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)\nplt.scatter(summer['windspeed'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)\n\nplt.legend(loc='lower right')\nplt.xlabel('windspeed x1000 mph')\nplt.ylabel('rental volume')",
"How do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude.",
"usage = pd.read_table('usage_2012.tsv')\n\nstations = pd.read_table('stations.tsv')\n\nstations.head()\n\nc = DataFrame(counts.index, columns=['station'])\nc['counts'] = counts.values\ns = stations[['station','lat','long']]\nu = pd.concat([usage['station_start']], axis=1, keys=['station'])\ncounts = u['station'].value_counts()\nm = pd.merge(s, c, on='station')\n \n\nplt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.2)\n\nplt.legend(loc='lower right')\nplt.xlabel('longitude')\nplt.ylabel('latitude')\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jasonding1354/pyDataScienceToolkits_Base | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | mit | [
"内容概要\n\n训练集/测试集分割用于模型验证的缺点\nK折交叉验证是如何克服之前的不足\n交叉验证如何用于选择调节参数、选择模型、选择特征\n改善交叉验证\n\n1. 模型验证回顾\n进行模型验证的一个重要目的是要选出一个最合适的模型,对于监督学习而言,我们希望模型对于未知数据的泛化能力强,所以就需要模型验证这一过程来体现不同的模型对于未知数据的表现效果。\n最先我们用训练准确度(用全部数据进行训练和测试)来衡量模型的表现,这种方法会导致模型过拟合;为了解决这一问题,我们将所有数据分成训练集和测试集两部分,我们用训练集进行模型训练,得到的模型再用测试集来衡量模型的预测表现能力,这种度量方式叫测试准确度,这种方式可以有效避免过拟合。\n测试准确度的一个缺点是其样本准确度是一个高方差估计(high variance estimate),所以该样本准确度会依赖不同的测试集,其表现效果不尽相同。\n高方差估计的例子\n下面我们使用iris数据来说明利用测试准确度来衡量模型表现的方差很高。",
"from sklearn.datasets import load_iris\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import metrics\n\n# read in the iris data\niris = load_iris()\n\nX = iris.data\ny = iris.target\n\nfor i in xrange(1,5):\n print \"random_state is \", i,\", and accuracy score is:\"\n X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=i)\n\n knn = KNeighborsClassifier(n_neighbors=5)\n knn.fit(X_train, y_train)\n y_pred = knn.predict(X_test)\n print metrics.accuracy_score(y_test, y_pred)",
"上面的测试准确率可以看出,不同的训练集、测试集分割的方法导致其准确率不同,而交叉验证的基本思想是:将数据集进行一系列分割,生成一组不同的训练测试集,然后分别训练模型并计算测试准确率,最后对结果进行平均处理。这样来有效降低测试准确率的差异。\n2. K折交叉验证\n\n将数据集平均分割成K个等份\n使用1份数据作为测试数据,其余作为训练数据\n计算测试准确率\n使用不同的测试集,重复2、3步骤\n对测试准确率做平均,作为对未知数据预测准确率的估计",
"# 下面代码演示了K-fold交叉验证是如何进行数据分割的\n# simulate splitting a dataset of 25 observations into 5 folds\nfrom sklearn.cross_validation import KFold\nkf = KFold(25, n_folds=5, shuffle=False)\n\n# print the contents of each training and testing set\nprint '{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations')\nfor iteration, data in enumerate(kf, start=1):\n print '{:^9} {} {:^25}'.format(iteration, data[0], data[1])",
"3. 使用交叉验证的建议\n\nK=10是一个一般的建议\n如果对于分类问题,应该使用分层抽样(stratified sampling)来生成数据,保证正负例的比例在训练集和测试集中的比例相同\n\n4. 交叉验证的例子\n4.1 用于调节参数\n交叉验证的方法可以帮助我们进行调参,最终得到一组最佳的模型参数。下面的例子我们依然使用iris数据和KNN模型,通过调节参数,得到一组最佳的参数使得测试数据的准确率和泛化能力最佳。",
"from sklearn.cross_validation import cross_val_score\n\nknn = KNeighborsClassifier(n_neighbors=5)\n# 这里的cross_val_score将交叉验证的整个过程连接起来,不用再进行手动的分割数据\n# cv参数用于规定将原始数据分成多少份\nscores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')\nprint scores\n\n# use average accuracy as an estimate of out-of-sample accuracy\n# 对十次迭代计算平均的测试准确率\nprint scores.mean()\n\n# search for an optimal value of K for KNN model\nk_range = range(1,31)\nk_scores = []\nfor k in k_range:\n knn = KNeighborsClassifier(n_neighbors=k)\n scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')\n k_scores.append(scores.mean())\n\nprint k_scores\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(k_range, k_scores)\nplt.xlabel(\"Value of K for KNN\")\nplt.ylabel(\"Cross validated accuracy\")",
"上面的例子显示了偏置-方差的折中,K较小的情况时偏置较低,方差较高;K较高的情况时,偏置较高,方差较低;最佳的模型参数取在中间位置,该情况下,使得偏置和方差得以平衡,模型针对于非样本数据的泛化能力是最佳的。\n4.2 用于模型选择\n交叉验证也可以帮助我们进行模型选择,以下是一组例子,分别使用iris数据,KNN和logistic回归模型进行模型的比较和选择。",
"# 10-fold cross-validation with the best KNN model\nknn = KNeighborsClassifier(n_neighbors=20)\nprint cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean()\n\n# 10-fold cross-validation with logistic regression\nfrom sklearn.linear_model import LogisticRegression\nlogreg = LogisticRegression()\nprint cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()",
"4.3 用于特征选择\n下面我们使用advertising数据,通过交叉验证来进行特征的选择,对比不同的特征组合对于模型的预测效果。",
"import pandas as pd\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\n\n# read in the advertising dataset\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\n\n# create a Python list of three feature names\nfeature_cols = ['TV', 'Radio', 'Newspaper']\n\n# use the list to select a subset of the DataFrame (X)\nX = data[feature_cols]\n\n# select the Sales column as the response (y)\ny = data.Sales\n\n# 10-fold cv with all features\nlm = LinearRegression()\nscores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')\nprint scores",
"这里要注意的是,上面的scores都是负数,为什么均方误差会出现负数的情况呢?因为这里的mean_squared_error是一种损失函数,优化的目标的使其最小化,而分类准确率是一种奖励函数,优化的目标是使其最大化。",
"# fix the sign of MSE scores\nmse_scores = -scores\nprint mse_scores\n\n# convert from MSE to RMSE\nrmse_scores = np.sqrt(mse_scores)\nprint rmse_scores\n\n# calculate the average RMSE\nprint rmse_scores.mean()\n\n# 10-fold cross-validation with two features (excluding Newspaper)\nfeature_cols = ['TV', 'Radio']\nX = data[feature_cols]\nprint np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean()",
"由于不加入Newspaper这一个特征得到的分数较小(1.68 < 1.69),所以,使用所有特征得到的模型是一个更好的模型。\n参考资料\n\nscikit-learn documentation: Cross-validation, Model evaluation\nscikit-learn issue on GitHub: MSE is negative when returned by cross_val_score\nScott Fortmann-Roe: Accurately Measuring Model Prediction Error\nHarvard CS109: Cross-Validation: The Right and Wrong Way\nJournal of Cheminformatics: Cross-validation pitfalls when selecting and assessing regression and classification models"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dlegor/Tutorial-Pandas-Python | Code/Capítulo_1-Carga_Pandas.ipynb | cc0-1.0 | [
"Cargando datos en Pandas\nObjetivo\nLa finalidad de este capítulo es mostrar como cargar datos desde un archivo tipo csv, pero Pandas soporta más tipo de archivos. Debido a que el módulo tiene estructudas de datos que facilitan la manipulación de los datos al leerse desde un archivo, entonces explico un poco sobre el módulo Numpy y las Series y DataFrame que son estruturas de datos en Pandas.\nAlgunas comparaciones con R\nLa primera tarea que yo analizar datos es evidentemente cargar los datos que proviene de alguna fuente( base de datos) o archivo. Considerando la semejanza de Pandas con el uso de R, para mi las primeras tareas después de cargar los datos es revisar propiedades como tamaño o dim(), leer los primeros o los últimos registros head() o tail(), explorar el tipo de variables que tienen la muestra de datos struct() o explorar la existencia de registros nulos o NA, ver el resumen de los estadísticos básicos de las variables. Desde mi punto de vista, conociendo esa primera información uno puede iniciar un análisis exploratorio mucho más robusto.\nEl módulo Pandas tiene estructuras de datos para procesar la información con nombre similar a las de R, son los DataFrame. Las librerías de R que permiten operar los datos del mismo modo que Pandas son ddply y reshape2, el resto de operaciones para manipular DataFrame tienen un equivalente en las dos tecnologías.\nAlgunas cosas previas a cargar datos.\nDebido a que al cargar los datos desde alguna archivo o fuente, ya sea en formato csv, excel, xml, json o desde algúna base Mysql, etc. Serán cargados en Ipython como Data.Frame o Series, pero estas estructuras en Pandas tienen como base el manejo de matrices con Numpy. Por eso comento algunas cosas sobre Numpy.\nEn resumen, creo que es bueno antes de cargar datos, saber algo sobre las estructuras de datos en Pandas y a su vez creo importante saber un poco sobre lo que permite la creación de las estructuras en Pandas, que es el módulo Numpy.\nNumpy\nEn breve, este módulo permite el tratamiento de matrices y arrays, cuenta con funciones estándar de matemáticas, algunas funciones estadísticas básicas, permite la generación de números aleatorios, operaciones de álgebra lineal y análisis de Fourier, entre otras operaciones y funciones. Quizás esto no suena muy interesante, pero el módulo tiene la calidad suficiente para hacer cálculos no triviales y a buena velocidad, con respecto a C o C++.\nSiempre los ejemplos que se hacen de Numpy explican cosas como: la contrucción de matrices y arrays, operaciones entre matrices y arrays, como usar la vectorización de las matrices, selección de filas y columnas, copia y eliminación de entradas, etc. En mi caso pienso que aprendo mejor con ejemplos de como usar algunas de las funciones o herramientas del módulo, que solo leyendo las teoria y operaciones. Una de las mejores fuentes para conocer Numpy es el tutorial de la página oficial de la librería.\n* Tutorial Numpy: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html\nLos 3 ejemplos son sencillos y muestran como usar Numpy para 3 tipos de problemas, uno tiene caracter de análisis numérico, calculos de una cadena de markov y el últmo es una aplicación de las cadenas de Markov (el algoritmo PageRank de manera rupestre).\nRegiones de Estabilidad Absoluta calculados con Numpy\nEl concepto de región de estabilidad absoluta tiene su origen en análisis numérico, es un método por medio del cual se analiza la estabilidad de las soluciones de una ecuación diferencial ordinaria. 
No doy detalles mayores del problema, pero se puede leer el artículo de Juan Luis Cano, y sobra decir que el código del ejemplo es original su publicación en su sitio.\nEjemplo\n~~~\n-- coding: utf-8 --\n\nRegión de estabilidad absoluta\nJuan Luis Cano Rodríguez\n\nimport numpy as np\ndef region_estabilidad(p, X, Y):\n\"\"\"Región de estabilidad absoluta\n Computa la región de estabilidad absoluta de un método numérico, dados\n los coeficientes de su polinomio característico de estabilidad.\n Argumentos\n ----------\n p : function\n Acepta un argumento w y devuelve una lista de coeficientes\n X, Y : numpy.ndarray\n Rejilla en la que evaluar el polinomio de estabilidad generada por\n numpy.meshgrid\n Devuelve\n --------\n Z : numpy.ndarray\n Para cada punto de la malla, máximo de los valores absolutos de las\n raíces del polinomio de estabilidad\n Ejemplos\n --------\n >>> import numpy as np\n >>> x = np.linspace(-3.0, 1.5)\n >>> y = np.linspace(-3.0, 3.0)\n >>> X, Y = np.meshgrid(x, y)\n >>> Z = region_estabilidad(lambda w: [1,\n ... -1 - w - w 2 / 2 - w 3 / 6 - w ** 4 / 24], X, Y) # RK4\n >>> import matplotlib.pyplot as plt\n >>> cs = plt.contour(X, Y, Z, np.linspace(0.05, 1.0, 9))\n >>> plt.clabel(cs, inline=1, fontsize=10) # Para etiquetar los contornos\n >>> plt.show()\n \"\"\"\n Z = np.zeros_like(X)\n w = X + Y * 1j\nfor j in range(len(X)):\n for i in range(len(Y)):\n r = np.roots(p(w[i, j]))\n Z[i, j] = np.max(abs(r if np.any(r) else 0))\nreturn Z\n\n~~~\nPodemos guardar este código en un script o se puede cargar directamente a la consola, en caso de hacerlo en ipython se puede hacer uso del comando %paste.",
"#\n# Región de estabilidad absoluta\n# Juan Luis Cano Rodríguez\nimport numpy as np\ndef region_estabilidad(p, X, Y):\n \"\"\"Región de estabilidad absoluta\n Computa la región de estabilidad absoluta de un método numérico, dados\n los coeficientes de su polinomio característico de estabilidad.\n Argumentos\n ----------\n p : function\n Acepta un argumento w y devuelve una lista de coeficientes\n X, Y : numpy.ndarray\n Rejilla en la que evaluar el polinomio de estabilidad generada por\n numpy.meshgrid\n Devuelve\n --------\n Z : numpy.ndarray\n Para cada punto de la malla, máximo de los valores absolutos de las\n raíces del polinomio de estabilidad\n Ejemplos\n --------\n >>> import numpy as np\n >>> x = np.linspace(-3.0, 1.5)\n >>> y = np.linspace(-3.0, 3.0)\n >>> X, Y = np.meshgrid(x, y)\n >>> Z = region_estabilidad(lambda w: [1,\n ... -1 - w - w ** 2 / 2 - w ** 3 / 6 - w ** 4 / 24], X, Y) # RK4\n >>> import matplotlib.pyplot as plt\n >>> cs = plt.contour(X, Y, Z, np.linspace(0.05, 1.0, 9))\n >>> plt.clabel(cs, inline=1, fontsize=10) # Para etiquetar los contornos\n >>> plt.show()\n \"\"\"\n Z = np.zeros_like(X)\n w = X + Y * 1j\n \n for j in range(len(X)):\n for i in range(len(Y)):\n r = np.roots(p(w[i, j]))\n Z[i, j] = np.max(abs(r if np.any(r) else 0))\n return Z",
"Para ver un ejemplo de su uso, realiza lo siguiente:\n* Se define las regiones x e y.\n* Se define un polinomio para probar la definición de la región.\n* Se utiliza la función region_estabilidad y se grafica el resultado.",
"#Se define la región\nx = np.linspace(-3.0, 1.5)\ny = np.linspace(-3.0, 3.0)\nX, Y = np.meshgrid(x, y)\n\n#Se define el polinomio\ndef p(w):\n return [1, -1 - w - w ** 2 / 2 - w ** 3 / 6 - w ** 4 / 24]\n\nZ = region_estabilidad(p, X, Y)\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize']=(20,7)\nplt.contour(X, Y, Z, np.linspace(0.0, 1.0, 9))\nplt.show()",
"El ejemplo anterior muestra el uso de Numpy como una biblioteca o módulo que permite hacer manipulación de matrices y como la calidad del procesamiento numérico perminte considarar su uso en problemas de métodos numéricos.\nProblema del laberito con Numpy\nEl problema tiene el mismo nombre que un ejemplo del uso del algoritmo Backtracking para solución de problemas por \"fuerza bruta\", respecto a esa técnica se puede consultar el brog de JC Gómez.\nEl ejemplo que comparto tienen que ver más con el uso de Cadenas de Markov, el ejemplo es solo para mostrar como funcionan y como se hace uso de ellas para resolver el problema con ciertos supuestos iniciales y como hacerlo con numpy.\nSuponemos que se colocará un dispositivo que se puede desplazar y pasar de área como en el cuadro siguiente: La idea es que puede pasar del cuadro 1 hacia el 2 o el 4, si empieza en el cuadro 2 puede pasar del 1 al 3, pero si inicia en el 5 solo puede pasar al 6, etc. \nEntonces lo que se plantea es que si inicia en el cuadro 1 y pasa al 2, eso solo depende de haber estado en el cuadro 1, si se pasa al cuadro 3 solo depende del estado 2, no de haber estado en el estado 1. Entonces la idea de los procesos de Markov es que para conocer si se pasará al cuadro 3 iniciando en el cuadro 1 solo se requiere conocer los pasos previos.\nEn lenguaje de probabilidad se expresa así:\n\\begin{align}\n P(X_{n}|X_{n-1},X_{n-2},...,X_{1}) = P(X_{n}|X_{n-1})\\\\[5pt]\n\\end{align}\nEntonces los supuestos son que se tienen 6 posibles estados iniciales o lugares desde donde iniciar, el cuadro 1 hasta el cuadro 6. Se hace una matriz que concentra la información ordenada de la relación entre los posibles movimientos entre cuadros contiguos. Entonces la relación de estados es:\n\\begin{align}\n p_{ij}= P(X_{n}=j|X_{n-1}=i)\\\\[5pt]\n\\end{align}\nDonde se refiere a la probabilidad de estar en el cuadro j si se estaba en el estado i, para el cuadro 2 las probabilidades serían:\n\\begin{align}\n p_{21}= P(X_{n}=1|X_{n-1}=2)\\\\[5pt]\n p_{23}= P(X_{n}=3|X_{n-1}=2)\\\\[5pt]\n 0= P(X_{n}=4|X_{n-1}=2)\\\\[5pt]\n 0= P(X_{n}=5|X_{n-1}=2)\\\\[5pt]\n 0= P(X_{n}=6|X_{n-1}=2)\\\\[5pt] \n\\end{align}\nVisto como matriz se vería como:\n\\begin{array}{ccc}\np_{11} & p_{12} & p_{13} & p_{14} & p_{15} & p_{16} \\\np_{21} & p_{22} & p_{23} & p_{24} & p_{25} & p_{26} \\\np_{31} & p_{32} & p_{33} & p_{34} & p_{35} & p_{36}\\\np_{41} & p_{42} & p_{43} & p_{44} & p_{45} & p_{46}\\\np_{51} & p_{52} & p_{53} & p_{54} & p_{55} & p_{56}\\\np_{61} & p_{62} & p_{63} & p_{64} & p_{65} & p_{66}\\end{array}\nMatriz anterior se llama matriz de transición, para este ejemplo es de la forma siguiente:\n\\begin{array}{ccc}\n\\frac{1}{3} & \\frac{1}{3} & 0 & \\frac{1}{3} & 0 & 0 \\\n\\frac{1}{3} & \\frac{1}{3} & \\frac{1}{3} & 0 & 0 & 0 \\\n 0 & \\frac{1}{3} & \\frac{1}{3} & 0 & 0 & \\frac{1}{3}\\\n\\frac{1}{2} & 0 & 0 & \\frac{1}{2} & 0 & 0\\\n 0 & 0 & 0 & 0 & \\frac{1}{2} & \\frac{1}{2}\\\n 0 & 0 & \\frac{1}{3} & 0 & \\frac{1}{3} & \\frac{1}{3}\\end{array}\nSe tienen entonces si la probabilidad de iniciar en cualquier estado es 1/6, entonces se tienen que la probabilidad despues de dos movimientos o cambios será la matriz de transición multiplicada por si misma en dos ocasiones, se vería así:\nVector de estados iniciales:\n\\begin{align}\n v=(\\frac{1}{6},\\frac{1}{6},\\frac{1}{6},\\frac{1}{6},\\frac{1}{6},\\frac{1}{6})\\\n\\end{align}\nEn el primer estado sería:\n\\begin{align}\n v*M\n\\end{align}\nEn el segundo estado sería:\n\\begin{align}\n v*M^2\n\\end{align}\nAsi 
lo que se tendrá es la probabilidad de estar en cualqueir cuadro para un segundo movimiento, hace esto en Numpy pensando en que es para procesar matrices resulta sencillo. Solo basta definir la matriz, hacer producto de matrices y vectores.",
"#Definición de la matriz\nimport numpy as np\nM=np.matrix([[1.0/3.0,1.0/3.0,0,1.0/3.0,0,0],[1.0/3.0,1.0/3.0,1.0/3.0,0,0,0],[0,1.0/3.0,1.0/3.0,0,0,1.0/3.0],\n [1.0/2.0,0,0,1.0/2.0,0,0],[0,0,0,0,1.0/2.0,1.0/2.0],[0,0,1.0/3.0,0,1.0/3.0,1.0/3.0]])\nM\n\n\n\n#Definición del vector de estados iniciales\nv=np.array([1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0])\n\n#Primer estado o movimiento\nv*M\n\n#Segundo movimiento\nv*M.dot(M)\n\n#Tercer Movimiento\nv.dot(M).dot(M).dot(M).dot(M).dot(M).dot(M).dot(M)",
"Si se procede ha seguir incrementando los estados o cambios se \"estabiliza\"; es decir, el cambio en la probabilidad de estar en la caja 1 después de tercer movimiento es 37.53%, lo cual tiende a quedarse en 37.5% al incrementar los movimientos. \nEjemplo de PageRank de manera rupestre, calcuado con Numpy\nEl algoritmo para generar un Ranking en las páginas web fue desarrollado e implementado por los fundadores de Google, no profundizo en los detalles pero en general la idea es la siguiente:\n\nRepresentar la Web como un gráfo dirigido.\nUsar la matriz asociada a un grafo para analizar el comportamiento de la web bajo ciertos supuestos.\nAgregar al modelo base lo necesario para que se adapate a la naturaleza de la web de mejor manera.\n\nEl primer objetivo es representar cada página como un vértice de un grafo y una arista representa la relación de la página a otra página; es decir, si dentro de la página A se hace referencia a la página B, entonces se agrega una \"flecha\", por lo tanto un ejemplo sencillo lo representa el siguiente grafo:\n\nLa imagen anterior representa un gráfo donde se simula que hay relación entre 3 páginas, la flecha indica el direccionamiento de una página a otra. Entonces para modelar la relación entre las páginas lo que se usa es matriz de adyacencia, esta matriz concentra la información de las relaciones entre nodos. La matriz adyacente de ese gráfo sería como:\n\\begin{array}{ccc}\n.33 & .5 & 1 \\\n.33 & 0 & 0 \\\n.33 & .5 & 0 \\end{array}\nEsta matriz es una matriz de Markov por columnas, cada una suma 1, el objetivo es que se tenga un vector con el ranking del orden de las páginas por prioridad al iniciar la búsqueda en internet y después de hacer uso de la matriz se puede conocer cual es el orden de prioridad.\nAsí que suponiendo que cualquiera de las 3 páginas tienen la misma probabilidad de ser la página inicial, se tienen que el vector inicial es:\n\\begin{align}\n v=(.33,.33,.33)\\\n\\end{align}\nDespués de usar la matriz la ecuación que nos permitiría conocer el mejor ranking de las páginas sería:\n\\begin{align}\n v=M*v\n\\end{align}\nEntonces el problema se pasa a resolver un problema de vectores propios y valores propios, por lo tanto el problema sería calcularlos.\n\\begin{align}\n Mv=\\lambdav\n\\end{align}",
"import numpy as np\nM=np.matrix([[.33,.5,1],[.33,0,0],[.33,.5,0]])\nlambda1,v=np.linalg.eig(M)\nlambda1,v",
"El ejemplo anterior tiene 3 vectores propios y 3 valores propios, pero no refleja el problema en general de la relación entre las páginas web, ya que podría suceder que una página no tienen referencias a ninguan otra salvo a si misma o tienen solo referencias a ella y ella a ninguna, ni a si misma. Entonces un caso más general sería representado como el gráfo siguiente:\n\nEn el grafo se tiene un nodo D y E, los cuales muestran un comportamiento que no se reflejaba en el grafo anterior. El nodo E y D no tienen salidas a otros nodos o páginas.\nLa matriz asociada para este gráfo resulta ser la siguiente:\n\\begin{array}{ccccc}\n.33 & .25 & 0.5 & 0 & 0 \\\n.33 & 0 & 0 & 0 & 0 \\\n.33 & .25 & 0 & 0 & 0\\\n 0 & .25 & 0.5 & 1 & 0\\\n 0 & .25 & 0 & 0 & 1 \\end{array}\nSe observa que para esta matriz no cada columna tienen valor 1, ejemplo la correspondiente a la columna D tienen todas sus entradas 0. Entonces el algoritmo de PageRank propone modificar la matriz adyacencia y agregarle una matriz que la compensa.\nLa ecuación del siguiente modo:\n\\begin{align}\n A=\\betaM + (1-\\beta)\\frac{1}{n}ee^T\n\\end{align}\nLo que sucede con esta ecuación, es que las columnas como la correspiente el nodo D toman valores que en suma dan 1, en caso como el nodo E se \"perturban\" de modo que permite que el algoritmo no se quede antorado en ese tipo de nodos. Si se considera la misma hipótesis de que inicialmente se tienen la misma probabilidad de estar en cualquiera de las páginas (nodos) entonces se considera ahora solo un vector de tamaño 5 en lugar de 3, el cual se vería así:\n\\begin{align}\n v=(.33,.33,.33,.33,.33)\\\n\\end{align}\nEl coeficiente beta toma valores de entre 0.8 y 0.9, suelen considerarse 0.85 su valor o por lo menos es el que se suponía se usaba por parte de Google, en resumen este parámetros es de control. La letra e representa un vector de a forma v=(1,1,1,1,1), el producto con su traspuesta genera una matriz cuadrada la cual al multiplicarse por 1/n tiene una matriz de Markov.\nLa matriz A ya es objeto de poder calcular el vector y valor propio, sin entrar en detalle esta puede cumple condiciones del teorema de Perron y de Frobenius, lo cual en resumen implica que se busque calcular u obtener el \"vector dominando\". \nPensando en el calculo de los vectores y valores propios para una matriz como la asociada al grafo de ejemplo resulta trivial el cálculo, pero para el caso de millones de nodos sería computacionalmente muy costoso por lo cual lo que se usa es hacer un proceso de aproximación el cual convege \"rápido\" y fue parte del secreto para que las busquedas y el ranking de las páginas fuera mucho más rápido.\nEl código de la estimación en numpy sería el siguente:",
"from __future__ import division\nimport numpy as np\n\n#Se definen los calores de las constantes de control y las matrices requeridas\nbeta=0.85\n\n#Matriz adyacente\nM=np.matrix([[0.33,0.25,0.5,0,0],[.33,0,0,0,0],[0.33,0.25,0,0,0],[0,.25,.5,1,0],[0,.25,0,0,1]])\n\n#Cantidad de nodos\nn=M.shape[1]\n\n#Matriz del modelo de PageRank\nA=beta*M+((1-beta)*(1/n)*np.ones((5,5)))\n\n#Se define el vector inicial del ranking\nv=np.ones(5)/5\n\n#Proceso de estimación por iteracciones\niterN=1\n\nwhile True:\n v1=v\n v=v.dot(M)\n print \"Interación %d\\n\" %iterN\n print v\n if not np.any((0.00001<np.abs(v-v1))):\n break\n else: \n iterN=iterN+1\n\nprint \"M*v\\n\" \nv.dot(M)",
"Se tienen como resultado que en 17 iteraciones el algoritmo indica que el PageRank no es totalmente dominado por el nodo E y D, pese a que son las \"páginas\" que tienen mayor valor, pero las otras 3 resultan muy cercanas en importancia. Se aprecia como se va ajustando el valor conforme avanzan las etapas de los cálculos.\nData.Frame y Series\nLos dos elementos principales de Pandas son Data.Frame y Series. El nombre Data.Frame es igual que el que se usa en R project y en escencia tiene la misma finalidad de uso, para la carga y procesamiento de datos.\nLos siguientes ejemplos son breves, para conocer con detalle las propiedades, operaciones y caracteristicas que tienen estos dos objetos se puede consultar el libro Python for Data Analysis o el sitio oficial del módulo Pandas. \nPrimero se carga el módulo y los objetos y se muestran como usarlos de manera sencilla.",
"#Se carga el módulo\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\n#Se construye una Serie, se agregan primero la lista de datos y después la lista de índices\ndatos_series=Series([1,2,3,4,5,6],index=['a','b','c','d','e','f'])\n\n#Se muestra como carga los datos Pandas en la estrutura definida\ndatos_series\n\n#Se visualizan los valores que se guardan en la estructura de datos\ndatos_series.values\n\n#Se visualizan los valores registrados como índices\ndatos_series.index\n\n#Se seleccionan algún valor asociado al índice 'b'\ndatos_series['b']\n\n#Se revisa si tienen datos nulos o NaN\ndatos_series.isnull()\n\n#Se calcula la suma acumulada, es decir 1+2,1+2+3,1+2+3+4,1+2+3+4+5,1+2+3+4+5+6\ndatos_series.cumsum()\n\n#Se define un DataFrame, primero se define un diccionario y luego de genera el DataFrame\ndatos={'Estado':['Guanajuato','Querétaro','Jalisco','Durango','Colima'],'Población':[5486000,1828000,7351000,1633000,723455],'superficie':[30607,11699,78588,123317,5627]}\nDatos_Estados=DataFrame(datos) \n\nDatos_Estados\n\n#Se genrea de nuevo el DataFrame y lo que se hace es asignarle índice para manipular los datos\nDatos_Estados=DataFrame(datos,index=[1,2,3,4,5])\n\nDatos_Estados\n\n#Se selecciona una columna\nDatos_Estados.Estado\n\n#Otro modo de elegir la columna es del siguiente modo.\nDatos_Estados['Estado']\n\n#Se elige una fila, y se hace uso del índice que se definió para los datos\nDatos_Estados.ix[2]\n\n#Se selecciona más de una fila\nDatos_Estados.ix[[3,4]]\n\n#Descripción estadística en general, la media, la desviación estándar, el máximo, mínimo, etc.\nDatos_Estados.describe()\n\n#Se modifica el DataFrame , se agrega una nueva columna \nfrom numpy import nan as NA\nDatos_Estados['Índice']=[1.0,4.0,NA,4.0,NA]\n\nDatos_Estados\n\n#Se revisa si hay datos NaN o nulos\nDatos_Estados.isnull()\n\n#Pandas cuenta con herramientas para tratar los Missing Values, en esto se pueden explorar como con isnull() o \n#eliminar con dropna. En este caso de llena con fillna\nDatos_Estados.fillna(0)",
"Los ejemplos anteriores muestras que es muy sencillo manipular los datos con Pandas, ya sea con Series o con DataFrame. Para mayor detalle de las funciones lo recomendable es consultar las referencias mencionadas anteriormente.\nCargar datos desde diversos archivos y estadísticas sencillas.",
"#Se agraga a la consola de ipython la salida de matplotlib\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize']=(30,8)\n\n#Se cargan los datos desde un directorio, se toma como headers los registros de la fila 0\ndatos=pd.read_csv('~/Documentos/Tutorial-Python/Datos/Mujeres_ingeniería_y_tecnología.csv', encoding='latin1')\n\n#Se visualizan los primeros 10 registros\ndatos.head(10)\n\n#Se observa la forma de los datos o las dimensiones, se observa que son 160 filas y 5 columnas.\ndatos.shape\n\n#Se da una descripción del tipo de variables que se cargan en el dataFrame\ndatos.dtypes\n\n#Se puede visualizar las información de las colunnas de manera más completa\ndatos.info()\n\n#Se hace un resumen estadístico global de las variables o columnas\ndatos.describe()",
"Viendo los datos que se tienen, es natural preguntarse algo al respecto. Lo sencillo es, ¿cuál es el estado que presenta mayor cantidad de inscripciones de mujeres a ingeniería?, pero también se puede agregar a la pregunta anterior el preguntar en qué año o cíclo ocurrió eso.\nAlgo sencillo para abordar las anteriores preguntas construir una tabla que permita visualizar la relación entre las variables mencionadas.",
"#Se construye una tabla pivot para ordenar los datos y conocer como se comportó el total de mujeres \n#inscritas a ingenierías\ndatos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')\n\n#Se revisa cual son los 3 estados con mayor cantidad de inscripciones en el cíclo 2012/2013\ndatos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2012/2013')[-3:]\n\n#Se grafican los resultados de la tabla anterior, pero ordenadas por el cíclo 2010/2011\ndatos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2010/2011').plot(kind='bar')",
"Observación: se vuelve evidente que las entidades federales o Estados donde se inscriben más mujeres a ingenierías son el Distrito Federal(Ciudad de México), Estado de México, Veracruz, Puebla, Guanajuato, Jalisco y Nuevo León. También se observa que en todos los estados en el periódo 2010/2011 la cantidad de mujeres que se inscribieron fue mayor y decayó significativamente en los años siguientes.\nEsto responde a la pregunta : ¿cuál es el estado que presenta mayor cantidad de inscripciones de mujeres a ingeniería?",
"#Se grafica el boxplot para cada periodo \ndatos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').plot(kind='box', title='Boxplot')",
"Nota: para estre breve análisis se hace uso de la construcción de tablas pivot en pandas. Esto facilidad analizar como se comportan las variables categóricas de los datos. En este ejemplo se muestra que el periódo 2010/2011 tuvo una media mayor de mujeres inscritas en ingeniarías, pero también se ve que la relación entre los estados fue más dispersa. Pero también se ve que los periódos del 2011/2012, 2012/2013 y 2013/2014 tienen comportamientos \"similares\".\nOtras herramientas gráficas\nConforme evolucionó Pandas y el módulo se volvió más usado, la límitante que tenía a mi parecer, era el nivel de gráficos base. Para usar los DataFrame y Series en matplotlib se necesita definir los array o procesarlos de modo tal que puedan contruirse mejores gráficos. Matplotlib es un módulo muy potente, pero resulta mucho más engorroso hacer un análisis grafico. Si se ha usado R project para hacer una exploración de datos, resulta muy facil constrir los gráficos básicos y con librerías como ggplot2 o lattice se puede hacer un análisis gráfico sencillo y potente.\nAnte este problema se diseño una librería para complementar el análisis grafico, que es algo asi como \"de alto nivel\" al comprarla con matplotlib. El módulo se llama seaborn. \nPara los siguientes ejemplos uso los datos que se han analizado anteriormente.",
"#Se construye la tabla \n#Tabla1=datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')\ndatos.head()\n\n#Se carga la librería seaborn\nimport seaborn as sns\n#sns.set(style=\"ticks\")\n#sns.boxplot(x=\"CICLO\",y=\"MUJERES_INSC_ING\",data=datos,palette=\"PRGn\",hue=\"CICLO\")",
"Como cargar un json y analizarlo.\nEn la siguiente se da una ejemplo de como cargar datos desde algún servicio web que regresa un arvhivo de tipo JSON.",
"sns.factorplot(x=\"ENTIDAD\",y=\"MUJERES_INSC_ING\",hue=\"CICLO\",data=datos,palette=\"muted\", size=15,kind=\"bar\")\n\n\n#Otra gráfica, se muestra el cruce entre las mujeres que se inscriben en ingeniería y el total de mujeres\nwith sns.axes_style('white'):\n sns.jointplot('MUJERES_INSC_ING','MAT_TOTAL_SUP',data=datos,kind='hex')",
"Conclusión:\nSe observa que es facil el cargar datos desde un archivo, es lo usual, dar la ruta y el tipo de archivo se carga con la función correspondiente a ese tipo de archivo. También, con Pandas resulta sencillo hacer una tabla pivot, lo cual ayuda a organizar o explorar los datos y la relación entre al menos dos variables."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: SANDBOX-2\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
physion/ovation-python | examples/attributes-and-annotations.ipynb | gpl-3.0 | [
"Attributes and Annotations\nTo run this example, you'll need the Ovation Python API. Install with pip:\npip install ovation",
"from ovation.session import connect\n\nfrom pprint import pprint\nfrom getpass import getpass",
"Connection\nYou use a connection.Session to interact with the Ovaiton REST API. Use the connect method to create an authenticated Session. You can provide your Ovation password with the password= parameter, but please keep your Ovation password secure. Don't put your password in your source code. It's much better to let connect prompt you for your password when needed. For scripts run on the server, it's best to provide your password via an environment variable:\nconnect(my_email, password=os.environ['OVATION_PASSWORD'])\n\nfor example.",
"session = connect(input('Email: '), org=int(input(\"Organization (enter for default): \") or 0))",
"Attributes\nOvation entities have an attributes object for user data. For example, the Ovation web app uses the attributes object to store the name of a Folder or File. You can see the names of these 'built-in' attributes for each entity at https://api.ovation.io/index.html. \nYou can update the attributes of an entity by modifying the attributes object and PUTing the entity.",
"project_id = input('Project UUID: ')\n\nproject = session.get(session.path('projects', id=project_id))\npprint(project)\n\n# Add a new attribute\nproject.attributes.my_attribute = 'Wow!'\n\n# PUT the entity to save it\nproject = session.put(project.links.self, entity=project)\npprint(project)",
"You can delete attributes by removing them from the attributes object.",
"# Remove an attribute\ndel project.attributes['my_attribute']\n\n# PUT the entity to save it\nproject = session.put(project.links.self, entity=project)",
"Annotations\nUnlike attributes which must have a single value for an entity, annotations any user to add their own information to an entity. User annotations are kept separate, so one user's annotation can't interfere with an other user's. Anyone with permission to see an entity can read all annotations on that entity, however.\nAnnotations can be keyword tags, properties (key-value pairs), text notes, and events.",
"# Create a new keyword tag\nsession.post(project.links.tags, data={'tags': [{'tag': 'mytag'}]})\n\n# Get the tags for project\ntags = session.get(project.links.tags)\npprint(tags)\n\n# Delete a tag\nsession.delete(project.links.tags + \"/\" + tags[0]._id)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amitkaps/full-stack-data-science | product-buy/notebook/propensity_to_buy.ipynb | mit | [
"Propensity to Buy\nCompany XYZ is into creating productivity apps on cloud. Their apps are quite popular across the industry spectrum - large enterprises, small and medium companies and startups - all of them use their apps.\nA big challenge that their sales team need to know is to know if the product is ready to be bought by a customer. The products can take anywhere from 3 months to a year to be created/updated. Given the current state of the product, the sales team want to know if customers will be ready to buy.\nThey have anonymized data from various apps - and know if customers have bought the product or not. \nCan you help the enterprise sales team in this initiative?\n1. Frame\nThe first step is to convert the business problem into an analytics problem.\nThe sales team wants to know if a customer will buy the product, given its current development stage. This is a propensity to buy model. This is a classification problem and the preferred output is the propensity of the customer to buy the product\n2. Acquire\nThe IT team has provided the data in a csv format. The file has the following fields\nstill_in_beta - Is the product still in beta\nbugs_solved_3_months - Number of bugs solved in the last 3 months\nbugs_solved_6_months - Number of bugs solved in the last 3 months\nbugs_solved_9_months - Number of bugs solved in the last 3 months\nnum_test_accounts_internal - Number of test accounts internal teams have\ntime_needed_to_ship - Time needed to ship the product \nnum_test_accounts_external - Number of customers who have test account \nmin_installations_per_account - Minimum number of installations customer need to purchase\nnum_prod_installations - Current number of installations that are in production\nready_for_enterprise - Is the product ready for large enterprises\nperf_dev_index - The development performance index\nperf_qa_index - The QA performance index\nsev1_issues_outstanding - Number of severity 1 bugs outstanding\npotential_prod_issue - Is there a possibility of production issue\nready_for_startups - Is the product ready for startups\nready_for_smb - Is the product ready for small and medium businesses \nsales_Q1 - Sales of product in last quarter\nsales_Q2 - Sales of product 2 quarters ago\nsales_Q3 - Sales of product 3 quarters ago\nsales_Q4 - Sales of product 4 quarters ago \nsaas_offering_available - Is a SaaS offering available \ncustomer_bought - Did the customer buy the product\nLoad the required libraries",
"#code here\n",
"Load the data",
"#code here\n#train = pd.read_csv",
"3. Refine",
"# View the first few rows\n\n\n# What are the columns\n\n\n# What are the column types?\n\n\n# How many observations are there?\n\n\n# View summary of the raw data\n\n\n# Check for missing values. If they exist, treat them\n",
"4. Explore",
"# Single variate analysis\n\n# histogram of target variable\n\n\n# Bi-variate analysis\n",
"5. Transform",
"# encode the categorical variables\n",
"6. Model",
"# Create train-test dataset\n\n\n# Build decision tree model - depth 2\n\n\n# Find accuracy of model\n\n\n# Visualize decision tree\n\n\n# Build decision tree model - depth none\n\n\n# find accuracy of model\n\n# Build random forest model \n\n\n# Find accuracy model\n\n\n# Bonus: Do cross-validation\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | machine-learning/encode_days_of_the_week.ipynb | mit | [
"Title: Encode Days Of The Week\nSlug: encode_days_of_the_week\nSummary: How to the days of the week for dates and times for machine learning in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Dates And Times \nAuthors: Chris Albon\nPreliminaries",
"# Load library\nimport pandas as pd",
"Create Date And Time Data",
"# Create dates\ndates = pd.Series(pd.date_range('2/2/2002', periods=3, freq='M'))\n\n# View data\ndates",
"Show Days Of The Week",
"# Show days of the week\ndates.dt.weekday_name"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
scheema/Machine-Learning | Datascience_Lab0.ipynb | mit | [
"Solution Implementation by Srinivas Cheemalapati\nCS 109A/AC 209A/STAT 121A Data Science: Lab 0\nHarvard University<br>\nFall 2016<br>\nInstructors: W. Pan, P. Protopapas, K. Rader\nImport libraries",
"import numpy as np\nfrom io import BytesIO\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport random\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom bs4 import BeautifulSoup\nimport urllib.request\n%matplotlib inline",
"Problem 1: Processing Tabular Data from File\nIn this problem, we practice reading csv formatted data and doing some very simple data exploration.\nPart (a): Reading CSV Data with Numpy\nOpen the file $\\mathtt{dataset}$_$\\mathtt{HW0.txt}$, containing birth biometrics as well as maternal data for a number of U.S. births, and inspect the csv formatting of the data. Load the data, without the column headers, into an numpy array. \nDo some preliminary explorations of the data by printing out the dimensions as well as the first three rows of the array. Finally, for each column, print out the range of the values. \n<b>Prettify your output</b>, add in some text and formatting to make sure your outputs are readable (e.g. \"36x4\" is less readable than \"array dimensions: 36x4\").",
"#create a variable for the file dataset_HW0.txt\nfname = 'dataset_HW0.txt'\n\n\n#fname\n\n# Option 1: Open the file and load the data into the numpy array; skip the headers\n\nwith open(fname) as f:\n lines = (line for line in f if not line.startswith('#'))\n data = np.loadtxt(lines, delimiter=',', skiprows=1)\n\n# What is the shape of the data\ndata.shape\n\n#Option 2: Open the file and load the data into the numpy array; skip the headers\n\ndata = np.loadtxt('dataset_HW0.txt', delimiter=',', skiprows=1)\ndata.shape\n\n# print the first 3 rows of the data\ndata[0:3]\n\n#data[:,0]\n\n# show the range of values for birth weight\nfig = plt.figure()\naxes = fig.add_subplot(111)\nplt.xlabel(\"birth weight\")\naxes.hist(data[:,0])\n\n# show the range of values for the femur length\nfig = plt.figure()\naxes = fig.add_subplot(111)\nplt.xlabel(\"femur length\")\naxes.hist(data[:,1])",
"Part (b): Simple Data Statistics\nCompute the mean birth weight and mean femur length for the entire dataset. Now, we want to split the birth data into three groups based on the mother's age:\n\nGroup I: ages 0-17\nGroup II: ages 18-34\nGroup III: ages 35-50\n\nFor each maternal age group, compute the mean birth weight and mean femure length. \n<b>Prettify your output.</b>\nCompare the group means with each other and with the overall mean, what can you conclude?",
"#calculate the overall means\nbirth_weight_mean = data[:,0].mean()\nbirth_weight_mean\n\n#calculagte the overall mean for Femur Length\nfemur_length_mean = data[:,1].mean()\nfemur_length_mean\n\n# Capture the birth weight\nbirth_weight = data[:,0]\n\n#Capture the Femur length\nfemur_length = data[:,1]\n\n# Capture the maternal age\nmaternal_age = data[:,2]\n\nmaternal_age.shape\n\n# Create indexes for the different maternal age groups\n\n#group_1 \ngroup_1 = maternal_age <= 17\n\n#group_2\ngroup_2 = [(maternal_age >= 18) & (maternal_age <= 34)]\n\n#group_3\ngroup_3 = [(maternal_age >= 35) & (maternal_age <= 50)]\n\n\n\nbw_g1 = data[:, 0][group_1]\nage0_17 = data[:, 2][group_1]\nbw_g1.mean()\n\nfl_g1 = data[:, 1][group_1]\nfl_g1.mean()\n\nbw_g2 = data[:, 0][group_2]\nage18_34 = data[:, 2][group_2]\nbw_g2.mean()\n\nfl_g2 = data[:, 1][group_2]\nfl_g2.mean()\n\nbw_g3 = data[:, 0][group_3]\nage35_50 = data[:, 2][group_3]\nbw_g3.mean()\n\nfl_g3 = data[:, 1][group_3]\nfl_g3.mean()",
"Part (c): Simple Data Visualization\nVisualize the data using a 3-D scatter plot (label the axes and title your plot). How does your visual analysis compare with the stats you've computed in Part (b)?",
"\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nfor c, m in [('r', 'o')]:\n ax.scatter(bw_g1, fl_g1, age0_17, edgecolor=c,facecolors=(0,0,0,0), marker=m, s=40)\nfor c, m in [('b', 's')]:\n ax.scatter(bw_g2, fl_g2, age18_34, edgecolor=c,facecolors=(0,0,0,0), marker=m, s=40) \nfor c, m in [('g', '^')]:\n ax.scatter(bw_g3, fl_g3, age35_50, edgecolor=c,facecolors=(0,0,0,0), marker=m, s=40) \n \n\nfig.suptitle('3D Data Visualization', fontsize=14, fontweight='bold')\n\nax.set_title('Birth Weigth vs Femur Length vs Weight Plot')\nax.set_xlabel('birth_weight')\nax.set_ylabel('femur_length')\nax.set_zlabel('maternal_age')\n\nplt.show()",
"Part (d): Simple Data Visualization (Continued)\nVisualize two data attributes at a time,\n\nmaternal age against birth weight\nmaternal age against femur length\nbirth weight against femur length\n\nusing 2-D scatter plots.\nCompare your visual analysis with your analysis from Part (b) and (c).",
"plt.scatter(maternal_age,birth_weight, color='r', marker='o')\nplt.xlabel(\"maternal age\")\nplt.ylabel(\"birth weight\")\nplt.show()\n\nplt.scatter(maternal_age,femur_length, color='b', marker='s')\nplt.xlabel(\"maternal age\")\nplt.ylabel(\"femur length\")\nplt.show()\n\nplt.scatter(birth_weight,femur_length, color='g', marker='^')\nplt.xlabel(\"birth weight\")\nplt.ylabel(\"femur length\")\nplt.show()",
"Problem 2: Processing Web Data\nIn this problem we practice some basic web-scrapping using Beautiful Soup.\nPart (a): Opening and Reading Webpages\nOpen and load the page (Kafka's The Metamorphosis) at \n$\\mathtt{http://www.gutenberg.org/files/5200/5200-h/5200-h.htm}$\ninto a BeautifulSoup object. \nThe object we obtain is a parse tree (a data structure representing all tags and relationship between tags) of the html file. To concretely visualize this object, print out the first 1000 characters of a representation of the parse tree using the $\\mathtt{prettify()}$ function.",
"# load the file into a beautifulsoup object\npage = urllib.request.urlopen(\"http://www.gutenberg.org/files/5200/5200-h/5200-h.htm\").read()\n\n# prettify the data read from the url and print the first 1000 characters\nsoup = BeautifulSoup(page, \"html.parser\")\n\nprint(soup.prettify()[0:1000])",
"Part (b): Exploring the Parsed HTML\nExplore the nested data structure you obtain in Part (a) by printing out the following:\n\nthe content of the head tag\nthe string inside the head tag\neach child of the head tag\nthe string inside the title tag\nthe string inside the preformatted text (pre) tag\nthe string inside the first paragraph (p) tag\n\nMake your output readable.",
"# print the content of the head tag\nsoup.head\n\n# print the string inside the head tag\nsoup.head.title\n\n# print each child of the head tag\nsoup.head.meta\n\n# print the string inside the title tag\nsoup.head.title.string\n\n# print the string inside the pre-formatbted text (pre) tag\nprint(soup.body.pre.string)\n\n# print the string inside first paragraph (p) tag\nprint(soup.body.p.string)",
"Part (c): Extracting Text\nNow we want to extract the text of The Metamorphosis and do some simple analysis. Beautiful Soup provides a way to extract all text from a webpage via the $\\mathtt{get}$_$\\mathtt{text()}$ function. \nPrint the first and last 1000 characters of the text returned by $\\mathtt{get}$_$\\mathtt{text()}$. Is this the content of the novela? Where is the content of The Metamorphosis stored in the BeautifulSoup object?",
"print(soup.get_text()[1:1000])",
"Part (d): Extracting Text (Continued)\nUsing the $\\mathtt{find}$_$\\mathtt{all()}$ function, extract the text of The Metamorphosis and concatenate the result into a single string. Print out the first 1000 characters of the string as a sanity check.",
"p = soup.find_all('p')\n\ncombined_text = ''\n\nfor node in soup.findAll('p'):\n combined_text += \"\".join(node.findAll(text=True))\n\nprint(combined_text[0:1000])\n",
"Part (e): Word Count\nCount the number of words in The Metamorphosis. Compute the average word length and plot a histogram of word lengths.\nYou'll need to adjust the number of bins for each histogram.\nHint: You'll need to pre-process the text in order to obtain the correct word/sentence length and count.",
"word_list = combined_text.lower().replace(':','').replace('.','').replace(',', '').replace('\"','').replace('!','').replace('?','').replace(';','').split()\n#print(word_list[0:100])\n\n\nword_length = [len(n) for n in word_list]\nprint(word_length[0:100])\n\ntotal_word_length = sum(word_length)\nprint(\"The total word length: \", total_word_length)\n\nwordcount = len(word_list)\nprint(\"The total number of words: \", wordcount)\n\navg_word_length = total_word_length / wordcount\nprint(\"The average word length is: \", avg_word_length)\n\n\n\n# function to calculate the number of uniques words\n# wordcount = {}\n# for word in word_list:\n# if word not in wordcount:\n# wordcount[word] = 1\n# else:\n# wordcount[word] += 1\n\n# for k,v in wordcount.items(): \n# print (len(k), v)\n\n\n# Print the histogram for the word lengths\nfig = plt.figure()\naxes = fig.add_subplot(111)\n\nplt.xlabel(\"Word Lengths\")\nplt.xlabel(\"Count\")\n\n#axes.hist(word_length)\nplt.hist(word_length, bins=np.arange(min(word_length), max(word_length) + 1, 1))",
"Problem 3: Data from Simulations\nIn this problem we practice generating data by setting up a simulation of a simple phenomenon, a queue. \nSuppose we're interested in simulating a queue that forms in front of a small Bank of America branch with one teller, where the customers arrive one at a time.\nWe want to study the queue length and customer waiting time.\nPart (a): Simulating Arrival and Service Time\nAssume that gaps between consecutive arrivals are uniformly distributed over the interval of 1 to 20 minutes (i.e. any two times between 1 minute and 6 minutes are equally likely). \nAssume that the service times are uniform over the interval of 5 to 15 minutes. \nGenerate the arrival and service times for 100 customers, using the $\\mathtt{uniform()}$ function from the $\\mathtt{random}$ library."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathLab/RBniCS | tutorials/04_graetz/tutorial_graetz_2.ipynb | lgpl-3.0 | [
"TUTORIAL 04 - Graetz problem 2\nKeywords: successive constraints method\n1. Introduction\nThis Tutorial addresses geometrical parametrization and the successive constraints method (SCM). In particular, we will solve the Graetz problem, which deals with forced heat convection in a channel $\\Omega_o(\\mu_0)$ divided into three parts $\\Omega_o^1$, $\\Omega_o^2(\\mu_0)$ and $\\Omega_o^3(\\mu_0)$, as in the following picture:\n<img src=\"data/graetz_2.png\" width=\"70%\"/>\nBoundaries $\\Gamma_{o, 1} \\cup \\Gamma_{o, 5} \\cup \\Gamma_{o, 6}$ are kept at low temperature (say, zero), while boundaries $\\Gamma_{o, 2}(\\mu_0) \\cup \\Gamma_{o, 4}(\\mu_0)$ and $\\Gamma_{o, 7}(\\mu_0) \\cup \\Gamma_{o, 7}(\\mu_0)$ are kept at high temperature (say, respectively $\\mu_2$ and $\\mu_3$). The convection is characterized by the velocity $\\boldsymbol{\\beta} = (x_1(1-x_1), 0)$, being $\\boldsymbol{x}o = (x{o, 0}, x_1)$ the coordinate vector on the parametrized domain $\\Omega_o(\\mu_0)$.\nThe problem is characterized by four parameters. The first parameter $\\mu_0$ controls the shape of deformable subdomain $\\Omega_2(\\mu_0)$. The heat transfer between the domains can be taken into account by means of the Péclet number, which will be labeled as the parameter $\\mu_1$. The ranges of the two first parameters are the following:\n$$\\mu_0 \\in [0.1,10.0] \\quad \\text{and} \\quad \\mu_1 \\in [0.01,10.0]$$\nand the two additional heat parameters:\n$$\\mu_2 \\in [0.5,1.5] \\quad \\text{and} \\quad \\mu_3 \\in [0.5,1.5].$$\nThe parameter vector $\\boldsymbol{\\mu}$ is thus given by \n$$\n\\boldsymbol{\\mu} = (\\mu_0, \\mu_1, \\mu_2, \\mu_3)\n$$\non the parameter domain\n$$\n\\mathbb{P}=[0.1,10.0]\\times[0.01,10.0]\\times[0.5,1.5]\\times[0.5,1.5].\n$$\nIn order to obtain a faster (yet, provably accurate) approximation of the problem, and avoiding any remeshing, we pursue a model reduction by means of a certified reduced basis reduced order method from a fixed reference domain.\nThe successive constraints method will be used to evaluate the stability factors.\n2. Parametrized formulation\nLet $u_o(\\boldsymbol{\\mu})$ be the temperature in the domain $\\Omega_o(\\mu_0)$.\nWe will directly provide a weak formulation for this problem\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u_o(\\boldsymbol{\\mu})\\in\\mathbb{V}_o(\\boldsymbol{\\mu})$ such that</center>\n$$a_o\\left(u_o(\\boldsymbol{\\mu}),v_o;\\boldsymbol{\\mu}\\right)=f_o(v_o;\\boldsymbol{\\mu})\\quad \\forall v_o\\in\\mathbb{V}_o(\\boldsymbol{\\mu})$$\nwhere\n\nthe function space $\\mathbb{V}o(\\boldsymbol{\\mu})$ is defined as\n$$\n\\mathbb{V}_o(\\mu_0) = \\left{ v \\in H^1(\\Omega_o(\\mu_0)): v|{\\Gamma_{o,1} \\cup \\Gamma_{o,5} \\cup \\Gamma_{o,6}} = 0, v|{\\Gamma{o,2}(\\mu_0) \\cup \\Gamma_{o,2}(\\mu_0)} = 1 \\right}\n$$\nNote that, as in the previous tutorial, the function space is parameter dependent due to the shape variation. 
\nthe parametrized bilinear form $a_o(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V}_o(\\boldsymbol{\\mu}) \\times \\mathbb{V}_o(\\boldsymbol{\\mu}) \\to \\mathbb{R}$ is defined by\n$$a_o(u_o,v_o;\\boldsymbol{\\mu}) = \\mu_1\\int_{\\Omega_o(\\mu_0)} \\nabla u_o \\cdot \\nabla v_o \\ d\\boldsymbol{x} + \\int_{\\Omega_o(\\mu_0)} x_1(1-x_1) \\partial_{x} u_o\\ v_o \\ d\\boldsymbol{x},$$\nthe parametrized linear form $f_o(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V}_o(\\boldsymbol{\\mu}) \\to \\mathbb{R}$ is defined by\n$$f_o(v_o;\\boldsymbol{\\mu}) = 0.$$\n\nThe successive constraints method will be used to compute the stability factor of the bilinear form $a_o(\\cdot, \\cdot; \\boldsymbol{\\mu})$.",
"from dolfin import *\nfrom rbnics import *",
"3. Affine decomposition\nIn order to obtain an affine decomposition, we proceed as in the previous tutorial and recast the problem on a fixed, parameter independent, reference domain $\\Omega$. As reference domain which choose the one characterized by $\\mu_0 = 1$ which we generate through the generate_mesh notebook provided in the data folder.\nAs in the previous tutorial, we pull back the problem to the reference domain $\\Omega$.",
"@SCM()\n@PullBackFormsToReferenceDomain()\n@ShapeParametrization(\n (\"x[0]\", \"x[1]\"), # subdomain 1\n (\"mu[0]*(x[0] - 1) + 1\", \"x[1]\"), # subdomain 2\n)\nclass Graetz(EllipticCoerciveProblem):\n\n # Default initialization of members\n @generate_function_space_for_stability_factor\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n EllipticCoerciveProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.u = TrialFunction(V)\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=boundaries)\n # Store the velocity expression\n self.vel = Expression(\"x[1]*(1-x[1])\", element=self.V.ufl_element())\n # Customize eigen solver parameters\n self._eigen_solver_parameters.update({\n \"bounding_box_minimum\": {\n \"problem_type\": \"gen_hermitian\", \"spectral_transform\": \"shift-and-invert\",\n \"spectral_shift\": 1.e-5, \"linear_solver\": \"mumps\"\n },\n \"bounding_box_maximum\": {\n \"problem_type\": \"gen_hermitian\", \"spectral_transform\": \"shift-and-invert\",\n \"spectral_shift\": 1.e5, \"linear_solver\": \"mumps\"\n },\n \"stability_factor\": {\n \"problem_type\": \"gen_hermitian\", \"spectral_transform\": \"shift-and-invert\",\n \"spectral_shift\": 1.e-5, \"linear_solver\": \"mumps\"\n }\n })\n\n # Return custom problem name\n def name(self):\n return \"Graetz2\"\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n @compute_theta_for_stability_factor\n def compute_theta(self, term):\n mu = self.mu\n if term == \"a\":\n theta_a0 = mu[1]\n theta_a1 = 1.0\n return (theta_a0, theta_a1)\n elif term == \"f\":\n theta_f0 = 1.0\n return (theta_f0, )\n elif term == \"dirichlet_bc\":\n theta_bc0 = mu[2]\n theta_bc1 = mu[3]\n return (theta_bc0, theta_bc1)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n @assemble_operator_for_stability_factor\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"a\":\n u = self.u\n vel = self.vel\n a0 = inner(grad(u), grad(v)) * dx\n a1 = vel * u.dx(0) * v * dx\n return (a0, a1)\n elif term == \"f\":\n f0 = Constant(0.0) * v * dx\n return (f0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),\n DirichletBC(self.V, Constant(1.0), self.boundaries, 2),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 3),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 5),\n DirichletBC(self.V, Constant(1.0), self.boundaries, 6),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 7),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 8)]\n bc1 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 2),\n DirichletBC(self.V, Constant(1.0), self.boundaries, 3),\n DirichletBC(self.V, Constant(1.0), self.boundaries, 5),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 6),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 7),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 8)]\n return (bc0, bc1)\n elif term == \"inner_product\":\n u = self.u\n x0 = inner(grad(u), grad(v)) * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")",
"4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh_2.ipynb notebook.",
"mesh = Mesh(\"data/graetz_2.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/graetz_physical_region_2.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/graetz_facet_region_2.xml\")",
"4.2. Create Finite Element space (Lagrange P1)",
"V = FunctionSpace(mesh, \"Lagrange\", 1)",
"4.3. Allocate an object of the Graetz class",
"problem = Graetz(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(0.1, 10.0), (0.01, 10.0), (0.5, 1.5), (0.5, 1.5)]\nproblem.set_mu_range(mu_range)",
"4.4. Prepare reduction with a reduced basis method",
"reduction_method = ReducedBasis(problem)\nreduction_method.set_Nmax(30, SCM=50)\nreduction_method.set_tolerance(1e-5, SCM=1e-3)",
"4.5. Perform the offline phase",
"lifting_mu = (1.0, 1.0, 1.0, 1.0)\nproblem.set_mu(lifting_mu)\nreduction_method.initialize_training_set(500, SCM=250)\nreduced_problem = reduction_method.offline()",
"4.6. Perform an online solve",
"online_mu = (10.0, 0.01, 1.0, 1.0)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nplot(reduced_solution, reduced_problem=reduced_problem)",
"4.7. Perform an error analysis",
"reduction_method.initialize_testing_set(100, SCM=100)\nreduction_method.error_analysis(filename=\"error_analysis\")",
"4.8. Perform a speedup analysis",
"reduction_method.speedup_analysis(filename=\"speedup_analysis\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
andrewzwicky/puzzles | Euler/euler_11-20.ipynb | mit | [
"import notebook_importer\nfrom shared_functions import factors, triangle_gen\nfrom enum import IntEnum, Enum\nimport numpy as np\nfrom math import factorial\nimport itertools",
"11: Largest product in a grid\nIn the 20×20 grid below, four numbers along a diagonal line have been bolded.\n\\begin{matrix} \n08&02&22&97&38&15&00&40&00&75&04&05&07&78&52&12&50&77&91&08\\\n49&49&99&40&17&81&18&57&60&87&17&40&98&43&69&48&04&56&62&00\\\n81&49&31&73&55&79&14&29&93&71&40&67&53&88&30&03&49&13&36&65\\\n52&70&95&23&04&60&11&42&69&24&68&56&01&32&56&71&37&02&36&91\\\n22&31&16&71&51&67&63&89&41&92&36&54&22&40&40&28&66&33&13&80\\\n24&47&32&60&99&03&45&02&44&75&33&53&78&36&84&20&35&17&12&50\\\n32&98&81&28&64&23&67&10&\\textbf{26}&38&40&67&59&54&70&66&18&38&64&70\\\n67&26&20&68&02&62&12&20&95&\\textbf{63}&94&39&63&08&40&91&66&49&94&21\\\n24&55&58&05&66&73&99&26&97&17&\\textbf{78}&78&96&83&14&88&34&89&63&72\\\n21&36&23&09&75&00&76&44&20&45&35&\\textbf{14}&00&61&33&97&34&31&33&95\\\n78&17&53&28&22&75&31&67&15&94&03&80&04&62&16&14&09&53&56&92\\\n16&39&05&42&96&35&31&47&55&58&88&24&00&17&54&24&36&29&85&57\\\n86&56&00&48&35&71&89&07&05&44&44&37&44&60&21&58&51&54&17&58\\\n19&80&81&68&05&94&47&69&28&73&92&13&86&52&17&77&04&89&55&40\\\n04&52&08&83&97&35&99&16&07&97&57&32&16&26&26&79&33&27&98&66\\\n88&36&68&87&57&62&20&72&03&46&33&67&46&55&12&32&63&93&53&69\\\n04&42&16&73&38&25&39&11&24&94&72&18&08&46&29&32&40&62&76&36\\\n20&69&36&41&72&30&23&88&34&62&99&69&82&67&59&85&74&04&36&16\\\n20&73&35&29&78&31&90&01&74&31&49&71&48&86&81&16&23&57&05&54\\\n01&70&54&71&83&51&54&69&16&92&33&48&61&43&52&01&89&19&67&48\\\n\\end{matrix} \nThe product of these numbers is 26 × 63 × 78 × 14 = 1788696.\nWhat is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?",
"input_mat = np.array([[ 8, 2,22,97,38,15, 0,40, 0,75, 4, 5, 7,78,52,12,50,77,91, 8],\n [49,49,99,40,17,81,18,57,60,87,17,40,98,43,69,48, 4,56,62, 0],\n [81,49,31,73,55,79,14,29,93,71,40,67,53,88,30, 3,49,13,36,65],\n [52,70,95,23, 4,60,11,42,69,24,68,56, 1,32,56,71,37, 2,36,91],\n [22,31,16,71,51,67,63,89,41,92,36,54,22,40,40,28,66,33,13,80],\n [24,47,32,60,99, 3,45, 2,44,75,33,53,78,36,84,20,35,17,12,50],\n [32,98,81,28,64,23,67,10,26,38,40,67,59,54,70,66,18,38,64,70],\n [67,26,20,68, 2,62,12,20,95,63,94,39,63, 8,40,91,66,49,94,21],\n [24,55,58, 5,66,73,99,26,97,17,78,78,96,83,14,88,34,89,63,72],\n [21,36,23, 9,75, 0,76,44,20,45,35,14, 0,61,33,97,34,31,33,95],\n [78,17,53,28,22,75,31,67,15,94, 3,80, 4,62,16,14, 9,53,56,92],\n [16,39, 5,42,96,35,31,47,55,58,88,24, 0,17,54,24,36,29,85,57],\n [86,56, 0,48,35,71,89, 7, 5,44,44,37,44,60,21,58,51,54,17,58],\n [19,80,81,68, 5,94,47,69,28,73,92,13,86,52,17,77, 4,89,55,40],\n [ 4,52, 8,83,97,35,99,16, 7,97,57,32,16,26,26,79,33,27,98,66],\n [88,36,68,87,57,62,20,72, 3,46,33,67,46,55,12,32,63,93,53,69],\n [ 4,42,16,73,38,25,39,11,24,94,72,18, 8,46,29,32,40,62,76,36],\n [20,69,36,41,72,30,23,88,34,62,99,69,82,67,59,85,74, 4,36,16],\n [20,73,35,29,78,31,90, 1,74,31,49,71,48,86,81,16,23,57, 5,54],\n [ 1,70,54,71,83,51,54,69,16,92,33,48,61,43,52, 1,89,19,67,48]])\n\nclass Direction(Enum):\n RIGHT = 0\n DOWNR = 1\n DOWN = 2\n DOWNL = 3\n\ndef get_array(mat, start, direction, length=4):\n x, y = start\n w, h = mat.shape\n \n assert x in range(w)\n assert y in range(h)\n\n x_in_range = False\n y_in_range = False\n if direction == Direction.RIGHT:\n if x+length <= w:\n return np.ravel(mat[y, x:x+length])\n \n elif direction == Direction.DOWN:\n if y+length <= h:\n return np.ravel(mat[y:y+length, x])\n \n elif direction == Direction.DOWNR:\n if y+length <= h and x+length <= w:\n return np.diag(mat, x-y)[y:y+length]\n \n elif direction == Direction.DOWNL:\n mat = np.fliplr(mat)\n x = w-1-x\n if y+length <= h and x+length <= w:\n return np.diag(mat, x-y)[y:y+length]\n \nit = np.nditer(input_mat, flags=['multi_index'])\nmax_prod = (0, [], (0,0), Direction.RIGHT)\n\nwhile not it.finished:\n for dir in Direction:\n arr = get_array(input_mat, it.multi_index, dir)\n if arr is not None:\n prod = np.prod(arr)\n if prod > max_prod[0]:\n max_prod = (prod, arr, it.multi_index, dir)\n it.iternext()\n \nmax_prod[0]",
"12: Highly divisible triangular number\nThe sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:\n1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...\nLet us list the factors of the first seven triangle numbers:\n 1: 1\n 3: 1,3\n 6: 1,2,3,6\n10: 1,2,5,10\n15: 1,3,5,15\n21: 1,3,7,21\n28: 1,2,4,7,14,28\n\nWe can see that 28 is the first triangle number to have over five divisors.\nWhat is the value of the first triangle number to have over five hundred divisors?",
"for tri in triangle_gen():\n if len(factors(tri)) > 500:\n break\ntri",
"13: Large sum\nWork out the first ten digits of the sum of the following one-hundred 50-digit numbers.\n37107287533902102798797998220837590246510135740250\n46376937677490009712648124896970078050417018260538\n74324986199524741059474233309513058123726617309629\n91942213363574161572522430563301811072406154908250\n23067588207539346171171980310421047513778063246676\n89261670696623633820136378418383684178734361726757\n28112879812849979408065481931592621691275889832738\n44274228917432520321923589422876796487670272189318\n47451445736001306439091167216856844588711603153276\n70386486105843025439939619828917593665686757934951\n62176457141856560629502157223196586755079324193331\n64906352462741904929101432445813822663347944758178\n92575867718337217661963751590579239728245598838407\n58203565325359399008402633568948830189458628227828\n80181199384826282014278194139940567587151170094390\n35398664372827112653829987240784473053190104293586\n86515506006295864861532075273371959191420517255829\n71693888707715466499115593487603532921714970056938\n54370070576826684624621495650076471787294438377604\n53282654108756828443191190634694037855217779295145\n36123272525000296071075082563815656710885258350721\n45876576172410976447339110607218265236877223636045\n17423706905851860660448207621209813287860733969412\n81142660418086830619328460811191061556940512689692\n51934325451728388641918047049293215058642563049483\n62467221648435076201727918039944693004732956340691\n15732444386908125794514089057706229429197107928209\n55037687525678773091862540744969844508330393682126\n18336384825330154686196124348767681297534375946515\n80386287592878490201521685554828717201219257766954\n78182833757993103614740356856449095527097864797581\n16726320100436897842553539920931837441497806860984\n48403098129077791799088218795327364475675590848030\n87086987551392711854517078544161852424320693150332\n59959406895756536782107074926966537676326235447210\n69793950679652694742597709739166693763042633987085\n41052684708299085211399427365734116182760315001271\n65378607361501080857009149939512557028198746004375\n35829035317434717326932123578154982629742552737307\n94953759765105305946966067683156574377167401875275\n88902802571733229619176668713819931811048770190271\n25267680276078003013678680992525463401061632866526\n36270218540497705585629946580636237993140746255962\n24074486908231174977792365466257246923322810917141\n91430288197103288597806669760892938638285025333403\n34413065578016127815921815005561868836468420090470\n23053081172816430487623791969842487255036638784583\n11487696932154902810424020138335124462181441773470\n63783299490636259666498587618221225225512486764533\n67720186971698544312419572409913959008952310058822\n95548255300263520781532296796249481641953868218774\n76085327132285723110424803456124867697064507995236\n37774242535411291684276865538926205024910326572967\n23701913275725675285653248258265463092207058596522\n29798860272258331913126375147341994889534765745501\n18495701454879288984856827726077713721403798879715\n38298203783031473527721580348144513491373226651381\n34829543829199918180278916522431027392251122869539\n40957953066405232632538044100059654939159879593635\n29746152185502371307642255121183693803580388584903\n41698116222072977186158236678424689157993532961922\n62467957194401269043877107275048102390895523597457\n23189706772547915061505504953922979530901129967519\n86188088225875314529584099251203829009407770775672\n11306739708304724483816533873502340845647058077308\n82959174767140363198008187129011875491310547126581\n976233310448183862
69515456334926366572897563400500\n42846280183517070527831839425882145521227251250327\n55121603546981200581762165212827652751691296897789\n32238195734329339946437501907836945765883352399886\n75506164965184775180738168837861091527357929701337\n62177842752192623401942399639168044983993173312731\n32924185707147349566916674687634660915035914677504\n99518671430235219628894890102423325116913619626622\n73267460800591547471830798392868535206946944540724\n76841822524674417161514036427982273348055556214818\n97142617910342598647204516893989422179826088076852\n87783646182799346313767754307809363333018982642090\n10848802521674670883215120185883543223812876952786\n71329612474782464538636993009049310363619763878039\n62184073572399794223406235393808339651327408011116\n66627891981488087797941876876144230030984490851411\n60661826293682836764744779239180335110989069790714\n85786944089552990653640447425576083659976645795096\n66024396409905389607120198219976047599490197230297\n64913982680032973156037120041377903785566085089252\n16730939319872750275468906903707539413042652315011\n94809377245048795150954100921645863754710598436791\n78639167021187492431995700641917969777599028300699\n15368713711936614952811305876380278410754449733078\n40789923115535562561142322423255033685442488917353\n44889911501440648020369068063960672322193204149535\n41503128880339536053299340368006977710650566631954\n81234880673210146739058568557934581403627822703280\n82616570773948327592232845941706525094512325230608\n22918802058777319719839450180888072429661980811197\n77158542502016545090413245809786882778948721859617\n72107838435069186155435662884062257473692284509516\n20849603980134001723930671666823555245252804609722\n53503534226472524250874054075591789781264330331690",
"nums = [37107287533902102798797998220837590246510135740250,\n46376937677490009712648124896970078050417018260538,\n74324986199524741059474233309513058123726617309629,\n91942213363574161572522430563301811072406154908250,\n23067588207539346171171980310421047513778063246676,\n89261670696623633820136378418383684178734361726757,\n28112879812849979408065481931592621691275889832738,\n44274228917432520321923589422876796487670272189318,\n47451445736001306439091167216856844588711603153276,\n70386486105843025439939619828917593665686757934951,\n62176457141856560629502157223196586755079324193331,\n64906352462741904929101432445813822663347944758178,\n92575867718337217661963751590579239728245598838407,\n58203565325359399008402633568948830189458628227828,\n80181199384826282014278194139940567587151170094390,\n35398664372827112653829987240784473053190104293586,\n86515506006295864861532075273371959191420517255829,\n71693888707715466499115593487603532921714970056938,\n54370070576826684624621495650076471787294438377604,\n53282654108756828443191190634694037855217779295145,\n36123272525000296071075082563815656710885258350721,\n45876576172410976447339110607218265236877223636045,\n17423706905851860660448207621209813287860733969412,\n81142660418086830619328460811191061556940512689692,\n51934325451728388641918047049293215058642563049483,\n62467221648435076201727918039944693004732956340691,\n15732444386908125794514089057706229429197107928209,\n55037687525678773091862540744969844508330393682126,\n18336384825330154686196124348767681297534375946515,\n80386287592878490201521685554828717201219257766954,\n78182833757993103614740356856449095527097864797581,\n16726320100436897842553539920931837441497806860984,\n48403098129077791799088218795327364475675590848030,\n87086987551392711854517078544161852424320693150332,\n59959406895756536782107074926966537676326235447210,\n69793950679652694742597709739166693763042633987085,\n41052684708299085211399427365734116182760315001271,\n65378607361501080857009149939512557028198746004375,\n35829035317434717326932123578154982629742552737307,\n94953759765105305946966067683156574377167401875275,\n88902802571733229619176668713819931811048770190271,\n25267680276078003013678680992525463401061632866526,\n36270218540497705585629946580636237993140746255962,\n24074486908231174977792365466257246923322810917141,\n91430288197103288597806669760892938638285025333403,\n34413065578016127815921815005561868836468420090470,\n23053081172816430487623791969842487255036638784583,\n11487696932154902810424020138335124462181441773470,\n63783299490636259666498587618221225225512486764533,\n67720186971698544312419572409913959008952310058822,\n95548255300263520781532296796249481641953868218774,\n76085327132285723110424803456124867697064507995236,\n37774242535411291684276865538926205024910326572967,\n23701913275725675285653248258265463092207058596522,\n29798860272258331913126375147341994889534765745501,\n18495701454879288984856827726077713721403798879715,\n38298203783031473527721580348144513491373226651381,\n34829543829199918180278916522431027392251122869539,\n40957953066405232632538044100059654939159879593635,\n29746152185502371307642255121183693803580388584903,\n41698116222072977186158236678424689157993532961922,\n62467957194401269043877107275048102390895523597457,\n23189706772547915061505504953922979530901129967519,\n86188088225875314529584099251203829009407770775672,\n11306739708304724483816533873502340845647058077308,\n82959174767140363198008187129011875491310547126581,\n976233310448183862695154563349263665728975634005
00,\n42846280183517070527831839425882145521227251250327,\n55121603546981200581762165212827652751691296897789,\n32238195734329339946437501907836945765883352399886,\n75506164965184775180738168837861091527357929701337,\n62177842752192623401942399639168044983993173312731,\n32924185707147349566916674687634660915035914677504,\n99518671430235219628894890102423325116913619626622,\n73267460800591547471830798392868535206946944540724,\n76841822524674417161514036427982273348055556214818,\n97142617910342598647204516893989422179826088076852,\n87783646182799346313767754307809363333018982642090,\n10848802521674670883215120185883543223812876952786,\n71329612474782464538636993009049310363619763878039,\n62184073572399794223406235393808339651327408011116,\n66627891981488087797941876876144230030984490851411,\n60661826293682836764744779239180335110989069790714,\n85786944089552990653640447425576083659976645795096,\n66024396409905389607120198219976047599490197230297,\n64913982680032973156037120041377903785566085089252,\n16730939319872750275468906903707539413042652315011,\n94809377245048795150954100921645863754710598436791,\n78639167021187492431995700641917969777599028300699,\n15368713711936614952811305876380278410754449733078,\n40789923115535562561142322423255033685442488917353,\n44889911501440648020369068063960672322193204149535,\n41503128880339536053299340368006977710650566631954,\n81234880673210146739058568557934581403627822703280,\n82616570773948327592232845941706525094512325230608,\n22918802058777319719839450180888072429661980811197,\n77158542502016545090413245809786882778948721859617,\n72107838435069186155435662884062257473692284509516,\n20849603980134001723930671666823555245252804609722,\n53503534226472524250874054075591789781264330331690]\n\nstr(np.sum(nums))[:10]",
"14: Longest Collatz sequence\nThe following iterative sequence is defined for the set of positive integers:\n$n$ → $n/2$ ($n$ is even) \n$n$ → $3n + 1$ ($n$ is odd)\nUsing the rule above and starting with 13, we generate the following sequence:\n13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1\nIt can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.\nWhich starting number, under one million, produces the longest chain?\nNOTE: Once the chain starts the terms are allowed to go above one million.",
"collatz_results = {}\n\ndef collatz_gen(n):\n yield n\n while n != 1:\n if n % 2 == 0:\n n = n // 2\n else:\n n = 3*n + 1\n yield n\n\nfor i in range(1,1000000):\n if i not in collatz_results.keys():\n temp_dict = {}\n length = 0\n for term in collatz_gen(i):\n try:\n length += collatz_results[term]\n for k in temp_dict.keys():\n temp_dict[k] += collatz_results[term]\n break\n except KeyError:\n length += 1\n for k in temp_dict.keys():\n temp_dict[k] += 1\n temp_dict[term] = 1\n \n for k,v in temp_dict.items():\n collatz_results[k] = v\n \nmax_num = 0\ncurrent_max = 0\n\nfor k,v in collatz_results.items():\n if v > current_max:\n current_max = v\n max_num = k\n\nmax_num",
"15: Lattice paths\nStarting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.\nHow many such routes are there through a 20×20 grid",
"int(factorial(40) / factorial(20)**2)",
"16: Power digit sum\n$2^{15}$ = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.\nWhat is the sum of the digits of the number $2^{1000}$?",
"np.sum(list(map(int, str(2**1000))))",
"17: Number letter counts\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of \"and\" when writing out numbers is in compliance with British usage.",
"def translate(n):\n result = ''\n basic_nums = { \n 1:'one',\n 2:'two',\n 3:'three',\n 4:'four',\n 5:'five',\n 6:'six',\n 7:'seven',\n 8:'eight',\n 9:'nine',\n 10:'ten',\n 11:'eleven',\n 12:'twelve',\n 13:'thirteen',\n 14:'fourteen',\n 15:'fifteen',\n 16:'sixteen',\n 17:'seventeen',\n 18:'eighteen',\n 19:'nineteen',\n 20:'twenty',\n 30:'thirty',\n 40:'forty',\n 50:'fifty',\n 60:'sixty',\n 70:'seventy',\n 80:'eighty',\n 90:'ninety',\n }\n \n try:\n result = basic_nums[n]\n except KeyError: \n thousands = n // 1000\n n %= 1000\n if thousands > 0:\n result += basic_nums[thousands]\n result += 'thousand'\n if n == 0:\n return result\n \n hundreds = n // 100\n n %= 100\n if hundreds > 0:\n result += basic_nums[hundreds]\n result += 'hundred'\n if n == 0:\n return result\n else:\n result += 'and'\n \n try:\n result += basic_nums[n]\n except KeyError:\n tens = n // 10\n leftover = n % 10\n result += basic_nums[tens*10]\n result += basic_nums[leftover]\n \n return result\n \nchar_count = 0\n\nfor num in range(1,1001):\n char_count += len(translate(num))\n\nchar_count",
"18: Maximum path sum I\nBy starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.\n3\n 7 4\n 2 4 6\n8 5 9 3\nThat is, 3 + 7 + 4 + 9 = 23.\nFind the maximum total from top to bottom of the triangle below:\n75\n 95 64\n 17 47 82\n 18 35 87 10\n 20 04 82 47 65\n 19 01 23 75 03 34\n 88 02 77 73 07 63 67\n 99 65 04 28 06 16 70 92\n 41 41 26 56 83 40 80 70 33\n 41 48 72 33 47 32 37 16 94 29\n 53 71 44 65 25 43 91 52 97 51 14\n 70 11 33 28 77 73 17 78 39 68 17 57\n 91 71 52 38 17 14 91 43 58 50 27 29 48\n 63 66 04 68 89 53 67 30 73 16 69 87 40 31\n04 62 98 27 23 09 70 98 73 93 38 53 60 04 23",
"input_tri = [[75],\n[95, 64],\n[17, 47, 82],\n[18, 35, 87, 10],\n[20, 4, 82, 47, 65],\n[19, 1, 23, 75, 3, 34],\n[88, 2, 77, 73, 7, 63, 67],\n[99, 65, 4, 28, 6, 16, 70, 92],\n[41, 41, 26, 56, 83, 40, 80, 70, 33],\n[41, 48, 72, 33, 47, 32, 37, 16, 94, 29],\n[53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],\n[70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],\n[91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48], # 2\n[63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31], # 1, 12\n[ 4, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23]] # 0 , 13\n\ndef combine(row_in):\n if len(row_in) <= 1:\n return [row_in]\n return [max(row_in[i:i+2]) for i in range(len(row_in)-1)]\n\nheight = len(input_tri)\ntally_tri = [input_tri[-1]]\n\nfor row in reversed(input_tri[:-1]):\n r = row\n c = combine(tally_tri[-1])\n res = [a+b for a,b in zip(r,c)]\n tally_tri.append(res)\n \ntally_tri[-1][0]",
"19: Counting Sundays\nYou are given the following information, but you may prefer to do some research for yourself.\n\n1 Jan 1900 was a Monday.\nThirty days has September, April, June and November. All the rest have thirty-one, Saving February alone, Which has twenty-eight, rain or shine. And on leap years, twenty-nine.\nA leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.\n\nHow many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?",
"months = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]\n\nday = 1\nsunday_first_count = 0\n\nfor year in range(1900,2001):\n for month_num, days_in_month in enumerate(months):\n if month_num == 1:\n leap = (year % 4 == 0 and not year % 100 == 0) or year % 400 == 0\n if leap:\n days_in_month += 1\n \n if day == 0 and year > 1900:\n sunday_first_count += 1\n \n \n day = (day + days_in_month) % 7\n \n \nsunday_first_count ",
"20: Factorial digit sum\nn! means n × (n − 1) × ... × 3 × 2 × 1\nFor example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,\nand the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.\nFind the sum of the digits in the number 100!",
"np.sum(list(map(int, str(factorial(100)))))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/MachineLearning | Day1/05_model_evaluation_tts.ipynb | gpl-3.0 | [
"Training Accuracy\nPrediction accuracy on the same set of data you trained your model with.\nProblems with training and testing on the same data\n\nGoal is to estimate likely performance of a model on out-of-sample data\nBut, maximizing training accuracy rewards overly complex models that won't necessarily generalize\nUnnecessarily complex models overfit the training data\n\n\nImage Credit: Overfitting by Chabacano. Licensed under GFDL via Wikimedia Commons.\nHow Can We Avoid Overfitting?\nEvaluation procedure #2: Train/test split\n\nSplit the dataset into two pieces: a training set and a testing set.\nTrain the model on the training set.\nTest the model on the testing set, and evaluate how well we did.",
"from sklearn.datasets import load_iris\niris = load_iris()\nX = iris.data\ny = iris.target\n\n# print the shapes of X and y\nprint X.shape\nprint y.shape\n\n# STEP 1: split X and y into training and testing sets\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)",
"What did this accomplish?\n\nModel can be trained and tested on different data\nResponse values are known for the testing set, and thus predictions can be evaluated\nTesting accuracy is a better estimate than training accuracy of out-of-sample performance",
"# STEP 1: split X and y into training and testing sets\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)\n\n# print the shapes of the new X objects\nprint X_train.shape\nprint X_test.shape\n\n# print the shapes of the new y objects\nprint y_train.shape\nprint y_test.shape\n\nfrom sklearn.linear_model import LogisticRegression\n# STEP 2: train the model on the training set\nlogreg = LogisticRegression()\nlogreg.fit(X_train, y_train)\n\n# STEP 3: make predictions on the testing set\ny_pred = logreg.predict(X_test)\n\nfrom sklearn import metrics\n# compare actual response values (y_test) with predicted response values (y_pred)\nprint metrics.accuracy_score(y_test, y_pred)",
"Repeat for KNN with K=5:",
"from sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier(n_neighbors=5)\nknn.fit(X_train, y_train)\ny_pred = knn.predict(X_test)\nprint metrics.accuracy_score(y_test, y_pred)",
"Repeat for KNN with K=1:",
"knn = KNeighborsClassifier(n_neighbors=1)\nknn.fit(X_train, y_train)\ny_pred = knn.predict(X_test)\nprint metrics.accuracy_score(y_test, y_pred)",
"Can you find an even better value for K?",
"# try K=1 through K=25 and record testing accuracy\nk_range = range(1, 26)\nscores = [] # calculate accuracies for each value of K!\n\n#Now we plot:\n\nimport matplotlib.pyplot as plt\n# allow plots to appear within the notebook\n%matplotlib inline\n\nplt.plot(k_range, scores)\nplt.xlabel('Value of K for KNN')\nplt.ylabel('Testing Accuracy')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
greenelab/GCB535 | 30_ML-III/ML_3_Inclass_Homework.ipynb | bsd-3-clause | [
"Discussion (20 mins)\nDiscuss your thoughts about the pre-lab reading material with your table. As a group, come up with specific concerns, if any, that you have with the approaches used or the criticisms about the approaches.\nGame time! (40 mins)\nWe have machine learning at our fingertips, and we've seen some of the dangers. Now we're going to spend this week on a game. In this game, we have two goals: 1) We want to build the best predictor that we can, but 2)at all times we want to have an accurate idea of how well the predictor works.\nFor this game, we've managed to get our hands on some data about two diseases (D1 and D2). Each of these datasets has features in columns and examples in rows. Each feature represents a clinical measurement, while each row represents a person. We want to be able to predict whether or not a person has a disease (the last column).\nWe'll supply you with four datasets for each disease throughout the week. For the first day, we've given you two of them. We also provide example code to read the data. From there, the path that you take is up to you. We do not know the best predictor or even what the maximum achievable accuracy for these data! This is a chance to experiment and find out what best captures disease status.\nWe can use anything in the scikit-learn toolkit. It's a powerful set of tools. We use it regularly in our own lab, so this exercise is hands on with the real thing.\nFirst, let's get loading both datasets out of the way:",
"# numpy provides python tools to easily load comma separated files.\nimport numpy as np\n\n# use numpy to load disease #1 data\nd1 = np.loadtxt(open(\"../30_Data_ML-III/D1.csv\", \"rb\"), delimiter=\",\")\n\n# features are all rows for columns before 200\n# The canonical way to name this is that X is our matrix of\n# examples by features.\nX1 = d1[:,:200]\n\n# labels are in all rows at the 200th column\n# The canonical way to name this is that y is our vector of\n# labels.\ny1 = d1[:,200]\n\n# use numpy to load disease #2 data\nd2 = np.loadtxt(open(\"../30_Data_ML-III/D2.csv\", \"rb\"), delimiter=\",\")\n\n# features are all rows for columns before 200\nX2 = d2[:,:200]\n# labels are in all rows at the 200th column\ny2 = d2[:,200]",
"Implement an SVM!\nWe've already learned about support vector machines. Now we're going to implement one.\nWe need to find out how to use this thing! We ran some code in the previous notebook that did this for us, but now we need to make things work on our own. Googling for \"svm sklearn classifier\" gets us to this page. This page has documentation for the package. Partway down the page, we see: \"SVC, NuSVC and LinearSVC are classes capable of performing multi-class classification on a dataset.\" As we keep reading, we see that SVC provides an implementation. Let's try that!\nWe get to the documentation for SVC and it says many things. At the top, there's a box that says:\nclass sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None)\nHow should we interpret all of this?\nThe first part tells us where a function lives, so the SVC function lives in sklearn.svm. It seems we're going to need to import it from there.",
"# First we need to import svms from sklearn\nfrom sklearn.svm import SVC\n",
"The parts inside the parentheses give us the ability to set or change parameters. Anything with an equals sign after it has a default parameter set. In this case, the default C is set to 1.0. There's also a box that gives some description of what each parameter is (only a few of them may make sense to us right now). If we scroll to the bottom of the box, we'll get some examples provided by the helpful sklearn team, though they don't know about the names of our datasets. They'll often use the standard name X for features and y for labels.\nLet's go ahead and run an SVM using all the defaults on our data",
"# Get an SVC with default parameters as our algorithm\nclassifier = SVC()\n\n# Fit the classifier to our datasets\nclassifier.fit(X1, y1)\n\n# Apply the classifier back to our data and get an accuracy measure\ntrain_score = classifier.score(X1, y1)\n\n# Print the accuracy\nprint(train_score)",
"Ouch! Only about 50% accuracy. That's painful! We learned that we could modify C to make the algorithm try to fit the data we show it better. Let's ramp up C and see what happens!",
"# Get an SVC with a high C\nclassifier = SVC(C = 100)\n\n# Fit the classifier to our datasets\nclassifier.fit(X1, y1)\n\n# Apply the classifier back to our data and get an accuracy measure\ntrain_score = classifier.score(X1, y1)\n\n# Print the accuracy\nprint(train_score)\n\nimport sklearn\n",
"Nice! 100% accuracy. This seems like we're on the right track. What we'd really like to do is figure out how we do on held out testing data though. Fortunately, sklearn provides a helper function to make holding out some of the data easy. This function is called train_test_split and we can find its documentation. If we weren't sure where to go, the sklearn documentation has a full section on cross validation.\nNote: Software changes over time. The current release of sklearn on CoCalc is 0.17. There's a new version, 0.18, also available. There are also minor version numbers (e.g. the final 1 in 0.17.1). These don't change functionality. Between the two major versions the location of the train_test_split function changed. If you ever want to know what version of sklearn you're working with, you can create a code block and run this code:\nimport sklearn\nprint(sklearn.__version__)\n\nMake sure that when you look at the documentation, you choose the version that matches what you're working with.\nLet's go ahead and split our data into training and testing portions.",
"# Import the function to split our data:\nfrom sklearn.cross_validation import train_test_split\n\n# Split things into training and testing - let's have 30% of our data end up as testing\nX1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=.33)",
"Now let's go ahead and train our classifier on the training data and test it on some held out test data",
"# Get an SVC again using C = 100\nclassifier = SVC(C = 100)\n\n# Fit the classifier to the training data:\nclassifier.fit(X1_train, y1_train)\n\n# Now we're going to apply it to the training labels first:\ntrain_score = classifier.score(X1_train, y1_train)\n\n# We're also going to applying it to the testing labels:\ntest_score = classifier.score(X1_test, y1_test)\n\nprint(\"Training Accuracy: \" + str(train_score))\nprint(\"Testing Accuracy: \" + str(test_score))",
"Nice! Now we can see that while our training accuracy is very high, our testing accuracy is much lower. We could say that our model has \"overfit\" to the data. We learned about overfitting before. You'll get a chance to play with this SVM a bit more below. Before we move to that though, we want to show you how easy it is to use a different classifier. You might imagine that a classifier could be composed of a cascading series of rules. If this is true, then consider that. Otherwise, consider this other thing. This type of algorithm is called a decision tree, and we're going to rain one now.\nsklearn has a handy decision tree classifier that we can use. By using the SVM classifier, we've already learned most of what we need to know to use it.",
"# First, we need to import the classifier\nfrom sklearn.tree import DecisionTreeClassifier\n\n# Now we're going to get a decision tree classifier with the default parameters\nclassifier = DecisionTreeClassifier()\n\n# The 'fit' syntax is the same\nclassifier.fit(X1_train, y1_train)\n\n# As is the 'score' syntax\ntrain_score = classifier.score(X1_train, y1_train)\ntest_score = classifier.score(X1_test, y1_test)\n\n\nprint(\"Training Accuracy: \" + str(train_score))\nprint(\"Testing Accuracy: \" + str(test_score))",
"Oof! That's pretty overfit! We're perfect on the training data but basically flipping a coin on the held out data. A DecisionTreeClassifier has two parameters max_features and max_depth that can really help us prevent overfitting. Let's train a very small tree (no more than 8 features) that's very short (no more than 3 deep).",
"# Now we're going to get a decision tree classifier with selected parameters\nclassifier = DecisionTreeClassifier(max_features=8, max_depth=3)\n\n# The 'fit' syntax is the same\nclassifier.fit(X1_train, y1_train)\n\n# As is the 'score' syntax\ntrain_score = classifier.score(X1_train, y1_train)\ntest_score = classifier.score(X1_test, y1_test)\n\nprint(\"Training Accuracy: \" + str(train_score))\nprint(\"Testing Accuracy: \" + str(test_score))",
"Things are less overfit, but it's still not clear that this is working too well.\nHomework\nTry to fit at least three new models in the code blocks below and report the training and testing accuracy for your models. You could try to change the parameters of the algorithms that we've shown you, or you could try to choose entirely different algorithms. The choice is yours.\nQ1: Setup and fit a classifier and report the training and testing accuracies (3pts).\nQ2: Setup and fit a classifier and report the training and testing accuracies (3pts).\nQ3: Setup and fit a classifier and report the training and testing accuracies (3pts).\nQ4: Which of your classifiers do you think is best and why?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/Automata.ipynb | gpl-3.0 | [
"Automata\nEditing Automata\nVcsn provides different means to enter automata. One, which also applies to plain Python, is using the automaton constructor:",
"import vcsn\nvcsn.automaton('''\ncontext = \"lal_char(ab), z\n$ -> p <2>\np -> q <3>a,<4>b\nq -> q a\nq -> $\n''')",
"See the documentation of vcsn.automaton for more details about this function. The syntax used to define the automaton is, however, described here.\nIn order to facilitate the definition of automata, Vcsn provides additional ''magic commands'' to the IPython Notebook. We will see through this guide how use this command.\n%%automaton: Entering an Automaton\nIPython supports so-called \"cell magic-commands\", that start with %%. Vcsn provides the %%automaton magic command to enter the literal description of an automaton. For instance, the automaton above can entered as follows:",
"%%automaton a\ncontext = \"lal_char(ab), z\"\n$ -> p <2>\np -> q <3>a, <4>b\nq -> q a\nq -> $",
"The first argument, here a, is the name of the variable in which this automaton is stored:",
"a",
"You may pass the option -s or --strip to strip the automaton from its layer that keeps the state name you have chosen. In that case, the internal numbers are used, unrelated to the user names (actually, the numbers are assigned to state names as they are encountered starting from 0).",
"%%automaton --strip a\ncontext = \"lal_char(ab), z\"\n$ -> p <2>\np -> q <3>a, <4>b\nq -> q a\nq -> $\n\na",
"The second argument specifies the format in which the automaton is described, defaulting to auto, which means \"guess the format\":",
"%%automaton a dot\ndigraph\n{\n vcsn_context = \"lal_char(ab), z\"\n I -> p [label = \"<2>\"]\n p -> q [label = \"<3>a, <4>b\"]\n q -> q [label = a]\n q -> F\n}\n\n%%automaton a\ndigraph\n{\n vcsn_context = \"lal_char(ab), z\"\n I -> p [label = \"<2>\"]\n p -> q [label = \"<3>a, <4>b\"]\n q -> q [label = a]\n q -> F\n}",
"Automata entered this way are persistent: they are stored in the notebook and will be recovered when the page is reopened.\n%automaton: Text-Based Edition of an Automaton\nIn IPython \"line magic commands\" begin with a single %. The line magic %automaton takes three arguments:\n1. the name of the automaton\n2. the format you want the textual description of the automaton. Defaults to auto.\n3. the display mode: h for horizontal and v for vertical. Defaults to h.\nContrary to the cell magic, the %automaton can be used to update an existing automaton:",
"%automaton a",
"The real added value is that now you can interactively edit this automaton: changes in the text are immediately propagated on the rendered automaton.\nWhen given a fresh variable name, %automaton creates a dummy automaton that you can use as a starting point:",
"%automaton b fado",
"Beware however that these automata are not persistent: changes will be lost when the notebook is closed.\nAutomata Formats\nVcsn supports differents input and output formats. Some, such as tikz, are only export-only formats: they cannot be read by Vcsn.\ndaut (read/write)\nThis simple format is work in progress: its precise syntax is still subject to changes. It is roughly a simplification of the dot syntax. The following example should suffice to understand the syntax. If \"guessable\", the context can be left implicit.",
"%%automaton a\ncontext = \"lal_char(ab), z\"\n$ -> p <2>\np -> q <3>a, <4>b\nq -> q a\nq -> $",
"dot (read/write)\nThis format relies on the \"dot\" language of the GraphViz toolkit (http://graphviz.org). This is the default format for I/O in Vcsn.\nAn automaton looks as follows:",
"%%automaton a dot\n// The comments are introduced with //, or /* ... */\n//\n// The overall syntax is that of Dot for directed graph (\"digraph\").\ndigraph\n{\n // The following attribute defines the context of the automaton.\n vcsn_context = \"lal_char, b\"\n // Initial states are denoted by an edge between a node whose name starts\n // with an \"I\". So \"0\" is a initial state.\n I -> 0\n // Transitions are edges whose label is that of the transition.\n 0 -> 0 [label = \"a\"]\n 0 -> 0 [label = \"b\"]\n 0 -> 1 [label = \"c, d\"]\n // Final states are denoted by an edge to a node whose name starts with \"F\".\n 1 -> Finish\n}",
"efsm (read/write)\nThis format is designed to support import/export with OpenFST (http://openfst.org): it wraps its multi-file format (one file describes the automaton with numbers as transition labels, and one or several others define these labels) into a single format. It is not designed to be used by humans, but rather to be handled by two tools:\n- efstcompile to compile such a file into the OpenFST binary file format,\n- efstdecompile to extract an efsm file from a binary OpenFST file.\nefsm for acceptors (single tape automata)\nAs an example, consider the following exchange between Vcsn and OpenFST.",
"a = vcsn.context('lal_char(ab), zmin').expression('[ab]*a(<2>[ab])').automaton()\na\n\nefsm = a.format('efsm')\nprint(efsm)",
"The following sequence of operations uses OpenFST to determinize this automaton, and to load it back into Vcsn.",
"import os\n\n# Save the EFSM description of the automaton in a file.\nwith open(\"a.efsm\", \"w\") as file:\n print(efsm, file=file)\n\n# Compile the EFSM into an OpenFST file.\nos.system(\"efstcompile a.efsm >a.fst\")\n\n# Call OpenFST's determinization.\nos.system(\"fstdeterminize a.fst >d.fst\")\n\n# Convert from OpenFST format to EFSM.\nos.system(\"efstdecompile d.fst >d.efsm\")\n\n# Load this file into Python.\nwith open(\"d.efsm\", \"r\") as file:\n d = file.read()\n \n# Show the result.\nprint(d)\n\n# Now read it as an automaton.\nd_ofst = vcsn.automaton(d, 'efsm')\nd_ofst",
"For what it's worth, the above sequence of actions is realized by a.fstdeterminize().\nVcsn and OpenFST compute the same automaton.",
"a.determinize()",
"efsm for transducers (two-tape automata)\nThe following sequence shows the round-trip of a transducer between Vcsn and OpenFST.",
"t = a.partial_identity()\nt\n\ntefsm = t.format('efsm')\nprint(tefsm)\n\nvcsn.automaton(tefsm)",
"Details about the EFSM format\nThe EFSM format is a simple format that puts together the various files that OpenFST uses to serialize and deserialize automata: one or two files to describe the labels (called \"symbol tables\"), and one to list the transitions. More details about these files can be found on FSM Man Pages.\nWhen reading an EFSM file, Vcsn expects the following bits:\n\n\na line arc_type=TYPE which specifies the weightset. If TYPE is log or log64, this is mapped to the log weightset, if it is standard, then it is mapped to zmin or rmin, depending on whether floatting points were used.\n\n\na \"here-document\" (the Unix name for embedded files, delimited by <<EOF to a line equal to EOF) for the first symbol table. If the here-document is named isymbols.txt, then the automaton is a transducer, otherwise it is considered an acceptor.\n\n\nif the automaton is a transducer, a second symbol table, osymbols.txt, to describe the labels of the second tape.\n\n\nthen a final here-document, transitions.fsm, which list the transitions.\n\n\nfado (read/write)\nThis is the native language of the FAdo platform (http://fado.dcc.fc.up.pt). Weighted automata are not supported.",
"a = vcsn.B.expression('a+b').standard()\na\n\nprint(a.format('fado'))",
"grail (write)\nThis format is made to exchange automata with the Grail (http://grailplus.org). Weighted automata are not supported.",
"a = vcsn.B.expression('a+b').standard()\na\n\nprint(a.format('grail'))",
"tikz (write)\nThis format generates a LaTeX document that uses TikZ syntax to draw the automaton. Note that the layout is not computed: all the states are simply reported in a row. You will have to tune the positions of the states by hand. However, it remains a convenient way to start.",
"a = vcsn.Q.expression('<2>a+<3>b').standard()\na\n\nprint(a.format('tikz'))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tclaudioe/Scientific-Computing | SC3/01_MC_theorem_Halton_distance_matrix_and_RBF_interpolation.ipynb | bsd-3-clause | [
"INF-482, v0.01, Claudio Torres, [email protected]. DI-UTFSM\nTextbook: Gregory E. Fasshauer, Meshfree Approximaition Methods with MatLab, Interdisciplinary Mathematical Sciences - Vol. 6, World Scientific Publishers, Singapore, 2007. Link: http://www.math.iit.edu/~fass/\nMairhuber-Curtis Theorem, Halton, Distance Matrix and RBF Interpolation",
"import numpy as np\nimport ghalton\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom ipywidgets import interact\nfrom scipy.spatial import distance_matrix\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nfrom ipywidgets import IntSlider\nimport sympy as sym\nimport matplotlib as mpl\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['axes.labelsize'] = 20\nmpl.rcParams['xtick.labelsize'] = 14\nmpl.rcParams['ytick.labelsize'] = 14\nsym.init_printing()\nM=8\n\ndef plot_matrices_with_values(ax,M):\n N=M.shape[0]\n cmap = plt.get_cmap('GnBu')\n ax.matshow(M, cmap=cmap)\n for i in np.arange(0, N):\n for j in np.arange(0, N):\n ax.text(i, j, '{:.2f}'.format(M[i,j]), va='center', ha='center', color='r')",
"Mairhuber-Curtis Theorem",
"# Initializing a R^2\nsequencer = ghalton.Halton(2)\nsequencer.reset()\nxH=np.array(sequencer.get(9))\nprint(xH)\n\ndef show_MC_theorem(s_local=0):\n i=3\n j=4\n NC=40\n\n sequencer.reset()\n xH=np.array(sequencer.get(9))\n\n phi1= lambda s: (s-0.5)*(s-1)/((0-0.5)*(0-1))\n phi2= lambda s: (s-0)*(s-1)/((0.5-0)*(0.5-1))\n phi3= lambda s: (s-0)*(s-0.5)/((1-0)*(1-0.5))\n C1=lambda s: xH[i,:]*phi1(s)+np.array([0.45,0.55])*phi2(s)+xH[j,:]*phi3(s)\n C2=lambda s: xH[j,:]*phi1(s)+np.array([0.15,0.80])*phi2(s)+xH[i,:]*phi3(s)\n C1v=np.vectorize(C1,otypes=[np.ndarray])\n C2v=np.vectorize(C2,otypes=[np.ndarray])\n ss=np.linspace(0,1,NC).reshape((-1, 1))\n C1o=np.array(C1v(ss))\n C2o=np.array(C2v(ss))\n C1plot=np.zeros((NC,2))\n C2plot=np.zeros((NC,2))\n for k in np.arange(0,NC):\n C1plot[k,0]=C1o[k][0][0]\n C1plot[k,1]=C1o[k][0][1]\n C2plot[k,0]=C2o[k][0][0]\n C2plot[k,1]=C2o[k][0][1]\n\n plt.figure(figsize=(2*M,M))\n plt.subplot(121)\n plt.plot(C1plot[:,0],C1plot[:,1],'r--')\n plt.plot(C2plot[:,0],C2plot[:,1],'g--')\n plt.scatter(xH[:,0], xH[:,1], s=300, c=\"b\", alpha=1.0, marker='.',\n label=\"Halton\")\n plt.scatter(C1(s_local)[0], C1(s_local)[1], s=300, c=\"r\", alpha=1.0, marker='d')\n plt.scatter(C2(s_local)[0], C2(s_local)[1], s=300, c=\"g\", alpha=1.0, marker='d')\n plt.axis([0,1,0,1])\n plt.title(r'Quasi-random points (Halton)')\n plt.grid(True)\n\n xHm=np.copy(xH)\n xHm[i,:]=C1(s_local)\n xHm[j,:]=C2(s_local)\n R=distance_matrix(xHm, xH)\n det_s_local=np.linalg.det(R)\n\n plt.subplot(122)\n plt.title(r'det(R_fixed)='+str(det_s_local))\n det_s=np.zeros_like(ss)\n for k, s in enumerate(ss):\n xHm[i,:]=C1plot[k,:]\n xHm[j,:]=C2plot[k,:]\n R=distance_matrix(xHm, xH)\n det_s[k]=np.linalg.det(R)\n\n plt.plot(ss,det_s,'-')\n plt.plot(s_local,det_s_local,'dk',markersize=16)\n plt.grid(True)\n\n plt.show()\n\ninteract(show_MC_theorem,s_local=(0,1,0.1))",
"Halton points vs pseudo-random points in 2D",
"def plot_random_vs_Halton(n=100):\n # Number of points to be generated\n # n=1000\n # I am reseting the sequence everytime I generated just to get the same points\n sequencer.reset()\n xH=np.array(sequencer.get(n))\n np.random.seed(0)\n xR=np.random.rand(n,2)\n\n plt.figure(figsize=(2*M,M))\n\n plt.subplot(121)\n plt.scatter(xR[:,0], xR[:,1], s=100, c=\"r\", alpha=1.0, marker='.',\n label=\"Random\", edgecolors='None')\n plt.axis([0,1,0,1])\n plt.title(r'Pseudo-random points')\n plt.grid(True)\n\n plt.subplot(122)\n plt.scatter(xH[:,0], xH[:,1], s=100, c=\"b\", alpha=1.0, marker='.',\n label=\"Halton\")\n plt.axis([0,1,0,1])\n plt.title(r'Quasi-random points (Halton)')\n plt.grid(True)\n\n plt.show()\n\ninteract(plot_random_vs_Halton,n=(20,500,20))",
"Interpolation with Distance Matrix from Halton points",
"def show_R(mH=10):\n fig= plt.figure(figsize=(2*M*mH/12,M*mH/12))\n ax = plt.gca()\n sequencer.reset()\n X=np.array(sequencer.get(mH))\n R=distance_matrix(X, X)\n plot_matrices_with_values(ax,R)\n\ninteract(show_R,mH=(2,20,1))",
"Defining a test function",
"# The function to be interpolated\nf=lambda x,y: 16*x*(1-x)*y*(1-y)\n\ndef showing_f(n=10, elev=40, azim=230):\n fig = plt.figure(figsize=(2*M,M))\n\n # Creating regular mesh\n Xr = np.linspace(0, 1, n)\n Xm, Ym = np.meshgrid(Xr,Xr)\n Z = f(Xm,Ym)\n\n # Wireframe\n plt.subplot(221,projection='3d')\n ax = fig.gca()\n ax.plot_wireframe(Xm, Ym, Z)\n ax.view_init(elev,azim)\n\n # imshow\n plt.subplot(222)\n #plt.imshow(Z,interpolation='none', extent=[0, 1, 0, 1])\n plt.contourf(Xm, Ym, Z, 20)\n plt.ylabel('$y$')\n plt.xlabel('$x$')\n plt.axis('equal')\n plt.xlim(0,1)\n plt.colorbar()\n\n # Contour plot\n plt.subplot(223)\n plt.contour(Xm, Ym, Z, 20)\n plt.axis('equal')\n plt.colorbar()\n \n # Surface\n plt.subplot(224,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n\n plt.show()\n",
"Let's look at $f$",
"elev_widget = IntSlider(min=0, max=180, step=10, value=40)\nazim_widget = IntSlider(min=0, max=360, step=10, value=230)\n\ninteract(showing_f,n=(5,50,5),elev=elev_widget,azim=azim_widget)\n\ndef eval_interp_distance_matrix(C,X,x,y):\n R=distance_matrix(X, np.array([[x,y]]))\n return np.dot(C,R)\n\ndef showing_f_interpolated(n=10, mH=10, elev=40, azim=230):\n fig = plt.figure(figsize=(2*M,M))\n\n ## Building distance matrix and solving linear system\n sequencer.reset()\n X=np.array(sequencer.get(mH))\n R=distance_matrix(X, X)\n Zs=f(X[:,0],X[:,1])\n C=np.linalg.solve(R,Zs)\n # f interpolated with distance function\n fIR=np.vectorize(eval_interp_distance_matrix, excluded=[0,1])\n\n # Creating regular mesh\n Xr = np.linspace(0, 1, n)\n Xm, Ym = np.meshgrid(Xr,Xr)\n Z = f(Xm,Ym)\n\n # Contour plot - Original Data\n plt.subplot(221)\n plt.contour(Xm, Ym, Z, 20)\n plt.colorbar()\n plt.axis('equal')\n plt.title(r'$f(x,y)$')\n\n # Surface - Original Data\n plt.subplot(222,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n plt.title(r'$f(x,y)$')\n\n # Contour plot - Interpolated Data\n plt.subplot(223)\n plt.contour(Xm, Ym, fIR(C,X,Xm,Ym), 20)\n plt.axis('equal')\n plt.colorbar()\n plt.scatter(X[:,0], X[:,1], s=100, c=\"r\", alpha=0.5, marker='.',\n label=\"Random\", edgecolors='None')\n plt.title(r'$fIR(x,y)$')\n\n # Surface - Interpolated Data\n plt.subplot(224,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym), rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n ax.set_zlim(0,1)\n plt.title(r'$fIR(x,y)$')\n\n plt.show()",
"The interpolation with distance matrix itself",
"interact(showing_f_interpolated,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget)",
"RBF interpolation",
"# Some RBF's\nlinear_rbf = lambda r,eps: r\ngaussian_rbf = lambda r,eps: np.exp(-(eps*r)**2)\nMQ_rbf = lambda r,eps: np.sqrt(1+(eps*r)**2)\nIMQ_rbf = lambda r,eps: 1./np.sqrt(1+(eps*r)**2)\n# The chosen one! But please try all of them!\nrbf = lambda r,eps: MQ_rbf(r,eps)\n\ndef eval_interp_rbf(C,X,x,y,eps):\n A=rbf(distance_matrix(X, np.array([[x,y]])),eps)\n return np.dot(C,A)\n\ndef showing_f_interpolated_rbf(n=10, mH=10, elev=40, azim=230, eps=1):\n fig = plt.figure(figsize=(2*M,M))\n\n # Creating regular mesh\n Xr = np.linspace(0, 1, n)\n Xm, Ym = np.meshgrid(Xr,Xr)\n Z = f(Xm,Ym)\n \n ########################################################\n ## Pseudo-random\n ## Building distance matrix and solving linear system\n np.random.seed(0)\n X=np.random.rand(mH,2)\n R=distance_matrix(X,X)\n A=rbf(R,eps)\n Zs=f(X[:,0],X[:,1])\n C=np.linalg.solve(A,Zs)\n # f interpolated with distance function\n fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])\n \n # Contour plot - Original Data\n plt.subplot(231)\n plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)\n plt.colorbar()\n plt.scatter(X[:,0], X[:,1], s=100, c=\"r\", alpha=0.5, marker='.',\n label=\"Random\", edgecolors='None')\n plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')\n\n # Surface - Original Data\n plt.subplot(232,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n ax.set_zlim(0,1)\n plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')\n \n # Contour plot - Original Data\n plt.subplot(233)\n plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)\n #plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])\n plt.axis('equal')\n plt.xlim(0,1)\n plt.colorbar()\n plt.scatter(X[:,0], X[:,1], s=100, c=\"k\", alpha=0.8, marker='.',\n label=\"Random\", edgecolors='None')\n plt.title(r'Error with Pseudo-random points')\n \n ########################################################\n \n ## HALTON (Quasi-random)\n ## Building distance matrix and solving linear system\n sequencer.reset()\n X=np.array(sequencer.get(mH))\n R=distance_matrix(X,X)\n A=rbf(R,eps)\n Zs=f(X[:,0],X[:,1])\n C=np.linalg.solve(A,Zs)\n # f interpolated with distance function\n fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])\n\n # Contour plot - Interpolated Data\n plt.subplot(234)\n plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)\n plt.colorbar()\n plt.scatter(X[:,0], X[:,1], s=100, c=\"r\", alpha=0.5, marker='.',\n label=\"Random\", edgecolors='None')\n plt.title(r'$f_{rbf}(x,y)$ with Halton points')\n\n # Surface - Interpolated Data\n plt.subplot(235,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n ax.set_zlim(0,1)\n plt.title(r'$f_{rbf}(x,y)$ with Halton points')\n\n # Contour plot - Original Data\n plt.subplot(236)\n plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)\n #plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])\n plt.axis('equal')\n plt.xlim(0,1)\n plt.colorbar()\n plt.scatter(X[:,0], X[:,1], s=100, c=\"k\", alpha=0.8, marker='.',\n label=\"Random\", edgecolors='None')\n plt.title(r'Error with Halton points')\n \n plt.show()\n\ninteract(showing_f_interpolated_rbf,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget,eps=(0.1,50,0.1))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Housebeer/Natural-Gas-Model | Data Analytics/Fitting curve.ipynb | mit | [
"Fitting curve to data\nWithin this notebook we do some data analytics on historical data to feed some real numbers into the model. Since we assume the consumer data to be resemble a sinus, due to the fact that demand is seasonal, we will focus on fitting data to this kind of curve.",
"import pandas as pd\nimport numpy as np\nfrom scipy.optimize import leastsq\nimport pylab as plt\n\nN = 1000 # number of data points\nt = np.linspace(0, 4*np.pi, N)\ndata = 3.0*np.sin(t+0.001) + 0.5 + np.random.randn(N) # create artificial data with noise\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_std*np.sin(t+guess_phase) + guess_mean\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*np.sin(t+x[1]) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_std*np.sin(t+est_phase) + est_mean\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()",
"import data for our model\nThis is data imported from statline CBS webportal.",
"importfile = 'CBS Statline Gas Usage.xlsx'\ndf = pd.read_excel(importfile, sheetname='Month', skiprows=1)\ndf.drop(['Onderwerpen_1', 'Onderwerpen_2', 'Perioden'], axis=1, inplace=True)\n\n#df\n\n# transpose\ndf = df.transpose()\n\n\n# provide headers\nnew_header = df.iloc[0]\ndf = df[1:]\ndf.rename(columns = new_header, inplace=True)\n\n\n#df.drop(['nan'], axis=0, inplace=True)\ndf\n\n\nx = range(len(df.index))\ndf['Via regionale netten'].plot(figsize=(18,5))\nplt.xticks(x, df.index, rotation='vertical')\nplt.show()\n",
"now let fit different consumer groups",
"#b = self.base_demand\n#m = self.max_demand\n#y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))\n#b = 603\n#m = 3615\n\nN = 84 # number of data points\nt = np.linspace(0, 83, N)\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Via regionale netten'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 2695.9075546 #2*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Via regionale netten')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))\n\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Elektriciteitscentrales'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Elektriciteitscentrales')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))\n\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Overige verbruikers'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\nguess_saving = .997\n\n# we'll use this to plot our first estimate. 
This might already be good enough for you\ndata_first_guess = (guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))) #* np.power(guess_saving,t)\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Overige verbruikers')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))",
"price forming\nIn order to estimate willingness to sell en willingness to buy we look at historical data over the past view years. We look at the DayAhead market at the TTF. Altough this data does not reflect real consumption necessarily",
"\ninputexcel = 'TTFDA.xlsx'\noutputexcel = 'pythonoutput.xlsx'\n\nprice = pd.read_excel(inputexcel, sheetname='Sheet1', index_col=0)\nquantity = pd.read_excel(inputexcel, sheetname='Sheet2', index_col=0)\n\nprice.index = pd.to_datetime(price.index, format=\"%d-%m-%y\")\nquantity.index = pd.to_datetime(quantity.index, format=\"%d-%m-%y\")\n\npq = pd.concat([price, quantity], axis=1, join_axes=[price.index])\npqna = pq.dropna()\n\nyear = np.arange(2008,2017,1)\n\ncoefficientyear = []\n\nfor i in year:\n x= pqna['Volume'].sort_index().ix[\"%s\"%i]\n y= pqna['Last'].sort_index().ix[\"%s\"%i]\n #plot the trendline\n plt.plot(x,y,'o')\n # calc the trendline\n z = np.polyfit(x, y, 1)\n p = np.poly1d(z)\n plt.plot(x,p(x),\"r--\", label=\"%s\"%i)\n plt.xlabel(\"Volume\")\n plt.ylabel(\"Price Euro per MWH\")\n plt.title('%s: y=%.10fx+(%.10f)'%(i,z[0],z[1]))\n # plt.savefig('%s.png' %i)\n plt.show()\n # the line equation:\n print(\"y=%.10fx+(%.10f)\"%(z[0],z[1]))\n # save the variables in a list\n coefficientyear.append([i, z[0], z[1]])\n\nlen(year)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jasag/Phytoliths-recognition-system | code/notebooks/Prototypes/BoW/Bag_of_Words.ipynb | bsd-3-clause | [
"Bag of Words\nBag of Words obtiene las características de una imagen, es decir, las formas, texturas, etc., como palabras [1]. Así, se describe la imagen en función de la frecuencia de cada una de estas palabras o características.\nEn este notebook entrenaremos un clasificador con la técnica Bag of Words. Lo cual se compone, brevemente explicado, de los siguientes pasos:\n\nCrear el conjunto de entrenamiento.\nCrear vocabulario.\nExtracción de características.\nConstrucción del vocabulario mediante Clustering\n\n\nEntrenar el clasificador.",
"%matplotlib inline \nimport numpy as np\nimport matplotlib.pyplot as plt",
"1. Cargamos el conjunto de entrenamiento\nLa manera en la que cargamos el conjunto de entrenamiento podemos observarlo en el Jupyter Notebook 1_Train_Set_Load.\n2. Crear vocabulario\nLa manera en la que creamos el voacabulario podemos observarlo en el Jupyter Notebook 2A_Daisy_Features y 2B_Clustering.",
"path = '../rsc/obj/'\n\nmini_kmeans_path = path + 'mini_kmeans.sav'\n\nmini_kmeans = pickle.load(open(mini_kmeans_path, 'rb'))",
"3. Obtención de Bag of Words",
"trainInstances = []\nfor imgFeatures in train_features:\n # extrae pertenencias a cluster\n pertenencias = mini_kmeans.predict(imgFeatures)\n # extrae histograma\n bovw_representation, _ = np.histogram(pertenencias, bins=500, range=(0,499))\n # añade al conjunto de entrenamiento final\n trainInstances.append(bovw_representation)\ntrainInstances = np.array(trainInstances)",
"Entrenamiento de un clasificador",
"from sklearn import svm\n\n\nclassifier = svm.SVC(kernel='linear', C=0.01)\ny_pred = classifier.fit(trainInstances, y_train)\n\nimport pickle # Módulo para serializar\n\npath = '../rsc/obj/'\n\nsvm_BoW_path = path + 'svm_BoW.sav'\n\npickle.dump(classifier, open(svm_BoW_path, 'wb'))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cliburn/sta-663-2017 | homework/01_Functions_Loops_Branching_Solutions.ipynb | mit | [
"%matplotlib inline\n\nimport matplotlib.pyplot as plt",
"Functions, loops and branching\nThe following exercises let you practice Python syntax. Do not use any packages not in the standard library except for matplotlib.pyplot which has been imported for you.\nIf you have not done much programming, these exercises will be challenging. Don't give up! For this first exercise, solutions are provided, but try not to refer to them unless you are desperate.\n1. Grading (20 points)\n\nWrite a function to assign grades to a student such that \nA = [90 - 100]\nB = [80 - 90)\nC = [65 - 80)\nD = [0, 65)\n\nwhere square brackets indicate inclusive boundaries and parentheses indicate exclusive boundaries. However, studens whose attendance is 12 days or fewer get their grade reduced by one (A to B, B to C, C to D, and D stays D). The function should take a score and an attendance as an argument and return A, B, C or D as appropriate.(10 points)\n- Count the number of students with each grade from the given scores. (10 points)",
"scores = [ 84, 76, 67, 23, 83, 23, 50, 100, 32, 84, 22, 41, 27,\n 29, 71, 85, 47, 77, 39, 25, 85, 69, 22, 66, 100, 92,\n 97, 46, 81, 88, 67, 20, 52, 62, 39, 36, 79, 54, 74,\n 64, 33, 68, 85, 69, 84, 30, 68, 100, 71, 33, 21, 95,\n 92, 72, 53, 50, 31, 82, 53, 68, 49, 37, 40, 21, 94,\n 30, 54, 58, 92, 95, 73, 80, 81, 56, 44, 22, 69, 70,\n 25, 50, 59, 32, 65, 79, 27, 62, 27, 31, 78, 88, 68,\n 53, 79, 69, 89, 38, 80, 55, 92, 55]\n\nattendances = [17, 19, 21, 14, 10, 20, 14, 9, 6, 21, 5, 23, 21, 4, 5, 21, 20,\n 2, 14, 14, 21, 22, 3, 0, 11, 0, 0, 4, 20, 14, 23, 16, 24, 5,\n 12, 11, 22, 20, 15, 23, 0, 20, 20, 6, 4, 14, 6, 18, 17, 0, 18,\n 6, 3, 19, 24, 7, 9, 15, 18, 10, 2, 15, 21, 2, 9, 21, 20, 11,\n 24, 23, 14, 22, 4, 12, 7, 19, 6, 18, 23, 6, 14, 6, 1, 12, 7,\n 11, 22, 21, 7, 22, 24, 4, 10, 17, 21, 15, 0, 20, 3, 20]\n\n# Your answer here\n\ndef grade(score, attendance):\n \"\"\"Function that returns grade based on score and attendance.\"\"\"\n if attendance > 12:\n if score >= 90:\n return 'A'\n elif score >= 80:\n return 'B'\n elif score >= 65:\n return 'C'\n else:\n return 'D'\n else:\n if score >= 90:\n return 'B'\n elif score >= 80:\n return 'C'\n else:\n return 'D'\n\ncounts = {}\nfor score, attendance in zip(scores, attendances):\n g = grade(score, attendance)\n counts[g] = counts.get(g, 0) + 1\n \nfor g in 'ABCD':\n print(g, counts[g])",
"2. The Henon map and chaos. (25 points)\nThe Henon map takes a pont $(x_n, y_n)$ in the plane and maps it to \n$$\nx_{n+1} = 1 - a x_n^2 + y_n \\\ny_{n+1} = b x_n\n$$\n\nWrite a function for the Henon map. It should take the current (x, y) value and return a new pair of coordinates. Set a=1.4 and b=0.3 as defatult arguments. What is the output for x=1 and y=1? (5 points)\nUsing a for loop that increments the value of $a$ from 1.1 to 1.4 in steps of 0.01, save the last 50 $x$-terms in the iterated Henon map stopping at $x_{1000}$ for each value of $a$. Use $x_0 = 1$ and $y_0 = 1$ for each value of $a$, leaveing fixed $b = 0.3$.(10 points)\nMake a scatter plot of each $(a, x)$ value with $a$ on the horizontal axis and $x$ on the vertical axis. Use the plt.scatter function with s=1 to make the plot. (10 points)",
"# Your answer here\n\ndef henon(x, y, a=1.4, b=0.3):\n \"\"\"Henon map.\"\"\"\n return (1 - a*x**2 + y, b*x)\n\nhenon(1, 1)\n\nn = 1000\nn_store = 50\naa = [i/100 for i in range(100, 141)]\n\nxxs = []\nfor a in aa:\n xs = []\n x, y = 1, 1\n for i in range(n - n_store):\n x, y = henon(x, y, a=a)\n for i in range(n_store):\n x, y = henon(x, y, a=a)\n xs.append(x)\n xxs.append(xs)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfor a, xs in zip(aa, xxs):\n plt.scatter([a]*n_store, xs, s=1)",
"3. Collatz numbers - Euler project problem 14. (25 points)\nThe following iterative sequence is defined for the set of positive integers:\nn → n/2 (n is even)\nn → 3n + 1 (n is odd)\nUsing the rule above and starting with 13, we generate the following sequence:\n13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1\nIt can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.\n\nWrite a function to generate the iterative sequence described (15 points)\nWhich starting number, under one million, produces the longest chain? (10 points)\n\nNOTE: Once the chain starts the terms are allowed to go above one million.",
"# Your answer here\n\ndef collatz(n):\n \"\"\"Returns Collatz sequence starting with n.\"\"\"\n\n seq = [n]\n while n != 1:\n if n % 2 == 0:\n n = n // 2\n else:\n n = 3*n + 1\n seq.append(n)\n return seq",
"Generator version",
"def collatz_count(n):\n \"\"\"Returns Collatz sequence starting with n.\"\"\"\n\n count = 1\n while n != 1:\n if n % 2 == 0:\n n = n // 2\n else:\n n = 3*n + 1\n count += 1\n return count\n\n%%time\n\nbest_n = 1\nbest_length = 1\nfor n in range(2, 1000000):\n length = len(collatz(n))\n if length > best_length:\n best_length = length\n best_n = n \n \nprint(best_n, best_length)",
"A simple optimization\nIgnore starting numbers that have been previously generated since they cannot be longer than the generating sequence.",
"%%time\n\nbest_n = 1\nbest_length = 1\nseen = set([])\n\nfor n in range(2, 1000000):\n if n in seen:\n continue\n \n seq = collatz(n)\n seen.update(seq)\n length = len(seq)\n if length > best_length:\n best_length = length\n best_n = n \n \nprint(best_n, best_length)",
"4. Reading Ulysses. (30 points)\n\nWrite a program to download the text of Ulysses (5 points)\nOpen the downloaded file and read the entire sequence into a single string variable called text, discarding the header information (i.e. text should start with \\n\\n*** START OF THIS PROJECT GUTENBERG EBOOK ULYSSES ***\\n\\n\\n\\n\\n). Also remove the footer information (i.e. text should not include anything from End of the Project Gutenberg EBook of Ulysses, by James Joyce). (10 points)\nFind and report the starting index (counting from zero) and length of the longest word in text. For simplicity, a word is defined here to be any sequence of characters with no space between the characters (i.e. a word may include punctuation or numbers, just not spaces). If there are ties, report the starting index and length of the last word found. For example, in \"the quick brow fox jumps over the lazy dog.\" the longest word is jumps which starts at index 19 and has length 5, and 'dog.' would be considered a 4-letter word (15 points).",
"# Your answer here\n\nimport urllib.request\nresponse = urllib.request.urlopen('http://www.gutenberg.org/files/4300/4300-0.txt')\ntext = response.read().decode()",
"```python\nAlternative version using requests library\nAlthough not officially part of the standard libaray,\nit is so widely used that the standard docs point to it\n\"The Requests package is recommended for a higher-level HTTP client interface.\"\nimport requests\nurl = 'http://www.gutenberg.org/files/4300/4300-0.txt'\ntext = requests.get(url).text\n```",
"with open('ulysses.txt', 'w') as f:\n f.write(text)\n\nwith open('ulysses.txt') as f:\n text = f.read()\n\nstart_string = '\\n\\n*** START OF THIS PROJECT GUTENBERG EBOOK ULYSSES ***\\n\\n\\n\\n\\n'\nstop_string = 'End of the Project Gutenberg EBook of Ulysses, by James Joyce'\nstart_idx = text.find(start_string)\nstop_idx = text.find(stop_string)\ntext = text[(start_idx + len(start_string)):stop_idx]\n\nbest_len = 0\nbest_word = ''\nfor word in set(text.split()):\n if len(word) > best_len:\n best_len = len(word)\n best_word = word\n\nbest_word",
"We are looking for the last word found, so search backwards from the end with rfind",
"idx = text.rfind(best_word,)\n\nidx, best_len\n\ntext[idx:(idx+best_len)]"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
planetlabs/notebooks | jupyter-notebooks/vector/shapefile.ipynb | apache-2.0 | [
"Drawing Shapefiles in Python with Matplotlib\nIf you are using python for spatial processing, it is sometimes useful to plot your data. In these cases, it may also be helpful to render a simple map to help locate the data. This notebook will show you how plot a shapefile using pyplot for this purpose.\nDependencies\nIf you don't have pyshp or pyproj in your jupyter environement, install them now:",
"!pip install pyshp pyproj",
"Loading the Shapefile\nThis notebook uses the 1:110m Natural Earth countries shapefile.\nAfter loading the shapefile, we also fetch the index of the \"MAPCOLOR7\" field so that we can paint different countries different colors.\nYou can also use your own shapefile - just copy it to the same folder as this notebook, and update the filename passed to shapefile.Reader. You may also need to update mapcolor_idx field to reference an attribute that exists in your shapefile.",
"import shapefile\n\nsf = shapefile.Reader(\"ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp\")\n\nmapcolor_idx = [field[0] for field in sf.fields].index(\"MAPCOLOR7\")-1\nmapcolor_map = [\n \"#000000\",\n \"#fbb4ae\",\n \"#b3cde3\",\n \"#ccebc5\",\n \"#decbe4\",\n \"#fed9a6\",\n \"#ffffcc\",\n \"#e5d8bd\",\n]",
"Reprojection\nThe countries shapefile is in the WGS 84 projetion (EPSG:4326). We will reproject to Web Mercator (EPSG:3857) to demonstrate how reprojection. To do this, we construct a Transformer using pyproj.\nAt the same time, set up the rendering bounds. For simplicity, define the bounds in lat/lon and reproject to meters. Note that pyplot expects bounds in minx,maxx,miny,maxy order, while pyproj transform works on in x,y pairs.",
"import pyproj\n\ntransformer = pyproj.Transformer.from_crs('EPSG:4326', 'EPSG:3857', always_xy=True)\n\nBOUNDS = [-180, 180, -75, 80]\nBOUNDS[0],BOUNDS[2] = transformer.transform(BOUNDS[0],BOUNDS[2])\nBOUNDS[1],BOUNDS[3] = transformer.transform(BOUNDS[1],BOUNDS[3])",
"Plotting the Data\nTo display the shapefile, iterate through the shapes in the shapefile. Fetch the mapcolor attribute for each shape and use it to determine the fill color. Collect the points for each shape, and transform them to EPSG:3857 using the transformer constructed above. Plot each shape with pyplot using fill for the fill and plot for the outline.\nFinaly, set the bounds on the plot.",
"%matplotlib notebook\nimport matplotlib.pyplot as plt\n\nfor shape in sf.shapeRecords():\n for i in range(len(shape.shape.parts)):\n fillcolor=mapcolor_map[shape.record[mapcolor_idx]]\n \n i_start = shape.shape.parts[i]\n if i==len(shape.shape.parts)-1:\n i_end = len(shape.shape.points)\n else:\n i_end = shape.shape.parts[i+1]\n points = list(transformer.itransform(shape.shape.points[i_start:i_end]))\n\n x = [i[0] for i in points]\n y = [i[1] for i in points]\n #Poly Fill\n plt.fill(x,y, facecolor=fillcolor, alpha=0.8)\n #Poly line\n plt.plot(x,y, color='#000000', alpha=1, linewidth=1)\n \nax = plt.axis(BOUNDS)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tclaudioe/Scientific-Computing | SC5/04 Numerical Example of Spectral Differentiation.ipynb | bsd-3-clause | [
"INF-510, v0.31, Claudio Torres, [email protected]. DI-UTFSM\nTextbook: Lloyd N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000\nMore on Spectral Matrices",
"import matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nimport scipy.sparse.linalg as sp\nfrom scipy import interpolate\nimport scipy as spf\nfrom sympy import *\nimport sympy as sym\nfrom scipy.linalg import toeplitz\nfrom ipywidgets import interact\nfrom ipywidgets import IntSlider\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\n# The variable M is used for changing the default size of the figures\nM=5\nimport ipywidgets as widgets\nimport matplotlib as mpl\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['axes.labelsize'] = 20\nmpl.rcParams['xtick.labelsize'] = 14\nmpl.rcParams['ytick.labelsize'] = 14\nsym.init_printing()",
"Chebyshev differentiation matrix",
"def cheb(N):\n if N==0:\n D=0\n x=1\n return D,x\n x = np.cos(np.pi*np.arange(N+1)/N)\n c=np.hstack((2,np.ones(N-1),2))*((-1.)**np.arange(N+1))\n X=np.tile(x,(N+1,1)).T\n dX=X-X.T\n D = np.outer(c,1./c)/(dX+np.eye(N+1))\n D = D - np.diag(np.sum(D.T,axis=0))\n return D,x",
"Understanding how the np.FFT does the FFT",
"def show_spectral_derivative_example(N):\n x=np.linspace(2*np.pi/N,2*np.pi,N)\n u = lambda x: np.sin(x)\n up = lambda x: np.cos(x)\n \n #u = lambda x: np.sin(x)*np.cos(x)\n #up = lambda x: np.cos(x)*np.cos(x)-np.sin(x)*np.sin(x)\n \n v=u(x)\n K=np.fft.fftfreq(N)*N\n iK=1j*K\n vhat=np.fft.fft(v)\n \n W=iK*vhat\n W[int(N/2)]=0\n\n vp=np.real(np.fft.ifft(W))\n\n plt.figure(figsize=(10,10))\n plt.plot(x,v,'ks-',markersize=12,markeredgewidth=3,label='$\\sin(x)$',linewidth=3)\n plt.plot(x,up(x),'b.-',markersize=24,markeredgewidth=3,label='Exact derivative: $\\cos(x)$',linewidth=3)\n plt.plot(x,np.real(vp),'rx-',markersize=10,markeredgewidth=3,label='spectral derivative',linewidth=3)\n plt.grid(True)\n plt.legend(loc='best')\n plt.xlabel('$x$')\n plt.show()\n \n print('v :',v)\n print('vhat :',vhat)\n print('K :',K)\n print('W :',W)\n print('vprime: ',vp)\nwidgets.interact(show_spectral_derivative_example,N=(2,40,2))\n\ndef spectralDerivativeByFFT(v,nu=1):\n if not np.all(np.isreal(v)):\n raise ValueError('The input vector must be real')\n N=v.shape[0]\n K=np.fft.fftfreq(N)*N\n iK=(1j*K)**nu\n v_hat=np.fft.fft(v)\n w_hat=iK*v_hat\n if np.mod(nu,2)!=0:\n w_hat[int(N/2)]=0\n return np.real(np.fft.ifft(w_hat))\n\ndef my_D2_spec_2pi(N):\n h=(2*np.pi/N)\n c=np.zeros(N)\n j=np.arange(1,N)\n c[0]=-np.pi**2/(3.*h**2)-1./6.\n c[1:]=-0.5*((-1)**j)/(np.sin(j*h/2.)**2)\n D2=toeplitz(c)\n return D2",
"Fractional derivative application",
"def fractional_derivative(N=10,nu=1):\n x=np.linspace(2*np.pi/N,2*np.pi,N)\n u = lambda x: np.sin(x)\n up = lambda x: np.cos(x)\n v = u(x)\n vp=spectralDerivativeByFFT(v,nu)\n plt.figure(figsize=(10,10))\n plt.plot(x,v,'ks-',markersize=12,markeredgewidth=3,label='$\\sin(x)$',linewidth=3)\n plt.plot(x,up(x),'b.-',markersize=24,markeredgewidth=3,label='Exact derivative: $\\cos(x)$',linewidth=3)\n plt.plot(x,np.real(vp),'rx-',markersize=10,markeredgewidth=3,label=r'$\\frac{d^{\\nu}u}{dx^{\\nu}}$',linewidth=3)\n plt.grid(True)\n plt.legend(loc='best')\n plt.xlabel('$x$')\n plt.show()\nd_nu=0.1\nwidgets.interact(fractional_derivative,N=(4,100),nu=(d_nu,1,d_nu))",
"Example 1: Computing Eigenvalues\nWe are solving: $-u''(x)+x^2\\,u(x)=\\lambda\\, u(x)$ on $\\mathbb{R}$",
"L=8.0\ndef show_example_1(N=6):\n h=2*np.pi/N\n x=np.linspace(h,2*np.pi,N)\n x=L*(x-np.pi)/np.pi\n D2=(np.pi/L)**2*my_D2_spec_2pi(N)\n w, v = np.linalg.eig(-D2+np.diag(x**2))\n # eigenvalues = np.sort(np.linalg.eigvals(-D2+np.diag(x**2)))\n ii = np.argsort(w)\n w=w[ii]\n v=v[:,ii]\n \n plt.figure(figsize=(2*M,2*M))\n\n for i in np.arange(1,5):\n plt.subplot(2,2,i)\n plt.title(r'$u_{:d}(x),\\, \\lambda_{:d}={:f}$'.format(i,i,w[i-1]))\n plt.plot(x,v[:,i],'kx',markersize=16,markeredgewidth=3)\n plt.grid(True)\n plt.show()\nwidgets.interact(show_example_1,N=(6,100,1))",
"Example 2: Solving ODE\nSolving the following BVP $u_{xx}=\\exp(4\\,x)$ with $u(-1)=u(1)=0$",
"def example_2(N=16):\n D,x = cheb(N)\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n f = np.exp(4*x[1:-1])\n u = np.linalg.solve(D2,f)\n u = np.concatenate(([0],u,[0]),axis=0)\n\n plt.figure(figsize=(M,M))\n plt.plot(x,u,'k.')\n xx = np.linspace(-1,1,1000)\n P = np.polyfit(x, u, N)\n uu = np.polyval(P, xx)\n plt.plot(xx,uu,'b-')\n plt.grid(True)\n exact = (np.exp(4*xx)-np.sinh(4.)*xx-np.cosh(4.))/16.\n plt.title('max error= '+str(np.linalg.norm(exact-uu,np.inf)))\n plt.ylim([-2.5,0.5])\n plt.show()\ninteract(example_2,N=(2,35))",
"Example 3: Solving ODE\nSolving the following BVP $u_{xx}=\\exp(u)$ with $u(-1)=u(1)=0$",
"def example_3(N=16,IT=20):\n D,x = cheb(N)\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n\n u = np.zeros(N-1)\n for i in np.arange(IT):\n u_new = np.linalg.solve(D2,np.exp(u))\n change = np.linalg.norm(u_new-u,np.inf)\n u = u_new\n\n u = np.concatenate(([0],u,[0]),axis=0)\n\n plt.figure(figsize=(M,M))\n plt.plot(x,u,'k.')\n xx = np.linspace(-1,1,1000)\n P = np.polyfit(x, u, N)\n uu = np.polyval(P, xx)\n plt.plot(xx,uu,'b-')\n plt.grid(True)\n plt.title('IT= '+str(IT)+' u(0)= '+str(u[int(N/2)]))\n plt.ylim([-0.5,0.])\n plt.show()\n\ninteract(example_3,N=(2,30),IT=(0,100))",
"Example 4: Eigenvalue BVP\nSolve $u_{xx}=\\lambda\\,u$ with $u(-1)=u(1)=0$",
"N_widget = IntSlider(min=2, max=50, step=1, value=10)\nj_widget = IntSlider(min=1, max=49, step=1, value=5)\n\ndef update_j_range(*args):\n j_widget.max = N_widget.value-1\nj_widget.observe(update_j_range, 'value')\n\ndef example_4(N=36,j=5):\n D,x = cheb(N)\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n\n lam, V = np.linalg.eig(D2)\n\n ii=np.argsort(-np.real(lam))\n\n lam=lam[ii]\n V=V[:,ii]\n\n u = np.concatenate(([0],V[:,j-1],[0]),axis=0)\n\n plt.figure(figsize=(2*M,M))\n plt.plot(x,u,'k.')\n xx = np.linspace(-1,1,1000)\n P = np.polyfit(x, u, N)\n uu = np.polyval(P, xx)\n plt.plot(xx,uu,'b-')\n plt.grid(True)\n plt.title('eig '+str(j)+' = '+str(lam[j-1]*4./(np.pi**2))+' pi**2/4'+' ppw '+str(4*N/(np.pi*j)))\n plt.show()\ninteract(example_4,N=N_widget,j=j_widget)",
"Example 5: (2D) Poisson equation $u_{xx}+u_{yy}=f$ with u=0 on $\\partial\\Gamma$",
"elev_widget = IntSlider(min=0, max=180, step=10, value=40)\nazim_widget = IntSlider(min=0, max=360, step=10, value=230)\n\ndef example_5(N=10,elev=40,azim=230):\n D,x = cheb(N)\n y=x\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n\n xx,yy=np.meshgrid(x[1:-1],y[1:-1])\n xx = xx.flatten()\n yy = yy.flatten()\n\n f = 10*np.sin(8*xx*(yy-1))\n\n I = np.eye(N-1)\n # The Laplacian\n L = np.kron(I,D2)+np.kron(D2,I)\n\n u = np.linalg.solve(L,f)\n\n fig = plt.figure(figsize=(2*M,2*M))\n\n # The spy of the Laplacian\n plt.subplot(221)\n plt.spy(L)\n\n # Plotting the approximation and its interpolation\n\n # The numerical approximation\n uu = np.zeros((N+1,N+1))\n uu[1:-1,1:-1]=np.reshape(u,(N-1,N-1))\n xx,yy=np.meshgrid(x,y)\n value = uu[int(N/4),int(N/4)]\n\n plt.subplot(222,projection='3d')\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n ax.plot_wireframe(xx, yy, uu)\n ax.view_init(elev,azim)\n\n # The INTERPOLATED approximation\n\n N_fine=4*N\n finer_mesh=np.linspace(-1,1,N_fine)\n xxx,yyy=np.meshgrid(finer_mesh,finer_mesh)\n uuu = spf.interpolate.interp2d(xx, yy, uu, kind='linear')\n uuu_n=np.reshape(uuu(finer_mesh,finer_mesh),(N_fine,N_fine))\n\n plt.subplot(224,projection='3d')\n ax = fig.gca()\n surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n #ax.plot_wireframe(xxx, yyy, uuu_n)\n fig.colorbar(surf)\n ax.view_init(elev,azim)\n \n plt.subplot(223)\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n extent = [x[0], x[-1], y[0], y[-1]]\n plt.imshow(uu, extent=extent)\n plt.ylabel('$y$')\n plt.xlabel('$x$')\n plt.colorbar()\n \n plt.show()\ninteract(example_5,N=(3,20),elev=elev_widget,azim=azim_widget)",
"Example 6: (2D) Helmholtz equation $u_{xx}+u_{yy}+k^2\\,u=f$ with u=0 on $\\partial\\Gamma$",
"elev_widget = IntSlider(min=0, max=180, step=10, value=40)\nazim_widget = IntSlider(min=0, max=360, step=10, value=230)\n\ndef example_6(N=10,elev=40,azim=230,k=9,n_contours=8):\n D,x = cheb(N)\n y=x\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n\n xx,yy=np.meshgrid(x[1:-1],y[1:-1])\n xx = xx.flatten()\n yy = yy.flatten()\n\n f = np.exp(-10.*((yy-1.)**2+(xx-.5)**2))\n\n I = np.eye(N-1)\n # The Laplacian\n L = np.kron(I,D2)+np.kron(D2,I)+k**2*np.eye((N-1)**2)\n\n u = np.linalg.solve(L,f)\n\n fig = plt.figure(figsize=(2*M,2*M))\n\n # Plotting the approximation and its interpolation\n\n # The numerical approximation\n uu = np.zeros((N+1,N+1))\n uu[1:-1,1:-1]=np.reshape(u,(N-1,N-1))\n xx,yy=np.meshgrid(x,y)\n value = uu[int(N/4),int(N/4)]\n\n plt.subplot(221,projection='3d')\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n ax.plot_wireframe(xx, yy, uu)\n ax.view_init(elev,azim)\n\n plt.subplot(222)\n plt.contour(xx, yy, uu, n_contours,\n colors='k', # negative contours will be dashed by default\n )\n \n # The INTERPOLATED approximation\n N_fine=4*N\n finer_mesh=np.linspace(-1,1,N_fine)\n xxx,yyy=np.meshgrid(finer_mesh,finer_mesh)\n uuu = spf.interpolate.interp2d(xx, yy, uu, kind='linear')\n uuu_n=np.reshape(uuu(finer_mesh,finer_mesh),(N_fine,N_fine))\n\n plt.subplot(223,projection='3d')\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n ax.plot_wireframe(xxx, yyy, uuu_n)\n ax.view_init(elev,azim)\n \n plt.subplot(224)\n plt.contour(xxx, yyy, uuu_n, n_contours,\n colors='k', # negative contours will be dashed by default\n )\n \n plt.show()\ninteract(example_6,N=(3,30),elev=elev_widget,azim=azim_widget,k=(1,20),n_contours=(5,12))",
"Example 7: (2D) $-(u_{xx}+u_{yy})=\\lambda\\,u$ with u=0 on $\\partial\\Gamma$",
"elev_widget = IntSlider(min=0, max=180, step=10, value=40)\nazim_widget = IntSlider(min=0, max=360, step=10, value=230)\nN_widget = IntSlider(min=2, max=30, step=1, value=10)\nj_widget = IntSlider(min=1, max=20, step=1, value=1)\n\ndef update_j_range(*args):\n j_widget.max = (N_widget.value-1)**2\nj_widget.observe(update_j_range, 'value')\n\ndef example_7(N=10,elev=40,azim=230,n_contours=8,j=1):\n D,x = cheb(N)\n y=x\n D2 = np.dot(D,D)\n D2 = D2[1:-1,1:-1]\n\n xx,yy=np.meshgrid(x[1:-1],y[1:-1])\n xx = xx.flatten()\n yy = yy.flatten()\n\n I = np.eye(N-1)\n # The Laplacian\n L = (np.kron(I,-D2)+np.kron(-D2,I))\n\n lam, V = np.linalg.eig(L)\n\n ii=np.argsort(np.real(lam))\n lam=lam[ii]\n V=V[:,ii]\n\n fig = plt.figure(figsize=(2*M,M))\n\n # Plotting the approximation and its interpolation\n\n # The numerical approximation\n vv = np.zeros((N+1,N+1))\n vv[1:-1,1:-1]=np.reshape(np.real(V[:,j-1]),(N-1,N-1))\n xx,yy=np.meshgrid(x,y)\n\n plt.subplot(221,projection='3d')\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n ax.plot_wireframe(xx, yy, vv)\n plt.title('eig '+str(j)+'/ (pi/2)**2= '+str(lam[j-1]/((np.pi/2)**2)))\n ax.view_init(elev,azim)\n\n plt.subplot(222)\n plt.contour(xx, yy, vv, n_contours,\n colors='k', # negative contours will be dashed by default\n )\n\n # The INTERPOLATED approximation\n N_fine=4*N\n finer_mesh=np.linspace(-1,1,N_fine)\n xxx,yyy=np.meshgrid(finer_mesh,finer_mesh)\n vvv = spf.interpolate.interp2d(xx, yy, vv, kind='linear')\n vvv_n=np.reshape(vvv(finer_mesh,finer_mesh),(N_fine,N_fine))\n\n plt.subplot(223,projection='3d')\n ax = fig.gca()\n #surf = ax.plot_surface(xxx, yyy, uuu_n, rstride=1, cstride=1, cmap=cm.coolwarm,\n # linewidth=0, antialiased=False)\n ax.plot_wireframe(xxx, yyy, vvv_n)\n ax.view_init(elev,azim)\n\n plt.subplot(224)\n plt.contour(xxx, yyy, vvv_n, n_contours,\n colors='k', # negative contours will be dashed by default\n )\n\n plt.show()\ninteract(example_7,N=N_widget,elev=elev_widget,azim=azim_widget,n_contours=(5,12),j=j_widget)",
"In-class work\n[Flash back] Implement Program 6, 7 and 12.\n[Today] Implement Program 19, 20, 21, 22 and 23."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ernestyalumni/CompPhys | crack/BigO.ipynb | apache-2.0 | [
"BigO, Complexity, Time Complexity, Space Complexity, Algorithm Analysis\ncf. pp. 40 McDowell, 6th Ed. VI BigO\ncf. 2.2. What Is Algorithm Analysis?",
"def sumOfN(n):\n theSum = 0\n for i in range(1,n+1):\n theSum = theSum + i\n \n return theSum\n \nprint(sumOfN(10))\n\ndef foo(tom):\n fred = 0 \n for bill in range(1,tom+1):\n barney = bill\n fred = fred + barney\n \n return fred\n\nprint(foo(10))\n\nimport time \n\ndef sumOfN2(n):\n start = time.time()\n \n theSum = 0 # 1 assignment \n for i in range(1,n+1):\n theSum = theSum + i # n assignments \n \n end = time.time()\n \n return theSum, end-start # (1 + n) assignements\n\nfor i in range(5):\n print(\"Sum is %d required %10.7f seconds \" % sumOfN2(10000) )\n\nfor i in range(5):\n print(\"Sum is %d required %10.7f seconds \" % sumOfN2(100000) )\n\nfor i in range(5):\n print(\"Sum is %d required %10.7f seconds \" % sumOfN2(1000000) )\n\ndef sumOfN3(n):\n start=time.time()\n theSum = (n*(n+1))/2\n end=time.time()\n return theSum, end-start\n\nprint(sumOfN3(10))\n\nfor i in range(5):\n print(\"Sum is %d required %10.7f seconds \" % sumOfN3(10000*10**(i)) )",
"A good basic unit of computation for comparing the summation algorithms might be to count the number of assignment statements performed.",
"def findmin(X):\n start=time.time()\n minval= X[0]\n for ele in X:\n if minval > ele:\n minval = ele\n end=time.time()\n return minval, end-start\n\ndef findmin2(X):\n start=time.time()\n L = len(X)\n overallmin = X[0]\n for i in range(L):\n minval_i = X[i]\n for j in range(L):\n if minval_i > X[j]:\n minval_i = X[j]\n if overallmin > minval_i:\n overallmin = minval_i\n end=time.time()\n return overallmin, end-start\n\nimport random\n\nfor i in range(5):\n print(\"findmin is %d required %10.7f seconds\" % findmin( [random.randrange(1000000) for _ in range(10000*10**i)] ) )\n\nfor i in range(5):\n print(\"findmin2 is %d required %10.7f seconds\" % findmin2( [random.randrange(1000000) for _ in range(10000*10**i)] ) )",
"cf. 2.4. An Anagram Detection Example",
"def anagramSolution(s1,s2):\n \"\"\" @fn anagramSolution\n @details 1 string is an anagram of another if the 2nd is simply a rearrangement of the 1st\n 'heart' and 'earth' are anagrams\n 'python' and 'typhon' are anagrams\n \"\"\"\n A = list(s2) # Python strings are immutable, so make a list\n pos1 = 0 \n stillOK = True\n while pos1 < len(s1) and stillOK:\n pos2 = 0 \n found = False\n while pos2 < len(A) and not found:\n if s1[pos1] == A[pos2]: # given s1[pos1], try to find it in A, changing pos2\n found = True\n else:\n pos2 = pos2+1\n \n if found:\n A[pos2] = None\n else:\n stillOK = False\n pos1 = pos1 + 1\n return stillOK\n\nanagramSolution(\"heart\",\"earth\")\n\nanagramSolution(\"python\",\"typhon\")\n\nanagramSolution(\"anagram\",\"example\")",
"2.4.2. Sort and Compare Solution 2",
"def anagramSolution2(s1,s2):\n "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mikekestemont/ghent1516 | Chapter 8 - Parsing XML.ipynb | mit | [
"Parsing XML in Python\nXML in a nutshell\nSo far, we have primarily dealt with unstructured data in this course: we have learned to read, for example, the contents of plain text files in the previous chapters. Such raw textual data is often called 'unstructured', because it lacks annotations that make explicit the function or meaning of the words in the documents. If we read the contents of a play as a plain text, for instance, we don't have a clue to which scene or act a particular utterance belongs, or by which character the utterance was made. Nowadays, it is therefore increasingly common to add annotations to a text that give us a better insight into the\nsemantics and structure of the data. Adding annotations to texts (e.g. scholarly editions of Shakepeare), is typically done using some form of markup. Various markup-languages exist that allow us to provide structured and unambiguous annotations to a (digital) text. XML or the \"eXtensible Mark-up Language\" is currently one of the dominant standards to encode texts in the Digital Humanities. In this chapter, we'll assume that have at least some notion of XML, although we will have a quick refresher below. XML is a pretty straightforward mark-up language: let's have a look at Shakepeare's well-known sonnet 18 encoded in XML (you can find this poem also as sonnet.txt in your data/TEI folder).\n<?xml version=\"1.0\"?>\n<sonnet author=\"William Shakepeare\" year=\"1609\">\n<line n=\"1\">Shall I compare thee to a summer's <rhyme>day</rhyme>?</line>\n <line n=\"2\">Thou art more lovely and more <rhyme>temperate</rhyme>:</line>\n <line n=\"3\">Rough winds do shake the darling buds of <rhyme>May</rhyme><break n=\"3\"/>,</line>\n <line n=\"4\">And summer's lease hath all too short a <rhyme>date</rhyme>:</line>\n <line n=\"5\">Sometime too hot the eye of heaven <rhyme>shines</rhyme>,</line>\n <line n=\"6\">And often is his gold complexion <rhyme>dimm'd</rhyme>;</line>\n <line n=\"7\">And every fair from fair sometime <rhyme>declines</rhyme>,</line>\n <line n=\"8\">By chance, or nature's changing course, <rhyme>untrimm'd</rhyme>;</line>\n <volta/>\n <line n=\"9\">But thy eternal summer shall not <rhyme>fade</rhyme></line>\n <line n=\"10\">Nor lose possession of that fair thou <rhyme>ow'st</rhyme>;</line>\n <line n=\"11\">Nor shall Death brag thou wander'st in his <rhyme>shade</rhyme>,</line>\n <line n=\"12\">When in eternal lines to time thou <rhyme>grow'st</rhyme>;</line>\n <line n=\"13\">So long as men can breathe or eyes can <rhyme>see</rhyme>,</line>\n <line n=\"14\">So long lives this, and this gives life to <rhyme>thee</rhyme>.</line>\n</sonnet>\nThe first line in our Shakespearean example (<?xml version=\"1.0\"?>) declares which exact version of XML we are using, in our case version 1. As you can see at a glance, XML typically encodes pieces of text using start tags (e.g. <line>, <rhyme>) and end tags (</line>, </rhyme>). Each start tag must correspond to exactly one end tag, although XML does allow for \"solo\" elements such the <volta/> tag after line 8 in this example. Interestingly, XML tag are not allowed to overlap. 
The following line would therefore not constitue valid XML:\n<line><sentence>This is a </line><line>sentence.</sentence></line>\nThe following two lines would be valid alternatives for this example, because here the tags don't overlap:\n<sentence><line>This is a </line><line>sentence.</line></sentence>\n<sentence>This is a <linebreak/>sentence.</sentence>\nThis limitation has to with the fact that XML is a hierarchical markup language: it assumes that we can describe a text document as a tree of branching nodes. In this tree, elements cannot have more than one direct parent element (otherwise the hearchy would be ambiguous). The one exception is the so-called root element, which as the highest node in tree does not have a parent element itself. Logically speaking, all this entails that a valid XML document can only have a single root element. Note that all non-root elements can have as many siblings as we wish. All the <line> elements in our sonnet, for example, are siblings, in the sense that they have in common a direct parent element, i.e. the <sonnet> tag. Finally, note that we can add extra information to our elements using so-called attributes. The n attribute, for example, give us the line number for each line in the sonnet, surrounded by double quotation marks. The <sonnet> element illlustrates that we can add as many attributes as we want to a tag. \nXML and Python\nResearchers in the Digital Humanities nowadays often put a lot of time and effort in creating digital data sets for their research, such as scholarly editions with a rich markup encoded in XML. Nevertheless, once this data has been annotated, it can be tricky to get your texts out again, so to speak, and fully exploit the information which you painstakingly encoded. Therefore, it is crucial to be able to parse XML in an efficient manner. Luckily, Python provides all the tools necessary to do this. We will make use of the lxml library, which is part of the Anaconda Python distribution:",
"from lxml import etree",
"For the record, we should mention that there exist many other libraries in Python to parse XML, such as minidom or BeautifulSoup which is an interesting library, when you intend to scrape data from the web. While these might come with more advanced bells and whistles than lxml, they can also be more complex to use, which is why we stick to lxml in this course. Let us now import our sonnet in Python, which has been saved in the file sonnet18.xml:",
"tree = etree.parse(\"data/TEI/sonnet18.xml\")\nprint(tree)",
"Python has now read and parsed our xml-file via the etree.parse() function. We have stored our XML tree structure, which is returned by the parse() function, in the tree variable, so that we can access it later. If we print tree as such, we don't get a lot of useful information. To have a closer look at the XML in a printable text version, we need to call the tostring() method on the tree before printing it.",
"print(etree.tostring(tree))",
"You'll notice that we actually get a string in a raw format: if we want to display it properly, we have to decode it:",
"print(etree.tostring(tree).decode())",
"If we have more complex data, it might also be to set the pretty_print parameter to True, to obtain a more beautifully formatted string, with Python taking care of indendation etc. In our example, it doesn't change much:",
"print(etree.tostring(tree, pretty_print=True).decode())",
"Now let us start processing the contents of our file. Suppose that we are not really interested in the full hierarchical structure of our file, but just in the rhyme words occuring in it. The high-level function interfind() allows us to easily select all rhyme-element in our tree, regardless of where exactly they occur. Because this functions returns a list of nodes, we can simply loop over them:",
"for node in tree.iterfind(\"//rhyme\"):\n print(node)",
"Note that the search expression (\"//rhyme\") has two forward slashes before our actual search term. This is in fact XPath syntax, and the two slashes indicate that the search term can occur anywhere (e.g. not necessarily among a node's direct children). Unfortunately, printing the nodes themselves again isn't really insightful: in this way, we only get rather prosaic information of the Python objects holding our rhyme nodes. We can use the .tag property to print the tag's name:",
"for node in tree.iterfind(\"//rhyme\"):\n print(node.tag)",
"To extract the actual rhyme word contained in the element, we can use the .text property of the nodes:",
"for node in tree.iterfind(\"//rhyme\"):\n print(node.text)",
"That looks better!\nJust now, we have been iterating over our rhyme elements in simple order of appearance: we haven't been really been exploiting the hierarchy of our XML file yet. Let's see now how we can navigate our xml tree. Let's first select our root node: there's a function for that!",
"root_node = tree.getroot()\nprint(root_node.tag)",
"We can access the value of the attributes of an element via .attrib, just like we would access the information in a Python dictionary, that is via key-based indexing. We know that our sonnet element, for instance, should have an author and year attribute. We can inspect the value of these as follows:",
"print(root_node.attrib[\"author\"])\nprint(root_node.attrib[\"year\"])",
"If we wouldn't know which attributes were in fact available for a node, we could also retrieve the attribute names by calling keys() on the attributes property of a node, just like we would do with a regular dictionary:",
"for key in root_node.attrib.keys():\n print(root_node.attrib[key])",
"So far so good. Now that we have selected our root element, we can start drilling down our tree's structure. Let us first find out how many child nodes our root element has:",
"print(len(root_node))",
"Our root node turns out to have 15 child nodes, which makes a lot of sense, since we have 14 line elements and the volta. We can actually loop over these children, just as we would loop over any other list:",
"for node in root_node:\n print(node.tag)",
"To extract the actual text in our lines, we need one additional for-loop which will allow us to iteratre over the pieces of text under each line:",
"for node in root_node:\n if node.tag != \"volta\":\n line_text = \"\"\n for text in node.itertext():\n line_text = line_text + text\n print(line_text)\n else:\n print(\"=== Volta found! ===\")",
"Note that we get an empty line at the volta, since there isn't any actual text associated with this empty tag.\nQuiz!\nCould you now write your own code, which iterates over the lines in our tree and prints out the line number based on the n attribute of the line element?",
"for node in root_node:\n if node.tag == \"line\":\n print(node.attrib[\"n\"])",
"Manipulating XML in Python\nSo far, we have parsed XML in Python, we haven't dealt with creating or manipulating XML in Python. Luckily, adapting or creating XML is fairly straightforward in Python. Let's first try and change the author's name in the author attribute of the sonnet. Because this boils down to manipulating a Python dictionary, the syntax should already be familiar to you:",
"root_node = tree.getroot()\nroot_node.attrib[\"author\"] = \"J.K. Rowling\"\nroot_node.attrib[\"year\"] = \"2015\"\nroot_node.attrib[\"new_element\"] = \"dummy string!\"\nroot_node.attrib[\"place\"] = \"maynooth\"\nprint(etree.tostring(root_node).decode())",
"That was easy, wasn't it? Did you see we can just add new attributes to an element? Just take care only to put strings as attribute values: since we are working with XML, Python won't accept e.g. numbers and you will get an error:",
"root_node.attrib[\"year\"] = \"2015\"",
"Adding whole elements is fairly easy too. Let's add a single dummy element (<break/>) to indicate a line break at the end of each line. Importantly, we have to create this element inside our loop, before we can add it:",
"break_el = etree.Element(\"break\")\nbreak_el.attrib[\"author\"] = \"Mike\"\nprint(etree.tostring(break_el).decode())",
"You'll notice that we actually created an empty <break/> tag. Now, let's add it add the end of each line:",
"for node in tree.iterfind(\"//line\"):\n break_el = etree.Element(\"break\")\n node.append(break_el)\nprint(etree.tostring(tree).decode())",
"Adding an element with actual content is just as easy by the way:",
"break_el = etree.Element(\"break\")\nprint(etree.tostring(break_el).decode())\nbreak_el.text = \"XXX\"\nprint(etree.tostring(break_el).decode())",
"Quiz\nThe <break/> element is still empty: could you add to it an n attribute, to which you assign the line number from the current <line> element?",
"tree = etree.parse(\"data/TEI/sonnet18.xml\")\nroot_node = tree.getroot()\nfor node in root_node:\n if node.tag == \"line\":\n v = node.attrib[\"n\"]\n break_el = etree.Element(\"break\")\n break_el.attrib[\"n\"] = v\n node.append(break_el)\nprint(etree.tostring(tree).decode())",
"Python for TEI\nIn Digital Humanities, you hear a lot about the TEI nowadays, or the Text Encoding Initiative (tei-c.org). The TEI refers to an initiative which has developed a highly influential \"dialect\" of XML for encoding texts in the Humanities. The beauty about XML is that tag names aren't predefined and you can invent your own tag and attributes. Our Shakepearean example could just have well have read:\n<?xml version=\"1.0\"?>\n<poem writer=\"William Shakepeare\" date=\"1609\">\n <l nr=\"1\">Shall I compare thee to a summer's <last>day</last>?</l>\n <l nr=\"2\">Thou art more lovely and more <last>temperate</last>:</l>\n <l nr=\"3\">Rough winds do shake the darling buds of <last>May</last>,</l>\n <l nr=\"4\">And summer's lease hath all too short a <last>date</last>:</l>\n <l nr=\"5\">Sometime too hot the eye of heaven <last>shines</last>,</l>\n <l nr=\"6\">And often is his gold complexion <last>dimm'd</last>;</l>\n <l nr=\"7\">And every fair from fair sometime <last>declines</last>,</l>\n <l nr=\"8\">By chance, or nature's changing course, <last>untrimm'd</last>;</l>\n <break/>\n <l nr=\"9\">But thy eternal summer shall not <last>fade</last></l>\n <l nr=\"10\">Nor lose possession of that fair thou <last>ow'st</last>;</l>\n <l nr=\"11\">Nor shall Death brag thou wander'st in his <last>shade</last>,</l>\n <l nr=\"12\">When in eternal lines to time thou <last>grow'st</last>;</l>\n <l nr=\"13\">So long as men can breathe or eyes can <last>see</last>,</l>\n <l nr=\"14\">So long lives this, and this gives life to <last>thee</last>.</l>\n</poem>\nAs you can see, all the tag and attribute names are different in this version, but the essential structure is still the same. You could therefore say that XML is a markup language which provides a syntax to talk about texts, but does not come with a default semantics. This freedom in choosing name tags etc. can also be a bit daunting: this is why the TEI provides Guidelines as how tag names etc. can be used to mark up specific phenomena in texts. The TEI therefore also refers to a rather bulky set of guidelines as to which tags could be used to properly encode a text. Below, we read in a fairly advanced example of Shakepeare's 17th sonnet encoded in TEI (note the use of the <TEI> tag as our root node!). Even the metrical structure has been encoded as you will see, so this can be considered an example \"TEI on steroids\".",
"tree = etree.parse(\"data/TEI/sonnet17.xml\")\nprint(etree.tostring(tree).decode())",
"Quiz\nProcessing TEI in Python, is really just processing XML in Python, the dark art which you already learned to master above! Let's try and practice the looping techniques we introduced above. Could you provide code which parses the xml and writes away the lines in this poem to a plain text file, with one verse line on a single line in the new file?",
"# add your parsing code here... ",
"A hands-on case study: French plays\nOK, it time to get your hands even more dirty. For textual analyses, there are a number of great datasets out there which have been encoded in rich XML. One excellent resource which we have recently worked with, can be found at theatre-classique.fr: this website holds an extensive collection of French plays from the time of the Classical and Enlightenment era in France. Some of the plays have been authored by some of France's finest authors such as Molière pr Pierre and Thomas Corneille. What is interesting about this resource, is that it provides a very rich XML markup: apart from extensive metadata on the play or a detailed descriptions of the actors involved, the actually lines have been encoded in such a manner, that we perfectly know which character uttered a particular line, or to which scene or act a line belongs. This allows us to perform much richer textual analyses than if we would only have a raw text version of the plays. We have collected a subset of these plays for you under the data/TEIdirectory:",
"import os\ndirname = \"data/TEI/french_plays/\"\nfor filename in os.listdir(dirname):\n if filename.endswith(\".xml\"):\n print(filename)",
"OK: under this directory, we appear to have a bunch of XML-files, but their titles are just numbers, which doesn't tell us a lot. Let's have a look at what's the title and author tags in these files:",
"for filename in os.listdir(dirname):\n if filename.endswith(\".xml\"):\n print(\"*****\")\n print(\"\\t-\", filename)\n tree = etree.parse(dirname+filename)\n author_element = tree.find(\"//author\") # find vs iterfind!\n print(\"\\t-\", author_element.text)\n title_element = tree.find(\"//title\")\n print(\"\\t-\", title_element.text)",
"As you can see, we have made you a nice subset selection of this data, containing only texts by the famous pair of brothers: Pierre and Thomas Corneille. We have provided a number of exercises in which you can practice your newly developed XML skills. In each of the fun little tasks below, you should compare the dramas of our two famous brothers:\n* how many characters does each brother on average stage in a play?\n* which brother has the highest vocabulary richness?\n* which brother uses the lengthiest speeches per character on average?\n* which brother gives most \"speech time\" to women, expressed in number of words (hint: you can derive a character's gender from the <castList> in most plays!)",
"# your code goes here",
"",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/noaa-gfdl/cmip6/models/sandbox-1/ocnbgchem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NOAA-GFDL\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:35\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-1', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hmenke/pairinteraction | doc/sphinx/examples_python/matrix_elements.ipynb | gpl-3.0 | [
"Calculation of Matrix Elements\nWe show how to compute matrix elements using the pairinteraction Python API. As an introductory example, we consider Rubidium and calculate the values of the radial matrix element $\\left|\\left\\langle ns_{1/2},m_j=1/2 \\right| r \\left|n'p_{1/2},m_j=1/2\\right\\rangle\\right|$ as a function of the principal quantum numbers $n$ and $n'$. This Jupyter notebook is available on GitHub.\nAs described in the introduction, we start our code with some preparations.",
"%matplotlib inline\n\n# Arrays\nimport numpy as np\n\n# Plotting\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import MaxNLocator\n\n# Operating system interfaces\nimport os\n\n# pairinteraction :-)\nfrom pairinteraction import pireal as pi\n\n# Create cache for matrix elements\nif not os.path.exists(\"./cache\"):\n os.makedirs(\"./cache\")\ncache = pi.MatrixElementCache(\"./cache\")",
"We use pairinteraction's StateOne class to define the single-atom states $\\left|n,l,j,m_j\\right\\rangle$ for which the matrix elements should be calculated.",
"array_n = range(51,61)\narray_nprime = range(51,61)\narray_state_final = [pi.StateOne(\"Rb\", n, 0, 0.5, 0.5) for n in array_n]\narray_state_initial = [pi.StateOne(\"Rb\", n, 1, 0.5, 0.5) for n in array_nprime]",
"The method MatrixElementCache.getRadial(state_f, state_i, power) returns the value of the radial matrix element of $r^p$ in units of $\\mu\\text{m}^p$.",
"matrixelements = np.empty((len(array_state_final), len(array_state_initial)))\nfor idx_f, state_f in enumerate(array_state_final):\n for idx_i, state_i in enumerate(array_state_initial):\n matrixelements[idx_f, idx_i] = np.abs(cache.getRadial(state_f, state_i, 1))",
"We visualize the calculated matrix elements with matplotlib.",
"fig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.imshow(matrixelements, extent=(array_nprime[0]-0.5, array_nprime[-1]+0.5, array_n[0]-0.5, array_n[-1]+0.5),origin='lower')\nax.set_ylabel(r\"n\")\nax.set_xlabel(r\"n'\")\nax.yaxis.set_major_locator(MaxNLocator(integer=True))\nax.xaxis.set_major_locator(MaxNLocator(integer=True));"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
grokkaine/biopycourse | day2/stats_learning.ipynb | cc0-1.0 | [
"Statistical learning\nThe difference between statistical learning and classical machine learning is in the way they handle data. Rather than working with the data directly, statistical learning works with models of this data (statistical models, formulated in terms of distributions). There is a slight difference in ultimate goals too: while ML is striving towards narrower quantifiable goals, such as regression/classification error, the main obsession in the statistical learning circles is model inference. This creates a cultural rift as well, while ML researchers are publishing their work in conferences and are more industry focused, the statistical learning reseachers tend to publish in scientific journals and have a more academic focus.\nFurther read:\n- http://brenocon.com/blog/2008/12/statistics-vs-machine-learning-fight/\n- https://statmodeling.stat.columbia.edu/2008/12/03/machine_learnin/\nFrequentism vs Bayesian probability estimation\n\nProbabilities are fundamentally related to frequencies of events.\nProbabilities are fundamentally related to our own knowledge about an event.\nThere is no 'true' probability because the probability of an event is fixed. Or?\n\n\n\nLet us make a Bayesian probabilistic model for gene expression. The number of reads mapping to a gene can be assumed to be Poisson approximated. Let's say we have a set of technical replicates giving various coverage numbers for a certain gene. Let $E = {E_i,e_i}$ be the set of coverage numbers and associated errors drawing from the technical replicates. The question is to find a best estimate of the true expression $E_{true}$.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\n\nnp.random.seed(1) # for repeatability\n\nE_true = 1000 # true expression\nN = 50 # samples\nE = stats.poisson(E_true).rvs(N) # N measurements of the expression\nerr = np.sqrt(E) # errors on Poisson counts estimated via square root\n\nfig, ax = plt.subplots()\nax.errorbar(E, np.arange(N), xerr=err, fmt='ok', ecolor='gray', alpha=0.5)\nax.vlines([E_true], 0, N, linewidth=5, alpha=0.2)\nax.set_xlabel(\"sequence counts\");ax.set_ylabel(\"samples\");",
"Now given our errors of estimation, what is our best expectation for the true expression?\nThe frequentist approach is based on a maximum likelihood estimate. Basically one can compute the probability of an observation given that the true gene expression value is fixed, and then compute the product of these probabilities for each data point:\n$$ L(E|E_{true}) = \\prod_{i=1}^{N}{ P (E_i|E_{true}) } $$\nWhat we want is to compute the E_true for which the log likelihood estimate is maximized, and in this case it can be solved analytically to this formula (while generally alternatively approximated via optimization, although it is not always possible):\n$$ E_{est} = argmax_{E_{true}} = argmin_{E_{true}} {- \\sum_{i=1}^{N}log(P(E_i|E_{true})) } \\approx \\frac{\\sum{w_i E_i}}{\\sum{w_i}}, w_i = 1/e_i^2 $$\n(Also, in this case) we can also estimate the error of measurement by using a gaussian estimate of the likelihood function at its maximum:",
"w = 1. / err ** 2\nprint(\"\"\"\n E_true = {0}\n E_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)\n \"\"\".format(E_true, (w * E).sum() / w.sum(), w.sum() ** -0.5, N))",
"When using the Bayesian approach, we are estimating the probability of the model parameters giving the data, so no absolute estimate. This is also called posterior probability. We do this using the likelihood and the model prior, which is an expectation of the model before we are given the data. The data probability is encoding how likely our data is, and is usually approximated into a normalization term. The formula used is also known as Bayes theorem but is using a Bayesian interpretation of probability. \n$$ P(E_{true}|E) = \\frac{P(E|E_{true})P(E_{true})}{P(E)}$$\n$$ {posterior} = \\frac{{likelihood}~\\cdot~{prior}}{data~probability}$$",
"import pymc3 as pm\n\nwith pm.Model():\n mu = pm.Normal('mu', 900, 1.)\n sigma = 1.\n \n E_obs = pm.Normal('E_obs', mu=mu, sd=sigma, observed=E)\n \n step = pm.Metropolis()\n trace = pm.sample(15000, step)\n \n#sns.distplot(trace[2000:]['mu'], label='PyMC3 sampler');\n#sns.distplot(posterior[500:], label='Hand-written sampler');\npm.traceplot(trace)\nplt.show()\n",
"Task:\n- How did we know to start with 900 as the expected mean? Try putting 0, then 2000 and come up with a general strategy!\n- Use a bayesian parametrization for sigma as well. What do you observe?\n- Try another sampler.",
"def log_prior(E_true):\n return 1 # flat prior\n\ndef log_likelihood(E_true, E, e):\n return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)\n + (E - E_true) ** 2 / e ** 2)\n\ndef log_posterior(E_true, E, e):\n return log_prior(E_true) + log_likelihood(E_true, E, e)\n\nimport pymc3 as pm\nbasic_model = pm.Model()\n\nwith basic_model:\n # Priors for unknown model parameters\n alpha = pm.Normal('alpha', mu=0, sd=10)\n beta = pm.Normal('beta', mu=0, sd=10)\n sigma = pm.HalfNormal('sigma', sd=1)\n\n # Expected value of outcome\n mu = alpha + beta*E\n\n # Likelihood (sampling distribution) of observations\n E_obs = pm.Normal('Y_obs', mu = mu, sd = sigma, observed = E)\n \n \n start = pm.find_MAP(model=basic_model)\n step = pm.Metropolis()\n\n # draw 20000 posterior samples\n trace = pm.sample(20000, step=step, start=start)\n\n_ = pm.traceplot(trace)\nplt.show()\n\nimport pymc3 as pm\nhelp(pm.Normal)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
IS-ENES-Data/submission_forms | dkrz_forms/Templates/DKRZ_CDP_submission_form.ipynb | apache-2.0 | [
"Generic DKRZ CMIP Data Pool (CDP) ingest form\n\nThis form is intended to request data to be made locally available in the DKRZ national data archive.\nIf the requested data is available via ESGF please use the specific ESGF replication form.\nPlease provide information on the following aspects of your data ingest request:\n* scientific context of data\n* data access policies to be supported\n* technical details, like\n * amount of data\n * source of data\nPlease identify your form\nthis step is necessary to uniquely correlate this form to you (e.g. your email address)\nplease make sure the name shown on the top of this page matches your selection below",
"# Evaluate this cell to identifiy your form \n\nfrom dkrz_forms import form_widgets, form_handler, checks\nform_infos = form_widgets.show_selection()\n\n# Evaluate this cell to generate your personal form instance\n\nform_info = form_infos[form_widgets.FORMS.value]\nsf = form_handler.init_form(form_info)\nform = sf.sub.entity_out.report",
"some context information\nPlease provide some generic context information about the data, which should be availabe as part of the DKRZ CMIP Data Pool (CDP)",
"# (informal) type of data\nform.data_type = \"....\" # e.g. model data, observational data, .. \n\n# free text describing scientific context of data\nform.scientific_context =\"...\" \n\n# free text describing the expected usage as part of the DKRZ CMIP Data pool\nform.usage = \"....\" \n\n# free text describing access rights (who is allowed to read the data)\nform.access_rights = \"....\"\n\n# generic terms of policy information\nform.terms_of_use = \"....\" # e.g. unrestricted, restricted\n\n# any additional comment on context\nform.access_group = \"....\"\nform.context_comment = \"....\"",
"technical information concerning your request",
"# information on where the data is stored and can be accessed\n# e.g. file system path if on DKRZ storage, url etc. if on web accessible resources (cloud,thredds server,..)\nform.data_path = \"....\"\n\n# timing constraints, when the data ingest should be completed \n# (e.g. because the data source is only accessible in specific time frame) \nform.best_ingest_before = \"....\"\n\n# directory structure information, especially \nform.directory_structure = \"...\" # e.g. institute/experiment/file.nc\nform.directory_structure_convention = \"...\" # e.g. CMIP5, CMIP6, CORDEX, your_convention_name\nform.directory_structure_comment = \"...\" # free text, e.g. with link describing the directory structure convention you used\n\n# metadata information\nform.metadata_convention_name = \"...\" # e.g. CF1.6 etc. None if not applicable\nform.metadata_comment = \"...\" # information about metadata, e.g. links to metadata info etc. ",
"Check your submission form\nPlease evaluate the following cell to check your submission form.\nIn case of errors, please go up to the corresponden information cells and update your information accordingly...",
"# to be completed .. \nreport = checks.check_report(sf,\"sub\")\nchecks.display_report(report)",
"Save your form\nyour form will be stored (the form name consists of your last name plut your keyword)",
"form_handler.save_form(sf,\"..my comment..\") # edit my comment info ",
"officially submit your form\nthe form will be submitted to the DKRZ team to process\nyou also receive a confirmation email with a reference to your online form for future modifications",
"form_handler.form_submission(sf)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
datascienceguide/datascienceguide.github.io | tutorials/Non-Linear-Regression-Tutorial.ipynb | mit | [
"Generalized Linear and Non-Linear Regression Tutorial\nAuthor: Andrew Andrade ([email protected])\nFirst we will outline a solution to last weeks homework assignment by applying linear regression to a log transform of a dataset. We will then go into non-linear regression and linearized models for with a single explanatory variable. In the next tutorial we will learn how to apply this to multiple features (multi-regression)\nPredicting House Prices by Applying Log Transform\ndata inspired from http://davegiles.blogspot.ca/2011/03/curious-regressions.html\nGiven the task from last week of using linear regression to predict housing prices from the property size, let us first load the provided data, and peak at the first 5 data points.",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom math import log\nfrom sklearn import linear_model\n\n#comment below if not using ipython notebook\n%matplotlib inline\n\n# load data into a pandas dataframe\ndata = pd.read_csv('../datasets/log_regression_example.csv')\n\n#view first five datapoints\nprint data[0:5]\n\n\n#mistake I made yesterday\n\n#change column labels to be more convenient (shorter)\ndata.columns = ['size', 'price']\n\n#view first five datapoints\nprint data[0:5]\n\n\n#problem is size is already a pandas method\n# data.size will give the size of the data, not the column\n\ndata.size",
"Now lets visualize the data. We are going to make the assumption that the price of the house is dependant on the size of property",
"#rename columns to make indexing easier\ndata.columns = ['property_size', 'price']\n\nplt.scatter(data.property_size, data.price, color='black')\nplt.ylabel(\"Price of House ($million)\")\nplt.xlabel(\"Size of Property (m^2)\")\nplt.title(\"Price vs Size of House\")\n",
"We will learn about how to implement cross validation properly soon, but for now let us put the data in a random order (shuffle the rows) and use linear regression to fit a line on 75% of the data. We will then test the fit on the remaining 25%. Normally you would use scikit learn's cross validation functions, but we are going to implement the cross validation methods ourself (so you understand what is going on).\nDO NOT use this method for doing cross validation. You will later learn how to do k folds cross-validation using the scikit learn's implementation. In this tutorial, I implement cross validation manually you intuition for what exactly hold out cross validation is, but in the future we will learn a better way to do cross validation.",
"# generate pseudorandom number\n# by setting a seed, the same random number is always generated\n# this way by following along, you get the same plots\n# meaning the results are reproducable.\n# try changing the 1 to a different number\nnp.random.seed(3)\n\n# shuffle data since we want to randomly split the data\nshuffled_data= data.iloc[np.random.permutation(len(data))]\n\n#notice how the x labels remain, but are now random\nprint shuffled_data[0:5]\n\n#train on the first element to 75% of the dataset\ntraining_data = shuffled_data[0:len(shuffled_data)*3/4]\n\n#test on the remaining 25% of the dataset\n#note the +1 is since there is an odd number of datapoints\n#the better practice is use shufflesplit which we will learn in a future tutorial\ntesting_data = shuffled_data[-len(shuffled_data)/4+1:-1]\n\n#plot the training and test data on the same plot\nplt.scatter(training_data.property_size, training_data.price, color='blue', label='training')\nplt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')\n\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=2, mode=\"expand\", borderaxespad=0.)\nplt.ylabel(\"Price of House ($Million)\")\nplt.xlabel(\"Size of Land (m^2)\")\nplt.title(\"Price vs Size of Land\")\n\n\n\nX_train = training_data.property_size.reshape((len(training_data.property_size), 1))\ny_train = training_data.price.reshape((len(training_data.property_size), 1))\n\nX_test = testing_data.property_size.reshape((len(testing_data.property_size), 1))\ny_test = testing_data.price.reshape((len(testing_data.property_size), 1))\n\n\nX = np.linspace(0,800000)\nX = X.reshape((len(X), 1))\n\n# Create linear regression object\nregr = linear_model.LinearRegression()\n#Train the model using the training sets\nregr.fit(X_train,y_train)\n\n# The coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean square error\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((regr.predict(X_test) - y_test) ** 2))\n\nplt.plot(X, regr.predict(X), color='black',\n linewidth=3)\n\nplt.scatter(training_data.property_size, training_data.price, color='blue', label='training')\nplt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')\n\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=2, mode=\"expand\", borderaxespad=0.)\n\n\nplt.ylabel(\"Price of House ($Million)\")\nplt.xlabel(\"Size of Land (m^2)\")\nplt.title(\"Price vs Size of Land\")",
"We can see here, there is obviously a poor fit. There is going to be a very high residual sum of squares and there is no linear relationship. Since the data appears to follow $e^y = x$, we can apply a log transform on the data:\n$$y = ln (x)$$\nFor the purpose of this tutorial, I will apply the log transform, fit a linear model then invert the log transform and plot the fit to the original data.",
"# map applied log() function to every element\nX_train_after_log = training_data.property_size.map(log)\n#reshape back to matrix with 1 column\nX_train_after_log = X_train_after_log.reshape((len(X_train_after_log), 1))\n\nX_test_after_log = testing_data.property_size.map(log)\n#reshape back to matrix with 1 column\nX_test_after_log = X_test_after_log.reshape((len(X_test_after_log), 1))\n\n\nX_after_log = np.linspace(min(X_train_after_log),max(X_train_after_log))\nX_after_log = X_after_log.reshape((len(X_after_log), 1))\n\nregr2 = linear_model.LinearRegression()\n#fit linear regression\nregr2.fit(X_train_after_log,y_train)\n\n# The coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean square error\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((regr2.predict(X_test_after_log) - y_test) ** 2))\n\n#np.exp takes the e^x, efficiently inversing the log transform\n\nplt.plot(np.exp(X_after_log), regr2.predict(X_after_log), color='black',\n linewidth=3)\n\nplt.scatter(training_data.property_size, training_data.price, color='blue', label='training')\nplt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')\n\n\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=2, mode=\"expand\", borderaxespad=0.)\n\nplt.ylabel(\"Price of House ($Million)\")\nplt.xlabel(\"Size of Land (m^2)\")\nplt.title(\"Price vs Size of Land\")",
"The residual sum of squares on the test data after the log transform (0.07) in this example is much lower than before where we just fit the the data without the transfrom (0.32). The plot even looks much better as the data seems to fit well for the smaller sizes of land and still fits the larger size of land roughly. As an analysist, one might naively use this model afer applying the log transform. As we learn't from the last tutorial, ALWAYS plot your data after you transform the features since there might be hidden meanings in the data!\nRun the code below to see hidden insight left in the data (after the log transform)",
"plt.scatter(X_train_after_log, training_data.price, color='blue', label='training')\nplt.scatter(X_test_after_log, testing_data.price, color='red', label='testing')\nplt.plot(X_after_log, regr2.predict(X_after_log), color='black', linewidth=3)\n\n\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=2, mode=\"expand\", borderaxespad=0.)\n",
"The lesson learnt here is always plot data (even after a transform) before blindly running a predictive model!\nGeneralized linear models\nNow let's exend our knowledge to generalized linear models for the remaining three of the anscombe quartet datasets. We will try and use our intuition to determine the best model.",
"#read csv\nanscombe_ii = pd.read_csv('../datasets/anscombe_ii.csv')\n\nplt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\n",
"Instead of fitting a linear model to a transformation, we can also fit a polynomial to the data:",
"X_ii = anscombe_ii.x\n\n# X_ii_noisey = X_ii_noisey.reshape((len(X_ii_noisey), 1))\ny_ii = anscombe_ii.y\n#y_ii = anscombe_ii.y.reshape((len(anscombe_ii.y), 1))\n\nX_fit = np.linspace(min(X_ii),max(X_ii))\n\npolynomial_degree = 2\np = np.polyfit(X_ii, anscombe_ii.y, polynomial_degree)\n\nyfit = np.polyval(p, X_fit)\n\nplt.plot(X_fit, yfit, '-b')\n\nplt.scatter(X_ii, y_ii)",
"Lets add some random noise to the data, fit a polynomial and calculate the residual error.",
"np.random.seed(1)\n\nx_noise = np.random.random(len(anscombe_ii.x))\n\nX_ii_noisey = anscombe_ii.x + x_noise*3\n\nX_fit = np.linspace(min(X_ii_noisey),max(X_ii_noisey))\n\npolynomial_degree = 1\np = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)\n\nyfit = np.polyval(p, X_fit)\n\nplt.plot(X_fit, yfit, '-b')\n\nplt.scatter(X_ii_noisey, y_ii)\n\n\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((np.polyval(p, X_ii_noisey) - y_ii)**2))\n\n",
"Now can we fit a larger degree polynomial and reduce the error? Lets try and see:",
"polynomial_degree = 5\np2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)\n\nyfit = np.polyval(p2, X_fit)\n\nplt.plot(X_fit, yfit, '-b')\n\nplt.scatter(X_ii_noisey, y_ii)\n\n\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2))",
"What if we use a really high degree polynomial? Can we bring the error to zero? YES!",
"polynomial_degree = 10\np2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)\n\nyfit = np.polyval(p2, X_fit)\n\nplt.plot(X_fit, yfit, '-b')\n\nplt.scatter(X_ii_noisey, y_ii)\n\n\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2))",
"It is intuitive to see that we are overfitting since the high degree polynomial hits every single point (causing our mean squared error (MSE) to be zero), but it would generalize well. For example, if x=5, it would estimate y to be -45 when you would expect it to be above 0.\nwhen you are dealing with more than one variable, it becomes increasingly difficult to prevent overfitting, since you can not plots past four-five dimensions (x axis,y axis,z axis, color and size). For this reason we should always use cross validation to reduce our variance error (due to overfitting) while we are deducing bias (due to underfitting). Throughout the course we will learn more on what this means, and learn practical tips.\nThe key takeaway here is more complex models are not always better. Use visualizations and cross validation to prevent overfitting! (We will learn more about this soon!)\nNow, let us work on the third set of data from quartet",
"#read csv\nanscombe_iii = pd.read_csv('../datasets/anscombe_iii.csv')\n\nplt.scatter(anscombe_iii.x, anscombe_iii.y, color='black')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\n",
"It is obvious that there is an outlier which is going to cause a poor fit to an ordinary linear regression. One way is filtering out the outlier. One method could be to manually hardcode any value which seems to be incorrect. A better method would be to remove any point which is a given standard deviation away from the linear model, then fit a line to remaining data points. Arguably, an even better method could be using the RANSAC algorithm (demonstrated below) from the Scikit learn documentation on linear models or using Thiel-sen regression",
"from sklearn import linear_model\nX_iii = anscombe_iii.x.reshape((len(anscombe_iii), 1))\n\n#bit basic linear model\nmodel = linear_model.LinearRegression()\nmodel.fit(X_iii, anscombe_iii.y)\n\n# Robustly fit linear model with RANSAC algorithm\nmodel_ransac = linear_model.RANSACRegressor(linear_model.LinearRegression())\nmodel_ransac.fit(X_iii, anscombe_iii.y)\n\ninlier_mask = model_ransac.inlier_mask_\noutlier_mask = np.logical_not(inlier_mask)\n\nplt.plot(X_iii,model.predict(X_iii), color='blue',linewidth=3, label='Linear regressor')\nplt.plot(X_iii,model_ransac.predict(X_iii), color='red', linewidth=3, label='RANSAC regressor')\nplt.plot(X_iii[inlier_mask], anscombe_iii.y[inlier_mask], '.k', label='Inliers')\nplt.plot(X_iii[outlier_mask], anscombe_iii.y[outlier_mask], '.g', label='Outliers')\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=2, mode=\"expand\", borderaxespad=0.)\n",
"The takeaway here to read the documentation, and see if there is an already implemented method of solving a problem. Chances there are already prepackaged solutions, you just need to learn about them. Lets move on to the final quatet.",
"#read csv\nanscombe_ii = pd.read_csv('../datasets/anscombe_iv.csv')\n\nplt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")",
"In this example, we can see that the X axis values stays constant except for 1 measurement where x varies. Since we are trying to predict y in terms of x, as an analyst I would would not use any model to describe this data, and state that more data with different values of X would be required. Additionally, depending on the problem I could remove the outliers, and treat this as univariate data.\nThe takeaway here is that sometimes a useful model can not be made (garbage in, garbage out) until better data is avaliable.\nNon-linear and robust regression\nDue to time restrictions, I can not present every method for regression, but depending on your specific problem and data, there are many other regression techniques which can be used:\nhttp://scikit-learn.org/stable/auto_examples/ensemble/plot_adaboost_regression.html#example-ensemble-plot-adaboost-regression-py\nhttp://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html#example-neighbors-plot-regression-py\nhttp://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html#example-svm-plot-svm-regression-py\nhttp://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html\nhttp://statsmodels.sourceforge.net/devel/examples/notebooks/generated/ols.html\nhttp://statsmodels.sourceforge.net/devel/examples/notebooks/generated/robust_models_0.html\nhttp://statsmodels.sourceforge.net/devel/examples/notebooks/generated/glm.html\nhttp://statsmodels.sourceforge.net/devel/examples/notebooks/generated/gls.html\nhttp://statsmodels.sourceforge.net/devel/examples/notebooks/generated/wls.html\nhttp://cars9.uchicago.edu/software/python/lmfit/\nBonus example: Piecewise linear curve fitting\nWhile I usually prefer to use more robustly implemented algorithms such as ridge or decision tree based regresion (this is because for many features it becomes difficult to determine an adequete model for each feature), regression can be done by fitting a piecewise fuction. Taken from here.",
"import numpy as np\n\nx = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=float)\n\ny = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])\n\nplt.scatter(x, y)\n\n\nfrom scipy import optimize\n\ndef piecewise_linear(x, x0, y0, k1, k2):\n return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0])\n\np , e = optimize.curve_fit(piecewise_linear, x, y)\nxd = np.linspace(0, 15, 100)\nplt.scatter(x, y)\n\nplt.plot(xd, piecewise_linear(xd, *p))",
"Bonus example 2: Piecewise Non-linear Curve Fitting\nNow let us extend this to piecewise non-linear Curve Fitting. Taken from here",
"#Piecewise function defining 2nd deg, 1st degree and 3rd degree exponentials\ndef piecewise_linear(x, x0, x1, y0, y1, k1, k2, k3, k4, k5, k6):\n return np.piecewise(x, [x < x0, x>= x0, x> x1], [lambda x:k1*x + k2*x**2, lambda x:k3*x + y0, lambda x: k4*x + k5*x**2 + k6*x**3 + y1])\n#Getting data using Pandas\n\ndf = pd.read_csv(\"../datasets/non-linear-piecewise.csv\")\nms = df[\"ms\"].values\ndegrees = df[\"Degrees\"].values\n\nplt.scatter(ms, degrees)\n\n \n\n\n#Setting linspace and making the fit, make sure to make you data numpy arrays\nx_new = np.linspace(ms[0], ms[-1], dtype=float)\nm = np.array(ms, dtype=float)\ndeg = np.array(degrees, dtype=float)\nguess = np.array( [100, 500, -30, 350, -0.1, 0.0051, 1, -0.01, -0.01, -0.01], dtype=float)\np , e = optimize.curve_fit(piecewise_linear, m, deg)\n#Plotting data and fit\nplt.plot(x_new, piecewise_linear(x_new, *p), '-', ms[::20], degrees[::20], 'o')\n",
"Key takeaways:\n\nAlways visualize data (exploratory data analysis) before fitting any model\nThere are many methods of regression, use intuition and statistics knowledge to choose the best method based on visualization and summary statistics.\nAlways use cross validation and understand the bias vs variance trade-off when doing regression.\n\nFurther reading:\nChapter 7 (non-linear regression) of Introduction to Statistical Learning\nScikit Learn documentation:\nLinear models\nIsotonic regression example \nWe learn about decision trees later in the couse, but decision trees can be used for regression as well, as shown in this example. We also learn about the nearest neighbour algorithm which can be used for regression in this example\nHomework\nNow that you have seen some of the examples of regression using linear models, see if you can predict the MPG of the car given all of its other attributes. Use the Auto MPG Data Set from the CMU StatLib machine learning library. I have included the data in the ../datasets folder. The .data file is the from the CMU page, and the .csv is after I applied some data cleaning."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NathanYee/ThinkBayes2 | code/chap05soln.ipynb | gpl-2.0 | [
"Think Bayes: Chapter 5\nThis notebook presents code and exercises from Think Bayes, second edition.\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\n% matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\n\nfrom thinkbayes2 import Pmf, Cdf, Suite, Beta\nimport thinkplot",
"Odds\nThe following function converts from probabilities to odds.",
"def Odds(p):\n return p / (1-p)",
"And this function converts from odds to probabilities.",
"def Probability(o):\n return o / (o+1)",
"If 20% of bettors think my horse will win, that corresponds to odds of 1:4, or 0.25.",
"p = 0.2\nOdds(p)",
"If the odds against my horse are 1:5, that corresponds to a probability of 1/6.",
"o = 1/5\nProbability(o)",
"We can use the odds form of Bayes's theorem to solve the cookie problem:",
"prior_odds = 1\nlikelihood_ratio = 0.75 / 0.5\npost_odds = prior_odds * likelihood_ratio\npost_odds",
"And then we can compute the posterior probability, if desired.",
"post_prob = Probability(post_odds)\npost_prob",
"If we draw another cookie and it's chocolate, we can do another update:",
"likelihood_ratio = 0.25 / 0.5\npost_odds *= likelihood_ratio\npost_odds",
"And convert back to probability.",
"post_prob = Probability(post_odds)\npost_prob",
"Oliver's blood\nThe likelihood ratio is also useful for talking about the strength of evidence without getting bogged down talking about priors.\nAs an example, we'll solve this problem from MacKay's {\\it Information Theory, Inference, and Learning Algorithms}:\n\nTwo people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type 'O' blood. The blood groups of the two traces are found to be of type 'O' (a common type in the local population, having frequency 60) and of type 'AB' (a rare type, with frequency 1). Do these data [the traces found at the scene] give evidence in favor of the proposition that Oliver was one of the people [who left blood at the scene]?\n\nIf Oliver is\none of the people who left blood at the crime scene, then he\naccounts for the 'O' sample, so the probability of the data\nis just the probability that a random member of the population\nhas type 'AB' blood, which is 1%.\nIf Oliver did not leave blood at the scene, then we have two\nsamples to account for. If we choose two random people from\nthe population, what is the chance of finding one with type 'O'\nand one with type 'AB'? Well, there are two ways it might happen:\nthe first person we choose might have type 'O' and the second\n'AB', or the other way around. So the total probability is\n$2 (0.6) (0.01) = 1.2$%.\nSo the likelihood ratio is:",
"like1 = 0.01\nlike2 = 2 * 0.6 * 0.01\n\nlikelihood_ratio = like1 / like2\nlikelihood_ratio",
"Since the ratio is less than 1, it is evidence against the hypothesis that Oliver left blood at the scence.\nBut it is weak evidence. For example, if the prior odds were 1 (that is, 50% probability), the posterior odds would be 0.83, which corresponds to a probability of:",
"post_odds = 1 * like1 / like2\nProbability(post_odds)",
"So this evidence doesn't \"move the needle\" very much.\nExercise: Suppose other evidence had made you 90% confident of Oliver's guilt. How much would this exculpatory evince change your beliefs? What if you initially thought there was only a 10% chance of his guilt?\nNotice that evidence with the same strength has a different effect on probability, depending on where you started.",
"# Solution\n\npost_odds = Odds(0.9) * like1 / like2\nProbability(post_odds)\n\n# Solution\n\npost_odds = Odds(0.1) * like1 / like2\nProbability(post_odds)",
"Comparing distributions\nLet's get back to the Kim Rhode problem from Chapter 4:\n\nAt the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 targets, sending the match into sudden death. In the first round, both hit 1 of 2 targets. In the next two rounds, they each hit 2 targets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.\nBut after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?\n\nI'll start with a uniform distribution for x, the probability of hitting a target, but we should check whether the results are sensitive to that choice.\nFirst I create a Beta distribution for each of the competitors, and update it with the results.",
"rhode = Beta(1, 1, label='Rhode')\nrhode.Update((22, 11))\n\nwei = Beta(1, 1, label='Wei')\nwei.Update((21, 12))",
"Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap.",
"thinkplot.Pdf(rhode.MakePmf())\nthinkplot.Pdf(wei.MakePmf())\nthinkplot.Config(xlabel='x', ylabel='Probability')",
"To compute the probability that Rhode actually has a higher value of p, there are two options:\n\n\nSampling: we could draw random samples from the posterior distributions and compare them.\n\n\nEnumeration: we could enumerate all possible pairs of values and add up the \"probability of superiority\".\n\n\nI'll start with sampling. The Beta object provides a method that draws a random value from a Beta distribution:",
"iters = 1000\ncount = 0\nfor _ in range(iters):\n x1 = rhode.Random()\n x2 = wei.Random()\n if x1 > x2:\n count += 1\n\ncount / iters",
"Beta also provides Sample, which returns a NumPy array, so we an perform the comparisons using array operations:",
"rhode_sample = rhode.Sample(iters)\nwei_sample = wei.Sample(iters)\nnp.mean(rhode_sample > wei_sample)",
"The other option is to make Pmf objects that approximate the Beta distributions, and enumerate pairs of values:",
"def ProbGreater(pmf1, pmf2):\n total = 0\n for x1, prob1 in pmf1.Items():\n for x2, prob2 in pmf2.Items():\n if x1 > x2:\n total += prob1 * prob2\n return total\n\npmf1 = rhode.MakePmf(1001)\npmf2 = wei.MakePmf(1001)\nProbGreater(pmf1, pmf2)\n\npmf1.ProbGreater(pmf2)\n\npmf1.ProbLess(pmf2)",
"Exercise: Run this analysis again with a different prior and see how much effect it has on the results.\nSimulation\nTo make predictions about a rematch, we have two options again:\n\n\nSampling. For each simulated match, we draw a random value of x for each contestant, then simulate 25 shots and count hits.\n\n\nComputing a mixture. If we knew x exactly, the distribution of hits, k, would be binomial. Since we don't know x, the distribution of k is a mixture of binomials with different values of x.\n\n\nI'll do it by sampling first.",
"import random\n\ndef flip(p):\n return random.random() < p",
"flip returns True with probability p and False with probability 1-p\nNow we can simulate 1000 rematches and count wins and losses.",
"iters = 1000\nwins = 0\nlosses = 0\n\nfor _ in range(iters):\n x1 = rhode.Random()\n x2 = wei.Random()\n \n count1 = count2 = 0\n for _ in range(25):\n if flip(x1):\n count1 += 1\n if flip(x2):\n count2 += 1\n \n if count1 > count2:\n wins += 1\n if count1 < count2:\n losses += 1\n \nwins/iters, losses/iters",
"Or, realizing that the distribution of k is binomial, we can simplify the code using NumPy:",
"rhode_rematch = np.random.binomial(25, rhode_sample)\nthinkplot.Hist(Pmf(rhode_rematch))\n\nwei_rematch = np.random.binomial(25, wei_sample)\nnp.mean(rhode_rematch > wei_rematch)\n\nnp.mean(rhode_rematch < wei_rematch)",
"Alternatively, we can make a mixture that represents the distribution of k, taking into account our uncertainty about x:",
"from thinkbayes2 import MakeBinomialPmf\n\ndef MakeBinomialMix(pmf, label=''):\n mix = Pmf(label=label)\n for x, prob in pmf.Items():\n binom = MakeBinomialPmf(n=25, p=x)\n for k, p in binom.Items():\n mix[k] += prob * p\n return mix\n\nrhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')\nwei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')\nthinkplot.Pdf(rhode_rematch)\nthinkplot.Pdf(wei_rematch)\nthinkplot.Config(xlabel='hits')\n\nrhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)",
"Alternatively, we could use MakeMixture:",
"from thinkbayes2 import MakeMixture\n\ndef MakeBinomialMix2(pmf):\n binomials = Pmf()\n for x, prob in pmf.Items():\n binom = MakeBinomialPmf(n=25, p=x)\n binomials[binom] = prob\n return MakeMixture(binomials)",
"Here's how we use it.",
"rhode_rematch = MakeBinomialMix2(rhode.MakePmf())\nwei_rematch = MakeBinomialMix2(wei.MakePmf())\nrhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)",
"Exercise: Run this analysis again with a different prior and see how much effect it has on the results.\nDistributions of sums and differences\nSuppose we want to know the total number of targets the two contestants will hit in a rematch. There are two ways we might compute the distribution of this sum:\n\n\nSampling: We can draw samples from the distributions and add them up.\n\n\nEnumeration: We can enumerate all possible pairs of values.\n\n\nI'll start with sampling:",
"iters = 1000\npmf = Pmf()\nfor _ in range(iters):\n k = rhode_rematch.Random() + wei_rematch.Random()\n pmf[k] += 1\npmf.Normalize()\nthinkplot.Hist(pmf)",
"Or we could use Sample and NumPy:",
"ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)\npmf = Pmf(ks)\nthinkplot.Hist(pmf)",
"Alternatively, we could compute the distribution of the sum by enumeration:",
"def AddPmfs(pmf1, pmf2):\n pmf = Pmf()\n for v1, p1 in pmf1.Items():\n for v2, p2 in pmf2.Items():\n pmf[v1 + v2] += p1 * p2\n return pmf",
"Here's how it's used:",
"pmf = AddPmfs(rhode_rematch, wei_rematch)\nthinkplot.Pdf(pmf)",
"The Pmf class provides a + operator that does the same thing.",
"pmf = rhode_rematch + wei_rematch\nthinkplot.Pdf(pmf)",
"Exercise: The Pmf class also provides the - operator, which computes the distribution of the difference in values from two distributions. Use the distributions from the previous section to compute the distribution of the differential between Rhode and Wei in a rematch. On average, how many clays should we expect Rhode to win by? What is the probability that Rhode wins by 10 or more?",
"# Solution\n\npmf = rhode_rematch - wei_rematch\nthinkplot.Pdf(pmf)\n\n# Solution\n\n# On average, we expect Rhode to win by about 1 clay.\n\npmf.Mean(), pmf.Median(), pmf.Mode()\n\n# Solution\n\n# But there is, according to this model, a 2% chance that she could win by 10.\n\nsum([p for (x, p) in pmf.Items() if x >= 10])",
"Distribution of maximum\nSuppose Kim Rhode continues to compete in six more Olympics. What should we expect her best result to be?\nOnce again, there are two ways we can compute the distribution of the maximum:\n\n\nSampling.\n\n\nAnalysis of the CDF.\n\n\nHere's a simple version by sampling:",
"iters = 1000\npmf = Pmf()\nfor _ in range(iters):\n ks = rhode_rematch.Sample(6)\n pmf[max(ks)] += 1\npmf.Normalize()\nthinkplot.Hist(pmf)",
"And here's a version using NumPy. I'll generate an array with 6 rows and 10 columns:",
"iters = 1000\nks = rhode_rematch.Sample((6, iters))\nks",
"Compute the maximum in each column:",
"maxes = np.max(ks, axis=0)\nmaxes[:10]",
"And then plot the distribution of maximums:",
"pmf = Pmf(maxes)\nthinkplot.Hist(pmf)",
"Or we can figure it out analytically. If the maximum is less-than-or-equal-to some value k, all 6 random selections must be less-than-or-equal-to k, so: \n$ CDF_{max}(x) = CDF(x)^6 $\nPmf provides a method that computes and returns this Cdf, so we can compute the distribution of the maximum like this:",
"pmf = rhode_rematch.Max(6).MakePmf()\nthinkplot.Hist(pmf)",
"Exercise: Here's how Pmf.Max works:\ndef Max(self, k):\n \"\"\"Computes the CDF of the maximum of k selections from this dist.\n\n k: int\n\n returns: new Cdf\n \"\"\"\n cdf = self.MakeCdf()\n cdf.ps **= k\n return cdf\n\nWrite a function that takes a Pmf and an integer n and returns a Pmf that represents the distribution of the minimum of k values drawn from the given Pmf. Use your function to compute the distribution of the minimum score Kim Rhode would be expected to shoot in six competitions.",
"def Min(pmf, k):\n cdf = pmf.MakeCdf()\n cdf.ps = 1 - (1-cdf.ps)**k\n return cdf\n\npmf = Min(rhode_rematch, 6).MakePmf()\nthinkplot.Hist(pmf)",
"Exercises\nExercise: Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?",
"# Solution\n\nn_allergic = 4\nn_non = 6\np_allergic = 0.5\np_non = 0.1\npmf = MakeBinomialPmf(n_allergic, p_allergic) + MakeBinomialPmf(n_non, p_non)\nthinkplot.Hist(pmf)\n\n# Solution\n\npmf.Mean()",
"Exercise This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.\nHere is a description of the study:\n\n\"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.\n\"The gluten-containing flour was correctly identified by 12 participants (34%)...\"\nSince 12 out of 35 participants were able to identify the gluten flour, the authors conclude \"Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity.\"\n\nThis conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.\nWe can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.\n\n\nOf the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.\n\n\nI assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).\n\n\nUsing this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval?",
"# Solution\n\n# Here's a class that models the study\n\nclass Gluten(Suite):\n \n def Likelihood(self, data, hypo):\n \"\"\"Computes the probability of the data under the hypothesis.\n \n data: tuple of (number who identified, number who did not)\n hypothesis: number of participants who are gluten sensitive\n \"\"\"\n # compute the number who are gluten sensitive, `gs`, and\n # the number who are not, `ngs`\n gs = hypo\n yes, no = data\n n = yes + no\n ngs = n - gs\n \n pmf1 = MakeBinomialPmf(gs, 0.95)\n pmf2 = MakeBinomialPmf(ngs, 0.4)\n pmf = pmf1 + pmf2\n return pmf[yes]\n\n# Solution\n\nprior = Gluten(range(0, 35+1))\nthinkplot.Pdf(prior)\n\n# Solution\n\nposterior = prior.Copy()\ndata = 12, 23\nposterior.Update(data)\n\n# Solution\n\nthinkplot.Pdf(posterior)\nthinkplot.Config(xlabel='# who are gluten sensitive', \n ylabel='PMF', legend=False)\n\n# Solution\n\nposterior.CredibleInterval(95)",
"Exercise Coming soon: the space invaders problem.",
"# Solution\n\n\n\n# Solution\n \n\n\n# Solution\n \n\n\n# Solution\n \n\n\n# Solution\n \n\n\n# Solution\n \n\n\n# Solution\n \n\n\n# Solution\n \n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_ssp_projs_sensitivity_map.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Sensitivity map of SSP projections\nThis example shows the sources that have a forward field\nsimilar to the first SSP vector correcting for ECG.",
"# Author: Alexandre Gramfort <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nfrom mne import read_forward_solution, read_proj, sensitivity_map\n\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nsubjects_dir = data_path + '/subjects'\nfname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\necg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'\n\nfwd = read_forward_solution(fname)\n\nprojs = read_proj(ecg_fname)\n# take only one projection per channel type\nprojs = projs[::2]\n\n# Compute sensitivity map\nssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')",
"Show sensitivity map",
"plt.hist(ssp_ecg_map.data.ravel())\nplt.show()\n\nargs = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7,\n hemi='rh', subjects_dir=subjects_dir)\nssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
MAKOSCAFEE/AllNotebooks | .ipynb_checkpoints/Education-checkpoint.ipynb | mit | [
"Author: Barnabas Makonda\n\nData Scource Tanzania OpenData\nThis dataset contains ranking information of primary schools according to performance in primary school leaving certificate examinations.",
"%matplotlib inline\nfrom collections import defaultdict\nimport json\nfrom __future__ import division\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nfrom matplotlib import rcParams\nimport matplotlib.cm as cm\nimport matplotlib as mpl\n\nimport seaborn as sns\nsns.set_context(\"talk\")\nsns.set_style(\"white\")",
"Load Data",
"df = pd.read_csv('PrimarySchoolsPerfomanceAndLocation-2014.csv')\ndf.shape\n\ndf.columns\n\ncol =['NAME','REGION','DISTRICT','OWNERSHIP','PASS_RATE','AVG_MARK','CHANGE_PREVIOUS_YEAR','RANK']\nfor c in df.columns:\n if c not in col:\n df=df.drop(c,axis=1)\n\ndf.shape\n\ndf.OWNERSHIP.unique()\n\ndf.head(10)",
"We need to know number of primary school which did standard seven exam year 2014",
"df.shape[0]",
"There about 15866 schools\nOwnership of the School\ncan be seen below schools are grouped either Government or Non Government schools and there are about 2833 schools which have no type",
"print df.OWNERSHIP.unique()",
"how many do not have type or they are empty?",
"df[df.OWNERSHIP.isnull()].shape[0]",
"Sample of school which ownership is empty",
"df[df.OWNERSHIP.isnull()].head(10)",
"Those which ownership is not null, either Government or Non Government",
"df[df.OWNERSHIP.notnull()].head(10)",
"and they are 13033",
"df[df.OWNERSHIP.notnull()].shape[0]",
"Lets plot a pie chat to visualize the data",
"government_schools =sum(df.OWNERSHIP=='GOVERNMENT') #Government schools\nnongovernment_schools =sum(df.OWNERSHIP=='NON GOVERNMENT') #nongovernment schools\nunknown = sum(df.OWNERSHIP.isnull()) #number of shools with unknown ownership\nschl=df.shape[0] #number of schools\n\n\nLabels =['Govenment', 'Non Government','Unknown']\nfractions =[float(government_schools)/schl, float(nongovernment_schools)/schl, float(unknown)/schl] #percentage\ncolors = ['yellowgreen', 'gold', 'lightskyblue'] #colors for pie chart\nexplode = (0, 0, 0) #only explode the first slice\n\nplt.pie(fractions, explode=explode, labels=Labels, colors=colors,\n autopct='%1.1f%%', shadow=True, startangle=90)\n# Set aspect ratio to be equal so that pie is drawn as a circle.\nplt.axis('equal')\nplt.show()",
"Lets play with the numbers now\nLets look and see summary of the Pass Rate, Avarage Mark and Change compared to previous year",
"df[['PASS_RATE','AVG_MARK','CHANGE_PREVIOUS_YEAR']].describe()",
"The avarage Passing rate was 55.93% and Maximum passing rate was 100%(All students passed examination) while lowest was 0(Nobody passed)\nAvarage mark was 108.7 maximum avarage mark per school was 234.7 and minimum was 46.62 \nNB:These marks are for 5 subjects hence total of 250 marks\nLets play a little more\n\nHow many schools did have 100% pass rate?\nIs there a government school which had 100% pass rate?\nWhich school did best of all schools?\nWhich did bad?",
"df[df.PASS_RATE == 100 ]\n\nprint \"There were %s schools which had 100 pass rate\"%sum(df.PASS_RATE == 100 )\n\ndf[(df.PASS_RATE == 100) & (df.OWNERSHIP==\"GOVERNMENT\")].describe()\n\nprint \"Sample of Government school which have 100% passing rate\"\ndf[(df.PASS_RATE == 100) & (df.OWNERSHIP==\"GOVERNMENT\")].head(10)",
"Lets group 100% pass rate by region",
"grouped = df[df.PASS_RATE == 100 ].groupby(df['REGION'])\npassed_per_region = grouped.count()\npassed_per_region.NAME",
"Dar es salaam leads with 121 schools and Katavi is the last with 7",
"df.groupby(df.REGION).count().NAME",
"MBEYA is leading by having 1046\nWe can visulize our data more by using seaborn",
"sns.set_context(\"notebook\")\n#lets get mean Pass Rate\nmean_pass = df.PASS_RATE.mean()\nprint mean_pass, df.PASS_RATE.median()\n\nwith sns.axes_style(\"whitegrid\"):\n df.PASS_RATE.hist(bins=30, alpha=0.4);\n plt.axvline(mean_pass, 0, 0.75, color='r', label='Mean')\n plt.xlabel(\"Pass Rate\")\n plt.ylabel(\"Counts\")\n plt.title(\"Passing Rate Hisyogram\")\n plt.legend()\n sns.despine()\n\nwith sns.axes_style(\"whitegrid\"):\n df.CHANGE_PREVIOUS_YEAR.hist(bins=15, alpha=0.6, color='r');\n plt.xlabel(\"change of passing rate comapred to 2013\")\n plt.ylabel(\"school number\")\n plt.title(\"Change of passing rate Hisyogram\")\n plt.legend()\n\n\nwith sns.axes_style(\"whitegrid\"):\n df.AVG_MARK.hist(bins=40,alpha=0.6, color='g')\n plt.xlabel(\"Avarage mark per school\")\n plt.ylabel(\"school number\")\n plt.title(\"Avarage Marks Hisyogram\")\n plt.legend()",
"Thank You"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
qutip/qutip-notebooks | development/development-qobjevo.ipynb | lgpl-3.0 | [
"QobjEvo usage example\nFeatures of QobjEvo for the users.\nMade by Eric Giguere",
"from qutip import *\nimport time\nimport numpy as np",
"Definition of time-dependant Qobj\nQobjEvo are definied from list of Qobj:\n[Qobj0, [Qobj1, coeff1], [Qobj2, coeff2]]\ncoeff can be one of: \n- function\n- string\n- np.array",
"# Definition of base Qobj and \nN = 4\n\ndef sin_w(t, args):\n return np.cos(args[\"w\"]*t)\n\ndef cos_w(t, args):\n return np.cos(args[\"w\"]*t)\n\ntlist = np.linspace(0,10,10000)\ntlistlog = np.logspace(-3,1,10000)\n\n\n# constant QobjEvo\ncte_QobjEvo = QobjEvo(destroy(N))\ncte_QobjEvo(1)\n\n# QobjEvo with function based coeff\nfunc_QobjEvo = QobjEvo([destroy(N),[qeye(N),cos_w]],args={\"w\":2})\nfunc_QobjEvo(1)\n\n# QobjEvo with sting based coeff\nstr_QobjEvo = QobjEvo([destroy(N),[qeye(N),\"cos(w*t)\"]],args={\"w\":2})\nstr_QobjEvo(1)\n\n# QobjEvo with array based coeff\narray_QobjEvo = QobjEvo([destroy(N),[qeye(N),np.cos(2*tlist)]],tlist=tlist)\narray_QobjEvo(1)\n\n# QobjEvo with array based coeff, log timescale\nLog_array_QobjEvo = QobjEvo([destroy(N),[qeye(N),np.cos(2*tlistlog)]],tlist=tlistlog)\nLog_array_QobjEvo(1)\n\n# Reference\ndestroy(N) + qeye(N) * np.cos(2)",
"Mathematic\n\naddition (QobjEvo, Qobj)\nsubstraction (QobjEvo, Qobj)\nproduct (QobjEvo, Qobj, scalar)\ndivision (scalar)\n\nThe examples are done with function type coefficients only, but work for any type of coefficient.\nMixing coefficients type is possible, however this support would be removed if QobjEvo * QobjEvo is to be implemented.",
"# Build objects\no1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={\"w\":2})\no2 = QobjEvo([qeye(N),[create(N),cos_w]],args={\"w\":2})\nt = np.random.random()*10\n\n# addition and subtraction \no3 = o1 + o2\nprint(o3(t) == o1(t) + o2(t))\no3 = o1 - o2\nprint(o3(t) == o1(t) - o2(t))\no3 = o1 + destroy(N)\nprint(o3(t) == o1(t) + destroy(N))\no3 = o1 - destroy(N)\nprint(o3(t) == o1(t) - destroy(N))\n\n# product\noc = QobjEvo([qeye(N)])\no3 = o1 * destroy(N)\nprint(o3(t) == o1(t) * destroy(N))\no3 = o1 * (0.5+0.5j)\nprint(o3(t) == o1(t) * (0.5+0.5j))\no3 = o1 / (0.5+0.5j)\nprint(o3(t) == o1(t) / (0.5+0.5j))\no3 = o1 * oc\nprint(o3(t) == o1(t) * oc(t))\no3 = oc * o1\nprint(o3(t) == oc(t) * o1(t))\no3 = o1 * o2\nprint(o3(t) == o1(t) * o2(t))\n\no1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={\"w\":2})\no2 = QobjEvo([qeye(N),[create(N),cos_w]],args={\"w\":2})\no1 += o2\nprint(o1(t) == (qeye(N)*2 + destroy(N)*sin_w(t,args={\"w\":2}) + create(N)*cos_w(t,args={\"w\":2})))\n\no1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={\"w\":2})\no2 = QobjEvo([qeye(N),[create(N),cos_w]],args={\"w\":2})\no1 -= o2\nprint(o1(t) == (destroy(N)*sin_w(t,args={\"w\":2}) - create(N)*cos_w(t,args={\"w\":2})))\n\no1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={\"w\":2})\no2 = QobjEvo([qeye(N),[create(N),cos_w]],args={\"w\":2})\no1 += -o2\nprint(o1(t) == (destroy(N)*sin_w(t,args={\"w\":2}) - create(N)*cos_w(t,args={\"w\":2})))\n\no1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={\"w\":2})\no1 *= destroy(N)\nprint(o1(t) == (destroy(N) + destroy(N)*destroy(N)*sin_w(t,args={\"w\":2})))",
"Unitary operations:\n\nconj\ndag \ntrans \n_cdc: QobjEvo.dag * QobjEvo",
"o_real = QobjEvo([qeye(N),[destroy(N), sin_w]], args={\"w\":2})\no_cplx = QobjEvo([qeye(N),[create(N), cos_w]], args={\"w\":-1j})\n\nprint(o_real(t).trans() == o_real.trans()(t))\nprint(o_real(t).conj() == o_real.conj()(t))\nprint(o_real(t).dag() == o_real.dag()(t))\n\nprint(o_cplx(t).trans() == o_cplx.trans()(t))\nprint(o_cplx(t).conj() == o_cplx.conj()(t))\nprint(o_cplx(t).dag() == o_cplx.dag()(t))\n\n# the operator norm correspond to c.dag * c.\ntd_cplx_f0 = qobjevo.QobjEvo([qeye(N)])\ntd_cplx_f1 = qobjevo.QobjEvo([qeye(N),[destroy(N)*create(N),sin_w]], args={'w':2.+0.001j})\ntd_cplx_f2 = qobjevo.QobjEvo([qeye(N),[destroy(N),cos_w]], args={'w':2.+0.001j})\ntd_cplx_f3 = qobjevo.QobjEvo([qeye(N),[create(N),1j*np.sin(tlist)]], tlist=tlist)\n\nprint(td_cplx_f0(t).dag()*td_cplx_f0(t) == td_cplx_f0._cdc()(t))\nprint(td_cplx_f1(t).dag()*td_cplx_f1(t) == td_cplx_f1._cdc()(t))\nprint(td_cplx_f2(t).dag()*td_cplx_f2(t) == td_cplx_f2._cdc()(t))\nprint(td_cplx_f3(t).dag()*td_cplx_f3(t) == td_cplx_f3._cdc()(t))",
"Liouvillian and lindblad dissipator, to use in solver\nFunctions in qutip.superoperator can be used for QobjEvo.",
"td_L = liouvillian(H=func_QobjEvo)\nL = liouvillian(H=func_QobjEvo(t))\ntd_L(t) == L\n\ntd_cplx_f0 = qobjevo.QobjEvo([qeye(N)])\ntd_cplx_f1 = qobjevo.QobjEvo([[destroy(N)*create(N),sin_w]], args={'w':2.})\n\ntd_L = liouvillian(H=func_QobjEvo,c_ops=[td_cplx_f0,td_cplx_f1])\nL = liouvillian(H=func_QobjEvo(t),c_ops=[td_cplx_f0(t),td_cplx_f1(t)])\nprint(td_L(t) == L)\n\ntd_P = spre(td_cplx_f1)\nP = spre(td_cplx_f1(t))\nprint(td_P(t) == P)",
"Getting the list back for the object",
"print(td_L.to_list())",
"Arguments modification\nTo change the args: qobjevo.arguments(new_args)\nCall with other arguments without changing them: qobjevo.with_args(t, new_args)",
"def Args(t, args):\n return args['w']\ntd_args = qobjevo.QobjEvo([qeye(N), Args],args={'w':1.}) \nprint(td_args(t) == qeye(N))\ntd_args.arguments({'w':2.})\nprint(td_args(t) == qeye(N)*2)\nprint(td_args(t,args={'w':3.}) == qeye(N)*3)",
"When summing QobjEvo that have an arguments in common, only one is kept.",
"td_args_1 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args]],args={'w':1.})\ntd_args_2 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args]],args={'w':2.})\ntd_str_sum = td_args_1 + td_args_2\n\n# Only one value for args is kept\nprint(td_str_sum(t) == td_args_1(t) + td_args_2(t)) \nprint(td_str_sum(t) == 2*td_args_2(t)) \n\n# Updating args affect all part\ntd_str_sum.arguments({'w':1.})\nprint(td_str_sum(t) == 2*td_args_1(t))",
"Argument with different names are fine.",
"def Args2(t, args):\n return args['x']\ntd_args_1 = qobjevo.QobjEvo([qeye(N), [destroy(N), cos_w]],args={'w':1.})\ntd_args_2 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args2]],args={'x':2.})\ntd_str_sum = td_args_1 + td_args_2\n\n# Only one value for args is kept\nprint(td_str_sum(t) == td_args_1(t) + td_args_2(t))",
"Other",
"# Obtain the sparce matrix at a time t instead of a Qobj\nstr_QobjEvo(1, data=True)\n\n# Test is the QobjEvo does depend on time\nprint(cte_QobjEvo.const)\nprint(str_QobjEvo.const)\n\n# Obtain the size, shape, oper flag etc:\n# The QobjEvo.cte always exist and contain the constant part of the QobjEvo\n# It can be used to get the shape, etc. since the QobjEvo do not directly have them.\ntd_cplx_f1 = qobjevo.QobjEvo([[destroy(N)*create(N),sin_w]], args={'w':2.})\nprint(td_cplx_f1.cte.dims)\nprint(td_cplx_f1.cte.shape)\nprint(td_cplx_f1.cte.isoper)\nprint(td_cplx_f1.cte)\n\n# Creating a copy\nstr_QobjEvo_2 = str_QobjEvo.copy()\nstr_QobjEvo_2 += 1\nstr_QobjEvo_2(1) - str_QobjEvo(1)\n\nabout()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jeff-regier/Celeste.jl | experiments/galsim.ipynb | mit | [
"Experiments with GalSim data\nGenerate images with GalSim\nThe GalSim library is available from https://github.com/GalSim-developers/GalSim.",
"# experiments/galsim_helper.py contains our functions for interacting with GalSim\nimport galsim_helper\n\ndef three_sources_two_overlap(test_case):\n test_case.add_star().offset_arcsec(-5, 5)\n (test_case.add_galaxy()\n .offset_arcsec(2, 5)\n .gal_angle_deg(35)\n .axis_ratio(0.2)\n )\n test_case.add_star().offset_arcsec(10, -10)\n test_case.include_noise = True\n\ngalsim_helper.generate_fits_file(\"three_sources_two_overlap\", [three_sources_two_overlap, ])",
"Aperture photometry with SExtractor",
"# sep is a Python interface to the code SExtractor libraries.\n# See https://sep.readthedocs.io/ for documentation.\nimport sep\n\nimport numpy as np\nfrom astropy.io import fits\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams\n%matplotlib inline\nrcParams['figure.figsize'] = [10., 8.]\n\n# read image into standard 2-d numpy array\nhdul = fits.open(\"three_sources_two_overlap.fits\")\n\ndata = hdul[2].data\ndata = data.byteswap().newbyteorder()\n\n# show the image\nm, s = np.mean(data), np.std(data)\nplt.imshow(data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')\nplt.colorbar();\n\n# measure a spatially varying background on the image\nbkg = sep.Background(data)\n\n# get a \"global\" mean and noise of the image background:\nprint(bkg.globalback)\nprint(bkg.globalrms)\n\n# evaluate background as 2-d array, same size as original image\nbkg_image = bkg.back()\n# bkg_image = np.array(bkg) # equivalent to above\n\n# show the background\nplt.imshow(bkg_image, interpolation='nearest', cmap='gray', origin='lower')\nplt.colorbar();\n\n# subtract the background\ndata_sub = data - bkg\n\nobjs = sep.extract(data_sub, 1.5, err=bkg.globalrms)\n\n# how many objects were detected\nlen(objs)\n\nfrom matplotlib.patches import Ellipse\n\n# plot background-subtracted image\nfig, ax = plt.subplots()\nm, s = np.mean(data_sub), np.std(data_sub)\nim = ax.imshow(data_sub, interpolation='nearest', cmap='gray',\n vmin=m-s, vmax=m+s, origin='lower')\n\n# plot an ellipse for each object\nfor i in range(len(objs)):\n e = Ellipse(xy=(objs['x'][i], objs['y'][i]),\n width=6*objs['a'][i],\n height=6*objs['b'][i],\n angle=objs['theta'][i] * 180. / np.pi)\n e.set_facecolor('none')\n e.set_edgecolor('red')\n ax.add_artist(e)\n\nnelecs_per_nmgy = hdul[2].header[\"CLIOTA\"]\n\ndata_sub.sum() / nelecs_per_nmgy\n\nkronrad, krflag = sep.kron_radius(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'], 6.0)\nflux, fluxerr, flag = sep.sum_ellipse(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'],\n kronrad, subpix=1)\n\nflux_nmgy = flux / nelecs_per_nmgy\nfluxerr_nmgy = fluxerr / nelecs_per_nmgy\n\nfor i in range(len(objs)):\n print(\"object {:d}: flux = {:f} +/- {:f}\".format(i, flux_nmgy[i], fluxerr_nmgy[i]))\n\nkronrad, krflag = sep.kron_radius(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'], 4.5)\nflux, fluxerr, flag = sep.sum_ellipse(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'],\n kronrad, subpix=1)\n\nflux_nmgy = flux / nelecs_per_nmgy\nfluxerr_nmgy = fluxerr / nelecs_per_nmgy\n\nfor i in range(len(objs)):\n print(\"object {:d}: flux = {:f} +/- {:f}\".format(i, flux_nmgy[i], fluxerr_nmgy[i]))",
"Celeste.jl estimates these flux densities much better. The galsim_julia.ipynb notebook shows a run of Celeste.jl on the same data.\nComparision to Hyper Suprime-Cam (HSC) software pipeline\nHSC often fails to deblend images with three light sources in a row, including the following one:\n\n\"The single biggest failure mode of the deblender occurs when three or more peaks in a blend appear in a straight\nline\" -- Bosch, et al. \"The Hyper Suprime-Cam software pipeline.\" (2018)\nSo let's use galsim to generate an images with three peaks in a row!",
"def three_sources_in_a_row(test_case):\n x = [-11, -1, 12]\n test_case.add_galaxy().offset_arcsec(x[0], 0.3 * x[0]).gal_angle_deg(45)\n test_case.add_galaxy().offset_arcsec(x[1], 0.3 * x[1]).flux_r_nmgy(3)\n test_case.add_star().offset_arcsec(x[2], 0.3 * x[2]).flux_r_nmgy(3)\n test_case.include_noise = True\n\ngalsim_helper.generate_fits_file(\"three_sources_in_a_row\", [three_sources_in_a_row, ])\n\nhdul = fits.open(\"three_sources_in_a_row.fits\")\ndata = hdul[2].data\ndata = data.byteswap().newbyteorder()\n# show the image\nm, s = np.mean(data), np.std(data)\nfig = plt.imshow(data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')\nplt.colorbar();",
"Celeste.jl estimates these flux densities much better. The galsim_julia.ipynb notebook shows a run of Celeste.jl on the same data."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
oroszl/topins | RM.ipynb | gpl-2.0 | [
"Rice-Mele model and adiabatic pumping\n\nIn this notebook we explore adiabatic pumping in time dependent one dimensional systems through the Rice-Mele model. The Rice-Mele model is a generalization of the Su-Schrieffer-Heeger model where we allow for generally time dependent hopping parameters and also add an extra time dependent sublattice potenital $u$ that has opposite sign on the two sublattices.",
"# The usual imports\n%pylab inline\nfrom ipywidgets import *\n# Some extra imports for 3D \nfrom mpl_toolkits.mplot3d import *\n# These are only needed to make things pretty..\n# they are mostly refered to in the formatting part of the figures\n# and enshure us to have the figures also present in the book.\nfrom matplotlib.patches import FancyArrowPatch\nclass Arrow3D(FancyArrowPatch):\n def __init__(self, xs, ys, zs, *args, **kwargs):\n FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n self._verts3d = xs, ys, zs\n\n def draw(self, renderer):\n xs3d, ys3d, zs3d = self._verts3d\n xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n FancyArrowPatch.draw(self, renderer)\n\n# this generates a parameter mesh in momentum and time\nkran,tran=meshgrid(linspace(-pi,pi,30),linspace(0,1,51))\n# a helper function for defining the d vector\ndef dkt(k,t,uvw):\n '''\n A simple function that returns the d vector of the RM model.\n '''\n return [uvw(t)[1]+uvw(t)[2]*cos(k),uvw(t)[2]*sin(k),uvw(t)[0]]",
"The control freak sequence\nLet us first explore a time dependent pump which for all times $t$ is in the dimerized limit, that is either the intracell $v(t)$ or the intercell $w(t)$ hopping is zero.",
"def f(t):\n '''\n A piecewise function for the control freak sequence\n used to define u(t),v(t),w(t)\n '''\n\n t=mod(t,1);\n \n return (\n 8*t*((t>=0)&(t<1/8))+\\\n (0*t+1)*((t>=1/8)&(t<3/8))+\\\n (4-8*t)*((t>=3/8)&(t<1/2))+\\\n 0*t*((t>=1/2)&(t<1))); \n\ndef uvwCF(t):\n '''\n u,v and w functions of the control freak sequence\n '''\n return array([f(t)-f(t-1/2),2*f(t+1/4),f(t-1/4)])",
"Below we write a generic function that takes the functions $u(t)$,$v(t)$ and $w(t)$ as an argument and then visualizes the pumping process in $d$-space. We will use this function to explore the control freak sequence and the later on also the not so control freak sequence.",
"def seq_and_d(funcs,ti=10):\n '''\n A figure generating function for the Rice Mele model.\n It plots the functions defining the sequence and the d-space structure. \n '''\n figsize(10,5)\n fig=figure()\n func=eval(funcs);\n ax1=fig.add_subplot(121)\n ftsz=20\n # plotting the functions defining the sequence\n plot(tran[:,0],func(tran[:,0])[1],'k-',label=r'$v$',linewidth=3)\n plot(tran[:,0],func(tran[:,0])[2],'g--',label=r'$w$',linewidth=3)\n plot(tran[:,0],func(tran[:,0])[0],'m-',label=r'$u$',linewidth=3)\n plot([tran[ti,0],tran[ti,0]],[-3,3],'r-',linewidth=3)\n # this is just to make things look like in the book\n ylim(-1.5,2.5)\n legend(fontsize=20,loc=3)\n xlabel(r'time $t/T$',fontsize=ftsz)\n xticks(linspace(0,1,5),[r'$0$',r'$0.25$',r'$0.5$',r'$0.75$',r'$1$'],fontsize=ftsz)\n ylabel(r'amplitudes $u,v,w$',fontsize=ftsz)\n yticks([-1,0,1,2],[r'$-1$',r'$0$',r'$1$',r'$2$'],fontsize=ftsz)\n grid(True)\n\n ax2=fig.add_subplot(122, projection='3d')\n # plotting d space image of the pumping sequence\n plot(*dkt(kran[ti,:],tran[ti,:],func),marker='o',mec='red',mfc='red',ls='-',lw=6,color='red')\n plot(*dkt(kran.flatten(),tran.flatten(),func),color='blue',alpha=0.5)\n # this is just to make things look like in the book\n # basically everything below is just to make things look nice..\n ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\n ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\n ax2.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\n ax2.set_axis_off()\n ax2.grid(False)\n arrprop=dict(mutation_scale=20, lw=1,arrowstyle='-|>,head_length=1.4,head_width=0.6',color=\"k\")\n ax2.add_artist(Arrow3D([-2,4],[0,0],[0,0], **arrprop))\n ax2.add_artist(Arrow3D([0,0],[-2,3.3],[0,0], **arrprop))\n ax2.add_artist(Arrow3D([0,0],[0,0],[-1,2], **arrprop))\n ftsz2=30\n ax2.text(4.4, -1, 0, r'$d_x$', None,fontsize=ftsz2)\n ax2.text(0.3, 3.0, 0, r'$d_y$', None,fontsize=ftsz2)\n ax2.text(0, 0.6, 2.0, r'$d_z$', None,fontsize=ftsz2)\n ax2.plot([0],[0],[0],'ko',markersize=8)\n ax2.view_init(elev=21., azim=-45)\n ax2.set_aspect(1.0)\n ax2.set_zlim3d(-0.5, 2) \n ax2.set_ylim3d(-0.5, 2)\n ax2.set_xlim3d(-0.5, 2)\n tight_layout()\n",
"Now let us see what happends as time proceedes!",
"interact(seq_and_d,funcs=fixed('uvwCF'),ti=(0,len(tran[:,0])-1));",
"Now that we have explored the momentum space behaviour let us again look at a small real space sample! First we define a function that generates Rice-Mele type finitel lattice Hamiltonians for given values of $u$,$v$ and $w$.",
"def H_RM_reals(L,u,v,w,**kwargs):\n '''\n A function to bulid a finite RM chain.\n The number of unitcells is L.\n As usual v is intracell and w ins intercell hopping.\n We also have now an asymmetric sublattice potential u.\n '''\n idL=eye(L); # identity matrix of dimension L\n odL=diag(ones(L-1),1);# upper off diagonal matrix with ones of size L\n odc=matrix(diag([1],-L+1));#lower corner for periodic boundary condition\n U=matrix([[u,v],[v,-u]]) # intracell\n T=matrix([[0,0],[1,0]]) # intercell\n \n p=0\n if kwargs.get('periodic',False):\n p=1\n \n H=(kron(idL,U)+\n kron(odL,w*T)+\n kron(odL,w*T).H+\n p*(kron(odc,w*T)+kron(odc,w*T).H))\n return H\n\n",
"Next we define a class that we will mainly use to hold data about our pumping sequence. The information in these objects will be used to visualize the spectrum and wavefunctions of bulk and edge localized states.",
"class pumpdata:\n '''\n A class that holds information on spectrum and wavefunctions\n of a pump sequence performed on a finite lattice model.\n Default values are tailored to the control freak sequence.\n '''\n def __init__(self,L=10,numLoc=1,norm_treshold=0.99,func=uvwCF,**kwargs):\n '''\n Initialization function. The default values are set in such a way that they correspond\n to the control freak sequence.\n '''\n \n \n self.L=L \n self.dat=[] # We will collect the data to be\n self.vecdat=[] # plotted in these arrays.\n self.lefty=[]\n self.righty=[]\n self.lefty=[]\n self.righty=[]\n \n tlim=kwargs.get('edge_tlim',(0,1)) # We can use this to restrict classification \n # of left and right localized states in time \n for t in tran[:,0]:\n u,v,w=func(t) # obtain u(t),v(t) and w(t)\n H=H_RM_reals(L,u,v,w) # \n eigdat=eigh(H); # for a given t here we calculate the eigensystem (values and vectors)\n if tlim[0]<t<tlim[1]:\n # for the interesting time intervall we look for states localized to the edge\n for i in range(2*L):\n if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[0:2*numLoc:2])>norm_treshold:\n self.lefty=append(self.lefty,[[t,eigdat[0][i]]]);\n if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[:L-2*numLoc:-2])>norm_treshold:\n self.righty=append(self.righty,[[t,eigdat[0][i]]]);\n\n self.dat=append(self.dat,eigdat[0]);\n self.vecdat=append(self.vecdat,eigdat[1]);\n \n self.dat=reshape(self.dat,[len(tran[:,0]),2*L]); # rewraping the data\n self.vecdat=reshape(self.vecdat,[len(tran[:,0]),2*L,2*L]) # to be more digestable\n",
"Now let us create an instance of the above class with the data of the control freak pump sequence:",
"# Filling up data for the control freak sequence\nCFdata=pumpdata(edge_tlim=(0.26,0.74))",
"Finally we write a simple function to visualize the spectrum and the wavefunctions in a symmilar fashion as we did for the SSH model. We shall now explicitly mark the edge states in the spectrum with red and blue.",
"def enpsi(PD,ti=10,n=10):\n figsize(14,5)\n subplot(121)\n lcol='#53a4d7'\n rcol='#d7191c'\n # Plotting the eigenvalues and \n # a marker showing for which state \n # we are exploring the wavefunction\n plot(tran[:,0],PD.dat,'k-'); \n (lambda x:plot(x[:,0],x[:,1],'o',mec=lcol,mfc=lcol,\n markersize=10))(reshape(PD.lefty,(PD.lefty.size/2,2)))\n (lambda x:plot(x[:,0],x[:,1],'o',mec=rcol,mfc=rcol,\n markersize=10))(reshape(PD.righty,(PD.righty.size/2,2)))\n plot(tran[ti,0],PD.dat[ti,n],'o',markersize=13,mec='k',mfc='w')\n \n # Make it look like the book \n xlabel(r'$t/T$',fontsize=25);\n xticks(linspace(0,1,5),fontsize=25)\n ylabel(r'energy $E$',fontsize=25);\n yticks(fontsize=25)\n ylim(-2.99,2.99)\n grid()\n\n subplot(122)\n # Plotting the sublattice resolved wavefunction\n bar(array(range(0,2*PD.L,2)), real(array(PD.vecdat[ti][0::2,n].T)),0.9,color='grey',label='A') # sublattice A\n bar(array(range(0,2*PD.L,2))+1,real(array(PD.vecdat[ti][1::2,n].T)),0.9,color='white',label='B') # sublattice B\n \n # Make it look like the book\n xticks(2*(array(range(10))),[' '+str(i) for i in array(range(11))[1:]],fontsize=25)\n ylim(-1.2,1.2)\n yticks(linspace(-1,1,5),fontsize=25,x=1.2)\n ylabel('Wavefunction',fontsize=25,labelpad=-460,rotation=-90)\n grid()\n legend(loc='lower right')\n xlabel(r'cell index $m$',fontsize=25);\n \n tight_layout()",
"We can now interact with the above function and see the evolution of the surface states.",
"interact(enpsi,PD=fixed(CFdata),ti=(0,len(tran[:,0])-1),n=(0,19));",
"To complete the analysis of the control freak sequence we now investigate the flow of Wannier centers in time in a chain with periodic boundary conditions. We again first define a class that holds the approporiate data and then write a plotting function.",
"\nclass wannierflow:\n '''\n A class that holds information on Wannier center flow.\n \n '''\n def __init__(self,L=6,func=uvwCF,periodic=True,tspan=linspace(0,1,200),**kwargs):\n self.L=L\n self.func=func\n self.periodic=periodic\n self.tspan=tspan\n # get position operator\n if self.periodic:\n POS=matrix(kron(diag(exp(2.0j*pi*arange(L)/(L))),eye(2))) \n else:\n POS=matrix(kron(diag(arange(1,L+1)),eye(2))) \n Lwanflow=[]\n Hwanflow=[]\n Lwane=[]\n Hwane=[]\n for t in tspan:\n u,v,w=self.func(t) \n H=H_RM_reals(L,u,v,w,periodic=periodic)\n sys=eigh(H)\n \n Lval=sys[0][sys[0]<0]\n Lvec=matrix(sys[1][:,sys[0]<0])\n LP=Lvec*Lvec.H\n LW=LP*POS*LP\n LWval,LWvec=eig(LW)\n LWvec=LWvec[:,abs(LWval)>1e-10]\n LWe=real(diag(LWvec.H*H*LWvec))\n \n Hval=sys[0][sys[0]>0]\n Hvec=matrix(sys[1][:,sys[0]>0])\n HP=Hvec*Hvec.H\n HW=HP*POS*HP\n HWval,HWvec=eig(HW)\n HWvec=HWvec[:,abs(HWval)>1e-10]\n HWe=real(diag(HWvec.H*H*HWvec))\n \n Lwane=append(Lwane,LWe)\n Hwane=append(Hwane,HWe)\n if periodic:\n Lwanflow=append(Lwanflow,L/(2*pi)*sort(angle(LWval[abs(LWval)>1e-10])))\n Hwanflow=append(Hwanflow,L/(2*pi)*sort(angle(HWval[abs(HWval)>1e-10])))\n else:\n Lwanflow=append(Lwanflow,sort(LWval[abs(LWval)>1e-10]))\n Hwanflow=append(Hwanflow,sort(HWval[abs(HWval)>1e-10]))\n self.Lwanflow=Lwanflow\n self.Hwanflow=Hwanflow\n self.Lwane=Lwane\n self.Hwane=Hwane\n\n def plot_w_vs_t(self,LorH='Lower band',*args,**kwargs):\n '''\n A function for plotting the Wannier flow.\n The Wannier centers against time are plotted.\n '''\n #figsize(7,5)\n data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')\n for i in range(self.L): \n descr=(LorH if i==0 else '') \n plot(real(data[i::self.L]),self.tspan,*args,label=descr,**kwargs)\n \n if self.periodic:\n xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)\n else:\n xticks(arange(self.L)+1,fontsize=25)\n yticks(linspace(0,1,5),fontsize=25)\n xlabel(r'position $\\langle \\hat{x}\\rangle$',fontsize=25);\n ylabel(r\"time $t/T$\",fontsize=25);\n grid()\n\n \n def plot_w_vs_e(self,LorH='Lower band',*args,**kwargs):\n '''\n A function for plotting the Wannier flow.\n The Wannier centers against energy are plotted.\n '''\n #figsize(7,5)\n dataw=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')\n datae=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wane')\n for i in range(self.L): \n descr=(LorH if i==0 else '')\n plot(dataw[i::self.L],datae[i::self.L],*args,label=descr,**kwargs)\n \n pos=100\n vx=real(dataw[i::self.L][pos:(pos+2)])\n vy=real(datae[i::self.L][pos:(pos+2)])\n #plot(vx[0],vy[0],'bo')\n \n arrow(vx[0],vy[0],\n (vx[1]-vx[0])/2,\n (vy[1]-vy[0])/2,fc='k',zorder=1000, \n head_width=0.3, head_length=0.1)\n if self.periodic:\n xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)\n else:\n xticks(arange(self.L)+1,fontsize=25)\n yticks(fontsize=25)\n xlabel(r'position $\\langle \\hat{x}\\rangle$',fontsize=25);\n ylabel(r'energy $\\langle \\hat{H}\\rangle$',fontsize=25);\n grid()\n\n \n def polar_w_vs_t(self,LorH='Lower band',*args,**kwargs):\n '''\n A function for plotting the Wannier flow.\n A figure in polar coordinates is produced.\n '''\n if self.periodic==False:\n print('This feature is only supported for periodic boundary conditions')\n return\n #figsize(7,7)\n data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')\n for i in range(self.L): \n descr=(LorH if i==0 else '') \n plot((self.tspan+0.5)*cos((2*pi)/self.L*data[i::self.L]),\n (self.tspan+0.5)*sin((2*pi)/self.L*data[i::self.L]),\n 
*args,label=descr,**kwargs)\n phi=linspace(0,2*pi,100);\n plot(0.5*sin(phi),0.5*cos(phi),'k-',linewidth=2);\n plot(1.5*sin(phi),1.5*cos(phi),'k-',linewidth=2);\n xlim(-1.5,1.5);\n ylim(-1.5,1.5);\n phiran=linspace(-pi,pi,self.L+1)\n for i in range(len(phiran)-1):\n phi0=0\n plot([0.5*sin(phiran[i]+phi0),1.5*sin(phiran[i]+phi0)],\n [0.5*cos(phiran[i]+phi0),1.5*cos(phiran[i]+phi0)],'k--')\n text(1.3*cos(phiran[i]+pi/self.L/2),1.3*sin(phiran[i]+pi/self.L/2),i+1,fontsize=20)\n axis('off')\n text(-0.45,-0.1,r'$t/T=0$',fontsize=20)\n text(1.1,-1.1,r'$t/T=1$',fontsize=20)\n\nCFwan=wannierflow()\n\nfigsize(12,4)\nsubplot(121)\nCFwan.plot_w_vs_t('Lower band','ko',ms=10)\nCFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')\nlegend(fontsize=15,numpoints=100);\nsubplot(122)\nCFwan.plot_w_vs_e('Lower band','k.')\nCFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')\n#legend(fontsize=15,numpoints=100);\ntight_layout()",
"An alternative way to visualize Wannier flow of a periodic system is shown below. The inner circle represent $t/T=0$ and the outer $t/T=1$, the sections of the disc correspond to unitcells.",
"figsize(6,6)\nCFwan.polar_w_vs_t('Lower band','ko',ms=10)\nCFwan.polar_w_vs_t('Higher band','o',mec='grey',mfc='grey')\nlegend(numpoints=100,fontsize=15,ncol=2,bbox_to_anchor=(1,0));",
"If we investigate pumping in a finite but sample without periodic boundary condition we will see that the edgestates cross the gap!",
"CFwan_finite=wannierflow(periodic=False)\nfigsize(6,4)\nCFwan_finite.plot_w_vs_t('Lower band','ko',ms=10)\nCFwan_finite.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')\nlegend(fontsize=15,numpoints=100);\nxlim(0,7);",
"We have now done all the heavy lifting with regards of coding. Now we can reuse all the plotting and data generating classes and functions for other sequences.\nMoving away from the control freak sequence\nLet us now relax the control freak attitude and consider a model which is not strictly localized at all times!",
"def uvwNSCF(t):\n '''\n The u,v and w functions of the not so control freak sequence.\n For the time beeing we assume vbar to be fixed.\n '''\n vbar=1\n return array([sin(t*(2*pi)),vbar+cos(t*(2*pi)),1*t**0])",
"The $d$ space story can now be easily explored via the seq_and_d function we have defined earlier.",
"interact(seq_and_d,funcs=fixed('uvwNSCF'),ti=(0,len(tran[:,0])-1));",
"Similarly the spectrum and wavefunctions can also be investigated via the pumpdata class:",
"# Generating the not-so control freak data\nNSCFdata=pumpdata(numLoc=2,norm_treshold=0.6,func=uvwNSCF)\n\ninteract(enpsi,PD=fixed(NSCFdata),ti=(0,len(tran[:,0])-1),n=(0,19));",
"Finally wannierflow class let us see the movement of the Wannier centers.",
"NSCFwan=wannierflow(periodic=True,func=uvwNSCF)\nfigsize(12,4)\nsubplot(121)\nNSCFwan.plot_w_vs_t('Lower band','ko',ms=10)\nNSCFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')\nlegend(fontsize=15,numpoints=100);\nsubplot(122)\nNSCFwan.plot_w_vs_e('Lower band','k.')\nNSCFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')\n#legend(fontsize=15,numpoints=100);\ntight_layout()\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |