repo_name | path | license | cells | types
---|---|---|---|---|
isb-cgc/examples-Python | notebooks/BRAF-V600 study using CCLE data.ipynb | apache-2.0 | [
"Working with BigQuery tables and the Genomics API\nCase Study: BRAF V600 mutations in CCLE cell-lines\nIn this notebook we'll show you how you might combine information available in BigQuery tables with sequence-reads that have been imported into Google Genomics. We'll be using the open-access CCLE data for this example.\nYou'll need to make sure that your project has the necessary APIs enabled, so take a look at the Getting started with Google Genomics page, and be sure to also have a look at this Getting started with the Genomics API tutorial notebook available on github.\nWe'll be using the Google Python API client so we'll need to install that first using the pip package manager.\nNOTE that Datalab is currently using an older version of the oauth2client (1.4.12) and as a result we need to install an older version of the google-api-python-client that supports it.",
"!pip install --upgrade google-api-python-client==1.4.2",
"Next we're going to need to authenticate using the service account on the Datalab host.",
"from httplib2 import Http\nfrom oauth2client.client import GoogleCredentials\ncredentials = GoogleCredentials.get_application_default()\nhttp = Http()\ncredentials.authorize(http)",
"Now we can create a client for the Genomics API. NOTE that in order to use the Genomics API, you need to have enabled it for your GCP project.",
"from apiclient import discovery\nggSvc = discovery.build ( 'genomics', 'v1', http=http )",
"We're also going to want to work with BigQuery, so we'll need the biguery module. We will also be using the pandas and time modules.",
"import gcp.bigquery as bq\nimport pandas as pd\nimport time",
"The ISB-CGC group has assembled metadata as well as molecular data from the CCLE project into an open-access BigQuery dataset called isb-cgc:ccle_201602_alpha. In this notebook we will make use of two tables in this dataset: Mutation_calls and DataFile_info. You can explore the entire dataset using the BigQuery web UI.\nLet's say that we're interested in cell-lines with BRAF V600 mutations, and in particular we want to see if there is evidence in both the DNA-seq and the RNA-seq data for these mutations. Let's start by making sure that there are some cell-lines with these mutations in our dataset:",
"%%sql\n\nSELECT CCLE_name, Hugo_Symbol, Protein_Change, Genome_Change \nFROM [isb-cgc:ccle_201602_alpha.Mutation_calls] \nWHERE ( Hugo_Symbol=\"BRAF\" AND Protein_Change CONTAINS \"p.V600\" )\nORDER BY Cell_line_primary_name\nLIMIT 5",
"OK, so let's get the complete list of cell-lines with this particular mutation:",
"%%sql --module get_mutated_samples\n\nSELECT CCLE_name \nFROM [isb-cgc:ccle_201602_alpha.Mutation_calls] \nWHERE ( Hugo_Symbol=\"BRAF\" AND Protein_Change CONTAINS \"p.V600\" )\nORDER BY Cell_line_primary_name\n\nr = bq.Query(get_mutated_samples).results()\nlist1 = r.to_dataframe()\nprint \" Found %d samples with a BRAF V600 mutation. \" % len(list1)",
"Now we want to know, from the DataFile_info table, which cell lines have both DNA-seq and RNA-seq data imported into Google Genomics. (To find these samples, we will look for samples that have non-null readgroupset IDs from \"DNA\" and \"RNA\" pipelines.)",
"%%sql --module get_samples_with_data\n\nSELECT\n a.CCLE_name AS CCLE_name\nFROM (\n SELECT\n CCLE_name\n FROM\n [isb-cgc:ccle_201602_alpha.DataFile_info]\n WHERE\n ( Pipeline CONTAINS \"DNA\"\n AND GG_readgroupset_id<>\"NULL\" ) ) a\nJOIN (\n SELECT\n CCLE_name\n FROM\n [isb-cgc:ccle_201602_alpha.DataFile_info]\n WHERE\n ( Pipeline CONTAINS \"RNA\"\n AND GG_readgroupset_id<>\"NULL\" ) ) b\nON\n a.CCLE_name = b.CCLE_name\n\nr = bq.Query(get_samples_with_data).results()\nlist2 = r.to_dataframe()\nprint \" Found %d samples with both DNA-seq and RNA-seq reads. \" % len(list2)",
"Now let's find out which samples are in both of these lists:",
"list3 = pd.merge ( list1, list2, how='inner', on=['CCLE_name'] )\nprint \" Found %d mutated samples with DNA-seq and RNA-seq data. \" % len(list3)",
"No we're going to take a closer look at the reads from each of these samples. First, we'll need to be able to get the readgroupset IDs for each sample from the BigQuery table. To do this, we'll define a parameterized function:",
"%%sql --module get_readgroupsetid\n\nSELECT Pipeline, GG_readgroupset_id \nFROM [isb-cgc:ccle_201602_alpha.DataFile_info]\nWHERE CCLE_name=$c AND GG_readgroupset_id<>\"NULL\"",
"Let's take a look at how this will work:",
"aName = list3['CCLE_name'][0]\nprint aName\nids = bq.Query(get_readgroupsetid,c=aName).to_dataframe()\nprint ids",
"Ok, so we see that for this sample, we have two readgroupset IDs, one based on DNA-seq and one based on RNA-seq. This is what we expect, based on how we chose this list of samples.\nNow we'll define a function we can re-use that calls the GA4GH API reads.search method to find all reads that overlap the V600 mutation position. Note that we will query all of the readgroupsets that we get for each sample at the same time by passing in a list of readGroupSetIds. Once we have the reads, we'll organized them into a dictionary based on the local context centered on the mutation hotspot.",
"chr = \"7\"\npos = 140453135\nwidth = 11\nrgsList = ids['GG_readgroupset_id'].tolist()\n\ndef getReads ( rgsList, pos, width):\n \n payload = { \"readGroupSetIds\": rgsList,\n \"referenceName\": chr,\n \"start\": pos-(width/2),\n \"end\": pos+(width/2),\n \"pageSize\": 2048 \n }\n r = ggSvc.reads().search(body=payload).execute()\n \n context = {}\n for a in r['alignments']:\n rgsid = a['readGroupSetId']\n seq = a['alignedSequence']\n seqStartPos = int ( a['alignment']['position']['position'] )\n relPos = pos - (width/2) - seqStartPos\n if ( relPos >=0 and relPos+width<len(seq) ):\n # print rgsid, seq[relPos:relPos+width]\n c = seq[relPos:relPos+width]\n if (c not in context):\n context[c] = {}\n context[c][rgsid] = 1\n else:\n if (rgsid not in context[c]):\n context[c][rgsid] = 1\n else:\n context[c][rgsid] += 1\n\n for c in context:\n numReads = 0\n for a in context[c]:\n numReads += context[c][a]\n # write it out only if we have information from two or more readgroupsets\n if ( numReads>3 or len(context[c])>1 ):\n print \" --> \", c, context[c]",
"Here we define the position (0-based) of the BRAF V600 mutation:",
"chr = \"7\"\npos = 140453135\nwidth = 11",
"OK, now we can loop over all of the samples we found earlier:",
"for aName in list3['CCLE_name']: \n print \" \"\n print \" \"\n print aName\n r = bq.Query(get_readgroupsetid,c=aName).to_dataframe()\n for i in range(r.shape[0]):\n print \" \", r['Pipeline'][i], r['GG_readgroupset_id'][i]\n rgsList = r['GG_readgroupset_id'].tolist()\n getReads ( rgsList, pos, width)\n ",
"Notice that we consistently see greater read-depth in the DNA-seq data. Also all but the last sample are heterozygous for the V600 mutation, while WM1799_SKIN is homozygous. (Of course a proper analysis would also take into consideration the cigar information that is available with each read as well.)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joshwalawender/KeckUtilities | SlitAlign/TestAlign.ipynb | bsd-2-clause | [
"%matplotlib inline\nimport os\nfrom matplotlib import pyplot as plt\nfrom scipy import ndimage\n\nimport numpy as np\nfrom astropy.io import fits\nfrom astropy import units as u\nfrom astropy.modeling import models, fitting, Fittable2DModel, Parameter\nfrom ccdproc import CCDData, combine, Combiner, flat_correct, trim_image",
"I'm going to be fitting a model to the alignment box image. This model will be the alignment box itself, plus a single 2D gaussian star. The following class is an astropy.models model of the trapezoidal shape of the MOSFIRE alignment box.",
"class mosfireAlignmentBox(Fittable2DModel):\n amplitude = Parameter(default=1)\n x_0 = Parameter(default=0)\n y_0 = Parameter(default=0)\n x_width = Parameter(default=1)\n y_width = Parameter(default=1)\n\n @staticmethod\n def evaluate(x, y, amplitude, x_0, y_0, x_width, y_width):\n '''MOSFIRE Alignment Box.\n \n Typical widths are 22.5 pix horizontally and 36.0 pix vertically.\n \n Angle of slit relative to pixels is 3.78 degrees.\n '''\n slit_angle = -3.7 # in degrees\n x0_of_y = x_0 + (y-y_0)*np.sin(slit_angle*np.pi/180)\n \n x_range = np.logical_and(x >= x0_of_y - x_width / 2.,\n x <= x0_of_y + x_width / 2.)\n y_range = np.logical_and(y >= y_0 - y_width / 2.,\n y <= y_0 + y_width / 2.)\n result = np.select([np.logical_and(x_range, y_range)], [amplitude], 0)\n\n if isinstance(amplitude, u.Quantity):\n return Quantity(result, unit=amplitude.unit, copy=False)\n else:\n return result\n\n @property\n def input_units(self):\n if self.x_0.unit is None:\n return None\n else:\n return {'x': self.x_0.unit,\n 'y': self.y_0.unit}\n\n def _parameter_units_for_data_units(self, inputs_unit, outputs_unit):\n return OrderedDict([('x_0', inputs_unit['x']),\n ('y_0', inputs_unit['y']),\n ('x_width', inputs_unit['x']),\n ('y_width', inputs_unit['y']),\n ('amplitude', outputs_unit['z'])])",
"This is a simple helper function which I stole from my CSU_initializer project. It may not be necessary as I am effectively fitting the location of the alignment box twice.",
"def fit_edges(profile):\n fitter = fitting.LevMarLSQFitter()\n\n amp1_est = profile[profile == min(profile)][0]\n mean1_est = np.argmin(profile)\n amp2_est = profile[profile == max(profile)][0]\n mean2_est = np.argmax(profile)\n \n g_init1 = models.Gaussian1D(amplitude=amp1_est, mean=mean1_est, stddev=2.)\n g_init1.amplitude.max = 0\n g_init1.amplitude.min = amp1_est*0.9\n g_init1.stddev.max = 3\n g_init2 = models.Gaussian1D(amplitude=amp2_est, mean=mean2_est, stddev=2.)\n g_init2.amplitude.min = 0\n g_init2.amplitude.min = amp2_est*0.9\n g_init2.stddev.max = 3\n\n model = g_init1 + g_init2\n fit = fitter(model, range(0,horizontal_profile.shape[0]), horizontal_profile)\n \n # Check Validity of Fit\n if abs(fit.stddev_0.value) <= 3 and abs(fit.stddev_1.value) <= 3\\\n and fit.amplitude_0.value < -1 and fit.amplitude_1.value > 1\\\n and fit.mean_0.value > fit.mean_1.value:\n x1 = fit.mean_0.value\n x2 = fit.mean_1.value\n else:\n x1 = None\n x2 = None\n\n return x1, x2",
"Create Master Flat\nRather than take time to obtain a sky frame for each mask alignment, I am going to treat the sky background as a constant over the alignment box area (roughly 4 x 7 arcsec). To do that, I need to flat field the image.\nNote that this flat field is built using data from a different night than the alignment box image we will be processing.",
"filepath = '../../../KeckData/MOSFIRE_FCS/'\ndark = CCDData.read(os.path.join(filepath, 'm180130_0001.fits'), unit='adu')\n\nflatfiles = ['m180130_0320.fits',\n 'm180130_0321.fits',\n 'm180130_0322.fits',\n 'm180130_0323.fits',\n 'm180130_0324.fits',\n ]\nflats = []\nfor i,file in enumerate(flatfiles):\n flat = CCDData.read(os.path.join(filepath, file), unit='adu')\n flat = flat.subtract(dark)\n flats.append(flat)\n\nflat_combiner = Combiner(flats)\nflat_combiner.sigma_clipping()\nscaling_func = lambda arr: 1/np.ma.average(arr)\nflat_combiner.scaling = scaling_func\nmasterflat = flat_combiner.median_combine()\n\n# masterflat.write('masterflat.fits', overwrite=True)",
"Reduce Alignment Image",
"# align1 = CCDData.read(os.path.join(filepath, 'm180130_0052.fits'), unit='adu')\nalign1 = CCDData.read(os.path.join(filepath, 'm180210_0254.fits'), unit='adu')\nalign1ds = align1.subtract(dark)\nalign1f = flat_correct(align1ds, masterflat)",
"Find Alignment Box and Star\nFor now, I am manually entering the rough location of the alignment box within the CCD. This should be read from header.",
"# box_loc = (1257, 432) # for m180130_0052\n# box_loc = (1544, 967) # for m180210_0254\nbox_loc = (821, 1585) # for m180210_0254\n# box_loc = (1373, 1896) # for m180210_0254\n# box_loc = (791, 921) # for m180210_0254\n# box_loc = (1268, 301) # for m180210_0254\n\n\nbox_size = 30\nfits_section = f'[{box_loc[0]-box_size:d}:{box_loc[0]+box_size:d}, {box_loc[1]-box_size:d}:{box_loc[1]+box_size:d}]'\nprint(fits_section)\nregion = trim_image(align1f, fits_section=fits_section)",
"The code below estimates the center of the alignment box",
"threshold_pct = 70\nwindow = region.data > np.percentile(region.data, threshold_pct)\nalignment_box_position = ndimage.measurements.center_of_mass(window)",
"The code below finds the edges of the box and measures its width and height.",
"gradx = np.gradient(region.data, axis=1)\nhorizontal_profile = np.sum(gradx, axis=0)\ngrady = np.gradient(region.data, axis=0)\nvertical_profile = np.sum(grady, axis=1)\n\nh_edges = fit_edges(horizontal_profile)\nprint(h_edges, h_edges[0]-h_edges[1])\n\nv_edges = fit_edges(vertical_profile)\nprint(v_edges, v_edges[0]-v_edges[1])",
"This code estimates the initial location of the star. The fit to the star is quite rudimentary and could be replaced by more sophisticated methods.",
"maxr = region.data.max()\nstarloc = (np.where(region.data == maxr)[0][0], np.where(region.data == maxr)[1][0])",
"Build Model for Box + Star\nBuild an astropy.models model of the alignment box and star and fit the compound model to the data.",
"boxamplitude = 1 #np.percentile(region.data, 90)\nstar_amplitude = region.data.max() - boxamplitude\n\nbox = mosfireAlignmentBox(boxamplitude, alignment_box_position[1], alignment_box_position[0],\\\n abs(h_edges[0]-h_edges[1]), abs(v_edges[0]-v_edges[1]))\nbox.amplitude.fixed = True\nbox.x_width.min = 10\nbox.y_width.min = 10\n\nstar = models.Gaussian2D(star_amplitude, starloc[0], starloc[1])\nstar.amplitude.min = 0\nstar.x_stddev.min = 1\nstar.x_stddev.max = 8\nstar.y_stddev.min = 1\nstar.y_stddev.max = 8\n\nsky = models.Const2D(np.percentile(region.data, 90))\nsky.amplitude.min = 0\n\nmodel = box*(sky + star)\n\nfitter = fitting.LevMarLSQFitter()\ny, x = np.mgrid[:2*box_size+1, :2*box_size+1]\nfit = fitter(model, x, y, region.data)\nprint(fitter.fit_info['message'])\n\n# Do stupid way of generating an image from the model for visualization (replace this later)\nmodelim = np.zeros((61,61))\nfitim = np.zeros((61,61))\nfor i in range(0,60):\n for j in range(0,60):\n modelim[j,i] = model(i,j)\n fitim[j,i] = fit(i,j)\nresid = region.data-fitim\n\nfor i,name in enumerate(fit.param_names):\n print(f\"{name:15s} = {fit.parameters[i]:.2f}\")",
"Results\nThe cell below, shows the image, the initial model guess, the fitted model, and the difference between the data and the model.",
"plt.figure(figsize=(16,24))\nplt.subplot(1,4,1)\nplt.imshow(region.data, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)\nplt.subplot(1,4,2)\nplt.imshow(modelim, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)\nplt.subplot(1,4,3)\nplt.imshow(fitim, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)\nplt.subplot(1,4,4)\nplt.imshow(resid, vmin=-1000, vmax=1000)\nplt.show()",
"Results\nShow the image with an overlay marking the determined center of the alignment box and the position of the star.\nPlease note that this code fits the location of the box and so it can confirm the FCS operation has placed the box in a consistent location when checked against the header.\nIt should also be able to message and automatically respond if the star is not found or is very faint (i.e. it has lower than expected flux).",
"pixelscale = u.pixel_scale(0.1798*u.arcsec/u.pixel)\nFWHMx = 2*(2*np.log(2))**0.5*fit.x_stddev_2 * u.pix\nFWHMy = 2*(2*np.log(2))**0.5*fit.y_stddev_2 * u.pix\nFWHM = (FWHMx**2 + FWHMy**2)**0.5/2**0.5\nstellar_flux = 2*np.pi*fit.amplitude_2.value*fit.x_stddev_2.value*fit.y_stddev_2.value\n\nplt.figure(figsize=(8,8))\nplt.imshow(region.data, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)\nplt.plot([fit.x_mean_2.value], [fit.y_mean_2.value], 'go', ms=10)\nplt.text(fit.x_mean_2.value+1, fit.x_mean_2.value-1, 'Star', color='green', fontsize=18)\nplt.plot([fit.x_0_0.value], [fit.y_0_0.value], 'bx', ms=15)\nplt.text(fit.x_0_0.value+2, fit.y_0_0.value, 'Box Center', color='blue', fontsize=18)\nplt.show()\n\nboxpos_x = box_loc[1] - box_size + fit.x_0_0.value\nboxpos_y = box_loc[0] - box_size + fit.y_0_0.value\n\nstarpos_x = box_loc[1] - box_size + fit.x_mean_2.value\nstarpos_y = box_loc[0] - box_size + fit.y_mean_2.value\n\n\nprint(f\"Sky Brightness = {fit.amplitude_1.value:.0f} ADU\")\nprint(f\"Box X Center = {boxpos_x:.0f}\")\nprint(f\"Box Y Center = {boxpos_y:.0f}\")\nprint(f\"Stellar FWHM = {FWHM.to(u.arcsec, equivalencies=pixelscale):.2f}\")\nprint(f\"Stellar Xpos = {starpos_x:.0f}\")\nprint(f\"Stellar Xpos = {starpos_y:.0f}\")\nprint(f\"Stellar Amplitude = {fit.amplitude_2.value:.0f} ADU\")\nprint(f\"Stellar Flux (fit) = {stellar_flux:.0f} ADU\")"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dataventureutc/Kaggle-HandsOnLab | Stanford CS228 - Python and Numpy Tutorial.ipynb | gpl-3.0 | [
"CS228 Python Tutorial\nAdapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).\nCommented and Ported to Python 3 by Jonathan DEKHTIAR\nIntroduction\nPython is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.\nWe expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.\nSome of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).\nIn this tutorial, we will cover:\n* Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes\n* Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting\n* Matplotlib: Plotting, Subplots, Images\n* IPython: Creating notebooks, Typical workflows\nBasics of Python\nPython is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:",
"def quicksort(arr, depth=0, pos=\"middle\", verbose=False):\n if len(arr) <= 1:\n if verbose:\n print(\"pos:\", pos)\n print(\"depth:\", depth)\n print(\"###\")\n return arr\n pivot = arr[int(len(arr) / 2)]\n left = [x for x in arr if x < pivot] \n middle = [x for x in arr if x == pivot] \n right = [x for x in arr if x > pivot]\n if verbose:\n print(\"pivot:\", pivot)\n print(\"left:\", left)\n print(\"middle:\", middle)\n print(\"right:\", right)\n print(\"pos:\", pos)\n print(\"depth:\", depth)\n print(\"###\")\n return quicksort(left, depth+1, \"left\") + middle + quicksort(right, depth+1, \"right\")\n\nprint(quicksort([3,6,8,10,1,2,1]))",
"Python versions\nThere are currently two different supported versions of Python, 2.7 and 3.4. Somewhat confusingly, Python 3.0 introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.4 and vice versa. For this class all code will use Python 2.7.\nYou can check your Python version at the command line by running python --version.\nBasic data types\nNumbers\nIntegers and floats work as you would expect from other languages:",
"x = 3\nprint (x, type(x))\n\nprint (\"Addition:\", x + 1) # Addition;\nprint (\"Subtraction:\", x - 1) # Subtraction;\nprint (\"Multiplication:\", x * 2) # Multiplication;\nprint (\"Exponentiation:\", x ** 2) # Exponentiation;\n\nx += 1\nprint (\"Incrementing:\", x) # Prints \"4\"\nx *= 2\nprint (\"Exponentiating:\", x) # Prints \"8\"\n\ny = 2.5\nprint (\"Type of y:\", type(y)) # Prints \"<type 'float'>\"\nprint (\"Many values:\", y, y + 1, y * 2, y ** 2) # Prints \"2.5 3.5 5.0 6.25\"",
"Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.\nBooleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):",
"t, f = True, False\nprint (type(t)) # Prints \"<type 'bool'>\"\n\nprint (\"True AND False:\", t and f) # Logical AND;\nprint (\"True OR False:\", t or f) # Logical OR;\nprint (\"NOT True:\", not t) # Logical NOT;\nprint (\"True XOR False:\", t != f) # Logical XOR;",
"Strings",
"hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint (hello, len(hello))\n\nhw = hello + ' ' + world # String concatenation\nprint (hw) # prints \"hello world\"\n\nhw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting\nprint (hw12) # prints \"hello world 12\"",
"String objects have a bunch of useful methods; for example:",
"s = \"hello\"\nprint (\"Capitalized String:\", s.capitalize()) # Capitalize a string; prints \"Hello\"\nprint (\"Uppercase String:\", s.upper()) # Convert a string to uppercase; prints \"HELLO\"\nprint (\"Right justified String with padding of '7':\", s.rjust(7)) # Right-justify a string, padding with spaces; prints \" hello\"\nprint (\"Centered String with padding of '7':\", s.center(7)) # Center a string, padding with spaces; prints \" hello \"\nprint (\"Replace 'l' with '(ell)':\", s.replace('l', '(ell)')) # Replace all instances of one substring with another;\n # prints \"he(ell)(ell)o\"\nprint (\"Stripped String:\", ' world '.strip()) # Strip leading and trailing whitespace; prints \"world\"",
"You can find a list of all string methods in the documentation\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:",
"xs = [3, 1, 2] # Create a list\nprint (xs, xs[2])\nprint (xs[-1]) # Negative indices count from the end of the list; prints \"2\"\n\nxs[2] = 'foo' # Lists can contain elements of different types\nprint (xs)\n\nxs.append('bar') # Add a new element to the end of the list\nprint (xs)\n\nx = xs.pop() # Remove and return the last element of the list\nprint (x, xs)",
"As usual, you can find all the gory details about lists in the documentation\nSlicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:",
"nums = list(range(5)) # range is a built-in function that creates a list of integers\nprint (nums) # Prints \"[0, 1, 2, 3, 4]\"\nprint (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints \"[2, 3]\"\nprint (nums[2:]) # Get a slice from index 2 to the end; prints \"[2, 3, 4]\"\nprint (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints \"[0, 1]\"\nprint (nums[:]) # Get a slice of the whole list; prints [\"0, 1, 2, 3, 4]\"\nprint (nums[:-1]) # Slice indices can be negative; prints [\"0, 1, 2, 3]\"\n\nnums[2:4] = [8, 9] # Assign a new sublist to a slice\nprint (nums) # Prints \"[0, 1, 8, 8, 4]\"",
"Loops\nYou can loop over the elements of a list like this:",
"animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print (animal)",
"If you want access to the index of each element within the body of a loop, use the built-in enumerate function:",
"animals = ['cat', 'dog', 'monkey']\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))",
"List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:",
"nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nprint (squares)",
"You can make this code simpler using a list comprehension:",
"nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nprint (squares)",
"List comprehensions can also contain conditions:",
"nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\nprint (even_squares)",
"Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:",
"d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint (\"Value of the dictionary for the key 'cat':\", d['cat']) # Get an entry from a dictionary; prints \"cute\"\nprint (\"Is 'cat' is the dictionary d:\", 'cat' in d) # Check if a dictionary has a given key; prints \"True\"\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nprint (\"Value of the dictionary for the key 'fish':\", d['fish']) # Prints \"wet\"\n\nprint (d['monkey']) # KeyError: 'monkey' not a key of d\n\nprint (\"Get 'monkey' value or default:\", d.get('monkey', 'N/A')) # Get an element with a default; prints \"N/A\"\nprint (\"Get 'fish' value or default:\", d.get('fish', 'N/A')) # Get an element with a default; prints \"wet\"\n\ndel d['fish'] # Remove an element from a dictionary\nprint (\"Get 'fish' value or default:\", d.get('fish', 'N/A')) # \"fish\" is no longer a key; prints \"N/A\"",
"You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print ('A %s has %d legs' % (animal, legs))",
"If you want access to keys and their corresponding values, use the iteritems method:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.items():\n print ('A %s has %d legs' % (animal, legs))",
"Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. \nFor example:",
"nums = [0, 1, 2, 3, 4]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\nprint (even_num_to_square)",
"Sets\nA set is an unordered collection of distinct elements. As a simple example, consider the following:",
"animals = {'cat', 'dog'}\nprint (\"Is 'cat' in the set:\", 'cat' in animals) # Check if an element is in a set; prints \"True\"\nprint (\"Is 'fish' in the set:\", 'fish' in animals) # prints \"False\"\n\nanimals.add('fish') # Add an element to a set\nprint (\"Is 'fish' in the set:\", 'fish' in animals)\nprint (\"What is the length of the set:\", len(animals)) # Number of elements in a set;\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint (\"What is the length of the set:\", len(animals)) \n\nanimals.remove('cat') # Remove an element from a set\nprint (\"What is the length of the set:\", len(animals))",
"Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:",
"animals = {'cat', 'dog', 'fish'}\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))",
"Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:",
"from math import sqrt\nset_comprehension = {int(sqrt(x)) for x in range(30)}\n\nprint (set_comprehension)\nprint (type(set_comprehension))",
"Tuples\nA tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:",
"d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys\nt = (5, 6) # Create a tuple\nprint (type(t))\nprint (d[t]) \nprint (d[(1, 2)])\n\nprint (\"Access the 1st value of Tuple:\", t[0])\nprint (\"Access the 2nd value of Tuple:\", t[1])\n\nt[0] = 1 # This does NOT work !\n\nt = (1, t[1]) # This DOES work !\nprint (t)",
"Functions\nPython functions are defined using the def keyword. For example:",
"def sign(x):\n if x > 0:\n return 'positive'\n elif x < 0:\n return 'negative'\n else:\n return 'zero'\n\nfor x in [-1, 0, 1]:\n print (sign(x))",
"We will often define functions to take optional keyword arguments, like this:",
"def hello(name, loud=False):\n if loud:\n print ('HELLO, %s' % name.upper())\n else:\n print ('Hello, %s!' % name)\n\nhello('Bob')\nhello('Fred', loud=True)",
"Classes\nThe syntax for defining classes in Python is straightforward:",
"class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print ('HELLO, %s!' % self.name.upper())\n else:\n print ('Hello, %s' % self.name)\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method; prints \"Hello, Fred\"\ng.greet(loud=True) # Call an instance method; prints \"HELLO, FRED!\"",
"Numpy\nNumpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.\nTo use Numpy, we first need to import the numpy package:",
"import numpy as np\nimport warnings\nwarnings.filterwarnings('ignore') # To remove warnings about \"deprecated\" or \"future\" features",
"Arrays\nA numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.\n\nWhy using Numpy Array over Python Lists ?\nNumPy's arrays are more compact than Python lists -- a list of lists as you describe, in Python, would take at least 20 MB or so, while a NumPy 3D array with single-precision floats in the cells would fit in 4 MB. Access in reading and writing items is also faster with NumPy.\nMaybe you don't care that much for just a million cells, but you definitely would for a billion cells -- neither approach would fit in a 32-bit architecture, but with 64-bit builds NumPy would get away with 4 GB or so, Python alone would need at least about 12 GB (lots of pointers which double in size) -- a much costlier piece of hardware!\nThe difference is mostly due to \"indirectness\" -- a Python list is an array of pointers to Python objects, at least 4 bytes per pointer plus 16 bytes for even the smallest Python object (4 for type pointer, 4 for reference count, 4 for value -- and the memory allocators rounds up to 16). A NumPy array is an array of uniform values -- single-precision numbers takes 4 bytes each, double-precision ones, 8 bytes. Less flexible, but you pay substantially for the flexibility of standard Python lists!\nAuthor: Alex Martelli\nSource: StackOverFlow\n\nWe can initialize numpy arrays from nested Python lists, and access elements using square brackets:",
"a = np.array([1, 2, 3]) # Create a rank 1 array\nprint (type(a), a.shape, a[0], a[1], a[2])\n\na[0] = 5 # Change an element of the array\nprint (a)\n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nprint (b)\n\nprint (b.shape)\nprint (b[0, 0], b[0, 1], b[1, 0])",
"Numpy also provides many functions to create arrays:",
"a = np.zeros((2,2)) # Create an array of all zeros\nprint (a)\n\nb = np.ones((1,2)) # Create an array of all ones\nprint (b)\n\nc = np.full((2,2), 7) # Create a constant array\nprint (c)\n\nd = np.eye(2) # Create a 2x2 identity matrix\nprint (d)\n\ne = np.random.random((2,2)) # Create an array filled with random values\nprint (e)",
"Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:",
"# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\n\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint (b)",
"A slice of an array is a view into the same data, so modifying it will modify the original array.",
"print (\"Original Matrix before modification:\", a[0, 1])\n\nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint (\"Original Matrix after modification:\", a[0, 1])",
"You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:",
"# Create the following rank 2 array with shape (3, 4)\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint (a)",
"Two ways of accessing the data in the middle row of the array. Mixing integer indexing with slices yields an array of lower rank, while using only slices yields an array of the same rank as the original array:",
"row_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint (\"Rank 1 access of the 2nd row:\", row_r1, row_r1.shape) \nprint (\"Rank 2 access of the 2nd row:\", row_r2, row_r2.shape)\nprint (\"Rank 2 access of the 2nd row:\", row_r3, row_r3.shape)\n\n# We can make the same distinction when accessing columns of an array:\ncol_r1 = a[:, 1]\ncol_r2 = a[:, 1:2]\nprint (\"Rank 1 access of the 2nd column:\", col_r1, col_r1.shape)\nprint ()\nprint (\"Rank 2 access of the 2nd column:\\n\", col_r2, col_r2.shape)",
"Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. \nHere is an example:",
"a = np.array([[1,2], [3, 4], [5, 6]])\n\n# An example of integer array indexing.\n# The returned array will have shape (3,) and \nprint (a[[0, 1, 2], [0, 1, 0]])\n\n# The above example of integer array indexing is equivalent to this:\nprint (np.array([a[0, 0], a[1, 1], a[2, 0]]))\n\n# When using integer array indexing, you can reuse the same\n# element from the source array:\nprint (a[[0, 0], [1, 1]])\n\n# Equivalent to the previous integer array indexing example\nprint (np.array([a[0, 1], a[0, 1]]))\n\n# Create a new array from which we will select elements\na = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nprint (a)\n\n# Create an array of indices\nb = np.array([0, 2, 0, 1])\nb_range = np.arange(4)\nprint (\"b_range:\", b_range)\n\n# Select one element from each row of a using the indices in b\nprint (\"Selected Matrix Values:\", a[b_range, b]) # Prints \"[ 1 6 7 11]\"\n\n# Mutate one element from each row of a using the indices in b\na[b_range, b] += 10 # Only the selected values are modified in the \"a\" matrix.\nprint (\"Modified 'a' Matrix:\\n\", a)",
"Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. \nHere is an example:",
"a = np.array([[1,2], [3, 4], [5, 6]])\n\nbool_idx = (a > 2) # Find the elements of a that are bigger than 2;\n # this returns a numpy array of Booleans of the same\n # shape as a, where each slot of bool_idx tells\n # whether that element of a is > 2.\n\nprint (bool_idx)\n\n# We use boolean array indexing to construct a rank 1 array\n# consisting of the elements of a corresponding to the True values\n# of bool_idx\nprint (a[bool_idx])\n\n# We can do all of the above in a single concise statement:\nprint (a[a > 2])",
"For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.\nDatatypes\nEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. \nHere is an example:",
"x = np.array([1, 2]) # Let numpy choose the datatype\ny = np.array([1.0, 2.0]) # Let numpy choose the datatype\nz = np.array([1, 2], dtype=np.int64) # Force a particular datatype\n\nprint (x.dtype, y.dtype, z.dtype)",
"You can read all about numpy datatypes in the documentation.\nArray math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:",
"x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum; both produce the array\nprint (x + y)\nprint ()\nprint (np.add(x, y))\n\n# Elementwise difference; both produce the array\nprint (x - y)\nprint ()\nprint (np.subtract(x, y))\n\n# Elementwise product; both produce the array\nprint (x * y)\nprint ()\nprint (np.multiply(x, y))\n\n# Elementwise division; both produce the array\n# [[ 0.2 0.33333333]\n# [ 0.42857143 0.5 ]]\nprint (x / y)\nprint ()\nprint (np.divide(x, y))\n\n# Elementwise square root; produces the array\n# [[ 1. 1.41421356]\n# [ 1.73205081 2. ]]\nprint (np.sqrt(x))",
"Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:",
"x = np.array([[1,2],[3,4]])\ny = np.array([[5,6],[7,8]])\n\nv = np.array([9,10])\nw = np.array([11, 12])\n\n# Inner product of vectors; both produce 219\nprint (\"v.w 'dot' product:\", v.dot(w))\nprint (\"numpy 'dot' product (v,w):\", np.dot(v, w))\n\n# Matrix / vector product; both produce the rank 1 array [29 67]\nprint (\"x.v 'dot' product:\", x.dot(v))\nprint (\"numpy 'dot' product (x,v):\", np.dot(x, v))\n\n# Matrix / matrix product; both produce the rank 2 array\n# [[19 22]\n# [43 50]]\nprint (\"x.y 'dot' product:\\n\", x.dot(y))\nprint (\"numpy 'dot' product (x,y):\\n\", np.dot(x, y))",
"Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:",
"x = np.array([[1,2],[3,4]])\n\nprint (\"Sum of all element:\", np.sum(x)) # Compute sum of all elements; prints \"10\"\nprint (\"Sum of each column:\", np.sum(x, axis=0)) # Compute sum of each column; prints \"[4 6]\"\nprint (\"Sum of each row:\", np.sum(x, axis=1)) # Compute sum of each row; prints \"[3 7]\"",
"You can find the full list of mathematical functions provided by numpy in the documentation.\nApart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:",
"print (\"Matrix x:\\n\", x)\nprint ()\nprint (\"Matrix x transposed:\\n\", x.T)\n\nv = np.array([[1,2,3]])\nprint (\"Matrix v:\\n\", v)\nprint ()\nprint (\"Matrix v transposed:\\n\", v.T)",
"Broadcasting\nBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.\nFor example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:",
"\n# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = np.empty_like(x) # Create an empty matrix with the same shape as x\n\n# Add the vector v to each row of the matrix x with an explicit loop\nfor i in range(4):\n y[i, :] = x[i, :] + v\n\nprint (y)",
"This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:",
"vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other\nprint (vv) # Prints \"[[1 0 1]\n # [1 0 1]\n # [1 0 1]\n # [1 0 1]]\"\n\ny = x + vv # Add x and vv elementwise\nprint (y)\n\n# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = x + v # Add v to each row of x using broadcasting\nprint (y)",
"The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.\nBroadcasting two arrays together follows these rules:\n\nIf the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.\nThe two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.\nThe arrays can be broadcast together if they are compatible in all dimensions.\nAfter broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.\nIn any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension\n\nIf this explanation does not make sense, try reading the explanation from the documentation or this explanation.\nFunctions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.\nHere are some applications of broadcasting:",
"# Compute outer product of vectors\nv = np.array([1,2,3]) # v has shape (3,)\nw = np.array([4,5]) # w has shape (2,)\n# To compute an outer product, we first reshape v to be a column\n# vector of shape (3, 1); we can then broadcast it against w to yield\n# an output of shape (3, 2), which is the outer product of v and w:\n\nprint (np.reshape(v, (3, 1)) * w)\n\n# Add a vector to each row of a matrix\nx = np.array([[1,2,3], [4,5,6]])\n# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),\n# giving the following matrix:\n\nprint (x + v)\n\n# Add a vector to each column of a matrix\n# x has shape (2, 3) and w has shape (2,).\n# If we transpose x then it has shape (3, 2) and can be broadcast\n# against w to yield a result of shape (3, 2); transposing this result\n# yields the final result of shape (2, 3) which is the matrix x with\n# the vector w added to each column. Gives the following matrix:\n\nprint ((x.T + w).T)\n\n# Another solution is to reshape w to be a row vector of shape (2, 1);\n# we can then broadcast it directly against x to produce the same\n# output.\nprint (x + np.reshape(w, (2, 1)))\n\n# Multiply a matrix by a constant:\n# x has shape (2, 3). Numpy treats scalars as arrays of shape ();\n# these can be broadcast together to shape (2, 3), producing the\n# following array:\nprint (x * 2)",
"Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\nMatplotlib\nMatplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.",
"import matplotlib.pyplot as plt",
"By running this special iPython command, we will be displaying plots inline:",
"%matplotlib inline",
"Plotting\nThe most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:",
"# Compute the x and y coordinates for points on a sine curve\nx = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y)",
"With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:",
"y_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y_sin)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])",
"Subplots\nYou can plot different things in the same figure using the subplot function. Here is an example:",
"# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Set up a subplot grid that has height 2 and width 1,\n# and set the first such subplot as active.\nplt.subplot(2, 1, 1)\n\n# Make the first plot\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Set the second subplot as active, and make the second plot.\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()",
"You can read much more about the subplot function in the documentation."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
msultan/msmbuilder | examples/Fs-Peptide-command-line.ipynb | lgpl-2.1 | [
"Modeling dynamics of FS Peptide\nThis example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.",
"# Work in a temporary directory\nimport tempfile\nimport os\nos.chdir(tempfile.mkdtemp())\n\n# Since this is running from an IPython notebook,\n# we prefix all our commands with \"!\"\n# When running on the command line, omit the leading \"!\"\n! msmb -h",
"Get example data",
"! msmb FsPeptide --data_home ./\n! tree",
"Featurization\nThe raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.",
"# Remember '\\' is the line-continuation marker\n# You can enter this command on one line\n! msmb DihedralFeaturizer \\\n --out featurizer.pkl \\\n --transformed diheds \\\n --top fs_peptide/fs-peptide.pdb \\\n --trjs \"fs_peptide/*.xtc\" \\\n --stride 10",
"Preprocessing\nSince the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.",
"! msmb RobustScaler \\\n -i diheds \\\n --transformed scaled_diheds.h5",
"Intermediate kinetic model: tICA\ntICA is similar to principal component analysis (see \"tICA vs. PCA\" example). Note that the 84-dimensional space is reduced to 4 dimensions.",
"! msmb tICA -i scaled_diheds.h5 \\\n --out tica_model.pkl \\\n --transformed tica_trajs.h5 \\\n --n_components 4 \\\n --lag_time 2",
"tICA Histogram\nWe can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a python script.",
"from msmbuilder.dataset import dataset\nds = dataset('tica_trajs.h5')\n\n%matplotlib inline\nimport msmexplorer as msme\nimport numpy as np\ntxx = np.concatenate(ds)\n_ = msme.plot_histogram(txx)",
"Clustering\nConformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.",
"! msmb MiniBatchKMeans -i tica_trajs.h5 \\\n --transformed labeled_trajs.h5 \\\n --out clusterer.pkl \\\n --n_clusters 100 \\\n --random_state 42",
"MSM\nWe can construct an MSM from the labeled trajectories",
"! msmb MarkovStateModel -i labeled_trajs.h5 \\\n --out msm.pkl \\\n --lag_time 2",
"Plot Free Energy Landscape\nSubsequent plotting and analysis should be done from Python",
"from msmbuilder.utils import load\nmsm = load('msm.pkl')\nclusterer = load('clusterer.pkl')\n\nassignments = clusterer.partial_transform(txx)\nassignments = msm.partial_transform(assignments)\n\nfrom matplotlib import pyplot as plt\nmsme.plot_free_energy(txx, obs=(0, 1), n_samples=10000,\n pi=msm.populations_[assignments],\n xlabel='tIC 1', ylabel='tIC 2')\nplt.scatter(clusterer.cluster_centers_[msm.state_labels_, 0],\n clusterer.cluster_centers_[msm.state_labels_, 1],\n s=1e4 * msm.populations_, # size by population\n c=msm.left_eigenvectors_[:, 1], # color by eigenvector\n cmap=\"coolwarm\",\n zorder=3\n ) \nplt.colorbar(label='First dynamical eigenvector')\nplt.tight_layout()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/reboundx | ipython_examples/SimulationArchive.ipynb | gpl-3.0 | [
"SimulationArchives with REBOUNDx\nWe can reproduce REBOUNDx simulations with extra effects bit by bit with the SimulationArchive (see Rein & Tamayo 2017 for details), and make sure to read SimulationArchive.ipynb and SimulationArchiveRestart.ipynb first.\nHowever, this will only work under some conditions. In particular, for bit-by-bit reproducibility, currently one requires that:\n- All REBOUNDx effects in the simulation are machine independent (can't have, e.g., trig functions, pow or exp in implementation)\n- The effect and particle parameters cannot change throughout the simulation\n- Effects must remain on for the entire integration\nTo use the SimulationArchive with REBOUNDx, we need to save a REBOUNDx binary SavingAndLoadingSimulations.ipynb. Since we can't change effects or particle parameters, it doesn't matter at what point we save this binary (we will just need it to load the SimulationArchive). Here we do a WHFAST integration with the symplectic gr_potential implementation for general relativity corrections. We set up the simulation like we usually would:",
"import rebound\nimport reboundx\nfrom reboundx import constants\n\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(m=1e-3, a=1., e=0.2)\nsim.add(m=1e-3, a=1.9)\nsim.move_to_com()\nsim.dt = sim.particles[1].P*0.05 # timestep is 5% of orbital period\nsim.integrator = \"whfast\"\nrebx = reboundx.Extras(sim)\ngr = rebx.load_force(\"gr_potential\")\nrebx.add_force(gr)\ngr.params[\"c\"] = constants.C\nrebx.save(\"rebxarchive.bin\")",
"We now set up the SimulationArchive and integrate like we normally would (SimulationArchive.ipynb):",
"sim.automateSimulationArchive(\"archive.bin\", interval=1e3, deletefile=True)\nsim.integrate(1.e6)",
"Once we're ready to inspect our simulation, we use the reboundx.SimulationArchive wrapper that additionally takes a REBOUNDx binary:",
"sa = reboundx.SimulationArchive(\"archive.bin\", rebxfilename = \"rebxarchive.bin\")",
"We now create a different simulation from a snapshot in the SimulationArchive halfway through:",
"sim2, rebx = sa[500]\nsim2.t",
"We now integrate our loaded simulation to the same time as above (1.e6):",
"sim2.integrate(1.e6)",
"and see that we obtain exactly the same particle positions in the original and reloaded simulations:",
"sim.status()\n\nsim2.status()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jegibbs/phys202-2015-work | assignments/assignment05/InteractEx03.ipynb | mit | [
"Interact Exercise 3\nImports",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
"Using interact for animation with data\nA soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution:\n$$\n\\phi(x,t) = \\frac{1}{2} c \\mathrm{sech}^2 \\left[ \\frac{\\sqrt{c}}{2} \\left(x - ct - a \\right) \\right]\n$$\nThe constant c is the velocity and the constant a is the initial location of the soliton.\nDefine soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the postion x or t are NumPy arrays, in which case it should return a NumPy array itself.",
"def soliton(x, t, c, a):\n \"\"\"Return phi(x, t) for a soliton wave with constants c and a.\"\"\"\n return 0.5*c*(1/(np.cosh((c**(1/2)/2)*(x-c*t-a))**2))\n\nassert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))",
"To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:",
"tmin = 0.0\ntmax = 10.0\ntpoints = 100\nt = np.linspace(tmin, tmax, tpoints)\n\nxmin = 0.0\nxmax = 10.0\nxpoints = 200\nx = np.linspace(xmin, xmax, xpoints)\n\nc = 1.0\na = 0.0",
"Compute a 2d NumPy array called phi:\n\nIt should have a dtype of float.\nIt should have a shape of (xpoints, tpoints).\nphi[i,j] should contain the value $\\phi(x[i],t[j])$.",
"assert phi.shape==(xpoints, tpoints)\nassert phi.ndim==2\nassert phi.dtype==np.dtype(float)\nassert phi[0,0]==soliton(x[0],t[0],c,a)",
"Write a plot_soliton_data(i) function that plots the soliton wave $\\phi(x, t[i])$. Customize your plot to make it effective and beautiful.",
"def plot_soliton_data(i=0):\n \n\nplot_soliton_data(0)\n\nassert True # leave this for grading the plot_soliton_data function",
"Use interact to animate the plot_soliton_data function versus time.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this for grading the interact with plot_soliton_data cell"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wdbm/abstraction | ttHbb_variables_preparation.ipynb | gpl-3.0 | [
"ttHbb variables preparation",
"import datetime\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nplt.rcParams[\"figure.figsize\"] = (13, 6)\nimport pandas as pd\nimport seaborn as sns\nsns.set(context = \"paper\", font = \"monospace\")\nfrom sklearn.preprocessing import MinMaxScaler\nimport sqlite3\nimport warnings\nwarnings.filterwarnings(\"ignore\")\npd.set_option(\"display.max_rows\", 500)\npd.set_option(\"display.max_columns\", 500)\n\nimport root_pandas",
"variables\n\nATL-COM-PHYS-2017-079 (table 8, page 46)\n\n|variable |type |n-tuple name |description |region >= 6j|region 5j|\n|-------------------------------------------------|-------------------------|----------------------------------|---------------------------------------------------------------------------------------------------------------|--------------|-----------|\n|${\\Delta R^{\\text{avg}}{bb}}$ |general kinematic |dRbb_avg_Sort4 |average ${\\Delta R}$ for all ${b}$-tagged jet pairs |yes |yes |\n|${\\Delta R^{\\text{max} p{T}}{bb}}$ |general kinematic |dRbb_MaxPt_Sort4 |${\\Delta R}$ between the two ${b}$-tagged jets with the largest vector sum ${p{T}}$ |yes |- |\n|${\\Delta \\eta^{\\textrm{max}\\Delta\\eta}{jj}}$ |general kinematic |dEtajj_MaxdEta |maximum ${\\Delta\\eta}$ between any two jets |yes |yes |\n|${m^{\\text{min} \\Delta R}{bb}}$ |general kinematic |Mbb_MindR_Sort4 |mass of the combination of the two ${b}$-tagged jets with the smallest ${\\Delta R}$ |yes |- |\n|${m^{\\text{min} \\Delta R}{jj}}$ |general kinematic |Mjj_MindR |mass of the combination of any two jets with the smallest ${\\Delta R}$ |- |yes |\n|${N^{\\text{Higgs}}{30}}$ |general kinematic |nHiggsbb30_Sort4 |number of ${b}$-jet pairs with invariant mass within 30 GeV of the Higgs boson mass |yes |yes |\n|${H^{\\text{had}}{T}}$ |general kinematic |HT_jets? |scalar sum of jet ${p{T}}$ |- |yes |\n|${\\Delta R^{\\text{min}\\Delta R}{\\text{lep}-bb}}$|general kinematic |dRlepbb_MindR_Sort4 |${\\Delta R}$ between the lepton and the combination of the two ${b}$-tagged jets with the smallest ${\\Delta R}$|- |yes |\n|aplanarity |general kinematic |Aplanarity_jets |${1.5\\lambda{2}}$, where ${\\lambda_{2}}$ is the second eigenvalue of the momentum tensor built with all jets |yes |yes |\n|${H1}$ |general kinematic |H1_all |second Fox-Wolfram moment computed using all jets and the lepton |yes |yes |\n|BDT |reconstruction BDT output|TTHReco_best_TTHReco |BDT output |yes |yes |\n|${m_{H}}$ |reconstruction BDT output|TTHReco_best_Higgs_mass |Higgs boson mass |yes |yes |\n|${m_{H,b_{\\text{lep top}}}}$ |reconstruction BDT output|TTHReco_best_Higgsbleptop_mass |Higgs boson mass and ${b}$-jet from leptonic ${t}$ |yes |- |\n|${\\Delta R_{\\text{Higgs }bb}}$ |reconstruction BDT output|TTHReco_best_bbHiggs_dR |${\\Delta R}$ between ${b}$-jets from Higgs boson |yes |yes |\n|${\\Delta R_{H,t\\bar{t}}}$ |reconstruction BDT output|TTHReco_withH_best_Higgsttbar_dR|${\\Delta R}$ between Higgs boson and ${t\\bar{t}}$ system |yes |yes |\n|${\\Delta R_{H,\\text{lep top}}}$ |reconstruction BDT output|TTHReco_best_Higgsleptop_dR |${\\Delta R}$ between Higgs boson and leptonic ${t}$ |yes |- |\n|${\\Delta R_{H,b_{\\text{had top}}}}$ |reconstruction BDT output|TTHReco_best_b1Higgsbhadtop_dR |${\\Delta R}$ between Higgs boson and ${b}$-jet from hadronic ${t}$ |- |yes* |\n|D |likelihood calculation |LHD_Discriminant |likelihood discriminant |yes |yes |\n|${\\text{MEM}{D1}}$ |matrix method | |matrix method |yes |- |\n|${w^{H}{b}}$ |${b}$-tagging |? |sum of binned ${b}$-tagging weights of jets from best Higgs candidate |yes |yes |\n|${B_{j^{3}}}$ |${b}$-tagging |? |third jet binned ${b}$-tagging weight (sorted by weight) |yes |yes |\n|${B_{j^{4}}}$ |${b}$-tagging |? |fourth jet binned ${b}$-tagging weight (sorted by weight) |yes |yes |\n|${B_{j^{5}}}$ |${b}$-tagging |? |fifth jet binned ${b}$-tagging weight (sorted by weight) |yes |yes |",
"variables = [\n \"nElectrons\",\n \"nMuons\",\n \"nJets\",\n \"nBTags_70\",\n \"dRbb_avg_Sort4\",\n \"dRbb_MaxPt_Sort4\",\n \"dEtajj_MaxdEta\",\n \"Mbb_MindR_Sort4\",\n \"Mjj_MindR\",\n \"nHiggsbb30_Sort4\",\n \"HT_jets\",\n \"dRlepbb_MindR_Sort4\",\n \"Aplanarity_jets\",\n \"H1_all\",\n \"TTHReco_best_TTHReco\",\n \"TTHReco_best_Higgs_mass\",\n \"TTHReco_best_Higgsbleptop_mass\",\n \"TTHReco_best_bbHiggs_dR\",\n \"TTHReco_withH_best_Higgsttbar_dR\",\n \"TTHReco_best_Higgsleptop_dR\",\n \"TTHReco_best_b1Higgsbhadtop_dR\",\n \"LHD_Discriminant\"\n]",
"read",
"filenames_ttH = [\"ttH_group.phys-higgs.11468583._000005.out.root\"]\nfilenames_ttbb = [\"ttbb_group.phys-higgs.11468624._000005.out.root\"]\n\nttH = root_pandas.read_root(filenames_ttH, \"nominal_Loose\", columns = variables)\n\nttH[\"target\"] = 1\n\nttH.head()\n\nttbb = root_pandas.read_root(filenames_ttbb, \"nominal_Loose\", columns = variables)\n\nttbb[\"target\"] = 0\n\nttbb.head()\n\ndf = pd.concat([ttH, ttbb])\n\ndf.head()",
"selection",
"selection_ejets = \"(nElectrons == 1) & (nJets >= 4)\"\nselection_mujets = \"(nMuons == 1) & (nJets >= 4)\"\nselection_ejets_5JE4BI = \"(nElectrons == 1) & (nJets == 4) & (nBTags_70 >= 4)\"\nselection_ejets_6JI4BI = \"(nElectrons == 1) & (nJets == 6) & (nBTags_70 >= 4)\"\n\ndf = df.query(selection_ejets)\n\ndf.drop([\"nElectrons\", \"nMuons\", \"nJets\", \"nBTags_70\"], axis = 1, inplace = True)\n\ndf.head()",
"characteristics",
"rows = []\nfor variable in df.columns.values:\n rows.append({\n \"name\": variable,\n \"maximum\": df[variable].max(),\n \"minimum\": df[variable].min(),\n \"mean\": df[variable].mean(),\n \"median\": df[variable].median(),\n \"std\": df[variable].std()\n })\n_df = pd.DataFrame(rows)[[\"name\", \"maximum\", \"minimum\", \"mean\", \"std\", \"median\"]]\n_df",
"imputation",
"df[\"TTHReco_best_TTHReco\"].replace( -9, -1, inplace = True)\ndf[\"TTHReco_best_Higgs_mass\"].replace( -9, -1, inplace = True)\ndf[\"TTHReco_best_Higgsbleptop_mass\"].replace( -9, -1, inplace = True)\ndf[\"TTHReco_best_bbHiggs_dR\"].replace( -9, -1, inplace = True)\ndf[\"TTHReco_withH_best_Higgsttbar_dR\"].replace(-9, -1, inplace = True)\ndf[\"TTHReco_best_Higgsleptop_dR\"].replace( -9, -1, inplace = True)\ndf[\"TTHReco_best_b1Higgsbhadtop_dR\"].replace( -9, -1, inplace = True)\ndf[\"LHD_Discriminant\"].replace( -9, -1, inplace = True)",
"histograms",
"plt.rcParams[\"figure.figsize\"] = (17, 14)\ndf.hist();",
"correlations\ncorrelations ${t\\bar{t}H}$",
"sns.heatmap(df.query(\"target == 1\").drop(\"target\", axis = 1).corr());",
"correlations ${t\\bar{t}b\\bar{b}}$",
"sns.heatmap(df.query(\"target == 0\").drop(\"target\", axis = 1).corr());",
"ratio of correlations of ${t\\bar{t}H}$ and ${t\\bar{t}b\\bar{b}}$",
"_df = df.query(\"target == 1\").drop(\"target\", axis = 1).corr() / df.query(\"target == 0\").drop(\"target\", axis = 1).corr()\n\nsns.heatmap(_df);",
"## clustered correlations",
"plot = sns.clustermap(df.corr())\nplt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0);",
"strongest correlations and anticorrelations for discrimination of ${t\\bar{t}H}$ and ${t\\bar{t}b\\bar{b}}$",
"df.corr()[\"target\"].sort_values(ascending = False).to_frame()[1:]",
"strongest absolute correlations for discrimination of ${t\\bar{t}H}$ and ${t\\bar{t}b\\bar{b}}$",
"df.corr()[\"target\"].abs().sort_values(ascending = False).to_frame()[1:]\n\n_df = df.corr()[\"target\"].abs().sort_values(ascending = False).to_frame()[1:]\n_df.plot(kind = \"barh\", legend = \"False\");",
"clustered correlations of 10 strongest absolute correlations",
"names = df.corr()[\"target\"].abs().sort_values(ascending = False)[1:11].index.values\n\nplot = sns.clustermap(df[names].corr())\nplt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0);",
"rescale",
"variables_rescale = [variable for variable in list(df.columns) if variable != \"target\"]\n\nscaler = MinMaxScaler()\ndf[variables_rescale] = scaler.fit_transform(df[variables_rescale])\n\ndf.head()",
"save",
"df.to_csv(\"ttHbb_data.csv\", index = False)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OpenChemistry/mongochemserver | girder/notebooks/notebooks/notebooks/NWChem.ipynb | bsd-3-clause | [
"Open Chemistry JupyterLab NWChem calculations",
"import openchemistry as oc",
"Start by finding structures using online databases (or cached local results). This uses an InChI for ethane to seed the molecule collection.",
"mol = oc.find_structure('InChI=1S/C2H6/c1-2/h1-2H3')\nmol.structure.show()",
"Set up the calculation, by specifying the name of the Docker image that will be used, and by providing input parameters that are known to the specific image",
"image_name = 'openchemistry/nwchem:6.6'\ninput_parameters = {\n 'theory': 'dft',\n 'functional': 'b3lyp',\n 'basis': '6-31g'\n}",
"Geometry Optimization Calculation\nThe mol.optimize() method is a specialized helper function that adds 'task': 'optimize' to the input_parameters dictionary,\nand then calls the generic mol.calculate() method internally.",
"result = mol.optimize(image_name, input_parameters)\n\nresult.orbitals.show(mo='lumo', iso=0.005)",
"Single Point Energy Calculation\nThe mol.energy() method is a specialized helper function that adds 'task': 'energy' to the input_parameters dictionary,\nand then calls the generic mol.calculate() method internally.",
"result = mol.energy(image_name, input_parameters)\n\nresult.orbitals.show(mo='homo', iso=0.005)",
"Normal Modes Calculation\nThe mol.frequencies() method is a specialized helper function that adds 'task': 'frequency' to the input_parameters dictionary,\nand then calls the generic mol.calculate() method internally.",
"result = mol.frequencies(image_name, input_parameters)\n\nresult.vibrations.show(mode=1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
opengeostat/pygslib | pygslib/Ipython_templates/.ipynb_checkpoints/Gaussian anamorphosis2-checkpoint.ipynb | mit | [
"Interactive gaussian anamorphosis modeling with hermite polynomials\nAdrian Martinez Vargas\[email protected]\nPhD in Geological Sciences. Senior Consultant.\nCSA Global,\nToronto, Canada.",
"#general imports\nimport pygslib \nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm\nimport numpy as np\n\n#make the plots inline\n%matplotlib inline\n#%matplotlib notebook\n\n#get the data in gslib format into a pandas Dataframe\ndata= pd.DataFrame({'Z':[2.582,3.087,3.377,3.974,4.321,5.398,8.791,12.037,12.586,16.626]})\ndata['Declustering Weight'] = 1.0\ndata",
"Interactive anamorphosis modeling",
"# Fit anamorphosis by changing, zmax, zmin, and extrapolation function\nPCI, H, raw, zana, gauss, z, P, \\\nraw_var, PCI_var, fig1, \\\nzamin, zamax, yamin, yamax, \\\nzpmin, zpmax, ypmin, ypmax = pygslib.nonlinear.anamor(\n z = data['Z'], \n w = data['Declustering Weight'], \n zmin = data['Z'].min()-0.1, \n zmax = data['Z'].max()+1,\n zpmin = None,\n zpmax = data['Z'].max()+1.5,\n ymin=-2.9, ymax=2.9,\n ndisc = 5000,\n ltail=1, utail=4, ltpar=1, utpar=1.8, K=30)\n\nPCI",
"Block support transformation",
"ZV, PV, fig2 = pygslib.nonlinear.anamor_blk( PCI, H, r = 0.6, gauss = gauss, Z = z,\n ltail=1, utail=1, ltpar=1, utpar=1,\n raw=raw, zana=zana)\n\n# the pair ZV, PV define the CDF in block support\n# let's plot the CDFs\nplt.plot (raw,P, '--k', label = 'exp point' ) \nplt.plot (z,P, '-g', label = 'ana point(fixed)' ) #point support (from gaussian anamorphosis)\nplt.plot (ZV, PV, '-m', label = 'ana block(fixed)') #block support \nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)",
"Grade Tonnage curves",
"cutoff = np.arange(0,10, 0.1)\ntt = []\ngg = []\nlabel = []\n\n# calculate GTC from gaussian in block support \nt,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=ZV, p=PV, varred = 1, ivtyp = 0, zmin = 0, zmax = None,\n ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)\ntt.append(t)\ngg.append(ga)\nlabel.append('DGM\\nblock support')\n\n# calculate GTC using undirect lognormal \nt,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=z, p=P, varred = 0.4, ivtyp = 2, zmin = 0, zmax = None,\n ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)\ntt.append(t)\ngg.append(ga)\nlabel.append('Indirect Lognormal\\nblock support')\n\n# calculate GTC using affine \nt,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=z, p=P, varred = 0.4, ivtyp = 1, zmin = 0, zmax = None,\n ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)\ntt.append(t)\ngg.append(ga)\nlabel.append('Affine\\nblock support')\n\n# calculate GTC in point support \nt,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=z, p=P, varred = 1, ivtyp = 2, zmin = 0, zmax = None,\n ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)\ntt.append(t)\ngg.append(ga)\nlabel.append('Point (anamorphosis)\\npoint support)')\n\nfig = pygslib.nonlinear.plotgt(cutoff = cutoff, t = tt, g = gg, label = label)",
"Anamorphosis modeling from raw Z,Y pairs",
"PCI, H, raw, zana, gauss, raw_var, PCI_var, ax2 = pygslib.nonlinear.anamor_raw(\n z = data['Z'], \n w = data['Declustering Weight'], \n K=30)\n\nPCI\n\nprint (zana)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
oseledets/fastpde | lecture-11.ipynb | cc0-1.0 | [
"from IPython.html.services.config import ConfigManager\nfrom IPython.utils.path import locate_profile\ncm = ConfigManager(profile_dir=locate_profile(get_ipython().profile))\ncm.update('livereveal', {\n 'theme': 'sky',\n 'transition': 'zoom',\n 'start_slideshow_at': 'selected',\n})",
"Lecture 11. Going fast: the Barnes-Hut algorithm\nPrevious lecture\n\nDiscretization of the integral equations, Galerkin methods\nComputation of singular integrals\nIdea of the Barnes-Hut method\n\nTodays lecture\n\nBarnes-Hut in details\nThe road to the FMM\nAlgebraic versions of the FMM/Fast Multipole\n\nThe discretization of the integral equation leads to dense matrices. \nThe main question is how to compute the matrix-by-vector product, \ni.e. the summation of the form:\n$$\\sum_{j=1}^M A_{ij} q_j = V_i, \\quad i = 1, \\ldots, N.$$\nThe matrix $A$ is dense, i.e. its element can not be omitted. The complexity is $\\mathcal{O}(N^2)$. \nCan we make it faster?\nThe simplest case is the computation of the potentials from the system of charges\n$$V_i = \\sum_{j} \\frac{q_j}{\\Vert r_i - r_j \\Vert}$$\nThis summation appears in:\n\nModelling of large systems of charges\nAstronomy (where instead of $q_j$ we have masses, i.e. start..)\n\nIt is called <font color='red'> the N-body problem </font>. \nThere is no problem with memory, since you only have two cycles.",
"import numpy as np\nimport math\nfrom numba import jit\nN = 10000\nx = np.random.randn(N, 2);\ny = np.random.randn(N, 2);\ncharges = np.ones(N)\nres = np.zeros(N)\n\n\n@jit\ndef compute_nbody_direct(N, x, y, charges, res):\n for i in xrange(N):\n res[i] = 0.0\n for j in xrange(N):\n dist = (x[i, 0] - y[i, 0]) ** 2 + (x[i, 1] - y[i, 1]) ** 2\n dist = math.sqrt(dist)\n res[i] += charges[j] / dist\n \n\n%timeit compute_nbody_direct(N, x, y, charges, res)\n",
"Question\nWhat is the typical size of particle system?\nMillenium run\nOne of the most famous N-body computations is the Millenium run\n\nMore than 10 billions particles ($2000^3$)\n$>$ 1 month of computations, 25 Terabytes of storage\nEach \"particle\" represents approximately a billion solar masses of dark matter\nStudy, how the matter is distributed through the Universy (cosmology)",
"from IPython.display import YouTubeVideo\n\nYouTubeVideo('UC5pDPY5Nz4')",
"Smoothed particle hydrodynamics\nThe particle systems can be used to model a lot of things. \nFor nice examples, see the website of Ron Fedkiw",
"from IPython.display import YouTubeVideo\n\nYouTubeVideo('6bdIHFTfTdU')",
"Applications\nWhere the N-body problem arises in different problems with long-range interactions`\n- Cosmology (interacting masses)\n- Electrostatics (interacting charges)\n- Molecular dynamics (more complicated interactions, maybe even 3-4 body terms).\n- Particle modelling (smoothed particle hydrodynamics)\nFast computation\n$$\n V_i = \\sum_{j} \\frac{q_j}{\\Vert x_i - y_j \\Vert}\n$$\nDirect computation takes $\\mathcal{O}(N^2)$ operations.\nHow to compute it fast?\nThe core idea: Barnes, Hut (Nature, 1986) \nUse clustering of particles!\nIdea on one slide\nThe idea was simple:\nIf a charge is far from a cluster of sources, it they are seen as one big \"particle\". \n<img src=\"earth-andromeda.jpeg\" width = 70%>\nBarnes-Hut\n$$\\sum_j q_j F(x, y_j) \\approx Q F(x, y_C)$$\n$$Q = \\sum_j q_j, \\quad y_C = \\frac{1}{J} \\sum_{j} y_j$$\nTo compute the interaction, it is sufficient to replace by the a center-of-mass and a total mass!\nThe idea of Barnes and Hut was to split the <font color='red'> sources </font> into big blocks using the <font color='red'> cluster tree </font>\n<img width=90% src='clustertree.png'>\nThe algorithm is recursive.\nLet $\\mathcal{T}$ be the tree, and $x$ is the point where we need to\ncompute the potential.\n\nSet $N$ to the <font color='red'> root node </font>\nIf $x$ and $N$ <font color='red'> are separated </font> , then set $V(x) = Q V(y_{\\mathrm{center}})$\nIf $x$ and $N$ are not separated, compute $V(x) = \\sum_{C \\in\n \\mathrm{sons}(N)} V(C, x)$ <font color='red'> recursion </font>\n\nThe complexity is $\\mathcal{O}(\\log N)$ for 1 point!\nTrees\nThere are many options for the tree construction.\n\nQuadtree/Octree\nKD-tree\nRecursive intertial bisection\n\nOcttree\nThe simplest one: **quadtree/ octtree, when you split the square into 4 squares and do that until the number of points is less that a parameter. \nIt leads to the unbalanced tree, adding points is simple (but can unbalance it more).\nKD-tree\nAnother popular choice of the tree is the KD-tree\nThe construction is simple as well: \nSplit along x-axis, then y-axis in such a way that the tree is balanced (i.e. the number of points in the left child/right child is similar).\nThe tree is always balanced, but biased towards the coordinate axis.\nRecursive inertial bisection\nCompute the center-of-mass and select a hyperplane such that sum of squares of distances to it is minimal.\n$$\\sum_{j} \\rho^2(x_j, \\Pi) \\rightarrow \\min.$$\nOften gives best complexity, but adding/removing points can be difficult.\nThe scheme\nYou can actually code it from this description!\n\nConstruct the cluster tree\nFill the tree with charges\nFor any point we now can compute the potential in $\\mathcal{O}(\\log N)$ flops (instead of $\\mathcal{O}(N)$).\n\nNotes on the complexity\nFor each node of the tree, we need to compute its total mass and the center-of-mass. If we do it in a cycle, then the complexity will be $\\mathcal{O}(N^2)$ for the tree constuction. 
\nHowever, it is easy to construct it in a smarter way.\n\nStart from the children (which contain only a few particles) and fill them\nBottom-to-top graph traversal: if we know the charges for the children, we can cheaply compute the total charge/center of mass for the father\n\nNow you can actually code this (minor things remaining are the bounding box and separation criteria).\nProblems with Barnes-Hut\nWhat are the problems with Barnes-Hut?\nWell, several:\n- The logarithmic term \n- Low accuracy: $\\varepsilon = 10^{-2}$ is ok, but if we want $\\varepsilon=10^{-5}$\n we have to take a larger <font color='red'> separation criterion </font> \nSolving problems with Barnes-Hut\n\nComplexity: To avoid the logarithmic term, we need to store two trees, one for the sources and one for the receivers\nAccuracy: instead of the <font color='red'> piecewise-constant approximation </font> which is inherent in the BH algorithm, use more accurate representations.\n\nDouble tree Barnes-Hut\nPrincipal scheme of the Double-tree BH:\n\nConstruct two trees for sources & receivers\nFill the tree for sources with charges (bottom-to-top)\nCompute the interaction between nodes of the trees\nFill the tree for receivers with potentials (top-to-bottom)\n\nThe original BH method has low accuracy, and is based on the expansion \n$$f(x, y) \\approx f(x_0, y_0)$$ \nWhat to do?\nAnswer: Use higher-order expansions!\n$$f(x + \\delta x, y + \\delta y) \\approx f(x, y) + \\sum_{k, l=0}^p\n(D^{k} D^{l} f) \\, \\delta x^k\n\\delta y^l \\frac{1}{k!} \\frac{1}{l!} + \\ldots\n$$\nFor the Coulomb interaction $\\frac{1}{r}$ we have the multipole expansion\n$$\n v(R) = \\frac{Q}{R} + \\frac{1}{R^3} \\sum_{\\alpha} P_{\\alpha} R_{\\alpha} + \\frac{1}{6R^5} \\sum_{\\alpha, \\beta} Q_{\\alpha \\beta} (3R_{\\alpha} R_{\\beta} - \\delta_{\\alpha \\beta}R^2) + \\ldots,\n$$\nwhere $P_{\\alpha}$ is the dipole moment and $Q_{\\alpha \\beta}$ is the quadrupole moment (but actually, this is nothing more than the Taylor series expansion).\nFast multipole method\nThis combination is very powerful, and \n<font color='red' size=6.0> Double tree + multipole expansion $\\approx$ the Fast Multipole Method (FMM). </font>\nFMM\nWe will talk about the exact implementation and the complexity issues in the next lecture.\nProblems with FMM\nFMM has problems:\n- It relies on analytic expansions; they may be difficult to obtain for the integral equations\n- The higher the order of the expansion, the larger the complexity.\n- That is why the algebraic interpretation (or kernel-independent FMM) is of great importance.\nFMM hardware\nFor cosmology this problem is so important that special hardware, the Gravity Pipe, has been built for solving the N-body problem\nFMM software\nSidenote: when you Google for \"FMM\", you will also encounter the fast marching method (even in the scikit).\nEveryone uses their own in-house software, so a good open-source Python package is yet to be written. \nThis is also a perfect test for GPU programming (you can try to take such a project in the App Period, by the way).\nOverview of today's lecture\n\nThe cluster tree\nBarnes-Hut and its problems\nDouble tree / fast multipole method\nImportant difference: element evaluation is fast. In integral equations, it is slow.\n\nNext lecture\n\nMore detailed overview of the FMM algorithm, along with complexity estimates.\nAlgebraic interpretation of the FMM\nApplication of the FMM to the solution of integral equations",
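To make the recursive Barnes-Hut evaluation described above concrete, here is a minimal 2D sketch in Python. The `Node` class, the `leaf_size` cutoff, and the opening criterion `node.size < theta * d` are illustrative assumptions, not code from this lecture:

```python
import numpy as np

class Node(object):
    """Quadtree node that stores the total charge Q and the center of mass."""
    def __init__(self, center, size, points, charges, leaf_size=8):
        self.center = np.asarray(center, dtype=float)
        self.size = float(size)                     # side length of the cell
        self.Q = charges.sum()                      # total charge of the cluster
        self.com = (charges[:, None] * points).sum(axis=0) / self.Q
        self.points, self.charges = points, charges
        self.children = []
        if len(points) > leaf_size and self.size > 1e-12:
            # assign every point to exactly one of the four quadrants
            quad = 2 * (points[:, 0] >= self.center[0]) + (points[:, 1] >= self.center[1])
            for q in range(4):
                mask = quad == q
                if mask.any():
                    shift = 0.25 * self.size * np.array([1.0 if q >= 2 else -1.0,
                                                         1.0 if q % 2 else -1.0])
                    self.children.append(Node(self.center + shift, self.size / 2,
                                              points[mask], charges[mask], leaf_size))

def potential(node, x, theta=0.5):
    """Recursive Barnes-Hut estimate of sum_j q_j / |x - y_j|."""
    d = np.linalg.norm(x - node.com)
    if not node.children:                           # leaf: direct summation
        r = np.linalg.norm(node.points - x, axis=1)
        return np.sum(node.charges / np.where(r > 0, r, np.inf))
    if node.size < theta * d:                       # well separated: one "big particle"
        return node.Q / d
    return sum(potential(c, x, theta) for c in node.children)
```

For points drawn inside the square $[-1, 1]^2$, `root = Node((0.0, 0.0), 2.0, pts, qs)` followed by `potential(root, np.array([2.0, 3.0]))` approximates the direct sum; decreasing `theta` trades speed for accuracy.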
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/alex.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lrq3000/unireedsolomon | Generating the exponent and log tables.ipynb | mit | [
"import unireedsolomon.ff as ff",
"I used 3 as the generator for this field. For a field defined with the polynomial x^8 + x^4 + x^3 + x + 1, there may be other generators (I can't remember)",
"generator = ff.GF2int(3)\ngenerator",
"Multiplying the generator by itself is the same as raising it to a power. I show up to the 3rd power here",
"generator*generator\n\ngenerator*generator*generator\n\ngenerator**1\n\ngenerator**2\n\ngenerator**3",
"The slow multiply method implemented without the lookup table has the same results",
"generator.multiply(generator)\n\ngenerator.multiply(generator.multiply(generator))",
"We can enumerate the entire field by repeatedly multiplying by the generator. (The first element is 1 because generator^0 is 1). This becomes our exponent table.",
"exptable = [ff.GF2int(1), generator]\nfor _ in range(254): # minus 2 because the first 2 elements are hardcoded\n exptable.append(exptable[-1].multiply(generator))\n\n# Turn back to ints for a more compact print representation\nprint([int(x) for x in exptable])",
"That's now our exponent table. We can look up the nth element in this list to get generator^n",
"exptable[5] == generator**5\n\nall(exptable[n] == generator**n for n in range(256))\n\n[int(x) for x in exptable] == [int(x) for x in ff.GF2int_exptable]",
"The log table is the inverse function of the exponent table",
"logtable = [None for _ in range(256)]\n# Ignore the last element of the field because fields wrap back around.\n# The log of 1 could be 0 (g^0=1) or it could be 255 (g^255=1)\nfor i, x in enumerate(exptable[:-1]):\n logtable[x] = i\nprint([int(x) if x is not None else None for x in logtable])\n\n[int(x) if x is not None else None for x in logtable] == [int(x) if x >= 0 else None for x in ff.GF2int_logtable]"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sandiegodata/age-friendly-communities | users/eric/Metatab Package Example.ipynb | mit | [
"Using Metatab Resources In Pandas\nThere are two ways to use Metatab data package resources in Pandas. One is to use the CSV files directly, which is easy to do if the package is published to a repository. However, it is better to use the Metatab module to load the package metadata and create dataframes. \nUsing CSV Files Directly\nThe simplest was to use the file in a metatab package is to load it's CSV file directly. You can get the CSV file URL from the data repostory page, such as this page for the ADOD Prevalence data in the San Diego Elder Dementia dataset. \nWhile this is simple and portable, it does not give you the features of Metatab, such as built in schema documentation.",
"import pandas as pd\n\ndf = pd.read_csv('http://s3.amazonaws.com/library.metatab.org/sandiegocounty.gov-adod-2012-sra-3/data/adod-prevalence.csv')\n\ndf.head()\n",
"Using the Metatab Package\nThe second way to access a package is to use the metatab package. This method requires installing the metatab python package, but has some important advantages: it gives you direct access to package and dataset documentation. You can load any type of metatab package with the open_package() function, but for the highest performance, you should use the CSV package. Opening CSV package loads only the metadata and the resources you need, while using a ZIP or Excel packackage requires downloading the entire package first. \nTo find the CSV package in a package that is publiched to a CKAN repository, look for a CSV file with the description of \"CSV Package Metadata in Metatab format\". For the ADOD package, this file is named sandiegocounty.gov-adod-2012-sra-3.csv. \nOpening the package returns a Metatab document object. If you display it in Jupyter, the output cell will display the package documentation.",
"import metatab\ndoc = metatab.open_package('http://s3.amazonaws.com/library.metatab.org/sandiegocounty.gov-adod-2012-sra-3.csv')\ndoc",
"The .resource() method will return one of the resoruces. Displaying it shows the resoruce documentation.",
"r = doc.resource('adod-prevalence')\nr",
"Once you have a resource, use the .dataframe() method to get a Pandas dataframe.",
"df = r.dataframe()\ndf.head()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
facaiy/book_notes | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | cc0-1.0 | [
"# %load ../../preconfig.py\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(color_codes=True)\nplt.rcParams['axes.grid'] = False\n\n#import numpy as np\n#import pandas as pd\n\n#import sklearn\n\n#import itertools\n\nimport logging\nlogger = logging.getLogger()\n",
"24 Single-Source Shortest Paths\nIn a shortest-paths problem, \nGiven: a weighted, directed graph $G = (V, E)$, with weight function $w : E \\to \\mathcal{R}$. \npath $p = < v_0, v_1, \\dotsc, v_k >$, so \n$$w(p) = \\displaystyle \\sum_{i=1}^k w(v_{i-1}, v_i)$$.\nwe define the shortest-path weight $\\delta(u, v)$ from $u$ to $v$ by \n\\begin{equation}\n \\delta(u, v) = \\begin{cases}\n \\min{ w(p) : u \\overset{p}{\\to} v } & \\text{if $p$ exist}\\\n \\infty & \\text{otherwise}\n \\end{cases}\n\\end{equation}\nvariants:\n+ Single-destination shortest-paths problem\n+ Single-pair shortest-path problem\n+ All-pairs shortest-path problem\nOptimal substructure of a shortest path: \nsubpath of $p$ is a shortest path between two of its internal nodes if $p$ is a shortest path.\nNegative-weight edges: whether a negative weight cycle exists?\nCycles:\n+ negative-weight cycle: detect and remove\n+ positive-weight cycle: auto remove\n+ 0-weight cycle: detect and remove\nRepresenting shortest paths: \na \"shortest-paths tree\": a rooted tree containing a shortest path from the source $s$ to every vertex that is reachable from $s$.\nRelaxation: \nmodify the node's upper bound if detect a shorter path.\n```\nINITIALIZE-SINGLE-SOURCE(G, s)\n for each vertex v in G.V\n v.d = infty\n v.pi = NIL\n s.d = 0\nRELAX(u, v, w)\n if v.d > u.d + w(u, v)\n v.d = u.d + w(u, v)\n v.pi = u\n```\nProperties of shortest paths and relaxtion\n+ Triangle inequality\n+ Upper-bound property\n+ No-path property\n+ Convergence property\n+ Path-relaxation property\n+ Predecessor-subgraph property\n24.1 The Bellman-Ford algorithm\nit returns a boolean valued indicating whether or not there is a negative-weight cycle that is reachable from the source.\nThe algorithm relaxes edges, progressively decreasing an estimate $v.d$ on the weight of a shortest path from the source $s$ to each vertex $v \\in V$ until it achieves the actual shortest-path weight $\\delta(s, v)$.\nThe Bellman-Fold algorithm runs in time $O(VE)$.",
"plt.imshow(plt.imread('./res/bellman_ford.png'))\n\nplt.imshow(plt.imread('./res/fig24_4.png'))",
"24.2 Single-source shortest paths in directed acyclic graphs\nBy relaxing the edges of a weighted dag (directed acyclic graph) $G = (V, E)$ according to a topological sort of its vertices, we can compute shortest paths from a single source in $O(V + E)$ time.",
"plt.imshow(plt.imread('./res/dag.png'))\n\nplt.imshow(plt.imread('./res/fig24_5.png'))",
"interesting application: to determine critical paths in PERT chart analysis.\n24.3 Dijkstra's algorithm (greedy strategy)\nweighted, directed graph $G = (V, E)$ and all $w(u, v) \\geq 0$.\nDijkstra's algorithm maintains a set $S$ of vertices whose final shortest-path weights from the source $s$ have already been determined.\nwe use a min-priority queue $Q$ of vertices, keyed by their $d$ values.",
"plt.imshow(plt.imread('./res/dijkstra.png'))\n\nplt.imshow(plt.imread('./res/fig24_6.png'))",
"24.4 Difference constraints and shortest paths\nLinear programming\nSystems of difference constraints\nIn a system of difference constraints, each row of the linear-programming matrix $A$ contains one 1 and one -1, and all other entries of $A$ are 0. $\\to$ the form $x_j - x_i \\leq b_k$.",
"plt.imshow(plt.imread('./res/inequ.png'))",
"Constraint graphs",
"plt.imshow(plt.imread('./res/fig24_8.png'))",
"24.5 Proofs of shortest-paths properties"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | dev/_downloads/cfbef36033f8d33f28c4fe2cfa35314a/30_cluster_ftest_spatiotemporal.ipynb | bsd-3-clause | [
"%matplotlib inline",
"2 samples permutation test on source data with spatio-temporal clustering\nTests if the source space data are significantly different between\n2 groups of subjects (simulated here using one subject's data).\nThe multiple comparisons problem is addressed with a cluster-level\npermutation test across space and time.",
"# Authors: Alexandre Gramfort <[email protected]>\n# Eric Larson <[email protected]>\n# License: BSD-3-Clause\n\nimport numpy as np\nfrom scipy import stats as stats\n\nimport mne\nfrom mne import spatial_src_adjacency\nfrom mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nstc_fname = meg_path / 'sample_audvis-meg-lh.stc'\nsubjects_dir = data_path / 'subjects'\nsrc_fname = subjects_dir / 'fsaverage' / 'bem' / 'fsaverage-ico-5-src.fif'\n\n# Load stc to in common cortical space (fsaverage)\nstc = mne.read_source_estimate(stc_fname)\nstc.resample(50, npad='auto')\n\n# Read the source space we are morphing to\nsrc = mne.read_source_spaces(src_fname)\nfsave_vertices = [s['vertno'] for s in src]\nmorph = mne.compute_source_morph(stc, 'sample', 'fsaverage',\n spacing=fsave_vertices, smooth=20,\n subjects_dir=subjects_dir)\nstc = morph.apply(stc)\nn_vertices_fsave, n_times = stc.data.shape\ntstep = stc.tstep * 1000 # convert to milliseconds\n\nn_subjects1, n_subjects2 = 6, 7\nprint('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))\n\n# Let's make sure our results replicate, so set the seed.\nnp.random.seed(0)\nX1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10\nX2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10\nX1[:, :, :] += stc.data[:, :, np.newaxis]\n# make the activity bigger for the second set of subjects\nX2[:, :, :] += 3 * stc.data[:, :, np.newaxis]\n\n# We want to compare the overall activity levels for each subject\nX1 = np.abs(X1) # only magnitude\nX2 = np.abs(X2) # only magnitude",
"Compute statistic\nTo use an algorithm optimized for spatio-temporal clustering, we\njust pass the spatial adjacency matrix (instead of spatio-temporal)",
"print('Computing adjacency.')\nadjacency = spatial_src_adjacency(src)\n\n# Note that X needs to be a list of multi-dimensional array of shape\n# samples (subjects_k) × time × space, so we permute dimensions\nX1 = np.transpose(X1, [2, 1, 0])\nX2 = np.transpose(X2, [2, 1, 0])\nX = [X1, X2]\n\n# Now let's actually do the clustering. This can take a long time...\n# Here we set the threshold quite high to reduce computation,\n# and use a very low number of permutations for the same reason.\nn_permutations = 50\np_threshold = 0.001\nf_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,\n n_subjects1 - 1, n_subjects2 - 1)\nprint('Clustering.')\nF_obs, clusters, cluster_p_values, H0 = clu =\\\n spatio_temporal_cluster_test(\n X, adjacency=adjacency, n_jobs=None, n_permutations=n_permutations,\n threshold=f_threshold, buffer_size=None)\n# Now select the clusters that are sig. at p < 0.05 (note that this value\n# is multiple-comparisons corrected).\ngood_cluster_inds = np.where(cluster_p_values < 0.05)[0]",
"Visualize the clusters",
"print('Visualizing clusters.')\n\n# Now let's build a convenient representation of each cluster, where each\n# cluster becomes a \"time point\" in the SourceEstimate\nfsave_vertices = [np.arange(10242), np.arange(10242)]\nstc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,\n vertices=fsave_vertices,\n subject='fsaverage')\n\n# Let's actually plot the first \"time point\" in the SourceEstimate, which\n# shows all the clusters, weighted by duration\n\n# blue blobs are for condition A != condition B\nbrain = stc_all_cluster_vis.plot('fsaverage', hemi='both',\n views='lateral', subjects_dir=subjects_dir,\n time_label='temporal extent (ms)',\n clim=dict(kind='value', lims=[0, 1, 40]))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hpparvi/PyTransit | notebooks/01_broadband_parameter_estimation.ipynb | gpl-2.0 | [
"1. Exoplanet characterisation based on a single light curve\nIntroduction\nThis is a first part of a set of tutorials covering Bayesian exoplanet characterisation using wide-band photometry (transit light curves), radial velocities, and, later, transmission spectroscopy. The tutorials use freely available open source tools built around the scientific python ecosystem, and also demonstrate the use of PyTransit and LDTk.\nThe tutorials are mainly targeted to graduate students working in exoplanet characterisation who already have some experience of Python and Bayesian statistics. \nPrerequisites\nThis tutorial requires the basic Python packages for scientific computing and data analysis\n- NumPy, SciPy, IPython, astropy, pandas, matplotlib, and seaborn\nThe MCMC sampling requires the emcee and acor packages by D. Foreman-Mackey. These can either be installed from github or from PyPI.\nThe transit modelling is carried out with PyTransit, and global optimisation with PyDE. PyTransit can be installed easily from github. First cd into the directory you want to clone the code, and then\ngit clone https://github.com/hpparvi/PyTransit.git; cd PyTransit\npython setup.py install; cd -\n\nWhat comes to assumed prior knowledge, well... I assume you already know a bit about Bayesian statistics (I'll start with a very rough overview of the basics of Bayesian parameter estimation, though), Python (and especially the scientific packages), have a rough idea of MCMC sampling (and how emcee works), and, especially, have the grasp of basic concepts of exoplanets, transits, and photometry.\nBayesian parameter estimation\nThis first tutorial covers the simple case of an exoplanet system characterisation based on a single photometric timeseries of an exoplanet transit (transit light curve). The system characterisation is a parameter estimation problem, where we assume we have an adequate model to describe the observations, and we want to infer the model parameters with their uncertainties.\nWe take a Bayesian approach to the parameter estimation, where we want to estimate the posterior probability for the model parameters given their prior probabilities and a set of observations. The posterior probability density given a parameter vector $\\theta$ and observational data $D$ is described by the Bayes' theorem as\n$$\nP(\\theta|D) = \\frac{P(\\theta) P(D|\\theta)}{P(D)}, \\qquad P(D|\\theta) = \\prod P(D_i|\\theta),\n$$\nwhere $P(\\theta)$ is the prior, $P(D|\\theta)$ is the likelihood for the data, and $P(D)$ is a normalising factor we don't need to bother with during MCMC-based parameter estimation. \nThe likelihood is a product of individual observation probabilities, and has the unfortunate tendency to end up being either very small or very big. This causes computational headaches, and it is better to work with log probabilities instead, so that\n$$\n\\log P(\\theta|D) = \\log P(\\theta) + \\log P(D|\\theta), \\qquad \\log P(D|\\theta) = \\sum \\log P(D_i|\\theta)\n$$\nwhere we have omitted the $P(D)$ term from the posterior density.\nNow we still need to decide our likelihood density. 
If we can assume normally distributed white noise--that is, the errors in the observations are independent and identically distributed--we end up with a log likelihood function\n$$\n \\log P(D|\\theta) = -N\\log(\\sigma) -\\frac{N\\log 2\\pi}{2} - \\sum_{i=1}^N \\frac{(o_i-m_i)^2}{2\\sigma^2},\n$$\nwhere $N$ is the number of datapoints, $\\sigma$ is the white noise standard deviation, $o$ is the observed data, and $m$ is the model. \nNote: Unfortunately, the noise is rarely white, but contains systematic components from instrumental and astrophysical sources that should be accounted for by the noise model for robust parameter estimation. This, however, goes beyond a basic tutorial.\nImplementation\nInitialisation",
"%pylab inline \n\nimport math as mt\nimport pandas as pd\nimport seaborn as sb\nimport warnings\n\nwith warnings.catch_warnings():\n cp = sb.color_palette()\n\nfrom pathlib import Path\nfrom IPython.display import display, HTML\nfrom numba import njit, prange\nfrom astropy.io import fits as pf\nfrom emcee import EnsembleSampler\nfrom tqdm.auto import tqdm\nfrom corner import corner\n\nfrom pytransit import QuadraticModel\nfrom pytransit.utils.de import DiffEvol\nfrom pytransit.orbits.orbits_py import as_from_rhop, i_from_ba\nfrom pytransit.param.parameter import (ParameterSet, GParameter, PParameter, LParameter,\n NormalPrior as NP, \n UniformPrior as UP)\nseterr('ignore')\nrandom.seed(0)\n\n@njit(parallel=True, cache=False, fastmath=True)\ndef lnlike_normal_v(o, m, e):\n m = atleast_2d(m)\n npv = m.shape[0]\n npt = o.size\n lnl = zeros(npv)\n for i in prange(npv):\n for j in range(npt):\n lnl[i] += -log(e[i]) - 0.5*log(2*pi) - 0.5*((o[j]-m[i,j])/e[i])**2\n return lnl",
"Log posterior function\nThe log posterior function is the workhorse of the analysis. I implement it as a class that stores the observation data and the priors, contains the methods to calculate the model and evaluate the log posterior probability density, and encapsulates the optimisation and MCMC sampling routines.",
"class LPFunction:\n def __init__(self, name: str, times: ndarray = None, fluxes: ndarray = None):\n self.tm = QuadraticModel(klims=(0.05, 0.25), nk=512, nz=512)\n\n # LPF name\n # --------\n self.name = name\n \n # Declare high-level objects\n # --------------------------\n self.ps = None # Parametrisation\n self.de = None # Differential evolution optimiser\n self.sampler = None # MCMC sampler\n\n # Initialize data\n # ---------------\n self.times = asarray(times)\n self.fluxes = asarray(fluxes)\n self.tm.set_data(self.times)\n\n # Define the parametrisation\n # --------------------------\n self.ps = ParameterSet([\n GParameter('tc', 'zero_epoch', 'd', NP(0.0, 0.1), (-inf, inf)),\n GParameter('pr', 'period', 'd', NP(1.0, 1e-5), (0, inf)),\n GParameter('rho', 'stellar_density', 'g/cm^3', UP(0.1, 25.0), (0, inf)),\n GParameter('b', 'impact_parameter', 'R_s', UP(0.0, 1.0), (0, 1)),\n GParameter('k2', 'area_ratio', 'A_s', UP(0.05**2, 0.25**2), (0.05**2, 0.25**2)),\n GParameter('q1', 'q1_coefficient', '', UP(0, 1), bounds=(0, 1)),\n GParameter('q2', 'q2_coefficient', '', UP(0, 1), bounds=(0, 1)),\n GParameter('loge', 'log10_error', '', UP(-4, 0), bounds=(-4, 0))])\n self.ps.freeze()\n\n def create_pv_population(self, npop=50):\n return self.ps.sample_from_prior(npop)\n \n def baseline(self, pv):\n \"\"\"Multiplicative baseline\"\"\"\n return 1.\n\n def transit_model(self, pv, copy=True):\n pv = atleast_2d(pv)\n \n # Map from sampling parametrisation to the transit model parametrisation\n # ----------------------------------------------------------------------\n k = sqrt(pv[:, 4]) # Radius ratio\n tc = pv[:, 0] # Zero epoch\n p = pv[:, 1] # Orbital period\n sa = as_from_rhop(pv[:, 2], p) # Scaled semi-major axis\n i = i_from_ba(pv[:, 3], sa) # Orbital inclination\n \n # Map the limb darkening\n # ----------------------\n ldc = zeros((pv.shape[0],2))\n a, b = sqrt(pv[:,5]), 2.*pv[:,6]\n ldc[:,0] = a * b\n ldc[:,1] = a * (1. 
- b)\n \n return squeeze(self.tm.evaluate(k, ldc, tc, p, sa, i))\n\n def flux_model(self, pv):\n return self.transit_model(pv) * self.baseline(pv)\n\n def residuals(self, pv):\n return self.fluxes - self.flux_model(pv)\n\n def set_prior(self, pid: int, prior) -> None:\n self.ps[pid].prior = prior\n\n def lnprior(self, pv):\n return self.ps.lnprior(pv)\n\n def lnlikelihood(self, pv):\n flux_m = self.flux_model(pv)\n wn = 10**(atleast_2d(pv)[:, 7])\n return lnlike_normal_v(self.fluxes, flux_m, wn)\n\n def lnposterior(self, pv):\n lnp = self.lnprior(pv) + self.lnlikelihood(pv)\n return where(isfinite(lnp), lnp, -inf)\n\n def __call__(self, pv):\n return self.lnposterior(pv)\n\n def optimize(self, niter=200, npop=50, population=None, label='Global optimisation', leave=False):\n if self.de is None:\n self.de = DiffEvol(self.lnposterior, clip(self.ps.bounds, -1, 1), npop, maximize=True, vectorize=True)\n if population is None:\n self.de._population[:, :] = self.create_pv_population(npop)\n else:\n self.de._population[:,:] = population\n for _ in tqdm(self.de(niter), total=niter, desc=label, leave=leave):\n pass\n\n def sample(self, niter=500, thin=5, label='MCMC sampling', reset=True, leave=True):\n if self.sampler is None:\n self.sampler = EnsembleSampler(self.de.n_pop, self.de.n_par, self.lnposterior, vectorize=True)\n pop0 = self.de.population\n else:\n pop0 = self.sampler.chain[:,-1,:].copy()\n if reset:\n self.sampler.reset()\n for _ in tqdm(self.sampler.sample(pop0, iterations=niter, thin=thin), total=niter, desc=label, leave=False):\n pass\n\n def posterior_samples(self, burn: int=0, thin: int=1):\n fc = self.sampler.chain[:, burn::thin, :].reshape([-1, self.de.n_par])\n return pd.DataFrame(fc, columns=self.ps.names)\n\n def plot_mcmc_chains(self, pid: int=0, alpha: float=0.1, thin: int=1, ax=None):\n fig, ax = (None, ax) if ax is not None else subplots()\n ax.plot(self.sampler.chain[:, ::thin, pid].T, 'k', alpha=alpha)\n fig.tight_layout()\n return fig\n\n def plot_light_curve(self, model: str = 'de', figsize: tuple = (13, 4)):\n fig, ax = subplots(figsize=figsize, constrained_layout=True)\n cp = sb.color_palette()\n\n if model == 'de':\n pv = self.de.minimum_location\n err = 10**pv[7]\n elif model == 'mc':\n fc = array(self.posterior_samples())\n pv = permutation(fc)[:300]\n err = 10**median(pv[:, 7], 0)\n\n ax.errorbar(self.times, self.fluxes, err, fmt='.', c=cp[4], alpha=0.75)\n\n if model == 'de':\n ax.plot(self.times, self.flux_model(pv), c=cp[0])\n if model == 'mc':\n flux_pr = self.flux_model(fc[permutation(fc.shape[0])[:1000]])\n flux_pc = array(percentile(flux_pr, [50, 0.15,99.85, 2.5,97.5, 16,84], 0))\n [ax.fill_between(self.times, *flux_pc[i:i+2,:], alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]\n ax.plot(self.times, flux_pc[0], c=cp[0])\n setp(ax, xlim=self.times[[0,-1]], xlabel='Time', ylabel='Normalised flux')\n return fig, axs\n",
"Priors\nThe priors are contained in a ParameterSet object from pytransit.param.parameter. ParameterSet is a utility class containing a function for calculating the joint prior, etc. We're using only two basic priors: a normal prior NP, for which $x \\sim N(\\mu,\\sigma)$, a uniform prior UP, for which $x \\sim U(a,b)$.\nWe could use an informative prior on the planet-star area ratio (squared radius ratio) that we base on the observed NIR transit depth (see below). This is justified since the limb darkening, which affects the observed transit depth, is sufficiently weak in NIR. We would either need to use significantly wider informative prior, or an uninformative one, if we didn't have NIR data.\nModel\nThe model has two components: a multiplicative constant baseline, and a transit shape modelled using the quadratic Mandel & Agol transit model implemented in PyTransit. The sampling parameterisation is different than the parameterisation used by the transit model, so we need to map the parameters from the sampling space to the model space. Also, we're keeping things simple and assuming a circular orbit. Eccentric orbits will be considered in later tutorials. \nLimb darkening\nThe limb darkening uses the parameterisation by Kipping (2013, MNRAS, 435(3), 2152–2160), where the quadratic limb darkening coefficients $u$ and $v$ are mapped from sampling parameters $q_1$ and $q_2$ as\n$$\nu = 2\\sqrt{q_1}q_2,\n$$\n$$\nv = \\sqrt{q_1}(1-2q_2).\n$$\nThis parameterisation allows us to use uniform priors from 0 to 1 to cover the whole physically sensible $(u,v)$-space.\nLog likelihood\nThe log likelihood calculation is carried out by the ll_normal_es function that evaluates the normal log likelihood given a single error value.\nRead in the data\nFirst we need to read in the (mock) observation data stored in obs_data.fits. The data corresponds to a single transit observed simultaneously in eight passbands (filters). The photometry is saved in extension 1 as a binary table, and we want to read the mid-exposure times and flux values corresponding to different passbands. The time is stored in the time column, and fluxes are stored in the f_wn_* columns, where * is the filter name.",
"dfile = Path('data').joinpath('obs_data.fits')\ndata = pf.getdata(dfile, ext=1)\n\nflux_keys = [n for n in data.names if 'f_wn' in n]\nfilter_names = [k.split('_')[-1] for k in flux_keys]\n\ntime = data['time'].astype('d')\nfluxes = [data[k].astype('d') for k in flux_keys]\n\nprint ('Filter names: ' + ', '.join(filter_names))",
"First, let's have a quick look at our data, and plot the blue- and redmost passbands.",
"with sb.axes_style('white'):\n fig, axs = subplots(1,2, figsize=(13,5), sharey=True)\n axs[0].plot(time,fluxes[0], drawstyle='steps-mid', c=cp[0])\n axs[1].plot(time,fluxes[-1], drawstyle='steps-mid', c=cp[2])\n setp(axs, xlim=time[[0,-1]])\n fig.tight_layout()",
"Here we see what we'd expect to see. The stronger limb darkening in blue makes the bluemost transit round, while we can spot the end of ingress and the beginning of egress directly by eye from the redmost light curve. Also, the transit is deeper in u' than in Ks, which tells that the impact parameter b is smallish (the transit would be deeper in red than in blue for large b).\nParameter estimation\nFirst, we create an instance of the log posterior function with the redmost light curve data.\nNext, we run the DE optimiser for de_iter iterations to clump the parameter vector population close to the global posterior maximum, use the DE population to initialise the emcee sampler, and run the sampler for mc_iter iterations to obtain a posterior sample.",
"npop, de_iter, mc_reps, mc_iter, thin = 100, 200, 3, 500, 10\nlpf = LPFunction('Ks', time, fluxes[-1])\n\nlpf.optimize(de_iter, npop)\n\nlpf.plot_light_curve();\n\nfor i in range(mc_reps):\n lpf.sample(mc_iter, thin=thin, reset=True, label='MCMC sampling')\n\nlpf.plot_light_curve('mc');",
"Analysis: overview\nThe MCMC chains are now stored in lpf.sampler.chain. Let's first have a look into how the chain populations evolved to see if we have any problems with our setup, whether we have converged to sample the true posterior distribution, and, if so, what was the burn-in time.",
"with sb.axes_style('white'):\n fig, axs = subplots(2,4, figsize=(13,5), sharex=True)\n ls, lc = ['-','--','--'], ['k', '0.5', '0.5']\n percs = [percentile(lpf.sampler.chain[:,:,i], [50,16,84], 0) for i in range(8)]\n [axs.flat[i].plot(lpf.sampler.chain[:,:,i].T, 'k', alpha=0.01) for i in range(8)]\n [[axs.flat[i].plot(percs[i][j], c=lc[j], ls=ls[j]) for j in range(3)] for i in range(8)]\n setp(axs, yticks=[], xlim=[0,mc_iter//10])\n fig.tight_layout()",
"Ok, everything looks good. The 16th, 50th and 84th percentiles of the parameter vector population are stable and don't show any significant long-term trends. Now we can flatten the individual chains into one long chain fc and calculate the median parameter vector.",
"fc = lpf.sampler.chain.reshape([-1,lpf.sampler.chain.shape[-1]])\nmp = median(fc, 0)",
"Let's also plot the model and the data to see if this all makes sense. To do this, we calculate the conditional distribution of flux using the posterior samples (here, we're using a random subset of samples, although this isn't really necessary), and plot the distribution median and it's median-centred 68%, 95%, and 99.7% central posterior intervals (corresponding approximately to 1, 2, and 3$\\sigma$ intervals if the distribution is normal).",
"flux_pr = lpf.flux_model(fc[permutation(fc.shape[0])[:1000]])\nflux_pc = array(percentile(flux_pr, [50, 0.15,99.85, 2.5,97.5, 16,84], 0))\n\nwith sb.axes_style('white'):\n zx1,zx2,zy1,zy2 = 0.958,0.98, 0.9892, 0.992\n fig, ax = subplots(1,1, figsize=(13,4))\n cp = sb.color_palette()\n ax.errorbar(lpf.times, lpf.fluxes, 10**mp[7], fmt='.', c=cp[4], alpha=0.75)\n [ax.fill_between(lpf.times,*flux_pc[i:i+2,:],alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]\n ax.plot(lpf.times, flux_pc[0], c=cp[0])\n setp(ax, xlim=lpf.times[[0,-1]], xlabel='Time', ylabel='Normalised flux')\n fig.tight_layout()\n \n az = fig.add_axes([0.075,0.18,0.20,0.46])\n ax.add_patch(Rectangle((zx1,zy1),zx2-zx1,zy2-zy1,fill=False,edgecolor='k',lw=1,ls='dashed'))\n [az.fill_between(lpf.times,*flux_pc[i:i+2,:],alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]\n setp(az, xlim=(zx1,zx2), ylim=(zy1,zy2), yticks=[], xticks=[])\n az.plot(lpf.times, flux_pc[0], c=cp[0])",
"We could (should) also plot the residuals, but I've left them out from the plot for clarity. The plot looks fine, and we can continue to have a look at the parameter estimates.\nAnalysis\nWe start the analysis by making a Pandas data frame df, using the df.describe to gen an overview of the estimates, and plotting the posteriors for the most interesting parameters as violin plots.",
"pd.set_option('display.precision',4)\ndf = pd.DataFrame(data=fc.copy(), columns=lpf.ps.names)\ndf['k'] = sqrt(df.k2)\ndf['u'] = 2*sqrt(df.q1)*df.q2\ndf['v'] = sqrt(df.q1)*(1-2*df.q2)\ndf = df.drop('k2', axis=1)\ndf.describe()\n\nwith sb.axes_style('white'):\n fig, axs = subplots(2,3, figsize=(13,5))\n pars = 'tc rho b k u v'.split()\n [sb.violinplot(y=df[p], inner='quartile', ax=axs.flat[i]) for i,p in enumerate(pars)]\n [axs.flat[i].text(0.05,0.9, p, transform=axs.flat[i].transAxes) for i,p in enumerate(pars)]\n setp(axs, xticks=[], ylabel='')\n fig.tight_layout()",
"While we're at it, let's plot some correlation plots. The limb darkening coefficients are correlated, and we'd also expect to see a correlation between the impact parameter and radius ratio.",
"corner(df[['k', 'rho', 'b', 'q1', 'q2']]);",
"Calculating the parameter estimates for all the filters\nOk, now, let's do the parameter estimation for all the filters. We wouldn't be doing separate per-filter parameter estimation in real life, since it's much better use of the data to do a simultaneous joint modelling of all the data together (this is something that will be shown in a later tutorial). This will take some time...",
"chains = []\nnpop, de_iter, mc_iter, mc_burn, thin = 100, 200, 1500, 1000, 10\nfor flux, pb in zip(fluxes, filter_names):\n lpf = LPFunction(pb, time, flux)\n lpf.optimize(de_iter, npop)\n lpf.sample(mc_burn, thin=thin)\n lpf.sample(mc_iter, thin=thin, reset=True)\n chains.append(lpf.sampler.chain.reshape([-1,lpf.sampler.chain.shape[-1]]))\nchains = array(chains)\n\nids = [list(repeat(filter_names,chains.shape[1])),8*list(range(chains.shape[1]))]\ndft = pd.DataFrame(data = concatenate([chains[i,:,:] for i in range(chains.shape[0])]), \n index=ids, columns=lpf.ps.names)\ndft['es'] = 10**df.loge * 1e6\ndft['k'] = sqrt(dft.k2)\ndft['u'] = 2*sqrt(dft.q1)*dft.q2\ndft['v'] = sqrt(dft.q1)*(1-2*dft.q2)\ndft = dft.drop('k2', axis=1)",
"The dataframe creation can probably be done in a nicer way, but we don't need to bother with that. The results are now in a multi-index dataframe, from where we can easily get the per-filter point estimates.",
"dft.loc['u'].describe()\n\nwith sb.axes_style('white'):\n fig, axs = subplots(2,3, figsize=(13,6), sharex=True)\n pars = 'tc rho u b k v'.split()\n for i,p in enumerate(pars):\n sb.violinplot(data=dft[p].unstack().T, inner='quartile', scale='width', \n ax=axs.flat[i], order=filter_names)\n axs.flat[i].text(0.95,0.9, p, transform=axs.flat[i].transAxes, ha='right')\n fig.tight_layout()",
"As it is, the posterior distributions for different filters agree well with each other. However, the uncertainty in the radius ratio estimate decreases towards redder wavelengths. This is due to the reduced limb darkening, which allows us to estimate the true geometric radius ratio more accurately.\nFinally, let's print the parameter estimates for each filter. We'll print the posterior medians with uncertainty estimates based on the central 68% posterior intervals. This matches the posterior mean and its 1-$\\sigma$ uncertainty if the posterior is normal (which isn't really the case for many of the posteriors here). In real life, you'd want to report separate + and - uncertainties for the asymmetric posteriors, etc.",
"def ms(df,p,f):\n p = array(percentile(df[p][f], [50,16,84]))\n return p[0], abs(p[1:]-p[0]).mean()\n\ndef create_row(df,f,pars):\n return ('<tr><td>{:}</td>'.format(f)+\n ''.join(['<td>{:5.4f} ± {:5.4f}</td>'.format(*ms(dft,p,f)) for p in pars])+\n '</tr>')\n\ndef create_table(df): \n pars = 'tc rho b k u v'.split()\n return ('<table style=\"width:100%\"><th>Filter</th>'+\n ''.join(['<th>{:}</th>'.format(p) for p in pars])+\n ''.join([create_row(df,f,pars) for f in filter_names])+\n '</table>')\n\ndisplay(HTML(create_table(dft)))",
"<center>© Hannu Parviainen 2014--2021</center>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joshspeagle/dynesty | demos/Examples -- Gaussian Shells.ipynb | mit | [
"Gaussian Shells\nSetup\nFirst, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.",
"# system functions that are always useful to have\nimport time, sys, os\n\n# basic numeric setup\nimport numpy as np\nimport math\n\n# inline plotting\n%matplotlib inline\n\n# plotting\nimport matplotlib\nfrom matplotlib import pyplot as plt\n\n# seed the random number generator\nrstate = np.random.default_rng(715)\n\n\n# re-defining plotting defaults\nfrom matplotlib import rcParams\nrcParams.update({'xtick.major.pad': '7.0'})\nrcParams.update({'xtick.major.size': '7.5'})\nrcParams.update({'xtick.major.width': '1.5'})\nrcParams.update({'xtick.minor.pad': '7.0'})\nrcParams.update({'xtick.minor.size': '3.5'})\nrcParams.update({'xtick.minor.width': '1.0'})\nrcParams.update({'ytick.major.pad': '7.0'})\nrcParams.update({'ytick.major.size': '7.5'})\nrcParams.update({'ytick.major.width': '1.5'})\nrcParams.update({'ytick.minor.pad': '7.0'})\nrcParams.update({'ytick.minor.size': '3.5'})\nrcParams.update({'ytick.minor.width': '1.0'})\nrcParams.update({'font.size': 30})\n\nimport dynesty",
"2-D Gaussian Shells\nTo demonstrate more of the functionality afforded by our different sampling/bounding options we will demonstrate how these various features work using a set of 2-D Gaussian shells with a uniform prior over $[-6, 6]$.",
"# defining constants\nr = 2. # radius\nw = 0.1 # width\nc1 = np.array([-3.5, 0.]) # center of shell 1\nc2 = np.array([3.5, 0.]) # center of shell 2\nconst = math.log(1. / math.sqrt(2. * math.pi * w**2)) # normalization constant\n\n# log-likelihood of a single shell\ndef logcirc(theta, c):\n d = np.sqrt(np.sum((theta - c)**2, axis=-1)) # |theta - c|\n return const - (d - r)**2 / (2. * w**2)\n\n# log-likelihood of two shells\ndef loglike(theta):\n return np.logaddexp(logcirc(theta, c1), logcirc(theta, c2))\n\n# our prior transform\ndef prior_transform(x):\n return 12. * x - 6.\n\n# compute likelihood surface over a 2-D grid\nxx, yy = np.meshgrid(np.linspace(-6., 6., 200), np.linspace(-6., 6., 200))\nL = np.exp(loglike(np.dstack((xx, yy))))\n\n# plot result\nfig = plt.figure(figsize=(6,5))\nplt.scatter(xx, yy, c=L, s=0.5)\nplt.xlabel(r'$x$')\nplt.ylabel(r'$y$')\nplt.colorbar(label=r'$\\mathcal{L}$');",
"Default Run\nLet's first run with just the default set of dynesty options.",
"# run with all defaults\nsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=2, rstate=rstate)\nsampler.run_nested()\nres = sampler.results\n\nfrom dynesty import plotting as dyplot\ndyplot.cornerplot(sampler.results, span=([-6, 6], [-6, 6]), fig=plt.subplots(2, 2, figsize=(10, 10)));",
"Bounding Options\nLet's test out the bounding options available in dynesty (with uniform sampling) on these 2-D shells. To illustrate their baseline effectiveness, we will also disable the initial delay before our first update.",
"# bounding methods\nbounds = ['none', 'single', 'multi', 'balls', 'cubes']\n\n# run over each method and collect our results\nbounds_res = []\nfor b in bounds:\n sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2,\n bound=b, sample='unif', nlive=500,\n first_update={'min_ncall': 0.,\n 'min_eff': 100.}, rstate=rstate)\n sys.stderr.flush()\n sys.stderr.write('{}:\\n'.format(b))\n sys.stderr.flush()\n t0 = time.time()\n sampler.run_nested(dlogz=0.05)\n t1 = time.time()\n res = sampler.results\n dtime = t1 - t0\n sys.stderr.flush()\n sys.stderr.write('\\ntime: {0}s\\n\\n'.format(dtime))\n bounds_res.append(sampler.results)",
"We can see the amount of overhead associated with 'balls' and 'cubes' is non-trivial in this case. This mainly comes from sampling from our bouding distributions, since accepting or rejecting a point requires counting all neighbors within some radius $r$, leading to frequent nearest-neighbor searches.\nRuntime aside, we see that each method runs for a similar number of iterations and give similar logz values (with comparable errors). They thus appear to be unbiased both with respect to each other and with respect to the analytic solution ($\\ln \\mathcal{Z} = -1.75$).\nTo get a sense of what each of our bounds looks like, we can use some of dynesty's built-in plotting functionality. First, let's take a look at the case where we had no bounds ('none').",
"from dynesty import plotting as dyplot\n\n# initialize figure\nfig, axes = plt.subplots(1, 1, figsize=(6, 6))\n\n# plot proposals in corner format for 'none'\nfg, ax = dyplot.cornerbound(bounds_res[0], it=2000, prior_transform=prior_transform,\n show_live=True, fig=(fig, axes))\nax[0, 0].set_title('No Bound', fontsize=26)\nax[0, 0].set_xlim([-6.5, 6.5])\nax[0, 0].set_ylim([-6.5, 6.5]);",
"Now let's examine the single and multi-ellipsoidal cases.",
"# initialize figure\nfig, axes = plt.subplots(1, 3, figsize=(18, 6))\naxes = axes.reshape((1, 3))\n[a.set_frame_on(False) for a in axes[:, 1]]\n[a.set_xticks([]) for a in axes[:, 1]]\n[a.set_yticks([]) for a in axes[:, 1]]\n\n# plot proposals in corner format for 'single'\nfg, ax = dyplot.cornerbound(bounds_res[1], it=2000, prior_transform=prior_transform,\n show_live=True, fig=(fig, axes[:, 0]))\nax[0, 0].set_title('Single', fontsize=26)\nax[0, 0].set_xlim([-6.5, 6.5])\nax[0, 0].set_ylim([-6.5, 6.5])\n\n# plot proposals in corner format for 'multi'\nfg, ax = dyplot.cornerbound(bounds_res[2], it=2000, prior_transform=prior_transform,\n show_live=True, fig=(fig, axes[:, 2]))\nax[0, 0].set_title('Multi', fontsize=26)\nax[0, 0].set_xlim([-6.5, 6.5])\nax[0, 0].set_ylim([-6.5, 6.5]);",
"Finally, let's take a look at our overlapping set of balls and cubes.",
"# initialize figure\nfig, axes = plt.subplots(1, 3, figsize=(18, 6))\naxes = axes.reshape((1, 3))\n[a.set_frame_on(False) for a in axes[:, 1]]\n[a.set_xticks([]) for a in axes[:, 1]]\n[a.set_yticks([]) for a in axes[:, 1]]\n\n# plot proposals in corner format for 'balls'\nfg, ax = dyplot.cornerbound(bounds_res[3], it=1500, prior_transform=prior_transform,\n show_live=True, fig=(fig, axes[:, 0]))\nax[0, 0].set_title('Balls', fontsize=26)\nax[0, 0].set_xlim([-6.5, 6.5])\nax[0, 0].set_ylim([-6.5, 6.5])\n\n# plot proposals in corner format for 'cubes'\nfg, ax = dyplot.cornerbound(bounds_res[4], it=1500, prior_transform=prior_transform,\n show_live=True, fig=(fig, axes[:, 2]))\nax[0, 0].set_title('Cubes', fontsize=26)\nax[0, 0].set_xlim([-6.5, 6.5])\nax[0, 0].set_ylim([-6.5, 6.5]);",
"Bounding Objects\nBy default, the nested samplers in dynesty save all bounding distributions used throughout the course of a run, which can be accessed within the results dictionary. More information on these distributions can be found in bounding.py.",
"# the proposals associated with our 'multi' bounds\nbounds_res[2].bound",
"Each bounding object has a host of additional functionality that the user can experiment with. For instance, the volume contained by the union of ellipsoids within MultiEllipsoid can be estimated using Monte Carlo integration (but otherwise are not computed by default). These volume estimates, combined with what fraction of our samples overlap with the unit cube (since our bounding distributions can exceed our prior bounds), can give us an idea of how effectively our multi-ellipsoid bounds are shrinking over time compared with the single-ellipsoid case.",
"# compute effective 'single' volumes\nsingle_logvols = [0.] # unit cube\nfor bound in bounds_res[1].bound[1:]:\n logvol = bound.logvol # volume\n funit = bound.unitcube_overlap(rstate=rstate) # fractional overlap with unit cube\n single_logvols.append(logvol +np.log(funit))\nsingle_logvols = np.array(single_logvols)\n\n# compute effective 'multi' volumes\nmulti_logvols = [0.] # unit cube\nfor bound in bounds_res[2].bound[1:]: # skip unit cube\n logvol, funit = bound.monte_carlo_logvol(rstate=rstate, return_overlap=True)\n multi_logvols.append(logvol +np.log( funit)) # numerical estimate via Monte Carlo methods\nmulti_logvols = np.array(multi_logvols)\n\n# plot results as a function of ln(volume)\nplt.figure(figsize=(12,6))\nplt.xlabel(r'$-\\ln X_i$')\nplt.ylabel(r'$\\ln V_i$')\n\n# 'single'\nres = bounds_res[1]\nx = -res.logvol # ln(prior volume)\nit = res.bound_iter # proposal idx at given iteration\ny = single_logvols[it] # corresponding ln(bounding volume)\nplt.plot(x, y, lw=3, label='single')\n\n# 'multi'\nres = bounds_res[2]\nx, it = -res.logvol, res.bound_iter\ny = multi_logvols[it]\nplt.plot(x, y, lw=3, label='multi')\nplt.legend(loc='best', fontsize=24);",
"We see that in the beginning, only a single ellipsoid is used. After some bounding updates have been made, there is enough of an incentive to split the proposal into several ellipsoids. Although the initial ellipsoid decompositions can be somewhat unstable (i.e. bootstrapping can give relatively large volume expansion factors), over time this process leads to a significant decrease in effective overall volume.\nSampling Options\nLet's test out the sampling options available in dynesty (with 'multi' bounding) on our 2-D shells defined above.",
"# bounding methods\nsampling = ['unif', 'rwalk', 'slice', 'rslice', 'hslice']\n\n# run over each method and collect our results\nsampling_res = []\nfor s in sampling:\n sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2,\n bound='multi', sample=s, nlive=1000,\n rstate=rstate)\n sys.stderr.flush()\n sys.stderr.write('{}:\\n'.format(s))\n sys.stderr.flush()\n t0 = time.time()\n sampler.run_nested(dlogz=0.05)\n t1 = time.time()\n res = sampler.results\n dtime = t1 - t0\n sys.stderr.flush()\n sys.stderr.write('\\ntime: {0}s\\n\\n'.format(dtime))\n sampling_res.append(sampler.results)",
"As expected, uniform sampling in 2-D is substantially more efficient that other more complex alternatives (especially 'hslice', which is computing numerical gradients!). Regardless of runtime, however, we see that each method runs for a similar number of iterations and gives similar logz values (with comparable errors). They thus appear to be unbiased both with respect to each other and with respect to the analytic solution ($\\ln\\mathcal{Z} = −1.75$).\nBootstrapping\nOne of the largest overheads associated with nested sampling is the time needed to propose new bounding distributions. To avoid bounding distributions that fail to properly encompass the remaining likelihood, dynesty automatically expands the volume of all bounding distributions by an enlargement factor (enlarge). By default, this factor is set to a constant value of 1.25. However, it can also be determined in real time using bootstrapping (over the set of live points) following the scheme outlined in Buchner (2014).\nBootstrapping these expansion factors can help to ensure accurate evidence estimation when the proposals rely heavily on the size of an object rather than the overall shape, such as when proposing new points uniformly within their boundaries. In theory, it also helps to prevent mode \"death\": if occasionally a secondary mode disappears when bootstrapping, the existing bounds would be expanded to theoretically encompass it. In practice, however, most modes are widely separated, leading enormous expansion factors whenever any possible instance of mode death may occur. \nBootstrapping thus imposes a de facto floor on the number of acceptable live points to avoid mode death for any given problem, which can often be quite large for many problems. While these numbers are often justified, they can drastically reduce the raw sampling efficiency until such a target threshold of live points is reached.\nWe showcase this behavior below by illustrating the performance of our NestedSampler on several N-D Gaussian shells with and without bootstrapping.",
"# setup for running tests over gaussian shells in arbitrary dimensions\ndef run(ndim, bootstrap, bound, method, nlive):\n \"\"\"Convenience function for running in any dimension.\"\"\"\n\n c1 = np.zeros(ndim)\n c1[0] = -3.5\n c2 = np.zeros(ndim)\n c2[0] = 3.5\n f = lambda theta: np.logaddexp(logcirc(theta, c1), logcirc(theta, c2))\n sampler = dynesty.NestedSampler(f, prior_transform, ndim,\n bound=bound, sample=method, nlive=nlive, \n bootstrap=bootstrap, \n first_update={'min_ncall': 0.,\n 'min_eff': 100.},\n rstate=rstate)\n sampler.run_nested(dlogz=0.5)\n \n return sampler.results\n\n# analytic ln(evidence) values\nndims = [2, 5, 10]\nanalytic_logz = {2: -1.75,\n 5: -5.67,\n 10: -14.59}\n\n# results with bootstrapping\nresults = []\ntimes = []\nfor ndim in ndims:\n t0 = time.time()\n sys.stderr.flush()\n sys.stderr.write('{} dimensions:\\n'.format(ndim))\n sys.stderr.flush()\n res = run(ndim, 20, 'multi', 'unif', 2000)\n sys.stderr.flush()\n curdt = time.time() - t0\n times.append(curdt)\n sys.stderr.write('\\ntime: {0}s\\n\\n'.format(curdt))\n results.append(res)\n\n# results without bootstrapping\nresults2 = []\ntimes2 = []\nfor ndim in ndims:\n t0 = time.time()\n sys.stderr.flush()\n sys.stderr.write('{} dimensions:\\n'.format(ndim))\n sys.stderr.flush()\n res = run(ndim, 0, 'multi', 'unif', 2000)\n sys.stderr.flush()\n curdt = time.time() - t0\n times2.append(curdt)\n sys.stderr.write('\\ntime: {0}s\\n\\n'.format(curdt))\n results2.append(res)\n\nprint('With bootstrapping:')\nprint(\"D analytic logz logzerr nlike eff(%) time\")\nfor ndim, curt, res in zip(ndims, times, results):\n print(\"{:2d} {:6.2f} {:6.2f} {:4.2f} {:6d} {:5.2f} {:6.2f}\"\n .format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],\n sum(res.ncall), res.eff, curt))\nprint('\\n')\nprint('Without bootstrapping:')\nprint(\"D analytic logz logzerr nlike eff(%) time\")\nfor ndim, curt, res in zip(ndims, times2, results2):\n print(\"{:2d} {:6.2f} {:6.2f} {:4.2f} {:6d} {:5.2f} {:6.2f}\"\n .format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],\n sum(res.ncall), res.eff, curt))",
"While our results are comparable between both cases, in higher dimensions multi-ellipsoid bounding distributions can sometimes be over-constrained, leading to biased results. Other sampling methods mitigate this problem by sampling conditioned on the ellipsoid axes, and so only depends on ellipsoid shapes, not sizes. 'rslice' is demonstrated below.",
"# adding on slice sampling\nresults3 = []\ntimes3 = []\nfor ndim in ndims:\n t0 = time.time()\n sys.stderr.flush()\n sys.stderr.write('{} dimensions:\\n'.format(ndim))\n sys.stderr.flush()\n res = run(ndim, 0, 'multi', 'rslice', 2000)\n sys.stderr.flush()\n curdt = time.time() - t0\n times3.append(curdt)\n sys.stderr.write('\\ntime: {0}s\\n\\n'.format(curdt))\n results3.append(res)\n\nprint('Random Slice sampling:')\nprint(\"D analytic logz logzerr nlike eff(%) time\")\nfor ndim, curt, res in zip([2, 5, 10, 20], times3, results3):\n print(\"{:2d} {:6.2f} {:6.2f} {:4.2f} {:8d} {:5.2f} {:6.2f}\"\n .format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],\n sum(res.ncall), res.eff, curt))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/probability | tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Latent_Variable_Model.ipynb | apache-2.0 | [
"Copyright 2018 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Gaussian Process Latent Variable Models\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Gaussian_Process_Latent_Variable_Model\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Latent_Variable_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Latent_Variable_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Latent_Variable_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nLatent variable models attempt to capture hidden structure in high dimensional\ndata. Examples include principle component analysis (PCA) and factor analysis.\nGaussian processes are \"non-parametric\" models which can flexibly capture local\ncorrelation structure and uncertainty. The Gaussian process latent variable\nmodel (Lawrence, 2004) combines these concepts.\nBackground: Gaussian Processes\nA Gaussian process is any collection of random variables such that the marginal\ndistribution over any finite subset is a multivariate normal distribution. For\na detailed look at GPs in the context of regression, check out\nGaussian Process Regression in TensorFlow Probability.\nWe use a so-called index set to label each of the random variables in the\ncollection that the GP comprises. In the case of a finite index set, we just\nget a multivariate normal. GP's are most interesting, though, when we consider\ninfinite collections. In the case of index sets like $\\mathbb{R}^D$, where we\nhave a random variable for every point in $D$-dimensional space, the GP can be\nthought of as a distribution over random functions. A single draw from such a\nGP, if it could be realized, would assign a (jointly normally-distributed) value\nto every point in $\\mathbb{R}^D$. In this colab, we'll focus on GP's over some\n$\\mathbb{R}^D$.\nNormal distributions are completely determined by their first and second order\nstatistics -- indeed, one way to define the normal distribution is as one whose\nhigher-order cumulants are all zero. This is the case for GP's, too: we completely\nspecify a GP by describing the mean and covariance<sup></sup>. Recall that for\nfinite-dimensional multivariate normals, the mean is a vector and the covariance is a square,\nsymmetric positive-definite matrix. In the infinite-dimensional GP, these\nstructures generalize to a mean function $m : \\mathbb{R}^D \\to \\mathbb{R}$,\ndefined at each point of the index set, and a covariance \"kernel*\" function,\n$k : \\mathbb{R}^D \\times \\mathbb{R}^D \\to \\mathbb{R}$. 
The kernel\nfunction is required to be positive-definite, which\nessentially says that, restricted to a finite set of points, it yields a\npositive-definite matrix.\nMost of the structure of a GP derives from its covariance kernel function --\nthis function describes how the values of sampled functions vary across nearby\n(or not-so-nearby) points. Different covariance functions encourage different\ndegrees of smoothness. One commonly used kernel function is the \"exponentiated\nquadratic\" (a.k.a., \"gaussian\", \"squared exponential\" or \"radial basis\nfunction\"), $k(x, x') = \\sigma^2 e^{-\\|x - x'\\|^2 / (2\\lambda^2)}$ (a small NumPy sketch of this kernel follows the imports below). Other examples\nare outlined on David Duvenaud's kernel cookbook page, as well\nas in the canonical text Gaussian Processes for Machine Learning.\n<sub>* With an infinite index set, we also require a consistency condition. Since\n the definition of the GP is in terms of finite marginals, we must require that\n these marginals are consistent irrespective of the order in which the\n marginals are taken. This is a somewhat advanced topic in the theory of\n stochastic processes, out of scope for this tutorial; suffice it to say things\n work out ok in the end!</sub>\nApplying GPs: Regression and Latent Variable Models\nOne way we can use GPs is for regression: given a bunch of observed data in the\nform of inputs $\\{x_i\\}_{i=1}^N$ (elements of the index set) and observations\n$\\{y_i\\}_{i=1}^N$, we can use these to form a posterior predictive distribution\nat a new set of points $\\{x_j^*\\}_{j=1}^M$. Since the distributions are all\nGaussian, this boils down to some straightforward linear algebra (but note: the\nrequisite computations have runtime cubic in the number of data points and\nrequire space quadratic in the number of data points -- this is a major limiting\nfactor in the use of GPs and much current research focuses on computationally\nviable alternatives to exact posterior inference). We cover GP regression in more\ndetail in the GP Regression in TFP colab.\nAnother way we can use GPs is as a latent variable model: given a collection of\nhigh-dimensional observations (e.g., images), we can posit some low-dimensional\nlatent structure. We assume that, conditional on the latent structure, the large\nnumber of outputs (pixels in the image) are independent of each other. Training\nin this model consists of\n 1. optimizing model parameters (kernel function parameters as well as, e.g.,\n observation noise variance), and\n 2. finding, for each training observation (image), a corresponding point\n location in the index set.\nAll of the optimization can be done by maximizing the marginal log likelihood of\nthe data.\nImports",
"import numpy as np\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\ntfk = tfp.math.psd_kernels\n%pylab inline",
"Load MNIST Data",
"# Load the MNIST data set and isolate a subset of it.\n(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()\nN = 1000\nsmall_x_train = x_train[:N, ...].astype(np.float64) / 256.\nsmall_y_train = y_train[:N]",
"Prepare trainable variables\nWe'll be jointly training 3 model parameters as well as the latent inputs.",
"# Create some trainable model parameters. We will constrain them to be strictly\n# positive when constructing the kernel and the GP.\nunconstrained_amplitude = tf.Variable(np.float64(1.), name='amplitude')\nunconstrained_length_scale = tf.Variable(np.float64(1.), name='length_scale')\nunconstrained_observation_noise = tf.Variable(np.float64(1.), name='observation_noise')\n\n# We need to flatten the images and, somewhat unintuitively, transpose from\n# shape [100, 784] to [784, 100]. This is because the 784 pixels will be\n# treated as *independent* conditioned on the latent inputs, meaning we really\n# have a batch of 784 GP's with 100 index_points.\nobservations_ = small_x_train.reshape(N, -1).transpose()\n\n# Create a collection of N 2-dimensional index points that will represent our\n# latent embeddings of the data. (Lawrence, 2004) prescribes initializing these\n# with PCA, but a random initialization actually gives not-too-bad results, so\n# we use this for simplicity. For a fun exercise, try doing the\n# PCA-initialization yourself!\ninit_ = np.random.normal(size=(N, 2))\nlatent_index_points = tf.Variable(init_, name='latent_index_points')",
"Construct model and training ops",
"# Create our kernel and GP distribution\nEPS = np.finfo(np.float64).eps\n\ndef create_kernel():\n amplitude = tf.math.softplus(EPS + unconstrained_amplitude)\n length_scale = tf.math.softplus(EPS + unconstrained_length_scale)\n kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)\n return kernel\n\ndef loss_fn():\n observation_noise_variance = tf.math.softplus(\n EPS + unconstrained_observation_noise)\n gp = tfd.GaussianProcess(\n kernel=create_kernel(),\n index_points=latent_index_points,\n observation_noise_variance=observation_noise_variance)\n log_probs = gp.log_prob(observations_, name='log_prob')\n return -tf.reduce_mean(log_probs)\n\ntrainable_variables = [unconstrained_amplitude,\n unconstrained_length_scale,\n unconstrained_observation_noise,\n latent_index_points]\n\noptimizer = tf.optimizers.Adam(learning_rate=1.0)\n\[email protected](autograph=False, jit_compile=True)\ndef train_model():\n with tf.GradientTape() as tape:\n loss_value = loss_fn()\n grads = tape.gradient(loss_value, trainable_variables)\n optimizer.apply_gradients(zip(grads, trainable_variables))\n return loss_value",
"Train and plot the resulting latent embeddings",
"# Initialize variables and train!\nnum_iters = 100\nlog_interval = 20\nlips = np.zeros((num_iters, N, 2), np.float64)\nfor i in range(num_iters):\n loss = train_model()\n lips[i] = latent_index_points.numpy()\n if i % log_interval == 0 or i + 1 == num_iters:\n print(\"Loss at step %d: %f\" % (i, loss))",
"Plot results",
"# Plot the latent locations before and after training\nplt.figure(figsize=(7, 7))\nplt.title(\"Before training\")\nplt.grid(False)\nplt.scatter(x=init_[:, 0], y=init_[:, 1],\n c=y_train[:N], cmap=plt.get_cmap('Paired'), s=50)\nplt.show()\n\nplt.figure(figsize=(7, 7))\nplt.title(\"After training\")\nplt.grid(False)\nplt.scatter(x=lips[-1, :, 0], y=lips[-1, :, 1],\n c=y_train[:N], cmap=plt.get_cmap('Paired'), s=50)\nplt.show()",
"Construct predictive model and sampling ops",
"# We'll draw samples at evenly spaced points on a 10x10 grid in the latent\n# input space. \nsample_grid_points = 10\ngrid_ = np.linspace(-4, 4, sample_grid_points).astype(np.float64)\n# Create a 10x10 grid of 2-vectors, for a total shape [10, 10, 2]\ngrid_ = np.stack(np.meshgrid(grid_, grid_), axis=-1)\n\n# This part's a bit subtle! What we defined above was a batch of 784 (=28x28)\n# independent GP distributions over the input space. Each one corresponds to a\n# single pixel of an MNIST image. Now what we'd like to do is draw 100 (=10x10)\n# *independent* samples, each one separately conditioned on all the observations\n# as well as the learned latent input locations above.\n#\n# The GP regression model below will define a batch of 784 independent\n# posteriors. We'd like to get 100 independent samples each at a different\n# latent index point. We could loop over the points in the grid, but that might\n# be a bit slow. Instead, we can vectorize the computation by tacking on *even\n# more* batch dimensions to our GaussianProcessRegressionModel distribution.\n# In the below grid_ shape, we have concatentaed\n# 1. batch shape: [sample_grid_points, sample_grid_points, 1]\n# 2. number of examples: [1]\n# 3. number of latent input dimensions: [2]\n# The `1` in the batch shape will broadcast with 784. The final result will be\n# samples of shape [10, 10, 784, 1]. The `1` comes from the \"number of examples\"\n# and we can just `np.squeeze` it off.\ngrid_ = grid_.reshape(sample_grid_points, sample_grid_points, 1, 1, 2)\n\n# Create the GPRegressionModel instance which represents the posterior\n# predictive at the grid of new points.\ngprm = tfd.GaussianProcessRegressionModel(\n kernel=create_kernel(),\n # Shape [10, 10, 1, 1, 2]\n index_points=grid_,\n # Shape [1000, 2]. 1000 2 dimensional vectors.\n observation_index_points=latent_index_points,\n # Shape [784, 1000]. A batch of 784 1000-dimensional observations.\n observations=observations_)",
"Draw samples conditioned on the data and latent embeddings\nWe sample at 100 points on a 2-d grid in the latent space.",
"samples = gprm.sample()\n\n# Plot the grid of samples at new points. We do a bit of tweaking of the samples\n# first, squeezing off extra 1-shapes and normalizing the values.\nsamples_ = np.squeeze(samples.numpy())\nsamples_ = ((samples_ -\n samples_.min(-1, keepdims=True)) /\n (samples_.max(-1, keepdims=True) -\n samples_.min(-1, keepdims=True)))\nsamples_ = samples_.reshape(sample_grid_points, sample_grid_points, 28, 28)\nsamples_ = samples_.transpose([0, 2, 1, 3])\nsamples_ = samples_.reshape(28 * sample_grid_points, 28 * sample_grid_points)\nplt.figure(figsize=(7, 7))\nax = plt.subplot()\nax.grid(False)\nax.imshow(-samples_, interpolation='none', cmap='Greys')\nplt.show()",
"Conclusion\nWe've taken a brief tour of the Gaussian process latent variable model, and\nshown how we can implement it in just a few lines of TF and TF Probability\ncode."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
feststelltaste/software-analytics | notebooks/Developers' Habits (Linux Edition).ipynb | gpl-3.0 | [
"Introduction\nThe nice thing about reproducible data analysis (like I'm trying to do it here on my blog) is, well, that you can quickly reproduce or even replicate an analysis.\nSo, in this blog post/notebook, I transfer the analysis of \"Developers' Habits (IntelliJ Edition)\" to another project: The famous open-source operating system Linux. Again, we want to take a look at how much information you can extract from a simple Git log output. This time we want to know\n\nwhere the developers come from\non which weekdays the developers work\nwhat the normal working hours are and\nif there is any sight of overtime periods.\n\nBecause we use an open approach for our analysis, we are able to respond to newly created insights. Again, we use Pandas as data analysis toolkit to accomplish these tasks and execute our code in a Juypter notebook (find the original on GitHub. We also see some refactorings by leveraging Pandas' date functionality a little bit more.\nSo let's start!\nGaining the data\nI've already described the details on how to get the necessary data in my previous blog post. What we have at hand is a nice file with the following contents:\n1514531161 -0800 Linus Torvalds [email protected]\n1514489303 -0500 David S. Miller [email protected]\n1514487644 -0800 Tom Herbert [email protected]\n1514487643 -0800 Tom Herbert [email protected]\n1514482693 -0500 Willem de Bruijn [email protected]\n...\nIt includes the UNIX timestamp (in seconds since epoch), a whitespace, the time zone (where the authors live in), a tab separator, the name of the author, a tab and the email address of the author. The whole log shows 13 years of Linux development that is available on GitHub repository mirror.\nWrangling the raw data\nWe import the data by using Pandas' read_csv function and the appropriate parameters. We copy only the needed data from the raw dataset into the new DataFrame git_authors.",
"import pandas as pd\n\nraw = pd.read_csv(\n r'../../linux/git_timestamp_author_email.log',\n sep=\"\\t\",\n encoding=\"latin-1\",\n header=None,\n names=['unix_timestamp', 'author', 'email'])\n\n# create separate columns for time data\nraw[['timestamp', 'timezone']] = raw['unix_timestamp'].str.split(\" \", expand=True)\n# convert timestamp data\nraw['timestamp'] = pd.to_datetime(raw['timestamp'], unit=\"s\")\n# add hourly offset data\nraw['timezone_offset'] = pd.to_numeric(raw['timezone']) / 100.0\n# calculate the local time\nraw[\"timestamp_local\"] = raw['timestamp'] + pd.to_timedelta(raw['timezone_offset'], unit='h')\n\n# filter out wrong timestamps\nraw = raw[\n (raw['timestamp'] >= raw.iloc[-1]['timestamp']) &\n (raw['timestamp'] <= pd.to_datetime('today'))]\n\ngit_authors = raw[['timestamp_local', 'timezone', 'author']].copy()\ngit_authors.head()",
"Refining the dataset\nIn this section, we add some additional time-based information to the DataFrame to accomplish our tasks.\nAdding weekdays\nFirst, we add the information about the weekdays based on the weekday_name information of the timestamp_local column. Because we want to preserve the order of the weekdays, we convert the weekday entries to a Categorial data type, too. The order of the weekdays is taken from the calendar module.\nNote: We can do this so easily because we have such a large amount of data where every weekday occurs. If we can't be sure to have a continuous sequence of weekdays, we have to use something like the pd.Grouper method to fill in missing weekdays.",
"import calendar\n\ngit_authors['weekday'] = git_authors[\"timestamp_local\"].dt.weekday_name\ngit_authors['weekday'] = pd.Categorical(\n git_authors['weekday'], \n categories=calendar.day_name,\n ordered=True)\ngit_authors.head()",
"Adding working hours\nFor the working hour analysis, we extract the hour information from the timestamp_local column. \nNote: Again, we assume that every hour is in the dataset.",
"git_authors['hour'] = git_authors['timestamp_local'].dt.hour\ngit_authors.head()",
"Analyzing the data\nWith the prepared git_authors DataFrame, we are now able to deliver insights into the past years of development.\nDevelopers' timezones\nFirst, we want to know where the developers roughly live. For this, we plot the values of the timezone columns as a pie chart.",
"%matplotlib inline\ntimezones = git_authors['timezone'].value_counts()\ntimezones.plot(\n kind='pie',\n figsize=(7,7),\n title=\"Developers' timezones\",\n label=\"\")",
"Result\nThe majority of the developers' commits come from the time zones +0100, +0200 and -0700. With most commits coming probably from the West Coast of the USA, this might just be an indicator that Linus Torvalds lives there ;-) . But there are also many commits from developers within Western Europe.\nWeekdays with the most commits\nNext, we want to know on which days the developers are working during the week. We count by the weekdays but avoid sorting the results to keep the order along with our categories. We plot the result as a standard bar chart.",
"ax = git_authors['weekday'].\\\n value_counts(sort=False).\\\n plot(\n kind='bar',\n title=\"Commits per weekday\")\nax.set_xlabel('weekday')\nax.set_ylabel('# commits')",
"Result \nMost of the commits occur during normal working days with a slight peak on Wednesday. There are relatively few commits happening on weekends.\nWorking behavior of the main contributor\nIt would be very interesting and easy to see when Linus Torvalds (the main contributor to Linux) is working. But we won't do that because the yet unwritten codex of Software Analytics does tell us that it's not OK to analyze a single person's behavior – especially when such an analysis is based on an uncleaned dataset as we have it here.\nUsual working hours\nTo find out about the working habits of the contributors, we group the commits by hour and count the entries (in this case we choose author) to see if there are any irregularities. Again, we plot the results with a standard bar chart.",
"ax = git_authors\\\n .groupby(['hour'])['author']\\\n .count().plot(kind='bar')\nax.set_title(\"Distribution of working hours\")\nax.yaxis.set_label_text(\"# commits\")\nax.xaxis.set_label_text(\"hour\")",
"Result\nThe distribution of the working hours is interesting:\n- First, we can clearly see that there is a dent around 12:00. So this might be an indicator that developers have lunch at regular times (which is a good thing IMHO).\n- Another not so typical result is the slight rise after 20:00. This could be interpreted as the development activity of free-time developers that code for Linux after their day-time job. \n- Nevertheless, most of the developers seem to get a decent amount of sleep indicated by low commit activity from 1:00 to 7:00.\nSigns of overtime\nAt last, we have a look at possible overtime periods by creating a simple model. We first group all commits on a weekly basis per authors. As grouping function, we choose max() to get the hour where each author committed at latest per week.",
"latest_hour_per_week = git_authors.groupby(\n [\n pd.Grouper( key='timestamp_local', freq='1w'), \n 'author'\n ]\n )[['hour']].max()\n\nlatest_hour_per_week.head()",
"Next, we want to know if there were any stressful time periods that forced the developers to work overtime over a longer period of time. We calculate the mean of all late stays of all authors for each week.",
"mean_latest_hours_per_week = \\\n latest_hour_per_week \\\n .reset_index().groupby('timestamp_local').mean()\nmean_latest_hours_per_week.head()",
"We also create a trend line that shows how the contributors are working over the span of the past years. We use the polyfit function from numpy for this which needs a numeric index to calculate the polynomial coefficients later on. We then calculate the coefficients with a three-dimensional polynomial based on the hours of the mean_latest_hours_per_week DataFrame. For visualization, we decrease the number of degrees and calculate the y-coordinates for all weeks that are encoded in numeric_index. We store the result in the mean_latest_hours_per_week DataFrame.",
"import numpy as np\n\nnumeric_index = range(0, len(mean_latest_hours_per_week))\ncoefficients = np.polyfit(numeric_index, mean_latest_hours_per_week.hour, 3)\npolynomial = np.poly1d(coefficients)\nys = polynomial(numeric_index)\nmean_latest_hours_per_week['trend'] = ys\nmean_latest_hours_per_week.head()",
"At last, we plot the hour results of the mean_latest_hours_per_week DataFrame as well as the trend data in one line plot.",
"ax = mean_latest_hours_per_week[['hour', 'trend']].plot(\n figsize=(10, 6), \n color=['grey','blue'], \n title=\"Late hours per weeks\")\nax.set_xlabel(\"time\")\nax.set_ylabel(\"hour\")",
"Result\nWe see no sign of significant overtime periods over 13 years of Linux development. Shortly after the creation of the Git mirror repository, there might have been a time with some irregularities. But overall, there are no signs of death marches. It seems that the Linux development team has established a stable development process.\nClosing remarks\nAgain, we've seen that various metrics and results can be easily created from a simple Git log output file. With Pandas, it's possible to get to know the habits of the developers of software projects. Thanks to Jupyter's open notebook approach, we can easily adapt existing analysis and add situation-specific information to it as we go along."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
laurentperrinet/Khoei_2017_PLoSCB | notebooks/SI_controls.ipynb | mit | [
"%load_ext autoreload\n%autoreload 2\n%cd -q ../scripts/\nfrom default_param import *\n\n%matplotlib inline",
"FLE\nIn this script the CONDENSATION is done for rightward and leftward motion of a dot stimulus, at different levels of noise. also for flashing stimuli needed for simulation of flash initiated and flash_terminated FLEs. \nThe aim is to generate generate (Berry et al 99)'s figure 2: shifting RF position in the direction of motion.\nInitialization of notebook",
"%%writefile experiment_SI_controls.py\n\"\"\"\nA bunch of control runs\n\n\"\"\"\nimport MotionParticlesFLE as mp\ngen_dot = mp.generate_dot\nimport numpy as np\nimport os\nfrom default_param import *\n\nimage = {}\nexperiment = 'SI'\nN_scan = 5\nbase = 10.\n\n#mp.N_trials = 4\nfor stimulus_tag, im_arg in zip(stim_labels, stim_args):\n#for stimulus_tag, im_arg in zip(stim_labels[1], stim_args[1]):\n #for D_x, D_V, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], ['MBP', 'PBP']):\n for D_x, D_V, label in zip([mp.D_x], [mp.D_V], ['MBP']):\n im_arg.update(D_V=D_V, D_x=D_x)\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n D_x=im_arg['D_x']*np.logspace(-2, 2, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),\n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n D_V=im_arg['D_V']*np.logspace(-2, 2, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n sigma_motion=mp.sigma_motion*np.logspace(-1., 1., N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n K_motion=mp.K_motion*np.logspace(-1., 1., N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n dot_size=im_arg['dot_size']*np.logspace(-1., 1., N_scan, base=base))\n \n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),\n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n sigma_I=mp.sigma_I*np.logspace(-1, 1, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),\n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n im_noise=mp.im_noise*np.logspace(-1, 1, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),\n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n sigma_noise=mp.sigma_noise*np.logspace(-1, 1, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),\n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n p_epsilon=mp.p_epsilon*np.logspace(-1, 1, N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + 
'-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n v_init=mp.v_init*np.logspace(-1., 1., N_scan, base=base))\n\n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n v_prior=np.logspace(-.3, 5., N_scan, base=base))\n \n _ = mp.figure_image_variable(\n os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label), \n N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,\n fixed_args=im_arg, \n resample=np.linspace(0.1, 1., N_scan, endpoint=True))\n \n\n%run experiment_SI_controls.py",
"TODO : show results with a widget",
"!git commit -m' SI controls ' ../notebooks/SI_controls* ../scripts/experiment_SI_controls*"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
grokkaine/biopycourse | day3/DL1_FFN.ipynb | cc0-1.0 | [
"Deep learning\nDriven by practicality as we are for the purpose of this course, we will dwelve directly into an example of using DL. We will gradually learn more things as we do things.\nMost developed deep learning APIs:\n- Tensorflow\n - Keras\n- PyTorch\nNN essentials\n\ntask: classification, handwritten\nmethod: multi-layered perceptron\nconcepts: NN architecture and training loop\npython libraries: native, keras, tensorflow, pytorch\ntask: text classification",
"import tensorflow as tf\n\nfrom tensorflow.keras.datasets import mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\nprint(train_images.shape)\nprint(train_labels.shape)\n\n# reshape (flatten) and scale images\ntrain_images = train_images.reshape((60000, 28 * 28))\ntrain_images = train_images.astype('float32') / 255\ntest_images = test_images.reshape((10000, 28 * 28))\ntest_images = test_images.astype('float32') / 255\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimage=train_images[0].reshape(28, 28)\nplt.imshow(image)\nplt.show()\nprint(\"Label:\", train_labels[0])\n\n# convert labels to one hot encoding\nfrom tensorflow.keras.utils import to_categorical\ntrain_labels = to_categorical(train_labels)\ntest_labels = to_categorical(test_labels)\n\ntrain_labels[0]",
"Multi-layered perceptron (feed forward network)\n\nEach hiden layer is formed by neurons called perceptrons\nA perceptron is a binary linear classifier\ninputs: a flat array $x_i$\none output per neuron j: $y_j$\na transformation of input into output (activation function):\nlinear separator\nsigmoid function\n\n\n\n\n\n$z_j= \\sum_i {w_{ij} x_i} + b_j$\n$y_j = f(z_j) = \\frac{1}{1 + e^{-z_j}}$",
"from IPython.display import Image\nImage(url= \"../img/perceptron.png\", width=400, height=400)",
"input layer: sequential (flattened) image\nhidden layers: perceptrons\noutput layer: softmax",
"from IPython.display import Image\nImage(url= \"../img/ffn.png\", width=400, height=400)\n\nfrom tensorflow.keras import models\nfrom tensorflow.keras import layers\n\n# defining the NN structure\nnetwork = models.Sequential()\nnetwork.add(layers.Dense(512, activation='sigmoid', input_shape=(28 * 28,)))\nnetwork.add(layers.Dense(512, activation='sigmoid', input_shape=(512,)))\nnetwork.add(layers.Dense(10, activation='softmax'))\nnetwork.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nnetwork.summary()",
"Learning process\nNNs are supervised learning structures!\n- forward propagation: all training data is fed to the network and y is predicted\n- estimate the loss: difference between prediction and label\n- backpropagation: the loss information is propagated backwards layer by layer, and the neuron weights are adjusted\n- global optimization: the parameters (weights and biases) must be adjusted in such a way that the loss function presented above is minimized.",
"from IPython.display import Image\nImage(url= \"../img/NN_learning.png\", width=400, height=400)",
"Gradient descent (main optimization technique)\nThe weights in small increments with the help of the calculation of the derivative (or gradient) of the loss function, which allows us to see in which direction “to descend” towards the global minimum. Most optimizers are based on gradient descent, an algorithm that is very eficient on GPUs today, but gives local optima.\nEpochs and batches. The optimization is done in general in batches of data in the successive iterations (epochs) of all the dataset that we pass to the network in each iteration. \"epochs\" are complete runs through the dataset. Batches are used because the whole dataset is hard to be passed through the network at once.\n\n- 469 number of batches\n128 * 469 ~= 60000 images (number of samples)",
"network.fit(train_images, train_labels, epochs=5, batch_size=128)\n\ntest_loss, test_acc = network.evaluate(test_images, test_labels)\nprint(test_loss, test_acc)",
"Observations:\n- slightly smaller accuracy on the test data compared to training data (model overfits on the training data)\nQuestions:\n- Why do we need several epochs?\n- What is the main computer limitation when it comes to batches?\n- How many epochs are needed, and what is the danger associated with using too many or too few?\nReading:\n- https://medium.com/onfido-tech/machine-learning-101-be2e0a86c96a\nRun a prediction:",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nprediction=network.predict(test_images[0:9])\ny_true_cls = np.argmax(test_labels[0:9], axis=1)\ny_pred_cls = np.argmax(prediction, axis=1)\n\nfig, axes = plt.subplots(3, 3, figsize=(8,8))\nfig.subplots_adjust(hspace=0.5, wspace=0.5)\n\nfor i, ax in enumerate(axes.flat):\n ax.imshow(test_images[i].reshape(28,28), cmap = 'BuGn')\n xlabel = \"True: {0}, Pred: {1}\".format(y_true_cls[i], y_pred_cls[i]) \n ax.set_xlabel(xlabel)\n ax.set_xticks([])\n ax.set_yticks([])\n\nplt.show()",
"Historical essentials\n\nDeep learning, from an algorithmic perspective, is the application of advanced multi-layered filters to learn hidden features in data representation. \nMany of the methods that are used today in DL, such as most neural network types (and not only), went through a 20 years long pause due to the fact that the computing machines avalable at the era were too slow to produce wanted results. \n\nIt was several things that precipitated their return in 2010:\n- Graphical processors. A GPU has thousands of cores that are specialized in concomitant linear operations. This provided the infrastructure on which \"deep\" algorithms perform the best.\n- The maturity of cloud computing. This enables third parties to use DL methodologies at scale, and with small operating costs.\n- Big data. Most AI needs models to be trained on a lot of data, thus AI needs a sufficient level of data availability. The massive acumulation of data (not only in biology) is a very recent phenomenon.\nBook reccomendation:\n- http://www.deeplearningbook.org/ (free to read)\nText classification\nThe purpose is to cathegorize films into good or bad based on their reviews. Data is vectorized into binary.\nlayer activation\nWhat happens during layer activation? Basically a set of tensor operations are being performed. A simplistic way to understand this is operations done on array of matrices, while the atomic operation would be:\noutput = relu(dot(W, input) + b)\n, where the weight matrix W shape is (input_dim (10000), 16) and b is a bias term. In linear algebra terms, this will project the input data onto a 16 dimensional space. The more dimensions, the more features, the more confusion, and more computing cost BUT also more complex representations.\nTask:\n- Perform sentiment analysis using the code below!\n- Plot the accuracy vs loss in both the training and validation data, on the history.history dictionary. Use more epochs. What do you notice? How many epochs do you think you need? What if you monitor for 100000 epochs?\n- We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.\n- Adjust the learning rate.\n- Try to use layers with more hidden units or less hidden units: 32 units, 64 units...\n- Try to use the mse loss function instead of binary_crossentropy.\n- Try to use the tanh activation (an activation that was popular in the early days of neural networks) instead of relu.",
"import numpy as np\nfrom keras.datasets import imdb\nfrom keras import models\nfrom keras import layers\nfrom keras import optimizers\nfrom keras import losses\nfrom keras import metrics\n\n(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)\nprint(max([max(sequence) for sequence in train_data]))\n\ndef vectorize_sequences(sequences, dimension=10000):\n results = np.zeros((len(sequences), dimension))\n for i, sequence in enumerate(sequences):\n results[i, sequence] = 1.\n return results\n\nx_train = vectorize_sequences(train_data)\nx_test = vectorize_sequences(test_data)\ny_train = np.asarray(train_labels).astype('float32')\ny_test = np.asarray(test_labels).astype('float32')\n\nmodel = models.Sequential()\nmodel.add(layers.Dense(16, activation='relu', input_shape=(10000,)))\nmodel.add(layers.Dense(16, activation='relu'))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n\nmodel.compile(optimizer=optimizers.RMSprop(lr=0.001),\n loss=losses.binary_crossentropy,\n metrics=[metrics.binary_accuracy])\n\nx_val = x_train[:10000]\npartial_x_train = x_train[10000:]\ny_val = y_train[:10000]\npartial_y_train = y_train[10000:]\n\nhistory = model.fit(partial_x_train,\n partial_y_train,\n epochs=5,\n batch_size=512,\n validation_data=(x_val, y_val))\n\np = model.predict(x_test)\nprint(history.history)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Luke035/dlnd-lessons | batch-norm/Batch_Normalization_Lesson.ipynb | mit | [
"Batch Normalization – Lesson\n\nWhat is it?\nWhat are it's benefits?\nHow do we add it to a network?\nLet's see it work!\nWhat are you hiding?\n\nWhat is Batch Normalization?<a id='theory'></a>\nBatch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called \"batch\" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.\nWhy might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.\nFor example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. \nLikewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.\nWhen you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).\nBeyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.\nBenefits of Batch Normalization<a id=\"benefits\"></a>\nBatch normalization optimizes network training. It has been shown to have several benefits:\n1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. \n2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. \n3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.\n4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. 
Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again.\n5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.\n6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. \n7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.\nBatch Normalization in TensorFlow<a id=\"implementation_1\"></a>\nThis section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. \nThe following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.",
"# Import necessary packages\nimport tensorflow as tf\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Import MNIST data so we have something for our experiments\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)",
"Neural network classes for testing\nThe following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.\nAbout the code:\n\nThis class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.\nIt's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.",
"class NeuralNet:\n def __init__(self, initial_weights, activation_fn, use_batch_norm):\n \"\"\"\n Initializes this object, creating a TensorFlow graph using the given parameters.\n \n :param initial_weights: list of NumPy arrays or Tensors\n Initial values for the weights for every layer in the network. We pass these in\n so we can create multiple networks with the same starting weights to eliminate\n training differences caused by random initialization differences.\n The number of items in the list defines the number of layers in the network,\n and the shapes of the items in the list define the number of nodes in each layer.\n e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would \n create a network with 784 inputs going into a hidden layer with 256 nodes,\n followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activate function on the output layer.\n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param use_batch_norm: bool\n Pass True to create a network that uses batch normalization; False otherwise\n Note: this network will not use batch normalization on layers that do not have an\n activation function.\n \"\"\"\n # Keep track of whether or not this network uses batch normalization.\n self.use_batch_norm = use_batch_norm\n self.name = \"With Batch Norm\" if use_batch_norm else \"Without Batch Norm\"\n\n # Batch normalization needs to do different calculations during training and inference,\n # so we use this placeholder to tell the graph which behavior to use.\n self.is_training = tf.placeholder(tf.bool, name=\"is_training\")\n\n # This list is just for keeping track of data we want to plot later.\n # It doesn't actually have anything to do with neural nets or batch normalization.\n self.training_accuracies = []\n\n # Create the network graph, but it will not actually have any real values until after you\n # call train or test\n self.build_network(initial_weights, activation_fn)\n \n def build_network(self, initial_weights, activation_fn):\n \"\"\"\n Build the graph. The graph still needs to be trained via the `train` method.\n \n :param initial_weights: list of NumPy arrays or Tensors\n See __init__ for description. \n :param activation_fn: Callable\n See __init__ for description. \n \"\"\"\n self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])\n layer_in = self.input_layer\n for weights in initial_weights[:-1]:\n layer_in = self.fully_connected(layer_in, weights, activation_fn) \n self.output_layer = self.fully_connected(layer_in, initial_weights[-1])\n \n def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. 
Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n # Since this class supports both options, only use batch normalization when\n # requested. However, do not use it on the final layer, which we identify\n # by its lack of an activation function.\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n # (See later in the notebook for more details.)\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n # Apply batch normalization to the linear combination of the inputs and weights\n batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\n\n # Now apply the activation function, *after* the normalization.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n\n def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):\n \"\"\"\n Trains the model on the MNIST training dataset.\n \n :param session: Session\n Used to run training graph operations.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param training_batches: int\n Number of batches to train.\n :param batches_per_sample: int\n How many batches to train before sampling the validation accuracy.\n :param save_model_as: string or None (default None)\n Name to use if you want to save the trained model.\n \"\"\"\n # This placeholder will store the target labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define loss and optimizer\n cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))\n \n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n if self.use_batch_norm:\n # If we don't include the update ops as dependencies on the train step, the \n # tf.layers.batch_normalization layers won't update their population statistics,\n # which will cause the model to fail at inference time\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n else:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n \n # Train for the appropriate number of batches. (tqdm is only for a nice timing display)\n for i in tqdm.tqdm(range(training_batches)):\n # We use batches of 60 just because the original paper did. 
You can use any size batch you like.\n batch_xs, batch_ys = mnist.train.next_batch(60)\n session.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\n \n # Periodically test accuracy against the 5k validation images and store it for plotting later.\n if i % batches_per_sample == 0:\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n self.training_accuracies.append(test_accuracy)\n\n # After training, report accuracy against test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))\n\n # If you want to use this model later for inference instead of having to retrain it,\n # just construct it with the same parameters and then pass this file to the 'test' function\n if save_model_as:\n tf.train.Saver().save(session, save_model_as)\n\n def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):\n \"\"\"\n Trains a trained model on the MNIST testing dataset.\n\n :param session: Session\n Used to run the testing graph operations.\n :param test_training_accuracy: bool (default False)\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n Note: in real life, *always* perform inference using the population mean and variance.\n This parameter exists just to support demonstrating what happens if you don't.\n :param include_individual_predictions: bool (default True)\n This function always performs an accuracy test against the entire test set. But if this parameter\n is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n and accuracy.\n :param restore_from: string or None (default None)\n Name of a saved model if you want to test with previously saved weights.\n \"\"\"\n # This placeholder will store the true labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n # If provided, restore from a previously saved model\n if restore_from:\n tf.train.Saver().restore(session, restore_from)\n\n # Test against all of the MNIST test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,\n labels: mnist.test.labels,\n self.is_training: test_training_accuracy})\n print('-'*75)\n print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))\n\n # If requested, perform tests predicting individual values rather than batches\n if include_individual_predictions:\n predictions = []\n correct = 0\n\n # Do 200 predictions, 1 at a time\n for i in range(200):\n # This is a normal prediction using an individual test case. 
However, notice\n # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.\n # Remember that will tell it whether it should use the batch mean & variance or\n # the population estimates that were calucated while training the model.\n pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],\n feed_dict={self.input_layer: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n self.is_training: test_training_accuracy})\n correct += corr\n\n predictions.append(pred[0])\n\n print(\"200 Predictions:\", predictions)\n print(\"Accuracy on 200 samples:\", correct/200)\n",
"There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.\nWe add batch normalization to layers inside the fully_connected function. Here are some important points about that code:\n1. Layers with batch normalization do not include a bias term.\n2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)\n3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.\n4. We add the normalization before calling the activation function.\nIn addition to that code, the training step is wrapped in the following with statement:\npython\nwith tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\nThis line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.\nFinally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nWe'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.\nBatch Normalization Demos<a id='demos'></a>\nThis section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. \nWe'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.\nCode to support testing\nThe following two functions support the demos we run in the notebook. \nThe first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see it that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.\nThe second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really imporant thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.",
"def plot_training_accuracies(*args, **kwargs):\n \"\"\"\n Displays a plot of the accuracies calculated during training to demonstrate\n how many iterations it took for the model(s) to converge.\n \n :param args: One or more NeuralNet objects\n You can supply any number of NeuralNet objects as unnamed arguments \n and this will display their training accuracies. Be sure to call `train` \n the NeuralNets before calling this function.\n :param kwargs: \n You can supply any named parameters here, but `batches_per_sample` is the only\n one we look for. It should match the `batches_per_sample` value you passed\n to the `train` function.\n \"\"\"\n fig, ax = plt.subplots()\n\n batches_per_sample = kwargs['batches_per_sample']\n \n for nn in args:\n ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),\n nn.training_accuracies, label=nn.name)\n ax.set_xlabel('Training steps')\n ax.set_ylabel('Accuracy')\n ax.set_title('Validation Accuracy During Training')\n ax.legend(loc=4)\n ax.set_ylim([0,1])\n plt.yticks(np.arange(0, 1.1, 0.1))\n plt.grid(True)\n plt.show()\n\ndef train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):\n \"\"\"\n Creates two networks, one with and one without batch normalization, then trains them\n with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.\n \n :param use_bad_weights: bool\n If True, initialize the weights of both networks to wildly inappropriate weights;\n if False, use reasonable starting weights.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activate function on the output layer.\n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param training_batches: (default 50000)\n Number of batches to train.\n :param batches_per_sample: (default 500)\n How many batches to train before sampling the validation accuracy.\n \"\"\"\n # Use identical starting weights for each network to eliminate differences in\n # weight initialization as a cause for differences seen in training performance\n #\n # Note: The networks will use these weights to define the number of and shapes of\n # its layers. The original batch normalization paper used 3 hidden layers\n # with 100 nodes in each, followed by a 10 node output layer. 
These values\n # build such a network, but feel free to experiment with different choices.\n # However, the input size should always be 784 and the final output should be 10.\n if use_bad_weights:\n # These weights should be horrible because they have such a large standard deviation\n weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,10), scale=5.0).astype(np.float32)\n ]\n else:\n # These weights should be good because they have such a small standard deviation\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n # Just to make sure the TensorFlow's default graph is empty before we start another\n # test, because we don't bother using different graphs or scoping and naming \n # elements carefully in this sample code.\n tf.reset_default_graph()\n\n # build two versions of same network, 1 without and 1 with batch normalization\n nn = NeuralNet(weights, activation_fn, False)\n bn = NeuralNet(weights, activation_fn, True)\n \n # train and test the two models\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n nn.train(sess, learning_rate, training_batches, batches_per_sample)\n bn.train(sess, learning_rate, training_batches, batches_per_sample)\n \n nn.test(sess)\n bn.test(sess)\n \n # Display a graph of how validation accuracies changed during training\n # so we can compare how the models trained and when they converged\n plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)\n",
"Comparisons between identical networks, with and without batch normalization\nThe next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.\nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.relu)",
"As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.\nIf you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)\nThe following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.",
"train_and_test(False, 0.01, tf.nn.relu, 2000, 50)",
"As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)\nIn the above example, you should also notice that the networks trained fewer batches per second then what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.\nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.sigmoid)",
"With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.relu)",
"Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.\nThe next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.",
"train_and_test(False, 1, tf.nn.relu)",
"In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.sigmoid)",
"In this example, we switched to a sigmoid activation function. It appears to hande the higher learning rate well, with both networks achieving high accuracy.\nThe cell below shows a similar pair of networks trained for only 2000 iterations.",
"train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)",
"As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.relu)",
"With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.sigmoid)",
"Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.\nHowever, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.",
"train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)",
"In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. \nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.relu)",
"As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. \nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.sigmoid)",
"Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id=\"successful_example_lr_1\"></a>",
"train_and_test(True, 1, tf.nn.relu)",
"The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.sigmoid)",
"Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time tro train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id=\"successful_example_lr_2\"></a>",
"train_and_test(True, 2, tf.nn.relu)",
"We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.sigmoid)",
"In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.\nFull Disclosure: Batch Normalization Doesn't Fix Everything\nBatch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.\nThis section includes two examples that show runs when batch normalization did not help at all.\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.relu)",
"When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.relu)",
"When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. \nNote: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.\nBatch Normalization: A Detailed Look<a id='implementation_2'></a>\nThe layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. \nIn order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.\nWe represent the average as $\\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ \n$$\n\\mu_B \\leftarrow \\frac{1}{m}\\sum_{i=1}^m x_i\n$$\nWe then need to calculate the variance, or mean squared deviation, represented as $\\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\\mu_B$), which gives us what's called the \"deviation\" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.\n$$\n\\sigma_{B}^{2} \\leftarrow \\frac{1}{m}\\sum_{i=1}^m (x_i - \\mu_B)^2\n$$\nOnce we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)\n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAbove, we said \"(almost) standard deviation\". That's because the real standard deviation for the batch is calculated by $\\sqrt{\\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. \nWhy increase the variance? 
Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is, on average, higher than the variance computed within any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. \nAt this point, we have a normalized value, represented as $\\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\\gamma$, and then add a beta value, $\\beta$. Both $\\gamma$ and $\\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. \n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.\nIn NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nThe next section shows you how to implement the math directly. \nBatch normalization without the tf.layers package\nOur implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.\nHowever, if you would like to implement batch normalization at a lower level, the following code shows you how.\nIt uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.\n1) You can replace the fully_connected function in the NeuralNet class with the code below (shown after a short NumPy aside) and everything in NeuralNet will still work like it did before.",
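"Before moving on to that code, here is a small plain-NumPy sketch of the same four equations, included purely as an illustration: the function name batch_norm_numpy and the random data are made up for this example and are not part of the NeuralNet class. The statistics are computed per node, i.e. over the batch axis.\npython\nimport numpy as np\n\ndef batch_norm_numpy(x, gamma, beta, epsilon=1e-3):\n    # x has shape (batch_size, num_nodes); statistics are per node, over the batch axis\n    mu_B = x.mean(axis=0)                          # batch mean\n    var_B = x.var(axis=0)                          # batch variance (mean squared deviation)\n    x_hat = (x - mu_B) / np.sqrt(var_B + epsilon)  # normalize with the epsilon-padded variance\n    return gamma * x_hat + beta                    # scale and shift\n\nx = np.random.normal(size=(60, 100))\ny = batch_norm_numpy(x, gamma=np.ones(100), beta=np.zeros(100))\nprint(y.mean(axis=0)[:3], y.std(axis=0)[:3])       # means near 0, standard deviations just under 1 because of epsilon\nWith that aside out of the way, the drop-in replacement for fully_connected follows.",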
"def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n num_out_nodes = initial_weights.shape[-1]\n\n # Batch normalization adds additional trainable variables: \n # gamma (for scaling) and beta (for shifting).\n gamma = tf.Variable(tf.ones([num_out_nodes]))\n beta = tf.Variable(tf.zeros([num_out_nodes]))\n\n # These variables will store the mean and variance for this layer over the entire training set,\n # which we assume represents the general population distribution.\n # By setting `trainable=False`, we tell TensorFlow not to modify these variables during\n # back propagation. Instead, we will assign values to these variables ourselves. \n pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)\n\n # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.\n # This is the default value TensorFlow uses.\n epsilon = 1e-3\n\n def batch_norm_training():\n # Calculate the mean and variance for the data coming out of this layer's linear-combination step.\n # The [0] defines an array of axes to calculate over.\n batch_mean, batch_variance = tf.nn.moments(linear_output, [0])\n\n # Calculate a moving average of the training data's mean and variance while training.\n # These will be used during inference.\n # Decay should be some number less than 1. 
tf.layers.batch_normalization uses the parameter\n # \"momentum\" to accomplish this and defaults it to 0.99\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n\n # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n # This is necessary because the those two operations are not actually in the graph\n # connecting the linear_output and batch_normalization layers, \n # so TensorFlow would otherwise just skip them.\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def batch_norm_inference():\n # During inference, use the our estimated population mean and variance to normalize the layer\n return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\n\n # Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute \n # the operation returned from `batch_norm_training`; otherwise it will execute the graph\n # operation returned from `batch_norm_inference`.\n batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)\n \n # Pass the batch-normalized layer output through the activation function.\n # The literature states there may be cases where you want to perform the batch normalization *after*\n # the activation function, but it is difficult to find any uses of that in practice.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n",
"This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:\n\nIt explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.\nIt initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \\leftarrow \\gamma \\hat{x_i} + \\beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.\nUnlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.\nTensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. \nThe actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.\ntf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.\nWe use the tf.nn.moments function to calculate the batch mean and variance.\n\n2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization: \npython\nif self.use_batch_norm:\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nelse:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nOur new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:\npython\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:\npython\nreturn tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAnd replace this line in batch_norm_inference:\npython\nreturn tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAs you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. 
The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\\hat{x_i}$: \n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAnd the second line is a direct translation of the following equation:\n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. \nWhy the difference between training and inference?\nIn the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nAnd that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nIf you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?\nFirst, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).",
"def batch_norm_test(test_training_accuracy):\n \"\"\"\n :param test_training_accuracy: bool\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n \"\"\"\n\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n tf.reset_default_graph()\n\n # Train the model\n bn = NeuralNet(weights, tf.nn.relu, True)\n \n # First train the network\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n bn.train(sess, 0.01, 2000, 2000)\n\n bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)",
"In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.",
"batch_norm_test(True)",
"As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The \"batches\" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because its the value the network produces when it applies its learned weights to zeros at every layer. \nNote: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.\nTo overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it \"normalize\" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. \nSo in the following example, we pass False for test_training_accuracy, which tells the network that we it want to perform inference with the population statistics it calculates during training.",
"batch_norm_test(False)",
"As you can see, now that we're using the estimated population mean and variance, we get a 97% accuracy. That means it guessed correctly on 194 of the 200 samples – not too bad for something that trained in under 4 seconds. :)\nConsiderations for other network types\nThis notebook demonstrates batch normalization in a standard neural network with fully connected layers. You can also use batch normalization in other types of networks, but there are some special considerations.\nConvNets\nConvolution layers consist of multiple feature maps. (Remember, the depth of a convolutional layer refers to its number of feature maps.) And the weights for each feature map are shared across all the inputs that feed into the layer. Because of these differences, batch normalizaing convolutional layers requires batch/population mean and variance per feature map rather than per node in the layer.\nWhen using tf.layers.batch_normalization, be sure to pay attention to the order of your convolutionlal dimensions.\nSpecifically, you may want to set a different value for the axis parameter if your layers have their channels first instead of last. \nIn our low-level implementations, we used the following line to calculate the batch mean and variance:\npython\nbatch_mean, batch_variance = tf.nn.moments(linear_output, [0])\nIf we were dealing with a convolutional layer, we would calculate the mean and variance with a line like this instead:\npython\nbatch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)\nThe second parameter, [0,1,2], tells TensorFlow to calculate the batch mean and variance over each feature map. (The three axes are the batch, height, and width.) And setting keep_dims to False tells tf.nn.moments not to return values with the same size as the inputs. Specifically, it ensures we get one mean/variance pair per feature map.\nRNNs\nBatch normalization can work with recurrent neural networks, too, as shown in the 2016 paper Recurrent Batch Normalization. It's a bit more work to implement, but basically involves calculating the means and variances per time step instead of per layer. You can find an example where someone extended tf.nn.rnn_cell.RNNCell to include batch normalization in this GitHub repo."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AbhilashReddyM/GeometricMultigrid | .ipynb_checkpoints/Making_a_Preconditioner-vectorized-checkpoint.ipynb | mit | [
"This is functionally similar to the the other notebook. All the operations here have been vectorized. This results in much much faster code, but is also much unreadable. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect since GS is \"twice as better\" as Jacobi.\nThe Making of a Preconditioner ---Vectorized Version\nThis is a demonstration of a multigrid preconditioned krylov solver in python3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogenous dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second order accurate solution, that is further improved to 4th order using deferred correction.\nThe first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver. \n1. Multigrid algorithm\nWe need some terminology before going further.\n- Approximation: \n- Residual:\n- Exact solution (of the discrete problem)\n- Correction\nThis is a geometric multigrid algorithm, where a series of nested grids are used. There are four parts to a multigrid algorithm\n- Smoothing Operator (a.k.a Relaxation)\n- Restriction Operator\n- Interpolation Operator (a.k.a Prolongation Operator)\n- Bottom solver\nWe will define each of these in sequence. These operators act of different quantities that are stored at the cell center. We will get to exactly what later on. To begin import numpy.",
"import numpy as np",
"1.1 Smoothing operator\nThis can be a certain number of Jacobi or a Gauss-Seidel iterations. Below is defined smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual.",
"def Jacrelax(nx,ny,u,f,iters=1):\n '''\n under-relaxed Jacobi iteration\n '''\n dx=1.0/nx; dy=1.0/ny\n Ax=1.0/dx**2; Ay=1.0/dy**2\n Ap=1.0/(2.0*(Ax+Ay))\n\n #Dirichlet BC\n u[ 0,:] = -u[ 1,:]\n u[-1,:] = -u[-2,:]\n u[:, 0] = -u[:, 1]\n u[:,-1] = -u[:,-2]\n\n for it in range(iters):\n u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1])\n + Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny])\n - f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1]\n #Dirichlet BC\n u[ 0,:] = -u[ 1,:]\n u[-1,:] = -u[-2,:]\n u[:, 0] = -u[:, 1]\n u[:,-1] = -u[:,-2]\n\n res=np.zeros([nx+2,ny+2])\n res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])\n + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])\n - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]))\n return u,res",
"1.2 Interpolation Operator\nThis operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f.",
"def prolong(nx,ny,v):\n '''\n interpolate 'v' to the fine grid\n '''\n v_f=np.zeros([2*nx+2,2*ny+2])\n v_f[1:2*nx:2 ,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[0:nx ,0:ny ]\n v_f[2:2*nx+1:2,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[2:nx+2,0:ny ]\n v_f[1:2*nx:2 ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx ,2:ny+2]\n v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2]\n return v_f",
"1.3 Restriction\nThis is exactly the opposite of the interpolation. It takes values from the find grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity",
"def restrict(nx,ny,v):\n '''\n restrict 'v' to the coarser grid\n '''\n v_c=np.zeros([nx+2,ny+2])\n v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2])\n return v_c",
"1.4 Bottom Solver\nNote that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to what ever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This has generally has very few points (e.g 2x2=4 in our case) and can be solved exactly by the smoother itself with few iterations. That is what we do here but, any other direct method can also be used. 50 Iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly.\n1.5 V-cycle\nNow that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function ,i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', the number of multigrid levels 'num_levels' among other things. At each level the V cycle calls another V-cycle. At the lowest level the solving is exact.",
"def V_cycle(nx,ny,num_levels,u,f,level=1):\n\n if(level==num_levels):#bottom solve\n u,res=Jacrelax(nx,ny,u,f,iters=50)\n return u,res\n\n #Step 1: Relax Au=f on this grid\n u,res=Jacrelax(nx,ny,u,f,iters=1)\n\n #Step 2: Restrict residual to coarse grid\n res_c=restrict(nx//2,ny//2,res)\n\n #Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)\n e_c=np.zeros_like(res_c)\n e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)\n\n #Step 4: Interpolate(prolong) e_c to fine grid and add to u\n u+=prolong(nx//2,ny//2,e_c)\n \n #Step 5: Relax Au=f on this grid\n u,res=Jacrelax(nx,ny,u,f,iters=1)\n return u,res",
"Thats it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogenous dirichlet BC on the unit square.",
"#analytical solution\ndef Uann(x,y):\n return (x**3-x)*(y**3-y)\n#RHS corresponding to above\ndef source(x,y):\n return 6*x*y*(x**2+ y**2 - 2)",
"Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.\nThe coarsest problem is going to have a 2-by-2 grid.",
"#input\nmax_cycles = 30\nnlevels = 6 \nNX = 2*2**(nlevels-1)\nNY = 2*2**(nlevels-1)\ntol = 1e-15 \n\n#the grid has one layer of ghost cellss\nuann=np.zeros([NX+2,NY+2])#analytical solution\nu =np.zeros([NX+2,NY+2])#approximation\nf =np.zeros([NX+2,NY+2])#RHS\n\n#calcualte the RHS and exact solution\nDX=1.0/NX\nDY=1.0/NY\n\nxc=np.linspace(0.5*DX,1-0.5*DX,NX)\nyc=np.linspace(0.5*DY,1-0.5*DY,NY)\nXX,YY=np.meshgrid(xc,yc,indexing='ij')\n\nuann[1:NX+1,1:NY+1]=Uann(XX,YY)\nf[1:NX+1,1:NY+1] =source(XX,YY)",
"Now we can call the solver",
"print('mgd2d.py solver:')\nprint('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)\nfor it in range(1,max_cycles+1):\n u,res=V_cycle(NX,NY,nlevels,u,f)\n rtol=np.max(np.max(np.abs(res)))\n if(rtol<tol):\n break\n error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]\n print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))\n\nerror=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]\nprint('L_inf (true error): ',np.max(np.max(np.abs(error))))\n",
"True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing.\nThis was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.\n1.6 Full Multi-Grid\nWe started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.",
"def FMG(nx,ny,num_levels,f,nv=1,level=1):\n\n if(level==num_levels):#bottom solve\n u=np.zeros([nx+2,ny+2]) \n u,res=Jacrelax(nx,ny,u,f,iters=50)\n return u,res\n\n #Step 1: Restrict the rhs to a coarse grid\n f_c=restrict(nx//2,ny//2,f)\n\n #Step 2: Solve the coarse grid problem using FMG\n u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)\n\n #Step 3: Interpolate u_c to the fine grid\n u=prolong(nx//2,ny//2,u_c)\n\n #step 4: Execute 'nv' V-cycles\n for _ in range(nv):\n u,res=V_cycle(nx,ny,num_levels-level,u,f)\n return u,res",
"Lets call the FMG solver for the same problem",
"print('mgd2d.py FMG solver:')\nprint('NX:',NX,', NY:',NY,', levels: ',nlevels)\n\nu,res=FMG(NX,NY,nlevels,f,nv=1) \nrtol=np.max(np.max(np.abs(res)))\n\nprint(' FMG L_inf(res.)= ',rtol)\nerror=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]\nprint('L_inf (true error): ',np.max(np.max(np.abs(error))))",
"It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\\mathcal{O}(N^3)$ \n2. Stationary iterative methods as preconditioners\nA preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve. We dont explicitly need a matrix because we dont access the elements by index, coefficient matrix or preconditioner. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix vector product.\nAny stationary method has an iteration matrix associated with it. This is easily seen for Jacobi or GS methods. This iteration matrix can be used as a preconditioner. But we dont explicitly need it. The stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.\nThis allows us to use any blackbox stationary iterative method as a preconditioner\nTo repeat, if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with and perform one iteration of the stationary method.\nWe can use the multigrid V-cycle as a preconditioner this way. We cant use FMG because it is not an iterative method.\nThe matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object which works like a matrix in-so-far as the product with a vector is concerned. It can be used as a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner.\nHaving a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric CG will not converge, and we would have to use a more general solver.\nBelow is the code for defining a V-Cycle preconditioner. The default is one V-cycle. In the V-cycle, the defaults are one pre-sweep, one post-sweep.",
"from scipy.sparse.linalg import LinearOperator,bicgstab,cg\ndef MGVP(nx,ny,num_levels):\n '''\n Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can\n be passed to Krylov solvers as a preconditioner. \n '''\n def pc_fn(v):\n u =np.zeros([nx+2,ny+2])\n f =np.zeros([nx+2,ny+2])\n f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoived\n #perform one V cycle\n u,res=V_cycle(nx,ny,num_levels,u,f)\n return u[1:nx+1,1:ny+1].reshape(v.shape)\n M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)\n return M",
"Let us define the Poisson matrix also as a LinearOperator",
"def Laplace(nx,ny):\n '''\n Action of the Laplace matrix on a vector v\n '''\n def mv(v):\n u =np.zeros([nx+2,ny+2])\n \n u[1:nx+1,1:ny+1]=v.reshape([nx,ny])\n dx=1.0/nx; dy=1.0/ny\n Ax=1.0/dx**2; Ay=1.0/dy**2\n \n #BCs. Needs to be generalized!\n u[ 0,:] = -u[ 1,:]\n u[-1,:] = -u[-2,:]\n u[:, 0] = -u[:, 1]\n u[:,-1] = -u[:,-2]\n\n ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])\n + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])\n - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])\n return ut.reshape(v.shape)\n A = LinearOperator((nx*ny,nx*ny), matvec=mv)\n return A",
"The nested function is required because \"matvec\" in LinearOperator takes only one argument-- the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now will use these to solve a problem. Unlike earlier where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the Rhs vector for the problem. There is no analytical equation associated with the matrix equation. \nThe scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations",
"def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):\n num_iters = 0\n def callback(xk):\n nonlocal num_iters\n num_iters+=1\n x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)\n return x,status,num_iters",
"Lets look at what happens with and without the preconditioner.",
"A = Laplace(NX,NY)\n#Exact solution and RHS\nuex=np.random.rand(NX*NY,1)\nb=A*uex\n\n#Multigrid Preconditioner\nM=MGVP(NX,NY,nlevels)\n\nu,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)\nprint('Without preconditioning. status:',info,', Iters: ',iters)\nerror=uex-u\nprint('error :',np.max(np.abs(error)))\n\nu,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)\nprint('With preconditioning. status:',info,', Iters: ',iters)\nerror=uex-u\nprint('error :',np.max(np.abs(error)))",
"Without the preconditioner ~150 iterations were needed, where as with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG:",
"u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)\nprint('Without preconditioning. status:',info,', Iters: ',iters)\nerror=uex-u\nprint('error :',np.max(np.abs(error)))\n\nu,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)\nprint('With preconditioning. status:',info,', Iters: ',iters)\nerror=uex-u\nprint('error :',np.max(np.abs(error)))",
"There we have it. A Multigrid Preconditioned Krylov Solver. We did all this without even having to deal with an actual matrix. How great is that! I think the next step should be solving a non-linear problem without having to deal with an actual Jacobian (matrix)."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
srcole/qwm | hcmst/process_raw_data.ipynb | mit | [
"Data info\nData notes\nWave I, the main survey, was fielded between February 21 and April 2, 2009. Wave 2 was fielded March 12, 2010 to June 8, 2010. Wave 3 was fielded March 22, 2011 to August 29, 2011. Wave 4 was fielded between March and November of 2013. Wave 5 was fielded between November, 2014 and March, 2015.",
"import numpy as np\nimport pandas as pd\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\npd.options.display.max_columns=1000",
"Load raw data",
"df = pd.read_stata('/gh/data/hcmst/1.dta')\n# df2 = pd.read_stata('/gh/data/hcmst/2.dta')\n# df3 = pd.read_stata('/gh/data/hcmst/3.dta')\n# df = df1.merge(df2, on='caseid_new')\n# df = df.merge(df3, on='caseid_new')\ndf.head(2)",
"Select and rename columns",
"rename_cols_dict = {'ppage': 'age', 'ppeducat': 'education',\n 'ppethm': 'race', 'ppgender': 'sex',\n 'pphouseholdsize': 'household_size', 'pphouse': 'house_type',\n 'hhinc': 'income', 'ppmarit': 'marital_status',\n 'ppmsacat': 'in_metro', 'ppreg4': 'usa_region',\n 'pprent': 'house_payment', 'children_in_hh': 'N_child',\n 'ppwork': 'work', 'ppnet': 'has_internet',\n 'papglb_friend': 'has_gay_friendsfam', 'pppartyid3': 'politics',\n 'papreligion': 'religion', 'qflag': 'in_relationship',\n 'q9': 'partner_age', 'duration': 'N_minutes_survey',\n 'glbstatus': 'is_lgb', 's1': 'is_married',\n 'partner_race': 'partner_race', 'q7b': 'partner_religion',\n 'q10': 'partner_education', 'US_raised': 'USA_raised',\n 'q17a': 'N_marriages', 'q17b': 'N_marriages2', 'coresident': 'cohabit',\n 'q21a': 'age_first_met', 'q21b': 'age_relationship_begin',\n 'q21d': 'age_married', 'q23': 'relative_income',\n 'q25': 'same_high_school', 'q26': 'same_college',\n 'q27': 'same_hometown', 'age_difference': 'age_difference',\n 'q34':'relationship_quality',\n 'q24_met_online': 'met_online', 'met_through_friends': 'met_friends',\n 'met_through_family': 'met_family', 'met_through_as_coworkers': 'met_work'}\n\ndf = df[list(rename_cols_dict.keys())]\ndf.rename(columns=rename_cols_dict, inplace=True)\n\n# Process number of marriages\ndf['N_marriages'] = df['N_marriages'].astype(str).replace({'nan':''}) + df['N_marriages2'].astype(str).replace({'nan':''})\ndf.drop('N_marriages2', axis=1, inplace=True)\ndf['N_marriages'] = df['N_marriages'].replace({'':np.nan, 'once (this is my first marriage)': 'once', 'refused':np.nan})\ndf['N_marriages'] = df['N_marriages'].astype('category')\n\n# Clean entries to make simpler\ndf['in_metro'] = df['in_metro']=='metro'\ndf['relationship_excellent'] = df['relationship_quality'] == 'excellent'\n\ndf['house_payment'].replace({'owned or being bought by you or someone in your household': 'owned',\n 'rented for cash': 'rent',\n 'occupied without payment of cash rent': 'free'}, inplace=True)\ndf['race'].replace({'white, non-hispanic': 'white',\n '2+ races, non-hispanic': 'other, non-hispanic',\n 'black, non-hispanic': 'black'}, inplace=True)\ndf['house_type'].replace({'a one-family house detached from any other house': 'house',\n 'a building with 2 or more apartments': 'apartment',\n 'a one-family house attached to one or more houses': 'house',\n 'a mobile home': 'mobile',\n 'boat, rv, van, etc.': 'mobile'}, inplace=True)\ndf['is_not_working'] = df['work'].str.contains('not working')\ndf['has_internet'] = df['has_internet'] == 'yes'\ndf['has_gay_friends'] = np.logical_or(df['has_gay_friendsfam']=='yes, friends', df['has_gay_friendsfam']=='yes, both')\ndf['has_gay_family'] = np.logical_or(df['has_gay_friendsfam']=='yes, relatives', df['has_gay_friendsfam']=='yes, both')\ndf['religion_is_christian'] = df['religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)',\n 'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])\ndf['religion_is_none'] = df['religion'].isin(['none'])\ndf['in_relationship'] = df['in_relationship']=='partnered'\ndf['is_lgb'] = df['is_lgb']=='glb'\ndf['is_married'] = df['is_married']=='yes, i am married'\ndf['partner_race'].replace({'NH white': 'white', ' NH black': 'black',\n ' NH Asian Pac Islander':'other', ' NH Other': 'other', ' NH Amer Indian': 'other'}, inplace=True)\ndf['partner_religion_is_christian'] = df['partner_religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, 
episcopal)',\n 'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])\ndf['partner_religion_is_none'] = df['partner_religion'].isin(['none'])\ndf['partner_education'] = df['partner_education'].map({'hs graduate or ged': 'high school',\n 'some college, no degree': 'some college',\n \"associate degree\": \"some college\",\n \"bachelor's degree\": \"bachelor's degree or higher\",\n \"master's degree\": \"bachelor's degree or higher\",\n \"professional or doctorate degree\": \"bachelor's degree or higher\"})\ndf['partner_education'].fillna('less than high school', inplace=True)\ndf['USA_raised'] = df['USA_raised']=='raised in US'\ndf['N_marriages'] = df['N_marriages'].map({'never married': '0', 'once': '1', 'twice': '2', 'three times': '3+', 'four or more times':'3+'})\ndf['relative_income'].replace({'i earned more': 'more', 'partner earned more': 'less',\n 'we earned about the same amount': 'same', 'refused': np.nan}, inplace=True)\ndf['same_high_school'] = df['same_high_school']=='same high school'\ndf['same_college'] = df['same_college']=='attended same college or university'\ndf['same_hometown'] = df['same_hometown']=='yes'\ndf['cohabit'] = df['cohabit']=='yes'\ndf['met_online'] = df['met_online']=='met online'\ndf['met_friends'] = df['met_friends']=='meet through friends'\ndf['met_family'] = df['met_family']=='met through family'\ndf['met_work'] = df['met_family']==1\n\ndf['age'] = df['age'].astype(int)\nfor c in df.columns:\n if str(type(df[c])) == 'object':\n df[c] = df[c].astype('category')\n\ndf.head()\n\ndf.to_csv('/gh/data/hcmst/1_cleaned.csv')",
"Distributions",
"for c in df.columns:\n print(df[c].value_counts())\n\n# Countplot if categorical; distplot if numeric\nfrom pandas.api.types import is_numeric_dtype\n\nplt.figure(figsize=(40,40))\nfor i, c in enumerate(df.columns):\n plt.subplot(7,7,i+1)\n if is_numeric_dtype(df[c]):\n sns.distplot(df[c].dropna(), kde=False)\n else:\n sns.countplot(y=c, data=df)\nplt.savefig('temp.png')\n\nsns.barplot(x='income', y='race', data=df)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
minhhh/charts | pandas/Learn-Pandas-Completely.ipynb | mit | [
"Introduction to Pandas\npandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.\nOur convention for importing pandas:",
"import pandas as pd\nfrom pandas import Series, DataFrame",
"Since Series and DataFrame are used frequently, they should be imported directly by name.\nPanda Data Structures\nSeries\nA Series is basically a one-dimensional array with indices.\nYou create a simplest Series like this:",
"ps = Series([4,2,1,3])\nprint ps",
"Get values and indeces like this:",
"print ps.values\nprint ps.index\nps[0]",
"To use a custom index, do this:",
"ps2 = Series([4, 7, -1, 8], ['a','b','c','d'])\nps2",
"Often, you want to create Series from python dict",
"ps3 = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}\nps3",
"DataFrame\nA DataFrame represents a tabular structure. It can be thought of as a dict of Series.\nA DataFrame can be constructed from a dict of equal-length lists",
"data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002],\n'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}\ndf = DataFrame(data)\ndf",
"You can specify a sequence of columns like so:",
"DataFrame(data, columns=['year', 'state', 'pop'])",
"In addition to index and values, DataFrame has columns",
"print df.index\nprint\nprint df.values\nprint\nprint df.columns",
"You can get a specific column like this:",
"df['state']",
"Rows can be retrieved using the ix method:",
"df.ix[0]",
"Another common form of data to create DataFrame is a nested dict of dicts OR nested dict of Series:",
"pop = {'Nevada': {2001: 2.4, 2002: 2.9}, 'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}\ndf2 = DataFrame(pop)\ndf2",
"You can pass explicit index when creating DataFrame:",
"df3=DataFrame(pop, index=[2001, 2002, 2003])\ndf3",
"If a DataFrame’s index and columns have their name attributes set, these will also be displayed:",
"df2.index.name = 'year'\ndf2.columns.name = 'state'\ndf2",
"The 3rd common data input structures is a list of dicts or Series:",
"films = [{'star': 9.3, 'title': 'The Shawshank Redemption', 'content_rating': 'R'},\n {'star': 9.2, 'title': 'The Godfather', 'content_rating': 'R'},\n {'star': 9.1, 'title': 'The Godfather: Part II', 'content_rating': 'R'}\n ]\n \ndf3 = DataFrame(films)\ndf3",
"More on DataFrame manipulation will come later.\nReading Tabular data file into Pandas\nThere are two main methods for reading data from file to DataFrame: read_table and read_csv. read_csv is exactly the same as read_table, except it assumes a comma separator.\nYou can read a data set using read_table like so:",
"orders = pd.read_table('https://raw.githubusercontent.com/minhhh/charts/master/pandas/data/chipotle.tsv')\norders.head (5)",
"A file does not always have a header row. In this case, you can use default column names or specify column names yourself:",
"users = pd.read_table('https://raw.githubusercontent.com/minhhh/charts/master/pandas/data/u.user', sep='|', header=None)\nusers.head(5)\n\nuser_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']\nusers2 = pd.read_table('https://raw.githubusercontent.com/minhhh/charts/master/pandas/data/u.user', sep='|', header=None, names=user_cols)\nusers2.head(5)",
"You can choose a specific column to be the index column instead of the default generated by Pandas:",
"user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']\nusers3 = pd.read_table('https://raw.githubusercontent.com/minhhh/charts/master/pandas/data/u.user', sep='|', header=None, names=user_cols, index_col='user_id')\nusers3.head(5)",
"Recipes"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sdpython/cpyquickhelper | _doc/notebooks/cbenchmark_branching.ipynb | mit | [
"Measures branching in C++ from python\nThis notebook looks into a couple of ways to write code, which one is efficient, which one is not when it comes to write fast and short loops. Both experiments are around branching. The notebook relies on C++ code implemented in cbenchmark.cpp and\nrepeat_fct.h.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\n%matplotlib inline",
"numpy is multithreaded. For an accurate comparison, this needs to be disabled. This can be done as follows or by setting environment variable MKL_NUM_THREADS=1.",
"try:\n import mkl\n mkl.set_num_threads(1)\nexcept ModuleNotFoundError as e:\n print('mkl not found', e)\n import os\n os.environ['MKL_NUM_THREADS']='1'",
"First experiment: comparison C++ syntax\nThis all started with article Why is it faster to process a sorted array than an unsorted array?. It compares different implementation fo the following function for which we try different implementations for the third line in next cell. The last option is taken\nChecking whether a number is positive or negative using bitwise operators which avoids branching.",
"# int nb = 0;\n# for(auto it = values.begin(); it != values.end(); ++it)\n# if (*it >= th) nb++; // this line changes\n# if (*it >= th) nb++; // and is repeated 10 times inside the loop.\n# // ... 10 times\n# return nb;",
"The third line is also repeated 10 times to avoid the loop being too significant.",
"from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_A, measure_scenario_B\nfrom cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_C, measure_scenario_D\nfrom cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_E, measure_scenario_F\nfrom cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_G, measure_scenario_H\nfrom cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_I, measure_scenario_J\n\nimport pandas\n\ndef test_benchmark(label, values, th, repeat=10, number=20):\n funcs = [(k, v) for k, v in globals().copy().items() if k.startswith(\"measure_scenario\")]\n rows = []\n for k, v in funcs:\n exe = v(values, th, repeat, number)\n d = exe.todict()\n d['doc'] = v.__doc__.split('``')[1]\n d['label'] = label\n d['name'] = k\n rows.append(d) \n df = pandas.DataFrame(rows)\n return df\n\ntest_benchmark(\"sorted\", list(range(10)), 5)",
"Times are not very conclusive on such small lists.",
"values = list(range(100000))\ndf_sorted = test_benchmark(\"sorted\", values, len(values)//2, repeat=200)\ndf_sorted",
"The article some implementations will be slower if the values are not sorted.",
"import random\nrandom.shuffle(values)\nvalues = values.copy()\nvalues[:10]\n\ndf_shuffled = test_benchmark(\"shuffled\", values, len(values)//2, repeat=200)\ndf_shuffled\n\ndf = pandas.concat([df_sorted, df_shuffled])\ndfg = df[[\"doc\", \"label\", \"average\"]].pivot(\"doc\", \"label\", \"average\")\n\nax = dfg.plot.bar(rot=30)\nlabels = [l.get_text() for l in ax.get_xticklabels()]\nax.set_xticklabels(labels, ha='right')\nax.set_title(\"Comparison of all implementations\");",
"It seems that inline tests (cond ? value1 : value2) do not stop the branching and it should be used whenever possible.",
"sdf = df[[\"doc\", \"label\", \"average\"]]\ndfg2 = sdf[sdf.doc.str.contains('[?^]')].pivot(\"doc\", \"label\", \"average\")\nax = dfg2.plot.bar(rot=30)\nlabels = [l.get_text() for l in ax.get_xticklabels()]\nax.set_xticklabels(labels, ha='right')\nax.set_title(\"Comparison of implementations using ? :\");\n\nsdf = df[[\"doc\", \"label\", \"average\"]]\ndfg2 = sdf[sdf.doc.str.contains('if')].pivot(\"doc\", \"label\", \"average\")\nax = dfg2.plot.bar(rot=30)\nlabels = [l.get_text() for l in ax.get_xticklabels()]\nax.set_xticklabels(labels, ha='right')\nax.set_ylim([0.0004, 0.0020])\nax.set_title(\"Comparison of implementations using tests\");",
"sorted, not sorted does not seem to have a real impact in this case. It shows branching really slows down the execution of a program. Branching happens whenever the program meets a loop condition or a test. Iterator *it are faster than accessing an array with notation [i] which adds a cost due to an extra addition.\nSecond experiment: dot product\nThe goal is to compare the dot product from numpy.dot and a couple of implementation in C++ which look like this:",
"# float vector_dot_product_pointer(const float *p1, const float *p2, size_t size)\n# {\n# float sum = 0;\n# const float * end1 = p1 + size;\n# for(; p1 != end1; ++p1, ++p2)\n# sum += *p1 * *p2;\n# return sum;\n# }\n# \n# \n# float vector_dot_product(py::array_t<float> v1, py::array_t<float> v2)\n# {\n# if (v1.ndim() != v2.ndim())\n# throw std::runtime_error(\"Vector v1 and v2 must have the same dimension.\");\n# if (v1.ndim() != 1)\n# throw std::runtime_error(\"Vector v1 and v2 must be vectors.\");\n# return vector_dot_product_pointer(v1.data(0), v2.data(0), v1.shape(0));\n# }",
"numpy vs C++\nnumpy.dot",
"%matplotlib inline\n\nimport numpy\n\ndef simple_dot(values):\n return numpy.dot(values, values)\n\nvalues = list(range(10000000))\nvalues = numpy.array(values, dtype=numpy.float32)\nvect = values / numpy.max(values)\nsimple_dot(vect)\n\nvect.dtype\n\nfrom timeit import Timer\n\ndef measure_time(stmt, context, repeat=10, number=50):\n tim = Timer(stmt, globals=context)\n res = numpy.array(tim.repeat(repeat=repeat, number=number))\n mean = numpy.mean(res)\n dev = numpy.mean(res ** 2)\n dev = (dev - mean**2) ** 0.5\n return dict(average=mean, deviation=dev, min_exec=numpy.min(res),\n max_exec=numpy.max(res), repeat=repeat, number=number,\n size=context['values'].shape[0])\n\nmeasure_time(\"simple_dot(values)\", context=dict(simple_dot=simple_dot, values=vect))\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"simple_dot(values)\", repeat=100,\n context=dict(simple_dot=simple_dot, values=vect[:i].copy()))\n res.append(t)\n\nimport pandas\ndot = pandas.DataFrame(res)\ndot.tail()\n\nres = []\nfor i in range(100000, 10000000, 1000000):\n t = measure_time(\"simple_dot(values)\", repeat=10,\n context=dict(simple_dot=simple_dot, values=vect[:i].copy()))\n res.append(t)\n \nhuge_dot = pandas.DataFrame(res)\nhuge_dot.head()\n\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0])\nhuge_dot.plot(x='size', y=\"average\", ax=ax[1], logy=True)\nax[0].set_title(\"numpy dot product execution time\");\nax[1].set_title(\"numpy dot product execution time\");",
"numpy.einsum",
"def simple_dot_einsum(values):\n return numpy.einsum('i,i->', values, values)\n\nvalues = list(range(10000000))\nvalues = numpy.array(values, dtype=numpy.float32)\nvect = values / numpy.max(values)\nsimple_dot_einsum(vect)\n\nmeasure_time(\"simple_dot_einsum(values)\",\n context=dict(simple_dot_einsum=simple_dot_einsum, values=vect))\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"simple_dot_einsum(values)\", repeat=100,\n context=dict(simple_dot_einsum=simple_dot_einsum, values=vect[:i].copy()))\n res.append(t)\n\nimport pandas\neinsum_dot = pandas.DataFrame(res)\neinsum_dot.tail()\n\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(1, 1, figsize=(7,4))\ndot.plot(x='size', y=\"average\", ax=ax, label=\"numpy.dot\", logy=True)\neinsum_dot.plot(x='size', y=\"average\", ax=ax, logy=True,label=\"numpy.einsum\")\nax.set_title(\"numpy einsum / dot dot product execution time\");",
"The function einsum is slower (see Einsum - Einstein summation in deep learning appears to be slower but it is usually faster when it comes to chain operations as it reduces the number of intermediate allocations to do.\npybind11\nNow the custom implementation. We start with an empty function to get a sense of the cost due to to pybind11.",
"from cpyquickhelper.numbers.cbenchmark_dot import empty_vector_dot_product\nempty_vector_dot_product(vect, vect)\n\ndef empty_c11_dot(vect):\n return empty_vector_dot_product(vect, vect)\n\nmeasure_time(\"empty_c11_dot(values)\", \n context=dict(empty_c11_dot=empty_c11_dot, values=vect), repeat=10)",
"Very small. It should not pollute our experiments.",
"from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product\nvector_dot_product(vect, vect)\n\ndef c11_dot(vect):\n return vector_dot_product(vect, vect)\n\nmeasure_time(\"c11_dot(values)\", \n context=dict(c11_dot=c11_dot, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot(values)\", repeat=10,\n context=dict(c11_dot=c11_dot, values=vect[:i].copy()))\n res.append(t)\n\nimport pandas\ncus_dot = pandas.DataFrame(res)\ncus_dot.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"Pretty slow. Let's see what it does to compute dot product 16 by 16.\nBLAS\nInternally, numpy is using BLAS. A direct call to it should give the same results.",
"from cpyquickhelper.numbers.direct_blas_lapack import cblas_sdot\n\ndef blas_dot(vect):\n return cblas_sdot(vect, vect)\n\nmeasure_time(\"blas_dot(values)\", context=dict(blas_dot=blas_dot, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"blas_dot(values)\", repeat=10,\n context=dict(blas_dot=blas_dot, values=vect[:i].copy()))\n res.append(t)\n\nimport pandas\nblas_dot = pandas.DataFrame(res)\nblas_dot.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11\")\nblas_dot.plot(x='size', y=\"average\", ax=ax[0], label=\"blas\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\nblas_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"blas\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"Use of branching: 16 multplications in one row\nThe code looks like what follows. If there is more than 16 multiplications left, we use function vector_dot_product_pointer16, otherwise, there are done one by one like the previous function.",
"# float vector_dot_product_pointer16(const float *p1, const float *p2)\n# {\n# float sum = 0;\n# \n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# \n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# sum += *(p1++) * *(p2++);\n# \n# return sum;\n# }\n# \n# #define BYN 16\n# \n# float vector_dot_product_pointer16(const float *p1, const float *p2, size_t size)\n# {\n# float sum = 0;\n# size_t i = 0;\n# if (size >= BYN) {\n# size_t size_ = size - BYN;\n# for(; i < size_; i += BYN, p1 += BYN, p2 += BYN)\n# sum += vector_dot_product_pointer16(p1, p2);\n# }\n# for(; i < size; ++p1, ++p2, ++i)\n# sum += *p1 * *p2;\n# return sum;\n# }\n# \n# float vector_dot_product16(py::array_t<float> v1, py::array_t<float> v2)\n# {\n# if (v1.ndim() != v2.ndim())\n# throw std::runtime_error(\"Vector v1 and v2 must have the same dimension.\");\n# if (v1.ndim() != 1)\n# throw std::runtime_error(\"Vector v1 and v2 must be vectors.\");\n# return vector_dot_product_pointer16(v1.data(0), v2.data(0), v1.shape(0));\n# }\n\nfrom cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16\nvector_dot_product16(vect, vect)\n\ndef c11_dot16(vect):\n return vector_dot_product16(vect, vect)\n\nmeasure_time(\"c11_dot16(values)\", \n context=dict(c11_dot16=c11_dot16, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot16(values)\", repeat=10,\n context=dict(c11_dot16=c11_dot16, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot16 = pandas.DataFrame(res)\ncus_dot16.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"We are far from numpy but the branching has clearly a huge impact and the fact the loop condition is evaluated only every 16 iterations does not explain this gain. Next experiment with SSE instructions.\nOptimized to remove function call\nWe remove the function call to get the following version.",
"# float vector_dot_product_pointer16_nofcall(const float *p1, const float *p2, size_t size)\n# {\n# float sum = 0; \n# const float * end = p1 + size;\n# if (size >= BYN) {\n# #if(BYN != 16)\n# #error \"BYN must be equal to 16\";\n# #endif\n# unsigned int size_ = (unsigned int) size;\n# size_ = size_ >> 4; // division by 16=2^4\n# size_ = size_ << 4; // multiplication by 16=2^4\n# const float * end_ = p1 + size_;\n# for(; p1 != end_;)\n# {\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# \n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# \n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# \n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# sum += *p1 * *p2; ++p1, ++p2;\n# }\n# }\n# for(; p1 != end; ++p1, ++p2)\n# sum += *p1 * *p2;\n# return sum;\n# }\n# \n# float vector_dot_product16_nofcall(py::array_t<float> v1, py::array_t<float> v2)\n# {\n# if (v1.ndim() != v2.ndim())\n# throw std::runtime_error(\"Vector v1 and v2 must have the same dimension.\");\n# if (v1.ndim() != 1)\n# throw std::runtime_error(\"Vector v1 and v2 must be vectors.\");\n# return vector_dot_product_pointer16_nofcall(v1.data(0), v2.data(0), v1.shape(0));\n# }\n\nfrom cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_nofcall\nvector_dot_product16_nofcall(vect, vect)\n\ndef c11_dot16_nofcall(vect):\n return vector_dot_product16_nofcall(vect, vect)\n\nmeasure_time(\"c11_dot16_nofcall(values)\",\n context=dict(c11_dot16_nofcall=c11_dot16_nofcall, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot16_nofcall(values)\", repeat=10,\n context=dict(c11_dot16_nofcall=c11_dot16_nofcall, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot16_nofcall = pandas.DataFrame(res)\ncus_dot16_nofcall.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16\")\ncus_dot16_nofcall.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16_nofcall\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16\")\ncus_dot16_nofcall.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16_nofcall\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"Weird, branching did not happen when the code is not inside a separate function.\nSSE instructions\nWe replace one function in the previous implementation.",
"# #include <xmmintrin.h>\n# \n# float vector_dot_product_pointer16_sse(const float *p1, const float *p2)\n# {\n# __m128 c1 = _mm_load_ps(p1);\n# __m128 c2 = _mm_load_ps(p2);\n# __m128 r1 = _mm_mul_ps(c1, c2);\n# \n# p1 += 4;\n# p2 += 4;\n# \n# c1 = _mm_load_ps(p1);\n# c2 = _mm_load_ps(p2);\n# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));\n# \n# p1 += 4;\n# p2 += 4;\n# \n# c1 = _mm_load_ps(p1);\n# c2 = _mm_load_ps(p2);\n# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));\n# \n# p1 += 4;\n# p2 += 4;\n# \n# c1 = _mm_load_ps(p1);\n# c2 = _mm_load_ps(p2);\n# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));\n# \n# float r[4];\n# _mm_store_ps(r, r1);\n# \n# return r[0] + r[1] + r[2] + r[3];\n# }\n\nfrom cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_sse\nvector_dot_product16_sse(vect, vect)\n\ndef c11_dot16_sse(vect):\n return vector_dot_product16_sse(vect, vect)\n\nmeasure_time(\"c11_dot16_sse(values)\", \n context=dict(c11_dot16_sse=c11_dot16_sse, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot16_sse(values)\", repeat=10,\n context=dict(c11_dot16_sse=c11_dot16_sse, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot16_sse = pandas.DataFrame(res)\ncus_dot16_sse.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16_sse\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16_sse\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"Better even though it is still slower than numpy. It is closer. Maybe the compilation option are not optimized, numpy was also compiled with the Intel compiler. To be accurate, multi-threading must be disabled on numpy side. That's the purpose of the first two lines.\nAVX 512\nLast experiment with AVX 512 instructions but it does not work on all processor. I could not test it on my laptop as these instructions do not seem to be available. More can be found on wikipedia CPUs with AVX-512.",
"import platform\nplatform.processor()\n\nimport numpy\nvalues = numpy.array(list(range(10000000)), dtype=numpy.float32)\nvect = values / numpy.max(values)\n\nfrom cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_avx512\nvector_dot_product16_avx512(vect, vect)\n\ndef c11_dot16_avx512(vect):\n return vector_dot_product16_avx512(vect, vect)\n\nmeasure_time(\"c11_dot16_avx512(values)\",\n context=dict(c11_dot16_avx512=c11_dot16_avx512, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot16_avx512(values)\", repeat=10,\n context=dict(c11_dot16_avx512=c11_dot16_avx512, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot16_avx512 = pandas.DataFrame(res)\ncus_dot16_avx512.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16\")\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16_sse\")\ncus_dot16_avx512.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16_avx512\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot16.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16\")\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16_sse\")\ncus_dot16_avx512.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16_avx512\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"If the time is the same, it means that options AVX512 are not available.",
"from cpyquickhelper.numbers.cbenchmark import get_simd_available_option\nget_simd_available_option()",
"Last call with OpenMP",
"from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product_openmp\nvector_dot_product_openmp(vect, vect, 2)\n\nvector_dot_product_openmp(vect, vect, 4)\n\ndef c11_dot_openmp2(vect):\n return vector_dot_product_openmp(vect, vect, nthreads=2)\n\ndef c11_dot_openmp4(vect):\n return vector_dot_product_openmp(vect, vect, nthreads=4)\n\nmeasure_time(\"c11_dot_openmp2(values)\",\n context=dict(c11_dot_openmp2=c11_dot_openmp2, values=vect), repeat=10)\n\nmeasure_time(\"c11_dot_openmp4(values)\",\n context=dict(c11_dot_openmp4=c11_dot_openmp4, values=vect), repeat=10)\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot_openmp2(values)\", repeat=10,\n context=dict(c11_dot_openmp2=c11_dot_openmp2, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot_openmp2 = pandas.DataFrame(res)\ncus_dot_openmp2.tail()\n\nres = []\nfor i in range(10, 200000, 2500):\n t = measure_time(\"c11_dot_openmp4(values)\", repeat=10,\n context=dict(c11_dot_openmp4=c11_dot_openmp4, values=vect[:i].copy()))\n res.append(t)\n\ncus_dot_openmp4 = pandas.DataFrame(res)\ncus_dot_openmp4.tail()\n\nfig, ax = plt.subplots(1, 2, figsize=(14,4))\ndot.plot(x='size', y=\"average\", ax=ax[0], label=\"numpy\")\ncus_dot16.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16\")\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[0], label=\"pybind11x16_sse\")\ncus_dot_openmp2.plot(x='size', y=\"average\", ax=ax[0], label=\"cus_dot_openmp2\")\ncus_dot_openmp4.plot(x='size', y=\"average\", ax=ax[0], label=\"cus_dot_openmp4\")\ndot.plot(x='size', y=\"average\", ax=ax[1], label=\"numpy\", logy=True)\ncus_dot16.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16\")\ncus_dot16_sse.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11x16_sse\")\ncus_dot_openmp2.plot(x='size', y=\"average\", ax=ax[1], label=\"cus_dot_openmp2\")\ncus_dot_openmp4.plot(x='size', y=\"average\", ax=ax[1], label=\"cus_dot_openmp4\")\ncus_dot.plot(x='size', y=\"average\", ax=ax[1], label=\"pybind11\")\nax[0].set_title(\"numpy and custom dot product execution time\");\nax[1].set_title(\"numpy and custom dot product execution time\");",
"Parallelization does not solve everything, efficient is important.\nBack to numpy\nThis article Why is matrix multiplication faster with numpy than with ctypes in Python? gives some kints on why numpy is still faster. By looking at the code of the dot product in numpy: arraytypes.c.src, it seems that numpy does a simple dot product without using branching or uses the library BLAS which is the case in this benchmark (code for dot product: sdot.c). And it does use branching. See also function blas_stride. These libraries then play with compilation options and optimize for speed. This benchmark does not look into cython-blis which implements some BLAS functions with an assembly language and has different implementations depending on the platform it is used. A little bit more on C++ optimization How to optimize C and C++ code in 2018."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs | site/en/guide/autodiff.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Introduction to gradients and automatic differentiation\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/autodiff\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/autodiff.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/autodiff.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/autodiff.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nAutomatic Differentiation and Gradients\nAutomatic differentiation\nis useful for implementing machine learning algorithms such as\nbackpropagation for training\nneural networks.\nIn this guide, you will explore ways to compute gradients with TensorFlow, especially in eager execution.\nSetup",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf",
"Computing gradients\nTo differentiate automatically, TensorFlow needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients.\nGradient tapes\nTensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables.\nTensorFlow \"records\" relevant operations executed inside the context of a tf.GradientTape onto a \"tape\". TensorFlow then uses that tape to compute the gradients of a \"recorded\" computation using reverse mode differentiation.\nHere is a simple example:",
"x = tf.Variable(3.0)\n\nwith tf.GradientTape() as tape:\n y = x**2",
"Once you've recorded some operations, use GradientTape.gradient(target, sources) to calculate the gradient of some target (often a loss) relative to some source (often the model's variables):",
"# dy = 2x * dx\ndy_dx = tape.gradient(y, x)\ndy_dx.numpy()",
"The above example uses scalars, but tf.GradientTape works as easily on any tensor:",
"w = tf.Variable(tf.random.normal((3, 2)), name='w')\nb = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')\nx = [[1., 2., 3.]]\n\nwith tf.GradientTape(persistent=True) as tape:\n y = x @ w + b\n loss = tf.reduce_mean(y**2)",
"To get the gradient of loss with respect to both variables, you can pass both as sources to the gradient method. The tape is flexible about how sources are passed and will accept any nested combination of lists or dictionaries and return the gradient structured the same way (see tf.nest).",
"[dl_dw, dl_db] = tape.gradient(loss, [w, b])",
"The gradient with respect to each source has the shape of the source:",
"print(w.shape)\nprint(dl_dw.shape)",
"Here is the gradient calculation again, this time passing a dictionary of variables:",
"my_vars = {\n 'w': w,\n 'b': b\n}\n\ngrad = tape.gradient(loss, my_vars)\ngrad['b']",
"Gradients with respect to a model\nIt's common to collect tf.Variables into a tf.Module or one of its subclasses (layers.Layer, keras.Model) for checkpointing and exporting.\nIn most cases, you will want to calculate gradients with respect to a model's trainable variables. Since all subclasses of tf.Module aggregate their variables in the Module.trainable_variables property, you can calculate these gradients in a few lines of code:",
"layer = tf.keras.layers.Dense(2, activation='relu')\nx = tf.constant([[1., 2., 3.]])\n\nwith tf.GradientTape() as tape:\n # Forward pass\n y = layer(x)\n loss = tf.reduce_mean(y**2)\n\n# Calculate gradients with respect to every trainable variable\ngrad = tape.gradient(loss, layer.trainable_variables)\n\nfor var, g in zip(layer.trainable_variables, grad):\n print(f'{var.name}, shape: {g.shape}')",
"<a id=\"watches\"></a>\nControlling what the tape watches\nThe default behavior is to record all operations after accessing a trainable tf.Variable. The reasons for this are:\n\nThe tape needs to know which operations to record in the forward pass to calculate the gradients in the backwards pass.\nThe tape holds references to intermediate outputs, so you don't want to record unnecessary operations.\nThe most common use case involves calculating the gradient of a loss with respect to all a model's trainable variables.\n\nFor example, the following fails to calculate a gradient because the tf.Tensor is not \"watched\" by default, and the tf.Variable is not trainable:",
"# A trainable variable\nx0 = tf.Variable(3.0, name='x0')\n# Not trainable\nx1 = tf.Variable(3.0, name='x1', trainable=False)\n# Not a Variable: A variable + tensor returns a tensor.\nx2 = tf.Variable(2.0, name='x2') + 1.0\n# Not a variable\nx3 = tf.constant(3.0, name='x3')\n\nwith tf.GradientTape() as tape:\n y = (x0**2) + (x1**2) + (x2**2)\n\ngrad = tape.gradient(y, [x0, x1, x2, x3])\n\nfor g in grad:\n print(g)",
"You can list the variables being watched by the tape using the GradientTape.watched_variables method:",
"[var.name for var in tape.watched_variables()]",
"tf.GradientTape provides hooks that give the user control over what is or is not watched.\nTo record gradients with respect to a tf.Tensor, you need to call GradientTape.watch(x):",
"x = tf.constant(3.0)\nwith tf.GradientTape() as tape:\n tape.watch(x)\n y = x**2\n\n# dy = 2x * dx\ndy_dx = tape.gradient(y, x)\nprint(dy_dx.numpy())",
"Conversely, to disable the default behavior of watching all tf.Variables, set watch_accessed_variables=False when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of the variables:",
"x0 = tf.Variable(0.0)\nx1 = tf.Variable(10.0)\n\nwith tf.GradientTape(watch_accessed_variables=False) as tape:\n tape.watch(x1)\n y0 = tf.math.sin(x0)\n y1 = tf.nn.softplus(x1)\n y = y0 + y1\n ys = tf.reduce_sum(y)",
"Since GradientTape.watch was not called on x0, no gradient is computed with respect to it:",
"# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)\ngrad = tape.gradient(ys, {'x0': x0, 'x1': x1})\n\nprint('dy/dx0:', grad['x0'])\nprint('dy/dx1:', grad['x1'].numpy())",
"Intermediate results\nYou can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context.",
"x = tf.constant(3.0)\n\nwith tf.GradientTape() as tape:\n tape.watch(x)\n y = x * x\n z = y * y\n\n# Use the tape to compute the gradient of z with respect to the\n# intermediate value y.\n# dz_dy = 2 * y and y = x ** 2 = 9\nprint(tape.gradient(z, y).numpy())",
"By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient method is called. To compute multiple gradients over the same computation, create a gradient tape with persistent=True. This allows multiple calls to the gradient method as resources are released when the tape object is garbage collected. For example:",
"x = tf.constant([1, 3.0])\nwith tf.GradientTape(persistent=True) as tape:\n tape.watch(x)\n y = x * x\n z = y * y\n\nprint(tape.gradient(z, x).numpy()) # [4.0, 108.0] (4 * x**3 at x = [1.0, 3.0])\nprint(tape.gradient(y, x).numpy()) # [2.0, 6.0] (2 * x at x = [1.0, 3.0])\n\ndel tape # Drop the reference to the tape",
"Notes on performance\n\n\nThere is a tiny overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still use tape context around the areas only where it is required.\n\n\nGradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backwards pass.\n\n\nFor efficiency, some ops (like ReLU) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use persistent=True on your tape, nothing is discarded and your peak memory usage will be higher.\nGradients of non-scalar targets\nA gradient is fundamentally an operation on a scalar.",
"x = tf.Variable(2.0)\nwith tf.GradientTape(persistent=True) as tape:\n y0 = x**2\n y1 = 1 / x\n\nprint(tape.gradient(y0, x).numpy())\nprint(tape.gradient(y1, x).numpy())",
"Thus, if you ask for the gradient of multiple targets, the result for each source is:\n\nThe gradient of the sum of the targets, or equivalently\nThe sum of the gradients of each target.",
"x = tf.Variable(2.0)\nwith tf.GradientTape() as tape:\n y0 = x**2\n y1 = 1 / x\n\nprint(tape.gradient({'y0': y0, 'y1': y1}, x).numpy())",
"Similarly, if the target(s) are not scalar the gradient of the sum is calculated:",
"x = tf.Variable(2.)\n\nwith tf.GradientTape() as tape:\n y = x * [3., 4.]\n\nprint(tape.gradient(y, x).numpy())",
"This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation.\nIf you need a separate gradient for each item, refer to Jacobians.\nIn some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input-element, since each element is independent:",
"x = tf.linspace(-10.0, 10.0, 200+1)\n\nwith tf.GradientTape() as tape:\n tape.watch(x)\n y = tf.nn.sigmoid(x)\n\ndy_dx = tape.gradient(y, x)\n\nplt.plot(x, y, label='y')\nplt.plot(x, dy_dx, label='dy/dx')\nplt.legend()\n_ = plt.xlabel('x')",
"Control flow\nBecause a gradient tape records operations as they are executed, Python control flow is naturally handled (for example, if and while statements).\nHere a different variable is used on each branch of an if. The gradient only connects to the variable that was used:",
"x = tf.constant(1.0)\n\nv0 = tf.Variable(2.0)\nv1 = tf.Variable(2.0)\n\nwith tf.GradientTape(persistent=True) as tape:\n tape.watch(x)\n if x > 0.0:\n result = v0\n else:\n result = v1**2 \n\ndv0, dv1 = tape.gradient(result, [v0, v1])\n\nprint(dv0)\nprint(dv1)",
"Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers.\nDepending on the value of x in the above example, the tape either records result = v0 or result = v1**2. The gradient with respect to x is always None.",
"dx = tape.gradient(result, x)\n\nprint(dx)",
"Getting a gradient of None\nWhen a target is not connected to a source you will get a gradient of None.",
"x = tf.Variable(2.)\ny = tf.Variable(3.)\n\nwith tf.GradientTape() as tape:\n z = y * y\nprint(tape.gradient(z, x))",
"Here z is obviously not connected to x, but there are several less-obvious ways that a gradient can be disconnected.\n1. Replaced a variable with a tensor\nIn the section on \"controlling what the tape watches\" you saw that the tape will automatically watch a tf.Variable but not a tf.Tensor.\nOne common error is to inadvertently replace a tf.Variable with a tf.Tensor, instead of using Variable.assign to update the tf.Variable. Here is an example:",
"x = tf.Variable(2.0)\n\nfor epoch in range(2):\n with tf.GradientTape() as tape:\n y = x+1\n\n print(type(x).__name__, \":\", tape.gradient(y, x))\n x = x + 1 # This should be `x.assign_add(1)`",
"2. Did calculations outside of TensorFlow\nThe tape can't record the gradient path if the calculation exits TensorFlow.\nFor example:",
"x = tf.Variable([[1.0, 2.0],\n [3.0, 4.0]], dtype=tf.float32)\n\nwith tf.GradientTape() as tape:\n x2 = x**2\n\n # This step is calculated with NumPy\n y = np.mean(x2, axis=0)\n\n # Like most ops, reduce_mean will cast the NumPy array to a constant tensor\n # using `tf.convert_to_tensor`.\n y = tf.reduce_mean(y, axis=0)\n\nprint(tape.gradient(y, x))",
"3. Took gradients through an integer or string\nIntegers and strings are not differentiable. If a calculation path uses these data types there will be no gradient.\nNobody expects strings to be differentiable, but it's easy to accidentally create an int constant or variable if you don't specify the dtype.",
"x = tf.constant(10)\n\nwith tf.GradientTape() as g:\n g.watch(x)\n y = x * x\n\nprint(g.gradient(y, x))",
"TensorFlow doesn't automatically cast between types, so, in practice, you'll often get a type error instead of a missing gradient.\n4. Took gradients through a stateful object\nState stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that lead to it.\nA tf.Tensor is immutable. You can't change a tensor once it's created. It has a value, but no state. All the operations discussed so far are also stateless: the output of a tf.matmul only depends on its inputs.\nA tf.Variable has internal state—its value. When you use the variable, the state is read. It's normal to calculate a gradient with respect to a variable, but the variable's state blocks gradient calculations from going farther back. For example:",
"x0 = tf.Variable(3.0)\nx1 = tf.Variable(0.0)\n\nwith tf.GradientTape() as tape:\n # Update x1 = x1 + x0.\n x1.assign_add(x0)\n # The tape starts recording from x1.\n y = x1**2 # y = (x1 + x0)**2\n\n# This doesn't work.\nprint(tape.gradient(y, x0)) #dy/dx0 = 2*(x1 + x0)",
"Similarly, tf.data.Dataset iterators and tf.queues are stateful, and will stop all gradients on tensors that pass through them.\nNo gradient registered\nSome tf.Operations are registered as being non-differentiable and will return None. Others have no gradient registered.\nThe tf.raw_ops page shows which low-level ops have gradients registered.\nIf you attempt to take a gradient through a float op that has no gradient registered the tape will throw an error instead of silently returning None. This way you know something has gone wrong.\nFor example, the tf.image.adjust_contrast function wraps raw_ops.AdjustContrastv2, which could have a gradient but the gradient is not implemented:",
"image = tf.Variable([[[0.5, 0.0, 0.0]]])\ndelta = tf.Variable(0.1)\n\nwith tf.GradientTape() as tape:\n new_image = tf.image.adjust_contrast(image, delta)\n\ntry:\n print(tape.gradient(new_image, [image, delta]))\n assert False # This should not happen.\nexcept LookupError as e:\n print(f'{type(e).__name__}: {e}')\n",
"If you need to differentiate through this op, you'll either need to implement the gradient and register it (using tf.RegisterGradient) or re-implement the function using other ops.\nZeros instead of None\nIn some cases it would be convenient to get 0 instead of None for unconnected gradients. You can decide what to return when you have unconnected gradients using the unconnected_gradients argument:",
"x = tf.Variable([2., 2.])\ny = tf.Variable(3.)\n\nwith tf.GradientTape() as tape:\n z = y**2\nprint(tape.gradient(z, x, unconnected_gradients=tf.UnconnectedGradients.ZERO))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/ordinal_regression.ipynb | bsd-3-clause | [
"Ordinal Regression",
"import numpy as np\nimport pandas as pd\nimport scipy.stats as stats\n\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel",
"Loading a stata data file from the UCLA website.This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/ which is a R notebook from UCLA.",
"url = \"https://stats.idre.ucla.edu/stat/data/ologit.dta\"\ndata_student = pd.read_stata(url)\n\ndata_student.head(5)\n\ndata_student.dtypes\n\ndata_student['apply'].dtype",
"This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables:\n- their grade point average(gpa), a float between 0 and 4.\n- pared, a binary that indicates if at least one parent went to graduate school.\n- and public, a binary that indicates if the current undergraduate institution of the student is public or private.\napply, the target variable is categorical with ordered categories: unlikely < somewhat likely < very likely. It is a pd.Serie of categorical type, this is preferred over NumPy arrays.\nThe model is based on a numerical latent variable $y_{latent}$ that we cannot observe but that we can compute thanks to exogenous variables.\nMoreover we can use this $y_{latent}$ to define $y$ that we can observe.\nFor more details see the the Documentation of OrderedModel, the UCLA webpage or this book.\nProbit ordinal regression:",
"mod_prob = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr='probit')\n\nres_prob = mod_prob.fit(method='bfgs')\nres_prob.summary()",
"In our model, we have 3 exogenous variables(the $\\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.\nThose 3 estimations and their standard errors can be retrieved in the summary table.\nSince there are 3 categories in the target variable(unlikely, somewhat likely, very likely), we have two thresholds to estimate. \nAs explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual thresholds values can be computed as follows:",
"num_of_thresholds = 2\nmod_prob.transform_threshold_params(res_prob.params[-num_of_thresholds:])",
"Logit ordinal regression:",
"mod_log = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()\n\npredicted = res_log.model.predict(res_log.params, exog=data_student[['pared', 'public', 'gpa']])\npredicted\n\npred_choice = predicted.argmax(1)\nprint('Fraction of correct choice predictions')\nprint((np.asarray(data_student['apply'].values.codes) == pred_choice).mean())",
"Ordinal regression with a custom cumulative cLogLog distribution:\nIn addition to logit and probit regression, any continuous distribution from SciPy.stats package can be used for the distr argument. Alternatively, one can define its own distribution simply creating a subclass from rv_continuous and implementing a few methods.",
"# using a SciPy distribution\nres_exp = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr=stats.expon).fit(method='bfgs', disp=False)\nres_exp.summary()\n\n# minimal definition of a custom scipy distribution.\nclass CLogLog(stats.rv_continuous):\n def _ppf(self, q):\n return np.log(-np.log(1 - q))\n\n def _cdf(self, x):\n return 1 - np.exp(-np.exp(x))\n\n\ncloglog = CLogLog()\n\n# definition of the model and fitting\nres_cloglog = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr=cloglog).fit(method='bfgs', disp=False)\nres_cloglog.summary()",
"Using formulas - treatment of endog\nPandas' ordered categorical and numeric values are supported as dependent variable in formulas. Other types will raise a ValueError.",
"modf_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa\", data_student,\n distr='logit')\nresf_logit = modf_logit.fit(method='bfgs')\nresf_logit.summary()",
"Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.",
"data_student[\"apply_codes\"] = data_student['apply'].cat.codes * 2 + 5\ndata_student[\"apply_codes\"].head()\n\nOrderedModel.from_formula(\"apply_codes ~ 0 + pared + public + gpa\", data_student,\n distr='logit').fit().summary()\n\nresf_logit.predict(data_student.iloc[:5])",
"Using string values directly as dependent variable raises a ValueError.",
"data_student[\"apply_str\"] = np.asarray(data_student[\"apply\"])\ndata_student[\"apply_str\"].head()\n\ndata_student.apply_str = pd.Categorical(data_student.apply_str, ordered=True)\ndata_student.public = data_student.public.astype(float)\ndata_student.pared = data_student.pared.astype(float)\n\nOrderedModel.from_formula(\"apply_str ~ 0 + pared + public + gpa\", data_student,\n distr='logit')",
"Using formulas - no constant in model\nThe parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.\nPatsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As workaround, statsmodels removes an explicit intercept. \nConsequently, there are two valid cases to get a design matrix without intercept.\n\nspecify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.\nspecify a model with an explicit intercept which statsmodels will remove.\n\nModels with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.\nIn the following we look at an example with an additional categorical variable.",
"nobs = len(data_student)\ndata_student[\"dummy\"] = (np.arange(nobs) < (nobs / 2)).astype(float)",
"explicit intercept, that will be removed:\nNote \"1 +\" is here redundant because it is patsy's default.",
"modfd_logit = OrderedModel.from_formula(\"apply ~ 1 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit')\nresfd_logit = modfd_logit.fit(method='bfgs')\nprint(resfd_logit.summary())\n\nmodfd_logit.k_vars\n\nmodfd_logit.k_constant",
"implicit intercept creates overparameterized model\nSpecifying \"0 +\" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.\npython\nOrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student, distr='logit')\nTo see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.\nThe parameters of the two dummy variable columns and the first threshold are not separately identified. Estimates for those parameters and availability of standard errors are arbitrary and depends on numerical details that differ across environments.\nSome summary measures like log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.",
"modfd2_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit', hasconst=False)\nresfd2_logit = modfd2_logit.fit(method='bfgs')\nprint(resfd2_logit.summary())\n\nresfd2_logit.predict(data_student.iloc[:5])\n\nresf_logit.predict()",
"Binary Model compared to Logit\nIf there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.\nThe models are (theoretically) identical in this case except for the parameterization of the constant. Logit as most other models requires in general an intercept. This corresponds to the threshold parameter in the OrderedModel, however, with opposite sign.\nThe implementation differs and not all of the same results statistic and post-estimation features are available. Estimated parameters and other results statistic differ mainly based on convergence tolerance of the optimization.",
"from statsmodels.discrete.discrete_model import Logit\nfrom statsmodels.tools.tools import add_constant",
"We drop the middle category from the data and keep the two extreme categories.",
"mask_drop = data_student['apply'] == \"somewhat likely\"\ndata2 = data_student.loc[~mask_drop, :]\n# we need to remove the category also from the Categorical Index\ndata2['apply'].cat.remove_categories(\"somewhat likely\", inplace=True)\ndata2.head()\n\nmod_log = OrderedModel(data2['apply'],\n data2[['pared', 'public', 'gpa']],\n distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()",
"The Logit model does not have a constant by default, we have to add it to our explanatory variables.\nThe results are essentially identical between Logit and ordered model up to numerical precision mainly resulting from convergence tolerance in the estimation.\nThe only difference is in the sign of the constant, Logit and OrdereModel have opposite signs of he constant. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including and constant column in the design matrix.",
"ex = add_constant(data2[['pared', 'public', 'gpa']], prepend=False)\nmod_logit = Logit(data2['apply'].cat.codes, ex)\n\nres_logit = mod_logit.fit(method='bfgs', disp=False)\n\nres_logit.summary()",
"Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.\nAs example we specify HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate.",
"res_logit_hac = mod_logit.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\nres_log_hac = mod_log.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\n\nres_logit_hac.bse.values - res_log_hac.bse"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
danielmcd/hacks | marketpatterns/TestNotebook.ipynb | gpl-3.0 | [
"The Chicken Overlord",
"import calendar\nimport datetime\nimport numpy\nimport os.path\nimport pickle\nfrom random import randrange, random, shuffle\nimport sys\nimport time\nimport math\n\nimport nupic\nfrom nupic.encoders import ScalarEncoder, MultiEncoder\nfrom nupic.bindings.algorithms import SpatialPooler as SP\nfrom nupic.research.TP10X2 import TP10X2 as TP",
"<img src=\"http://www.designofsignage.com/application/symbol/hands/image/600x600/hand-point-up-2.jpg\" width=\"40px\" height=\"40px\" align=\"left\"/> This stuff imports stuff.",
"C = [1, 1, 1, 0, 0, 0, 0, 0, 0]\nB = [0, 0, 0, 1, 1, 1, 0, 0, 0]\nA = [0, 0, 0, 0, 0, 0, 1, 1, 1]\nn = 10\nw = 3\n\n#inputs = [[0] * (i*w) + [1] *w + [0] * ((n - i - 1) * w) for i in range (0, n)]\n\nenc = ScalarEncoder(w=5, minval=0, maxval=10, radius=1.25, periodic=True, name=\"encoder\", forced=True)\nfor d in range(0, 10):\n print str(enc.encode(d))\n \ninputs = [enc.encode(i) for i in range(10)]",
"<img src=\"http://www.designofsignage.com/application/symbol/hands/image/600x600/hand-point-up-2.jpg\" width=\"40px\" height=\"40px\" align=\"left\"/> This stuff is the stuff the Temporal Pooler thing is learning to recognize.",
"tp = TP(numberOfCols=40, cellsPerColumn=7.9,\n initialPerm=0.5, connectedPerm=0.5,\n minThreshold=10, newSynapseCount=10,\n permanenceInc=0.1, permanenceDec=0.01,\n activationThreshold=1,\n globalDecay=0, burnIn=1,\n checkSynapseConsistency=False,\n pamLength=7)",
"<img src=\"http://www.designofsignage.com/application/symbol/hands/image/600x600/hand-point-up-2.jpg\" width=\"40px\" height=\"40px\" align=\"left\"/> This is the Temporal Pooler thing.",
"input_array = numpy.zeros(40, dtype=\"int32\")\ntp.reset()\nfor i, pattern in enumerate(inputs*1):\n input_array[:] = pattern\n tp.compute(input_array, enableLearn=True, computeInfOutput=True)\n tp.printStates()",
"<img src=\"http://www.designofsignage.com/application/symbol/hands/image/600x600/hand-point-up-2.jpg\" width=\"40px\" height=\"40px\" align=\"left\"/> This is the end result of what it predicted."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wil-langford/FishFace2 | lib/jupyter/prioritize tagging.ipynb | gpl-2.0 | [
"import lib.django.djff.models as dm\n\ndef prioritize_images(list_of_images, priority=5):\n id_list = [x.id for x in list(list_of_images)]\n for image_id in id_list:\n prior = dm.PriorityManualImage()\n prior.image_id = image_id\n prior.priority = priority\n prior.save()\n\nall_data_images = dm.Image.objects.filter(is_cal_image=False)",
"Select Images\nAfter running the cell above to set things up, you'll need to select\nthe images you want to prioritize.",
"## EXAMPLE: Get all images from experiment 11.\nxp_11_images = all_data_images.filter(xp_id=156)\n\n## EXAMPLE: Get all images from CJRs 140, 158, and 161.\nselected_cjrs_images = all_data_images.filter(cjr_id__in=[140,158,161])\n\n## EXAMPLE: Get all images from experiments 11 and 94.\nselected_xps_images = all_data_images.filter(xp_id__in=[11,94])\n\n## EXAMPLE: Get every 13th image from experiment 94.\nxp_94_images = all_data_images.filter(xp_id__in=[156,157,158,159,162])\nevery_13th = list(xp_94_images)[::5]\nprint \"{} / {} = {}\".format(len(every_13th),\n xp_94_images.count(),\n float(xp_94_images.count()) / len(every_13th))\n",
"Store Priorities - DON'T FORGET TO DO THIS - This is what actually queues the images to be tagged\nAfter you select the images, you'll need to actually prioritize them using the function defined in the first cell of this notebook: prioritize_images.\nThe default priority is 5. Lower numbered priorities (e.g. 3) will run before higher numbered priorities (e.g. 10).",
"## This can take some time.\nprioritize_images(every_13th, priority=100) # very low priority\n#prioritize_images(every_13th) # very low priority",
"This can take some time.\nprioritize_images(every_13th) # very low priority\nCheck Current Priorities",
"## EXAMPLE: Find out how many images are currently prioritized.\nprint dm.PriorityManualImage.objects.count()\n\n## EXAMPLE: Find out which CJRs contain prioritized images.\ncjr_list = [x.image.cjr_id for x in dm.PriorityManualImage.objects.all()]\nprint set(cjr_list)\n\n## EXAMPLE: Find out the proportion of images for experiment 70 that are tagged.\nimages_in_xp_70 = dm.Image.objects.filter(xp_id=70).count()\ntags_for_xp_70 = dm.ManualTag.objects.filter(image__xp_id=70).count()\nprint \"{} / {} = {}\".format(tags_for_xp_70, images_in_xp_70, float(tags_for_xp_70) / images_in_xp_70)",
"Clear Current Priorities\nDelete all of the current priorities.",
"### WARNING ###\n### THIS WILL DELETE ALL OF YOUR PRIORITIES ###\n### WARNING ###\n\ndm.PriorityManualImage.objects.all().delete()\n\n## EXAMPLE: Delete the priorities for experiment 11, if any.\ndm.PriorityManualImage.objects.filter(image__xp_id=11).delete()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.17/_downloads/c92e443909d938b06abf63b902dac687/plot_epochs_to_data_frame.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Export epochs to Pandas DataFrame\nIn this example the pandas exporter will be used to produce a DataFrame\nobject. After exploring some basic features a split-apply-combine\nwork flow will be conducted to examine the latencies of the response\nmaxima across epochs and conditions.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Equivalent methods are available for raw and evoked data objects.</p></div>\n\nMore information and additional introductory materials can be found at the\npandas doc sites: http://pandas.pydata.org/pandas-docs/stable/\nShort Pandas Primer\nPandas Data Frames\n~~~~~~~~~~~~~~~~~~\nA data frame can be thought of as a combination of matrix, list and dict:\nIt knows about linear algebra and element-wise operations but is size mutable\nand allows for labeled access to its data. In addition, the pandas data frame\nclass provides many useful methods for restructuring, reshaping and visualizing\ndata. As most methods return data frame instances, operations can be chained\nwith ease; this allows to write efficient one-liners. Technically a DataFrame\ncan be seen as a high-level container for numpy arrays and hence switching\nback and forth between numpy arrays and DataFrames is very easy.\nTaken together, these features qualify data frames for inter operation with\ndatabases and for interactive data exploration / analysis.\nAdditionally, pandas interfaces with the R statistical computing language that\ncovers a huge amount of statistical functionality.\nExport Options\n~~~~~~~~~~~~~~\nThe pandas exporter comes with a few options worth being commented.\nPandas DataFrame objects use a so called hierarchical index. This can be\nthought of as an array of unique tuples, in our case, representing the higher\ndimensional MEG data in a 2D data table. The column names are the channel names\nfrom the epoch object. The channels can be accessed like entries of a\ndictionary::\n>>> df['MEG 2333']\n\nEpochs and time slices can be accessed with the .loc method::\n>>> epochs_df.loc[(1, 2), 'MEG 2333']\n\nHowever, it is also possible to include this index as regular categorial data\ncolumns which yields a long table format typically used for repeated measure\ndesigns. To take control of this feature, on export, you can specify which\nof the three dimensions 'condition', 'epoch' and 'time' is passed to the Pandas\nindex using the index parameter. Note that this decision is revertible any\ntime, as demonstrated below.\nSimilarly, for convenience, it is possible to scale the times, e.g. from\nseconds to milliseconds.\nSome Instance Methods\n~~~~~~~~~~~~~~~~~~~~~\nMost numpy methods and many ufuncs can be found as instance methods, e.g.\nmean, median, var, std, mul, , max, argmax etc.\nBelow an incomplete listing of additional useful data frame instance methods:\napply : apply function to data.\n Any kind of custom function can be applied to the data. In combination with\n lambda this can be very useful.\ndescribe : quickly generate summary stats\n Very useful for exploring data.\ngroupby : generate subgroups and initialize a 'split-apply-combine' operation.\n Creates a group object. Subsequently, methods like apply, agg, or transform\n can be used to manipulate the underlying data separately but\n simultaneously. Finally, reset_index can be used to combine the results\n back into a data frame.\nplot : wrapper around plt.plot\n However it comes with some special options. 
For examples see below.\nshape : shape attribute\n gets the dimensions of the data frame.\nvalues :\n return underlying numpy array.\nto_records :\n export data as numpy record array.\nto_dict :\n export data as dict of arrays.",
"# Author: Denis Engemann <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport mne\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# These data already have an average EEG ref applied\nraw = mne.io.read_raw_fif(raw_fname)\n\n# For simplicity we will only consider the first 10 epochs\nevents = mne.read_events(event_fname)[:10]\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,\n stim=False, exclude='bads')\n\ntmin, tmax = -0.2, 0.5\nbaseline = (None, 0)\nreject = dict(grad=4000e-13, eog=150e-6)\n\nevent_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,\n baseline=baseline, preload=True, reject=reject)",
"Export DataFrame",
"# The following parameters will scale the channels and times plotting\n# friendly. The info columns 'epoch' and 'time' will be used as hierarchical\n# index whereas the condition is treated as categorial data. Note that\n# this is optional. By passing None you could also print out all nesting\n# factors in a long table style commonly used for analyzing repeated measure\n# designs.\n\nindex, scaling_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)\n\ndf = epochs.to_data_frame(picks=None, scalings=scalings,\n scaling_time=scaling_time, index=index)\n\n# Create MEG channel selector and drop EOG channel.\nmeg_chs = [c for c in df.columns if 'MEG' in c]\n\ndf.pop('EOG 061') # this works just like with a list.",
"Explore Pandas MultiIndex",
"# Pandas is using a MultiIndex or hierarchical index to handle higher\n# dimensionality while at the same time representing data in a flat 2d manner.\n\nprint(df.index.names, df.index.levels)\n\n# Inspecting the index object unveils that 'epoch', 'time' are used\n# for subsetting data. We can take advantage of that by using the\n# .loc attribute, where in this case the first position indexes the MultiIndex\n# and the second the columns, that is, channels.\n\n# Plot some channels across the first three epochs\nxticks, sel = np.arange(3, 600, 120), meg_chs[:15]\ndf.loc[:3, sel].plot(xticks=xticks)\nmne.viz.tight_layout()\n\n# slice the time starting at t0 in epoch 2 and ending 500ms after\n# the base line in epoch 3. Note that the second part of the tuple\n# represents time in milliseconds from stimulus onset.\ndf.loc[(1, 0):(3, 500), sel].plot(xticks=xticks)\nmne.viz.tight_layout()\n\n# Note: For convenience the index was converted from floating point values\n# to integer values. To restore the original values you can e.g. say\n# df['times'] = np.tile(epoch.times, len(epochs_times)\n\n# We now reset the index of the DataFrame to expose some Pandas\n# pivoting functionality. To simplify the groupby operation we\n# we drop the indices to treat epoch and time as categroial factors.\n\ndf = df.reset_index()\n\n# The ensuing DataFrame then is split into subsets reflecting a crossing\n# between condition and trial number. The idea is that we can broadcast\n# operations into each cell simultaneously.\n\nfactors = ['condition', 'epoch']\nsel = factors + ['MEG 1332', 'MEG 1342']\ngrouped = df[sel].groupby(factors)\n\n# To make the plot labels more readable let's edit the values of 'condition'.\ndf.condition = df.condition.apply(lambda name: name + ' ')\n\n# Now we compare the mean of two channels response across conditions.\ngrouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',\n color=['steelblue', 'orange'])\nmne.viz.tight_layout()\n\n# We can even accomplish more complicated tasks in a few lines calling\n# apply method and passing a function. Assume we wanted to know the time\n# slice of the maximum response for each condition.\n\nmax_latency = grouped[sel[2]].apply(lambda x: df.time[x.idxmax()])\n\nprint(max_latency)\n\n# Then make the plot labels more readable let's edit the values of 'condition'.\ndf.condition = df.condition.apply(lambda name: name + ' ')\n\nplt.figure()\nmax_latency.plot(kind='barh', title='Latency of Maximum Response',\n color=['steelblue'])\nmne.viz.tight_layout()\n\n# Finally, we will again remove the index to create a proper data table that\n# can be used with statistical packages like statsmodels or R.\n\nfinal_df = max_latency.reset_index()\nfinal_df.rename(columns={0: sel[2]}) # as the index is oblivious of names.\n\n# The index is now written into regular columns so it can be used as factor.\nprint(final_df)\n\nplt.show()\n\n# To save as csv file, uncomment the next line.\n# final_df.to_csv('my_epochs.csv')\n\n# Note. Data Frames can be easily concatenated, e.g., across subjects.\n# E.g. say:\n#\n# import pandas as pd\n# group = pd.concat([df_1, df_2])\n# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jarrodmcc/OpenFermion | examples/jordan_wigner_and_bravyi_kitaev_transforms.ipynb | apache-2.0 | [
"The Jordan-Wigner and Bravyi-Kitaev Transforms\nLadder operators and the canonical anticommutation relations\nA system of $N$ fermionic modes is\ndescribed by a set of fermionic annihilation operators\n${a_p}{p=0}^{N-1}$ satisfying the canonical anticommutation relations\n$$\\begin{aligned}\n {a_p, a_q} &= 0, \\label{eq:car1} \\\n {a_p, a^\\dagger_q} &= \\delta{pq}, \\label{eq:car2}\n \\end{aligned}$$ where ${A, B} := AB + BA$. The adjoint\n$a^\\dagger_p$ of an annihilation operator $a_p$ is called a creation\noperator, and we refer to creation and annihilation operators as\nfermionic ladder operators.\nIn a finite-dimensional vector space the anticommutation relations have the following consequences:\n\n\nThe operators ${a^\\dagger_p a_p}_{p=0}^{N-1}$ commute with each\n other and have eigenvalues 0 and 1. These are called the occupation\n number operators.\n\n\nThere is a normalized vector $\\lvert{\\text{vac}}\\rangle$, called the vacuum\n state, which is a mutual 0-eigenvector of all\n the $a^\\dagger_p a_p$.\n\n\nIf $\\lvert{\\psi}\\rangle$ is a 0-eigenvector of $a_p^\\dagger a_p$, then\n $a_p^\\dagger\\lvert{\\psi}\\rangle$ is a 1-eigenvector of $a_p^\\dagger a_p$.\n This explains why we say that $a_p^\\dagger$ creates a fermion in\n mode $p$.\n\n\nIf $\\lvert{\\psi}\\rangle$ is a 1-eigenvector of $a_p^\\dagger a_p$, then\n $a_p\\lvert{\\psi}\\rangle$ is a 0-eigenvector of $a_p^\\dagger a_p$. This\n explains why we say that $a_p$ annihilates a fermion in mode $p$.\n\n\n$a_p^2 = 0$ for all $p$. One cannot create or annihilate a fermion\n in the same mode twice.\n\n\nThe set of $2^N$ vectors\n $$\\lvert n_0, \\ldots, n_{N-1} \\rangle :=\n (a^\\dagger_0)^{n_0} \\cdots (a^\\dagger_{N-1})^{n_{N-1}} \\lvert{\\text{vac}}\\rangle,\n \\qquad n_0, \\ldots, n_{N-1} \\in {0, 1}$$\n are orthonormal. We can assume they form a basis for the entire vector space.\n\n\nThe annihilation operators $a_p$ act on this basis as follows:\n $$\\begin{aligned} a_p \\lvert n_0, \\ldots, n_{p-1}, 1, n_{p+1}, \\ldots, n_{N-1} \\rangle &= (-1)^{\\sum_{q=0}^{p-1} n_q} \\lvert n_0, \\ldots, n_{p-1}, 0, n_{p+1}, \\ldots, n_{N-1} \\rangle \\,, \\ a_p \\lvert n_0, \\ldots, n_{p-1}, 0, n_{p+1}, \\ldots, n_{N-1} \\rangle &= 0 \\,.\\end{aligned}$$\n\n\nSee here for a derivation and discussion of these\nconsequences.\nMapping fermions to qubits with transforms\nTo simulate a system of fermions on a quantum computer, we must choose a representation of the ladder operators on the Hilbert space of the qubits. In other words, we must designate a set of qubit operators (matrices) which satisfy the canonical anticommutation relations. Qubit operators are written in terms of the Pauli matrices $X$, $Y$, and $Z$. In OpenFermion a representation is specified by a transform function which maps fermionic operators (typically instances of FermionOperator) to qubit operators (instances of QubitOperator). 
In this demo we will discuss the Jordan-Wigner and Bravyi-Kitaev transforms, which are implemented by the functions jordan_wigner and bravyi_kitaev.\nThe Jordan-Wigner Transform\nUnder the Jordan-Wigner Transform (JWT), the annihilation operators are mapped to qubit operators as follows:\n$$\\begin{aligned}\n a_p &\\mapsto \\frac{1}{2} (X_p + \\mathrm{i}Y_p) Z_1 \\cdots Z_{p - 1} \\\n &= (\\lvert{0}\\rangle\\langle{1}\\rvert)p Z_1 \\cdots Z{p - 1} \\\n &=: \\tilde{a}_p.\n\\end{aligned}$$\nThis operator has the following action on a computational basis vector\n$\\lvert z_0, \\ldots, z_{N-1} \\rangle$:\n$$\\begin{aligned}\n \\tilde{a}p \\lvert z_0 \\ldots, z{p-1}, 1, z_{p+1}, \\ldots, z_{N-1} \\rangle &=\n (-1)^{\\sum_{q=0}^{p-1} z_q} \\lvert z_0 \\ldots, z_{p-1}, 0, z_{p+1}, \\ldots, z_{N-1} \\rangle \\\n \\tilde{a}p \\lvert z_0 \\ldots, z{p-1}, 0, z_{p+1}, \\ldots, z_{N-1} \\rangle &= 0.\n \\end{aligned}$$\nNote that $\\lvert n_0, \\ldots, n_{N-1} \\rangle$ is a basis vector in the Hilbert space of fermions, while $\\lvert z_0, \\ldots, z_{N-1} \\rangle$ is a basis vector in the Hilbert space of qubits. Similarly, in OpenFermion $a_p$ is a FermionOperator while $\\tilde{a}_p$ is a QubitOperator.\nLet's instantiate some FermionOperators, map them to QubitOperators using the JWT, and check that the resulting operators satisfy the expected relations.",
"from openfermion import *\n\n# Create some ladder operators\nannihilate_2 = FermionOperator('2')\ncreate_2 = FermionOperator('2^')\nannihilate_5 = FermionOperator('5')\ncreate_5 = FermionOperator('5^')\n\n# Construct occupation number operators\nnum_2 = create_2 * annihilate_2\nnum_5 = create_5 * annihilate_5\n\n# Map FermionOperators to QubitOperators using the JWT\nannihilate_2_jw = jordan_wigner(annihilate_2)\ncreate_2_jw = jordan_wigner(create_2)\nannihilate_5_jw = jordan_wigner(annihilate_5)\ncreate_5_jw = jordan_wigner(create_5)\nnum_2_jw = jordan_wigner(num_2)\nnum_5_jw = jordan_wigner(num_5)\n\n# Create QubitOperator versions of zero and identity\nzero = QubitOperator()\nidentity = QubitOperator(())\n\n# Check the canonical anticommutation relations\nassert anticommutator(annihilate_5_jw, annihilate_2_jw) == zero\nassert anticommutator(annihilate_5_jw, annihilate_5_jw) == zero\nassert anticommutator(annihilate_5_jw, create_2_jw) == zero\nassert anticommutator(annihilate_5_jw, create_5_jw) == identity\n\n# Check that the occupation number operators commute\nassert commutator(num_2_jw, num_5_jw) == zero\n\n# Print some output\nprint(\"annihilate_2_jw = \\n{}\".format(annihilate_2_jw))\nprint('')\nprint(\"create_2_jw = \\n{}\".format(create_2_jw))\nprint('')\nprint(\"annihilate_5_jw = \\n{}\".format(annihilate_5_jw))\nprint('')\nprint(\"create_5_jw = \\n{}\".format(create_5_jw))\nprint('')\nprint(\"num_2_jw = \\n{}\".format(num_2_jw))\nprint('')\nprint(\"num_5_jw = \\n{}\".format(num_5_jw))",
"The parity transform\nBy comparing the action of $\\tilde{a}p$ on $\\lvert z_0, \\ldots, z{N-1} \\rangle$ in the JWT with the action of $a_p$ on $\\lvert n_0, \\ldots, n_{N-1} \\rangle$ (described in the first section of this demo), we can see that the JWT is associated with a particular mapping of bitstrings $e: {0, 1}^N \\to {0, 1}^N$, namely, the identity map $e(x) = x$. In other words, under the JWT, the fermionic basis vector $\\lvert n_0, \\ldots, n_{N-1} \\rangle$ is represented by the computational basis vector $\\lvert z_0, \\ldots, z_{N-1} \\rangle$, where $z_p = n_p$ for all $p$. We can write this as\n$$\\lvert x \\rangle \\mapsto \\lvert e(x) \\rangle,$$\nwhere the vector on the left is fermionic and the vector on the right is qubit. We call the mapping $e$ an encoder.\nThere are other transforms which are associated with different encoders. To see why we might be interested in these other transforms, observe that under the JWT, $\\tilde{a}_p$ acts not only on qubit $p$ but also on qubits $0, \\ldots, p-1$. This means that fermionic operators with low weight can get mapped to qubit operators with high weight, where by weight we mean the number of modes or qubits an operators acts on. There are some disadvantages to having high-weight operators; for instance, they may require more gates to simulate and are more expensive to measure on some near-term hardware platforms. In the worst case, the annihilation operator on the last mode will map to an operator which acts on all the qubits. To emphasize this point let's apply the JWT to the annihilation operator on mode 99:",
"print(jordan_wigner(FermionOperator('99')))",
"The purpose of the string of Pauli $Z$'s is to introduce the phase factor $(-1)^{\\sum_{q=0}^{p-1} n_q}$ when acting on a computational basis state; when $e$ is the identity encoder, the modulo-2 sum $\\sum_{q=0}^{p-1} n_q$ is computed as $\\sum_{q=0}^{p-1} z_q$, which requires reading $p$ bits and leads to a Pauli $Z$ string with weight $p$. A simple solution to this problem is to consider instead the encoder defined by\n$$e(x)p = \\sum{q=0}^p x_q \\quad (\\text{mod 2}),$$\nwhich is associated with the mapping of basis vectors\n$\\lvert n_0, \\ldots, n_{N-1} \\rangle \\mapsto \\lvert z_0, \\ldots, z_{N-1} \\rangle,$\nwhere $z_p = \\sum_{q=0}^p n_q$ (again addition is modulo 2). With this encoding, we can compute the sum $\\sum_{q=0}^{p-1} n_q$ by reading just one bit because this is the value stored by $z_{p-1}$. The associated transform is called the parity transform because the $p$-th qubit is storing the parity (modulo-2 sum) of modes $0, \\ldots, p$. Under the parity transform, annihilation operators are mapped as follows:\n$$\\begin{aligned}\n a_p &\\mapsto \\frac{1}{2} (X_p Z_{p - 1} + \\mathrm{i}Y_p) X_{p + 1} \\cdots X_{N} \\\n &= \\frac{1}{4} [(X_p + \\mathrm{i} Y_p) (I + Z_{p - 1}) -\n (X_p - \\mathrm{i} Y_p) (I - Z_{p - 1})]\n X_{p + 1} \\cdots X_{N} \\\n &= [(\\lvert{0}\\rangle\\langle{1}\\rvert)p (\\lvert{0}\\rangle\\langle{0}\\rvert){p - 1} -\n (\\lvert{0}\\rangle\\langle{1}\\rvert)p (\\lvert{1}\\rangle\\langle{1}\\rvert){p - 1}]\n X_{p + 1} \\cdots X_{N} \\\n\\end{aligned}$$\nThe term in brackets in the last line means \"if $z_p = n_p$ then annihilate in mode $p$; otherwise, create in mode $p$ and attach a minus sign\". The value stored by $z_{p-1}$ contains the information needed to determine whether a minus sign should be attached or not. However, now there is a string of Pauli $X$'s acting on modes $p+1, \\ldots, N-1$ and hence using the parity transform also yields operators with high weight. These Pauli $X$'s perform the necessary update to $z_{p+1}, \\ldots, z_{N-1}$ which is needed if the value of $n_{p}$ changes. In the worst case, the annihilation operator on the first mode will map to an operator which acts on all the qubits.\nSince the parity transform does not offer any advantages over the JWT, OpenFermion does not include a standalone function to perform it. However, there is functionality for defining new transforms by specifying an encoder and decoder pair, also known as a binary code (in our examples the decoder is simply the inverse mapping), and the binary code which defines the parity transform is included in the library as an example. See examples/binary_code_transforms_demo.ipynb for a demonstration of this functionality and how it can be used to reduce the qubit resources required for certain applications.\nLet's use this functionality to map our previously instantiated FermionOperators to QubitOperators using the parity transform with 10 total modes and check that the resulting operators satisfy the expected relations.",
"# Set the number of modes in the system\nn_modes = 10\n\n# Define a function to perform the parity transform\ndef parity(fermion_operator, n_modes):\n return binary_code_transform(fermion_operator, parity_code(n_modes))\n\n# Map FermionOperators to QubitOperators using the parity transform\nannihilate_2_parity = parity(annihilate_2, n_modes)\ncreate_2_parity = parity(create_2, n_modes)\nannihilate_5_parity = parity(annihilate_5, n_modes)\ncreate_5_parity = parity(create_5, n_modes)\nnum_2_parity = parity(num_2, n_modes)\nnum_5_parity = parity(num_5, n_modes)\n\n# Check the canonical anticommutation relations\nassert anticommutator(annihilate_5_parity, annihilate_2_parity) == zero\nassert anticommutator(annihilate_5_parity, annihilate_5_parity) == zero\nassert anticommutator(annihilate_5_parity, create_2_parity) == zero\nassert anticommutator(annihilate_5_parity, create_5_parity) == identity\n\n# Check that the occupation number operators commute\nassert commutator(num_2_parity, num_5_parity) == zero\n\n# Print some output\nprint(\"annihilate_2_parity = \\n{}\".format(annihilate_2_parity))\nprint('')\nprint(\"create_2_parity = \\n{}\".format(create_2_parity))\nprint('')\nprint(\"annihilate_5_parity = \\n{}\".format(annihilate_5_parity))\nprint('')\nprint(\"create_5_parity = \\n{}\".format(create_5_parity))\nprint('')\nprint(\"num_2_parity = \\n{}\".format(num_2_parity))\nprint('')\nprint(\"num_5_parity = \\n{}\".format(num_5_parity))",
"Now let's map one of the FermionOperators again but with the total number of modes set to 100.",
"print(parity(annihilate_2, 100))",
"Note that with the JWT, it is not necessary to specify the total number of modes in the system because $\\tilde{a}_p$ only acts on qubits $0, \\ldots, p$ and not any higher ones.\nThe Bravyi-Kitaev transform\nThe discussion above suggests that we can think of the action of a transformed annihilation operator $\\tilde{a}p$ on a computational basis vector $\\lvert z \\rangle$ as a 4-step classical algorithm:\n1. Check if $n_p = 0$. If so, then output the zero vector. Otherwise,\n2. Update the bit stored by $z_p$.\n3. Update the rest of the bits $z_q$, $q \\neq p$.\n4. Multiply by the parity $\\sum{q=0}^{p-1} n_p$.\nUnder the JWT, Steps 1, 2, and 3 are represented by the operator $(\\lvert{0}\\rangle\\langle{1}\\rvert)p$ and Step 4 is accomplished by the operator $Z{0} \\cdots Z_{p-1}$ (Step 3 actually requires no action).\nUnder the parity transform, Steps 1, 2, and 4 are represented by the operator\n$(\\lvert{0}\\rangle\\langle{1}\\rvert)p (\\lvert{0}\\rangle\\langle{0}\\rvert){p - 1} -\n(\\lvert{0}\\rangle\\langle{1}\\rvert)p (\\lvert{1}\\rangle\\langle{1}\\rvert){p - 1}$ and Step 3 is accomplished by the operator $X_{p+1} \\cdots X_{N-1}$.\nTo obtain a simpler description of these and other transforms (with an aim at generalizing), it is better to put aside the ladder operators and work with an alternative set of $2N$ operators defined by\n$$c_p = a_p + a_p^\\dagger\\,, \\qquad d_p = -\\mathrm{i} (a_p - a_p^\\dagger)\\,.$$\nThese operators are known as Majorana operators. Note that if we describe how Majorana operators should be transformed, then we also know how the annihilation operators should be transformed, since\n$$a_p = \\frac{1}{2} (c_p + \\mathrm{i} d_p).$$\nFor simplicity, let's consider just the $c_p$; the $d_p$ are treated similarly. The action of $c_p$ on a fermionic basis vector is given by\n$$c_p \\lvert n_0, \\ldots, n_{p-1}, n_p, n_{p+1}, \\ldots, n_{N-1} \\rangle =\n(-1)^{\\sum_{q=0}^{p-1} n_q} \\lvert n_0, \\ldots, n_{p-1}, 1 - n_p, n_{p+1}, \\ldots, n_{N-1} \\rangle$$\nIn words, $c_p$ flips the occupation of mode $p$ and multiplies by the ever-present parity factor. If we transform $c_p$ to a qubit operator $\\tilde{c}p$, we should be able to describe the action of $\\tilde{c}_p$ on a computational basis vector $\\lvert z \\rangle$ with a 2-step classical algorithm:\n1. Update the string $z$ to a new string $z'$.\n2. Multiply by the parity $\\sum{q=0}^{p-1} n_q$.\nStep 1 amounts to flipping some bits, so it will be performed by some Pauli $X$'s, and Step 2 will be performed by some Pauli $Z$'s. So $\\tilde{c}p$ should take the form\n$$\\tilde{c}_p = X{U(p)} Z_{P(p - 1)},$$\nwhere $U(j)$ is the set of bits that need to be updated upon flipping $n_j$, and $P(j)$ is a set of bits that stores the sum $\\sum_{q=0}^{j} n_q$ (let's define $P(-1)$ to be the empty set). Let's see how this looks under the JWT and parity transforms.",
"# Create a Majorana operator from our existing operators\nc_5 = annihilate_5 + create_5\n\n# Set the number of modes (required for the parity transform)\nn_modes = 10\n\n# Transform the Majorana operator to a QubitOperator in two different ways\nc_5_jw = jordan_wigner(c_5)\nc_5_parity = parity(c_5, n_modes)\n\n# Print some output\nprint(\"c_5_jw = \\n{}\".format(c_5_jw))\nprint('')\nprint(\"c_5_parity = \\n{}\".format(c_5_parity))",
"For the JWT, $U(j) = {j}$ and $P(j) = {0, \\ldots, j}$, whereas for the parity transform, $U(j) = {j, \\ldots, N-1}$ and $P(j) = {j}$. The size of these sets can be as large as $N$, the total number of modes. These sets are determined by the encoding function $e$.\nIt is possible to pick a clever encoder with the property that these sets have size $O(\\log N)$. The corresponding transform will map annihilation operators to qubit operators with weight $O(\\log N)$, which is much smaller than the $\\Omega(N)$ weight associated with the JWT and parity transforms. This fact was noticed by Bravyi and Kitaev, and later Havlíček and others pointed out that the encoder which achieves this is implemented by a classical data structure called a Fenwick tree. The transforms described in these two papers actually correspond to different variants of the Fenwick tree data structure and give different results when the total number of modes is not a power of 2. OpenFermion implements the one from the first paper as bravyi_kitaev and the one from the second paper as bravyi_kitaev_tree. Generally, the first one (bravyi_kitaev) is preferred because it results in operators with lower weight and is faster to compute.\nLet's transform our previously instantiated Majorana operator using the Bravyi-Kitaev transform.",
"c_5_bk = bravyi_kitaev(c_5, n_modes)\nprint(\"c_5_bk = \\n{}\".format(c_5_bk))",
"The advantage of the Bravyi-Kitaev transform is not apparent in a system with so few modes. Let's look at a system with 100 modes.",
"n_modes = 100\n\n# Initialize some Majorana operators\nc_17 = FermionOperator('[17] + [17^]')\nc_50 = FermionOperator('[50] + [50^]')\nc_73 = FermionOperator('[73] + [73^]')\n\n# Map to QubitOperators\nc_17_jw = jordan_wigner(c_17)\nc_50_jw = jordan_wigner(c_50)\nc_73_jw = jordan_wigner(c_73)\nc_17_parity = parity(c_17, n_modes)\nc_50_parity = parity(c_50, n_modes)\nc_73_parity = parity(c_73, n_modes)\nc_17_bk = bravyi_kitaev(c_17, n_modes)\nc_50_bk = bravyi_kitaev(c_50, n_modes)\nc_73_bk = bravyi_kitaev(c_73, n_modes)\n\n# Print some output\nprint(\"Jordan-Wigner\\n\"\n \"-------------\")\nprint(\"c_17_jw = \\n{}\".format(c_17_jw))\nprint('')\nprint(\"c_50_jw = \\n{}\".format(c_50_jw))\nprint('')\nprint(\"c_73_jw = \\n{}\".format(c_73_jw))\nprint('')\nprint(\"Parity\\n\"\n \"------\")\nprint(\"c_17_parity = \\n{}\".format(c_17_parity))\nprint('')\nprint(\"c_50_parity = \\n{}\".format(c_50_parity))\nprint('')\nprint(\"c_73_parity = \\n{}\".format(c_73_parity))\nprint('')\nprint(\"Bravyi-Kitaev\\n\"\n \"-------------\")\nprint(\"c_17_bk = \\n{}\".format(c_17_bk))\nprint('')\nprint(\"c_50_bk = \\n{}\".format(c_50_bk))\nprint('')\nprint(\"c_73_bk = \\n{}\".format(c_73_bk))",
"Now let's go back to a system with 10 modes and check that the Bravyi-Kitaev transformed operators satisfy the expected relations.",
"# Set the number of modes in the system\nn_modes = 10\n\n# Map FermionOperators to QubitOperators using the Bravyi-Kitaev transform\nannihilate_2_bk = bravyi_kitaev(annihilate_2, n_modes)\ncreate_2_bk = bravyi_kitaev(create_2, n_modes)\nannihilate_5_bk = bravyi_kitaev(annihilate_5, n_modes)\ncreate_5_bk = bravyi_kitaev(create_5, n_modes)\nnum_2_bk = bravyi_kitaev(num_2, n_modes)\nnum_5_bk = bravyi_kitaev(num_5, n_modes)\n\n# Check the canonical anticommutation relations\nassert anticommutator(annihilate_5_bk, annihilate_2_bk) == zero\nassert anticommutator(annihilate_5_bk, annihilate_5_bk) == zero\nassert anticommutator(annihilate_5_bk, create_2_bk) == zero\nassert anticommutator(annihilate_5_bk, create_5_bk) == identity\n\n# Check that the occupation number operators commute\nassert commutator(num_2_bk, num_5_bk) == zero\n\n# Print some output\nprint(\"annihilate_2_bk = \\n{}\".format(annihilate_2_bk))\nprint('')\nprint(\"create_2_bk = \\n{}\".format(create_2_bk))\nprint('')\nprint(\"annihilate_5_bk = \\n{}\".format(annihilate_5_bk))\nprint('')\nprint(\"create_5_bk = \\n{}\".format(create_5_bk))\nprint('')\nprint(\"num_2_bk = \\n{}\".format(num_2_bk))\nprint('')\nprint(\"num_5_bk = \\n{}\".format(num_5_bk))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
geoffbacon/semrep | semrep/evaluate/koehn/koehn.ipynb | mit | [
"Köhn\nIn this notebook I replicate Koehn (2015): What's in an embedding? Analyzing word embeddings through multilingual evaluation. This paper proposes to i) evaluate an embedding method on more than one language, and ii) evaluate an embedding model by how well its embeddings capture syntactic features. He uses an L2-regularized linear classifier, with an upper baseline that assigns the most frequent class. He finds that most methods perform similarly on this task, but that dependency based embeddings perform better. Dependency based embeddings particularly perform better when you decrease the dimensionality. Overall, the aim is to have an evalation method that tells you something about the structure of the learnt representations. He evaulates a range of different models on their ability to capture a number of different morphosyntactic features in a bunch of languages.\nEmbedding models tested:\n- cbow\n- skip-gram\n- glove\n- dep\n- cca\n- brown\nFeatures tested:\n- pos\n- headpos (the pos of the word's head)\n- label\n- gender\n- case\n- number\n- tense\nLanguages tested:\n- Basque\n- English\n- French\n- German\n- Hungarian\n- Polish\n- Swedish\nWord embeddings were trained on automatically PoS-tagged and dependency-parsed data using existing models. This is so the dependency-based embeddings can be trained. The evaluation is on hand-labelled data. English training data is a subset of Wikipedia; English test data comes from PTB. For all other languages, both the training and test data come from a shared task on parsing morphologically rich languages. Koehn trained embeddings with window size 5 and 11 and dimensionality 10, 100, 200.\nDependency-based embeddings perform the best on almost all tasks. They even do well when the dimensionality is reduced to 10, while other methods perform poorly in this case.\nI'll need:\n- models\n- learnt representations\n- automatically labeled data\n- hand-labeled data",
"%matplotlib inline\nimport os\nimport csv\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\nfrom sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.model_selection import train_test_split, StratifiedKFold\nfrom sklearn.metrics import roc_curve, roc_auc_score, classification_report, confusion_matrix\nfrom sklearn.preprocessing import LabelEncoder\n\ndata_path = '../../data'\ntmp_path = '../../tmp'",
"Learnt representations\nGloVe",
"size = 50\nfname = 'embeddings/glove.6B.{}d.txt'.format(size)\nglove_path = os.path.join(data_path, fname)\nglove = pd.read_csv(glove_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE)\nglove.head()",
"Features",
"fname = 'UD_English/features.csv'\nfeatures_path = os.path.join(data_path, os.path.join('evaluation/dependency', fname))\nfeatures = pd.read_csv(features_path).set_index('form')\nfeatures.head()\n\ndf = pd.merge(glove, features, how='inner', left_index=True, right_index=True)\ndf.head()",
"Prediction",
"def prepare_X_and_y(feature, data):\n \"\"\"Return X and y ready for predicting feature from embeddings.\"\"\"\n relevant_data = data[data[feature].notnull()]\n columns = list(range(1, size+1))\n X = relevant_data[columns]\n y = relevant_data[feature]\n train = relevant_data['set'] == 'train'\n test = (relevant_data['set'] == 'test') | (relevant_data['set'] == 'dev')\n X_train, X_test = X[train].values, X[test].values\n y_train, y_test = y[train].values, y[test].values\n return X_train, X_test, y_train, y_test\n\ndef predict(model, X_test):\n \"\"\"Wrapper for getting predictions.\"\"\"\n results = model.predict_proba(X_test)\n return np.array([t for f,t in results]).reshape(-1,1)\n\ndef conmat(model, X_test, y_test):\n \"\"\"Wrapper for sklearn's confusion matrix.\"\"\"\n y_pred = model.predict(X_test)\n c = confusion_matrix(y_test, y_pred)\n sns.heatmap(c, annot=True, fmt='d', \n xticklabels=model.classes_, \n yticklabels=model.classes_, \n cmap=\"YlGnBu\", cbar=False)\n plt.ylabel('Ground truth')\n plt.xlabel('Prediction')\n\ndef draw_roc(model, X_test, y_test):\n \"\"\"Convenience function to draw ROC curve.\"\"\"\n y_pred = predict(model, X_test)\n fpr, tpr, thresholds = roc_curve(y_test, y_pred)\n roc = roc_auc_score(y_test, y_pred)\n label = r'$AUC={}$'.format(str(round(roc, 3)))\n plt.plot(fpr, tpr, label=label);\n plt.title('ROC')\n plt.xlabel('False positive rate');\n plt.ylabel('True positive rate');\n plt.legend();\n\ndef cross_val_auc(model, X, y):\n for _ in range(5):\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)\n model = model.fit(X_train, y_train)\n draw_roc(model, X_test, y_test)\n\nX_train, X_test, y_train, y_test = prepare_X_and_y('Tense', df)\n\nmodel = LogisticRegression(penalty='l2', solver='liblinear')\nmodel = model.fit(X_train, y_train)\nconmat(model, X_test, y_test)\n\nsns.distplot(model.coef_[0], rug=True, kde=False);",
"Hyperparameter optimization before error analysis"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ccasotto/rmtk | rmtk/vulnerability/derivation_fragility/hybrid_methods/N2/N2.ipynb | agpl-3.0 | [
"N2 - Eurocode 8, CEN (2005)\nThis simplified nonlinear procedure for the estimation of the seismic response of structures uses capacity curves and inelastic spectra. This method has been developed to be used in combination with code-based response spectra, but it is also possible to employ it for the assessment of structural response subject to ground motion records. It also has the distinct aspect of assuming an elastic-perfectly plastic force-displacement relationship in the construction of the bilinear curve. This method is part of recommendations of the Eurocode 8 (CEN, 2005) for the seismic design of new structures, and the capacity curves are usually simplified by a elasto-perfectly plastic relationship.\nNote: To run the code in a cell:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.",
"import N2Method\nfrom rmtk.vulnerability.common import utils\n%matplotlib inline ",
"Load capacity curves\nIn order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.\nPlease provide the location of the file containing the capacity curves using the parameter capacity_curves_file.",
"capacity_curves_file = \"../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv\"\n\ncapacity_curves = utils.read_capacity_curves(capacity_curves_file)\nutils.plot_capacity_curves(capacity_curves)",
"Load ground motion records\nPlease indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.\nNote: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.\nThe parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.",
"gmrs_folder = \"../../../../../../rmtk_data/accelerograms\"\nminT, maxT = 0.1, 2.0\n\ngmrs = utils.read_gmrs(gmrs_folder)\n#utils.plot_response_spectra(gmrs, minT, maxT)",
"Load damage state thresholds\nPlease provide the path to your damage model file using the parameter damage_model_file in the cell below.\nThe damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.",
"damage_model_file = \"../../../../../../rmtk_data/damage_model.csv\"\n\ndamage_model = utils.read_damage_model(damage_model_file)",
"Obtain the damage probability matrix\nThe parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.",
"damping_ratio = 0.05\n\nPDM, Sds = N2Method.calculate_fragility(capacity_curves, gmrs, damage_model, damping_ratio)",
"Fit lognormal CDF fragility curves\nThe following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:\n1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are \"PGA\", \"Sd\" and \"Sa\".\n2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.\n3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are \"least squares\" and \"max likelihood\".",
"IMT = \"Sa\"\nperiod = 0.3\nregression_method = \"least squares\"\n\nfragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio, \n IMT, damage_model, regression_method)",
"Plot fragility functions\nThe following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:\n* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions",
"minIML, maxIML = 0.01, 3.00\n\nutils.plot_fragility_model(fragility_model, minIML, maxIML)\n# utils.plot_fragility_stats(fragility_statistics,minIML,maxIML)",
"Save fragility functions\nThe derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n2. minIML and maxIML: These parameters define the bounds of applicability of the functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\nminIML, maxIML = 0.01, 3.00\noutput_type = \"csv\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)",
"Obtain vulnerability function\nA vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. \nThe following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:\n1. cons_model_file: This parameter specifies the path of the consequence model file.\n2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.\n3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are \"lognormal\", \"beta\", and \"PMF\".",
"cons_model_file = \"../../../../../../rmtk_data/cons_model.csv\"\nimls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, \n 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00, \n 2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]\ndistribution_type = \"lognormal\"\n\ncons_model = utils.read_consequence_model(cons_model_file)\nvulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, \n imls, distribution_type)",
"Plot vulnerability function",
"utils.plot_vulnerability_model(vulnerability_model)",
"Save vulnerability function\nThe derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\noutput_type = \"csv\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maciejkula/spotlight | examples/movielens_explicit/movielens_explicit.ipynb | mit | [
"Explicit feedback movie recommendations\nIn this example, we'll build a quick explicit feedback recommender system: that is, a model that takes into account explicit feedback signals (like ratings) to recommend new content.\nWe'll use an approach first made popular by the Netflix prize contest: matrix factorization. \nThe basic idea is very simple:\n\nStart with user-item-rating triplets, conveying the information that user i gave some item j rating r.\nRepresent both users and items as high-dimensional vectors of numbers. For example, a user could be represented by [0.3, -1.2, 0.5] and an item by [1.0, -0.3, -0.6].\nThe representations should be chosen so that, when we multiplied together (via dot products), we can recover the original ratings.\nThe utility of the model then is derived from the fact that if we multiply the user vector of a user with the item vector of some item they have not rated, we hope to obtain a predicition for the rating they would have given to it had they seen it.\n\n<img src=\"static/matrix_factorization.png\" alt=\"Matrix factorization\" style=\"width: 600px;\"/>\nSpotlight fits models such as these using stochastic gradient descent. The procedure goes roughly as follows:\n\nStart with representing users and items by randomly chosen vectors. Because they are random, they are not going to give useful recommendations, but we are going to improve them as we fit the model.\nGo through the (user, item, rating) triplets in the dataset. For every triplet, compute the rating that the model predicts by multiplying the user and item vectors together, and compare the result with the actual rating: the closer they are, the better the model.\nIf the predicted rating is too low, adjust the user and item vectors (by a small amount) to increase the prediction.\nIf the predicted rating is too high, adjust the vectors to decrease it.\nContinue iterating over the training triplets until the model's accuracy stabilizes.\n\nThe data\nWe start with importing a famous dataset, the Movielens 100k dataset. It contains 100,000 ratings (between 1 and 5) given to 1683 movies by 944 users:",
"import numpy as np\n\nfrom spotlight.datasets.movielens import get_movielens_dataset\n\ndataset = get_movielens_dataset(variant='100K')\nprint(dataset)",
"The dataset object is an instance of an Interactions class, a fairly light-weight wrapper that Spotlight users to hold the arrays that contain information about an interactions dataset (such as user and item ids, ratings, and timestamps).\nThe model\nWe can feed our dataset to the ExplicitFactorizationModel class - and sklearn-like object that allows us to train and evaluate the explicit factorization models.\nInternally, the model uses the BilinearNet class to represents users and items. It's composed of a 4 embedding layers:\n\na (num_users x latent_dim) embedding layer to represent users,\na (num_items x latent_dim) embedding layer to represent items,\na (num_users x 1) embedding layer to represent user biases, and\na (num_items x 1) embedding layer to represent item biases.\n\nTogether, these give us the predictions. Their accuracy is evaluated using one of the Spotlight losses. In this case, we'll use the regression loss, which is simply the squared difference between the true and the predicted rating.",
"import torch\n\nfrom spotlight.factorization.explicit import ExplicitFactorizationModel\n\nmodel = ExplicitFactorizationModel(loss='regression',\n embedding_dim=128, # latent dimensionality\n n_iter=10, # number of epochs of training\n batch_size=1024, # minibatch size\n l2=1e-9, # strength of L2 regularization\n learning_rate=1e-3,\n use_cuda=torch.cuda.is_available())",
"In order to fit and evaluate the model, we need to split it into a train and a test set:",
"from spotlight.cross_validation import random_train_test_split\n\ntrain, test = random_train_test_split(dataset, random_state=np.random.RandomState(42))\n\nprint('Split into \\n {} and \\n {}.'.format(train, test))",
"With the data ready, we can go ahead and fit the model. This should take less than a minute on the CPU, and we should see the loss decreasing as the model is learning better and better representations for the user and items in our dataset.",
"model.fit(train, verbose=True)",
"Now that the model is estimated, how good are its predictions?",
"from spotlight.evaluation import rmse_score\n\ntrain_rmse = rmse_score(model, train)\ntest_rmse = rmse_score(model, test)\n\nprint('Train RMSE {:.3f}, test RMSE {:.3f}'.format(train_rmse, test_rmse))",
"Conclusions\nThis is a fairly simple model, and can be extended by adding side-information, adding more non-linear layers, and so on.\nHowever, before plunging into such extensions, it is worth knowing that models using explicit ratings have fallen out of favour both in academia and in industry. It is now widely accepted that what people choose to interact with is more meaningful than how they rate the interactions they have.\nThese scenarios are called implicit feedback settings. If you're interested in building these models, have a look at Spotlight's implicit factorization models, as well as the implicit sequence models which aim to explicitly model the sequential nature of interaction data."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
smalladi78/SEF | notebooks/61_TimeSeriesForChannel.ipynb | unlicense | [
"Time Series forecasting for donations to SEF\nThe organization has a vision to eradicate curable blindness by 2020 in India (http://giftofvision.org/mission-and-vision). That is a bold vision to be able to make such a prediction!\nIn this notebook, I am attempting to forecast the donations out into the future based on past donations.",
"from pandas.tslib import Timestamp\nimport statsmodels.api as sm\nfrom scipy import stats\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.style.use('ggplot')\n\ndef acf_pacf(ts, lags):\n fig = plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(211)\n fig = sm.graphics.tsa.plot_acf(ts, lags=lags, ax=ax1)\n ax2 = fig.add_subplot(212)\n fig = sm.graphics.tsa.plot_pacf(ts, lags=lags, ax=ax2)\n\ndef get_data_by_month(df):\n df_reindexed = df.reindex(pd.date_range(start=df.index.min(), end=df.index.max(), freq='1D'), fill_value=0)\n ym_series = pd.Series(df_reindexed.reset_index()['index'].\\\n apply(lambda dt: pd.to_datetime(\n dt.to_datetime().year*10000 + dt.to_datetime().month*100 + 1, format='%Y%m%d')))\n\n df_reindexed['activity_ym'] = ym_series.values\n return df_reindexed.groupby(['activity_ym']).amount.sum().to_frame()",
"Time Series analysis",
"donations = pd.read_pickle('out/21/donations.pkl')\n\ndf = donations[donations.is_service==False]\\\n .groupby(['activity_date', ])\\\n .amount\\\n .sum()\\\n .to_frame()\ndf = get_data_by_month(df)\n\nts = pd.Series(df['amount'])\nts.plot(figsize=(12,8))",
"The plot of the data shows that the data was much different before 2003.\nSo let us only consider data from 2003 onwards and plot the data again.\nObservations\nOriginal variable (amount) - (ts):\n1. The original variable is itself not stationary.\n2. The pacf and acf on the original variable cut off at lag of 1.\n3. The acf on the original variable indicates seasonality at 12 months.\nDifferenced variable (ts_diff):\n1. The differenced variable has mean 0 but has significant variability that is increasing.\nLog transformation on the original variable (log_ts):\n1. The log is also not stationary.\n2. The acf on log_ts show cut off at lag of 2.\n3. The pacf on log_ts show cut off at lag of 1.\nDifference on the log transformation on the original variable (log_ts_diff):\n1. The difference in the log appears to be stationary with mean 0 and constant variance from the plot of log_ts_diff.\nConsidering the seasonal portion of log_ts:\n1. The acf shows a gradual tailing off.\n2. The pacf indicates a cut off at lag of 2.\nBased on the above, we want to try out the following seasonal ARIMA models on log of the original variable:\n(p=2, d=1, q=1), (P=0, D=1, Q=2, S=12) => model1",
"df = donations[(donations.activity_year >= 2008) & (donations.is_service==False)]\\\n .groupby(['activity_date', ])\\\n .amount\\\n .sum()\\\n .to_frame()\ndf = get_data_by_month(df)\n\ndf.head()\n\nts = pd.Series(df['amount'])\nts.plot(figsize=(12,8))\n\n\nacf_pacf(ts, 20)\n\nts_diff = ts.diff(1)\nts_diff.plot(figsize=(12,8))\n\nlog_ts = np.log(pd.Series(df['amount']))\nlog_ts.plot(figsize=(12,8))\n\nacf_pacf(log_ts, 20)\n\nacf_pacf(log_ts, 60)\n\nlog_ts_diff = log_ts.diff(1)\nlog_ts_diff.plot(figsize=(12,8))",
"The above time plot looks great! I see that the residuals have a mean at zero with variability that is constant.\nLet us use the log(amount) as the property that we want to model on.\nModeling",
"model = sm.tsa.SARIMAX(log_ts, order=(1,1,1), seasonal_order=(0,1,1,12)).fit(enforce_invertibility=False)\n\nmodel.summary()\n\nacf_pacf(model.resid, 30)\n\n%%html\n<style>table {float:left}</style>",
"Model parameters\nNote: Even the best model could not git rid of the spike on the residuals (that are happening every 12 months)\nFollowing are the results of various models that I tried.\np|d|q|P|D|Q|S|AIC|BIC|Ljung-Box|Log-likelihood|ar.L1|ar.L2|ma.L1|ma.S.L12|sigma2|\n--|--|--|--|--|--|--|----|----|------|----|-------|-------|-------|-------|-------|\n0|1|1 |0|1|1|12|101|111|33|-46|0.3771||-0.9237|-0.9952|0.1325| <<-- The best model so far\n2|1|1 |0|1|1|12|102|115|35|-46|0.3615|-978|-1.15|-1|0.0991\n2|1|0 |0|1|1|12|110|121|46|-51|-0.32|-0.27|-1|-1|0.15\n1|1|0 |0|1|1|12|114|122|39|-54|-0.2636|-0.99|0.1638||\n0|1|0 |0|1|1|12|118|123|46|-57|-0.99|0.1748|||\n0|1|0 |1|1|0|12|136|151|57|-66|-0.58|0.2781|||",
"ts_predict = ts.append(model.predict(alpha=0.05, start=len(log_ts), end=len(log_ts)+12))\n\nts_predict.plot(figsize=(12,8))",
"Predictions",
"new_ts = ts[ts.index.year < 2015]\nnew_log_ts = log_ts[log_ts.index.year < 2015]\nnew_model = sm.tsa.SARIMAX(new_log_ts, order=(0,1,1), seasonal_order=(0,1,1,12), enforce_invertibility=False).fit()\n\nts_predict = new_ts.append(new_model.predict(start=len(new_log_ts), end=len(new_log_ts)+30).apply(np.exp))\nts_predict[len(new_log_ts):].plot(figsize=(12,8), color='b', label='Predicted')\nts.plot(figsize=(12,8), color='r', label='Actual')\n",
"Make pretty pictures for presentation",
"fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(10,10))\nax1.plot(ts)\nax1.set_title('Amount')\nax2.plot(ts_diff)\nax2.set_title('Difference of Amount')\nax3.plot(log_ts_diff)\nax3.set_title('Difference of Log(Amount)')\nplt.savefig('viz/TimeSeriesAnalysis.png')\n\nfig = plt.figure(figsize=(12,12))\nax1 = fig.add_subplot(311)\nax1.plot(ts)\nax2 = fig.add_subplot(312)\nfig = sm.graphics.tsa.plot_acf(ts, lags=60, ax=ax2)\nax3 = fig.add_subplot(313)\nfig = sm.graphics.tsa.plot_pacf(ts, lags=60, ax=ax3)\n\nplt.tight_layout()\nplt.savefig('viz/ts_acf_pacf.png')\n\nts_predict = new_ts.append(new_model.predict(start=len(new_log_ts), end=len(new_log_ts)+30).apply(np.exp))\nts_predict_1 = ts_predict/1000000\nts_1 = ts/1000000\nf = plt.figure(figsize=(12,8))\nax = f.add_subplot(111)\nplt.ylabel('Amount donated (in millions of dollars)', fontsize=16)\nplt.xlabel('Year of donation', fontsize=16)\nax.plot(ts_1, color='r', label='Actual')\nax.plot(ts_predict_1[len(new_log_ts):], color='b', label='Predicted')\nplt.legend(prop={'size':16}, loc='upper center')\nplt.savefig('viz/TimeSeriesPrediction.png')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tkzeng/molecular-design-toolkit | moldesign/_notebooks/Example 2. UV-vis absorption spectra.ipynb | apache-2.0 | [
"<span style=\"float:right\">\n<a href=\"http://moldesign.bionano.autodesk.com/\" target=\"_blank\" title=\"About\">About</a> \n<a href=\"https://forum.bionano.autodesk.com/c/Molecular-Design-Toolkit\" target=\"_blank\" title=\"Forum\">Forum</a> \n<a href=\"https://github.com/autodesk/molecular-design-toolkit/issues\" target=\"_blank\" title=\"Issues\">Issues</a> \n<a href=\"http://bionano.autodesk.com/MolecularDesignToolkit/explore.html\" target=\"_blank\" title=\"Tutorials\">Tutorials</a> \n<a href=\"http://autodesk.github.io/molecular-design-toolkit/\" target=\"_blank\" title=\"Documentation\">Documentation</a></span>\n</span>\n\n<br>\n<center><h1>Example 2: Using MD sampling to calculate UV-Vis spectra</h1> </center>\n\nThis notebook uses basic quantum chemical calculations to calculate the absorption spectra of a small molecule.\n\nAuthor: Aaron Virshup, Autodesk Research<br>\nCreated on: September 23, 2016\nTags: excited states, CASSCF, absorption, sampling",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib.pylab import *\n\ntry: import seaborn #optional, makes plots look nicer\nexcept ImportError: pass\n\nimport moldesign as mdt\nfrom moldesign import units as u",
"Contents\n\n\nSingle point\nSampling\nPost-processing\nCreate spectrum\n\nSingle point\nLet's start with calculating the vertical excitation energy and oscillator strengths at the ground state minimum (aka Franck-Condon) geometry.\nNote that the active space and number of included states here is system-specific.",
"qmmol = mdt.from_name('benzene')\nqmmol.set_energy_model(mdt.models.CASSCF, active_electrons=6,\n active_orbitals=6, state_average=6, basis='sto-3g')\n\nproperties = qmmol.calculate()",
"This cell print a summary of the possible transitions. \nNote: you can convert excitation energies directly to nanometers using Pint by calling energy.to('nm', 'spectroscopy').",
"for fstate in xrange(1, len(qmmol.properties.state_energies)):\n excitation_energy = properties.state_energies[fstate] - properties.state_energies[0]\n \n print '--- Transition from S0 to S%d ---' % fstate \n print 'Excitation wavelength: %s' % excitation_energy.to('nm', 'spectroscopy')\n print 'Oscillator strength: %s' % qmmol.properties.oscillator_strengths[0,fstate]",
"Sampling\nOf course, molecular spectra aren't just a set of discrete lines - they're broadened by several mechanisms. We'll treat vibrations here by sampling the molecule's motion on the ground state at 300 Kelvin.\nTo do this, we'll sample its geometries as it moves on the ground state by:\n 1. Create a copy of the molecule\n 2. Assign a forcefield (GAFF2/AM1-BCC)\n 3. Run dynamics for 5 ps, taking a snapshot every 250 fs, for a total of 20 separate geometries.",
"mdmol = mdt.Molecule(qmmol)\nmdmol.set_energy_model(mdt.models.GAFF)\nmdmol.minimize()\n\nmdmol.set_integrator(mdt.integrators.OpenMMLangevin, frame_interval=250*u.fs,\n timestep=0.5*u.fs, constrain_hbonds=False, remove_rotation=True,\n remove_translation=True, constrain_water=False)\nmdtraj = mdmol.run(5.0 * u.ps)",
"Post-processing\nNext, we calculate the spectrum at each sampled geometry.",
"post_traj = mdt.Trajectory(qmmol)\nfor frame in mdtraj:\n qmmol.positions = frame.positions\n qmmol.calculate()\n post_traj.new_frame()",
"This cell plots the results - wavelength vs. oscillator strength at each geometry for each transition:",
"wavelengths_to_state = []\noscillators_to_state = []\nfor i in xrange(1, len(qmmol.properties.state_energies)):\n wavelengths_to_state.append( (post_traj.state_energies[:,i] - post_traj.potential_energy).to('nm', 'spectroscopy'))\n oscillators_to_state.append([o[0,i] for o in post_traj.oscillator_strengths])\n\n \nfor istate, (w,o) in enumerate(zip(wavelengths_to_state, oscillators_to_state)):\n plot(w,o, label='S0 -> S%d'%(istate+1),\n marker='o', linestyle='none')\nxlabel('wavelength / nm'); ylabel('oscillator strength'); legend()",
"Create spectrum\nWe're finally ready to calculate a spectrum - we'll create a histogram of all calculated transition wavelengths over all states, weighted by the oscillator strengths.",
"from itertools import chain\nall_wavelengths = u.array(list(chain(*wavelengths_to_state)))\nall_oscs = u.array(list(chain(*oscillators_to_state)))\nhist(all_wavelengths, weights=all_oscs, bins=50)\nxlabel('wavelength / nm')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
deepmind/reverb | examples/demo.ipynb | apache-2.0 | [
"Copyright 2019 DeepMind Technologies Limited.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Environments\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/demo.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/deepmind/reverb/blob/master/examples/demo.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>\n\nIntroduction\nThis colab is a demonstration of how to use Reverb through examples.\nSetup\nInstalls the stable build of Reverb (dm-reverb) and TensorFlow (tf) to match.",
"!pip install dm-tree\n!pip install dm-reverb[tensorflow]\n\nimport reverb\nimport tensorflow as tf",
"The code below defines a dummy RL environment for use in the examples below.",
"OBSERVATION_SPEC = tf.TensorSpec([10, 10], tf.uint8)\nACTION_SPEC = tf.TensorSpec([2], tf.float32)\n\ndef agent_step(unused_timestep) -> tf.Tensor:\n return tf.cast(tf.random.uniform(ACTION_SPEC.shape) > .5,\n ACTION_SPEC.dtype)\n\ndef environment_step(unused_action) -> tf.Tensor:\n return tf.cast(tf.random.uniform(OBSERVATION_SPEC.shape, maxval=256),\n OBSERVATION_SPEC.dtype)",
"Creating a Server and Client",
"# Initialize the reverb server.\nsimple_server = reverb.Server(\n tables=[\n reverb.Table(\n name='my_table',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n # Sets Rate Limiter to a low number for the examples.\n # Read the Rate Limiters section for usage info.\n rate_limiter=reverb.rate_limiters.MinSize(2),\n # The signature is optional but it is good practice to set it as it\n # enables data validation and easier dataset construction. Note that\n # we prefix all shapes with a 3 as the trajectories we'll be writing\n # consist of 3 timesteps.\n signature={\n 'actions':\n tf.TensorSpec([3, *ACTION_SPEC.shape], ACTION_SPEC.dtype),\n 'observations':\n tf.TensorSpec([3, *OBSERVATION_SPEC.shape],\n OBSERVATION_SPEC.dtype),\n },\n )\n ],\n # Sets the port to None to make the server pick one automatically.\n # This can be omitted as it's the default.\n port=None)\n\n# Initializes the reverb client on the same port as the server.\nclient = reverb.Client(f'localhost:{simple_server.port}')",
"For details on customizing the sampler, remover, and rate limiter, see below.\nExample 1: Overlapping Trajectories\nInserting Overlapping Trajectories",
"# Dynamically adds trajectories of length 3 to 'my_table' using a client writer.\n\nwith client.trajectory_writer(num_keep_alive_refs=3) as writer:\n timestep = environment_step(None)\n for step in range(4):\n action = agent_step(timestep)\n writer.append({'action': action, 'observation': timestep})\n timestep = environment_step(action)\n\n if step >= 2:\n # In this example, the item consists of the 3 most recent timesteps that\n # were added to the writer and has a priority of 1.5.\n writer.create_item(\n table='my_table',\n priority=1.5,\n trajectory={\n 'actions': writer.history['action'][-3:],\n 'observations': writer.history['observation'][-3:],\n }\n )",
"The animation illustrates the state of the server at each step in the\nabove code block. Although each item is being set to have the same\npriority value of 1.5, items do not need to have the same priority values.\nIn real world scenarios, items would have differing and\ndynamically-calculated priority values.\n<img src=\"https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram1.svg\" />\nSampling Overlapping Trajectories in TensorFlow",
"# Dataset samples sequences of length 3 and streams the timesteps one by one.\n# This allows streaming large sequences that do not necessarily fit in memory.\ndataset = reverb.TrajectoryDataset.from_table_signature(\n server_address=f'localhost:{simple_server.port}',\n table='my_table',\n max_in_flight_samples_per_worker=10)\n\n\n# Batches 2 sequences together.\n# Shapes of items is now [2, 3, 10, 10].\nbatched_dataset = dataset.batch(2)\n\nfor sample in batched_dataset.take(1):\n # Results in the following format.\n print(sample.info.key) # ([2], uint64)\n print(sample.info.probability) # ([2], float64)\n\n print(sample.data['observations']) # ([2, 3, 10, 10], uint8)\n print(sample.data['actions']) # ([2, 3, 2], float32)",
"Example 2: Complete Episodes\nCreate a new server for this example to keep the elements of the priority table consistent.",
"EPISODE_LENGTH = 150\n\ncomplete_episode_server = reverb.Server(tables=[\n reverb.Table(\n name='my_table',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n # Sets Rate Limiter to a low number for the examples.\n # Read the Rate Limiters section for usage info.\n rate_limiter=reverb.rate_limiters.MinSize(2),\n # The signature is optional but it is good practice to set it as it\n # enables data validation and easier dataset construction. Note that\n # the number of observations is larger than the number of actions.\n # The extra observation is the terminal state where no action is\n # taken.\n signature={\n 'actions':\n tf.TensorSpec([EPISODE_LENGTH, *ACTION_SPEC.shape],\n ACTION_SPEC.dtype),\n 'observations':\n tf.TensorSpec([EPISODE_LENGTH + 1, *OBSERVATION_SPEC.shape],\n OBSERVATION_SPEC.dtype),\n },\n ),\n])\n\n# Initializes the reverb client on the same port.\nclient = reverb.Client(f'localhost:{complete_episode_server.port}')",
"Inserting Complete Episodes",
"# Writes whole episodes of varying length to a Reverb server.\n\nNUM_EPISODES = 10\n\n# We know that episodes are at most 150 steps so we set the writer buffer size\n# to 151 (to capture the final observation).\nwith client.trajectory_writer(num_keep_alive_refs=151) as writer:\n for _ in range(NUM_EPISODES):\n timestep = environment_step(None)\n\n for _ in range(EPISODE_LENGTH):\n action = agent_step(timestep)\n writer.append({'action': action, 'observation': timestep})\n\n timestep = environment_step(action)\n\n # The astute reader will recognize that the final timestep has not been\n # appended to the writer. We'll go ahead and add it WITHOUT an action. The\n # writer will automatically fill in the gap with `None` for the action\n # column.\n writer.append({'observation': timestep})\n\n # Now that the entire episode has been added to the writer buffer we can an\n # item with a trajectory that spans the entire episode. Note that the final\n # action must not be included as it is None and the trajectory would be\n # rejected if we tried to include it.\n writer.create_item(\n table='my_table',\n priority=1.5,\n trajectory={\n 'actions': writer.history['action'][:-1],\n 'observations': writer.history['observation'][:],\n })\n\n # This call blocks until all the items (in this case only one) have been\n # sent to the server, inserted into respective tables and confirmations\n # received by the writer.\n writer.end_episode(timeout_ms=1000)\n\n # Ending the episode also clears the history property which is why we are\n # able to use `[:]` in when defining the trajectory above.\n assert len(writer.history['action']) == 0\n assert len(writer.history['observation']) == 0",
"Sampling Complete Episodes in TensorFlow",
"# Each sample is an entire episode.\n# Adjusts the expected shapes to account for the whole episode length.\ndataset = reverb.TrajectoryDataset.from_table_signature(\n server_address=f'localhost:{complete_episode_server.port}',\n table='my_table',\n max_in_flight_samples_per_worker=10,\n rate_limiter_timeout_ms=10)\n\n# Batches 128 episodes together.\n# Each item is an episode of the format (observations, actions) as above.\n# Shape of items are now ([128, 151, 10, 10], [128, 150, 2]).\ndataset = dataset.batch(128)\n\n# Sample has type reverb.ReplaySample.\nfor sample in dataset.take(1):\n # Results in the following format.\n print(sample.info.key) # ([128], uint64)\n print(sample.info.probability) # ([128], float64)\n\n print(sample.data['observations']) # ([128, 151, 10, 10], uint8)\n print(sample.data['actions']) # ([128, 150, 2], float32)",
"Example 3: Multiple Priority Tables\nCreate a server that maintains multiple priority tables.",
"multitable_server = reverb.Server(\n tables=[\n reverb.Table(\n name='my_table_a',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n # Sets Rate Limiter to a low number for the examples.\n # Read the Rate Limiters section for usage info.\n rate_limiter=reverb.rate_limiters.MinSize(1)),\n reverb.Table(\n name='my_table_b',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n # Sets Rate Limiter to a low number for the examples.\n # Read the Rate Limiters section for usage info.\n rate_limiter=reverb.rate_limiters.MinSize(1)),\n ])\n\nclient = reverb.Client('localhost:{}'.format(multitable_server.port))",
"Inserting Sequences of Varying Length into Multiple Priority Tables",
"with client.trajectory_writer(num_keep_alive_refs=3) as writer:\n timestep = environment_step(None)\n\n for step in range(4):\n writer.append({'timestep': timestep})\n\n action = agent_step(timestep)\n timestep = environment_step(action)\n\n if step >= 1:\n writer.create_item(\n table='my_table_b',\n priority=4-step,\n trajectory=writer.history['timestep'][-2:])\n\n if step >= 2:\n writer.create_item(\n table='my_table_a',\n priority=4-step,\n trajectory=writer.history['timestep'][-3:])",
"This diagram shows the state of the server after executing the above cell.\n<img src=\"https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram2.svg\" />\nExample 4: Samplers and Removers\nCreating a Server with a Prioritized Sampler and a FIFO Remover",
"reverb.Server(tables=[\n reverb.Table(\n name='my_table',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n rate_limiter=reverb.rate_limiters.MinSize(100)),\n])",
"Creating a Server with a MaxHeap Sampler and a MinHeap Remover\nSetting max_times_sampled=1 causes each item to be removed after it is\nsampled once. The end result is a priority table that essentially functions\nas a max priority queue.",
"max_size = 1000\nreverb.Server(tables=[\n reverb.Table(\n name='my_priority_queue',\n sampler=reverb.selectors.MaxHeap(),\n remover=reverb.selectors.MinHeap(),\n max_size=max_size,\n rate_limiter=reverb.rate_limiters.MinSize(int(0.95 * max_size)),\n max_times_sampled=1,\n )\n])",
"Creating a Server with One Queue and One Circular Buffer\nBehavior of canonical data structures such as\ncircular buffer or a max\npriority queue can\nbe implemented in Reverb by modifying the sampler and remover\nor by using the PriorityTable queue initializer.",
"reverb.Server(\n tables=[\n reverb.Table.queue(name='my_queue', max_size=10000),\n reverb.Table(\n name='my_circular_buffer',\n sampler=reverb.selectors.Fifo(),\n remover=reverb.selectors.Fifo(),\n max_size=10000,\n max_times_sampled=1,\n rate_limiter=reverb.rate_limiters.MinSize(1)),\n ])",
"Example 5: Rate Limiters\nCreating a Server with a SampleToInsertRatio Rate Limiter",
"reverb.Server(\n tables=[\n reverb.Table(\n name='my_table',\n sampler=reverb.selectors.Prioritized(priority_exponent=0.8),\n remover=reverb.selectors.Fifo(),\n max_size=int(1e6),\n rate_limiter=reverb.rate_limiters.SampleToInsertRatio(\n samples_per_insert=3.0, min_size_to_sample=3,\n error_buffer=3.0)),\n ])",
"This example is intended to be used in a distributed or multi-threaded\nenviroment where insertion blocking will be unblocked by sample calls from\nan independent thread. If the system is single threaded, the blocked\ninsertion call will cause a deadlock."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
simpeg/tutorials | notebooks/fundamentals/pixels_and_neighbors/mesh.ipynb | mit | [
"The Mesh: Where do things live?\n<img src=\"images/FiniteVolume.png\" width=70% align=\"center\">\n<h4 align=\"center\">Figure 3. Anatomy of a finite volume cell.</h4>\n\nTo bring our continuous equations into the computer, we need to discretize the earth and represent it using a finite(!) set of numbers. In this tutorial we will explain the discretization in 2D and generalize to 3D in the notebooks. A 2D (or 3D!) mesh is used to divide up space, and we can represent functions (fields, parameters, etc.) on this mesh at a few discrete places: the nodes, edges, faces, or cell centers. For consistency between 2D and 3D we refer to faces having area and cells having volume, regardless of their dimensionality. Nodes and cell centers naturally hold scalar quantities while edges and faces have implied directionality and therefore naturally describe vectors. The conductivity, $\\sigma$, changes as a function of space, and is likely to have discontinuities (e.g. if we cross a geologic boundary). As such, we will represent the conductivity as a constant over each cell, and discretize it at the center of the cell. The electrical current density, $\\vec{j}$, will be continuous across conductivity interfaces, and therefore, we will represent it on the faces of each cell. Remember that $\\vec{j}$ is a vector; the direction of it is implied by the mesh definition (i.e. in $x$, $y$ or $z$), so we can store the array $\\bf{j}$ as scalars that live on the face and inherit the face's normal. When $\\vec{j}$ is defined on the faces of a cell the potential, $\\vec{\\phi}$, will be put on the cell centers (since $\\vec{j}$ is related to $\\phi$ through spatial derivatives, it allows us to approximate centered derivatives leading to a staggered, second-order discretization). \nImplementation",
"%matplotlib inline\nimport numpy as np\nfrom SimPEG import Mesh, Utils \nimport matplotlib.pyplot as plt\n\nplt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!",
"Create a Mesh\nA mesh is used to divide up space, here we will use SimPEG's mesh class to define a simple tensor mesh. By \"Tensor Mesh\" we mean that the mesh can be completely defined by the tensor products of vectors in each dimension; for a 2D mesh, we require one vector describing the cell widths in the x-direction and another describing the cell widths in the y-direction. \nHere, we define and plot a simple 2D mesh using SimPEG's mesh class. The cell centers boundaries are shown in blue, cell centers as red dots and cell faces as green arrows (pointing in the positive x, y - directions). Cell nodes are plotted as blue squares.",
"# Plot a simple tensor mesh\nhx = np.r_[2., 1., 1., 2.] # cell widths in the x-direction\nhy = np.r_[2., 1., 1., 1., 2.] # cell widths in the y-direction \nmesh2D = Mesh.TensorMesh([hx,hy]) # construct a simple SimPEG mesh\nmesh2D.plotGrid(nodes=True, faces=True, centers=True) # plot it!\n\n# This can similarly be extended to 3D (this is a simple 2-cell mesh)\nhx = np.r_[2., 2.] # cell widths in the x-direction\nhy = np.r_[2.] # cell widths in the y-direction \nhz = np.r_[1.] # cell widths in the z-direction \nmesh3D = Mesh.TensorMesh([hx,hy,hz]) # construct a simple SimPEG mesh\nmesh3D.plotGrid(nodes=True, faces=True, centers=True) # plot it!",
"Counting things on the Mesh\nOnce we have defined the vectors necessary for construsting the mesh, it is there are a number of properties that are often useful, including keeping track of the\n- number of cells: mesh.nC\n- number of cells in each dimension: mesh.vnC\n- number of faces: mesh.nF\n- number of x-faces: mesh.nFx (and in each dimension mesh.vnFx ...)\nand the list goes on. Check out SimPEG's mesh documentation for more.",
"# Construct a simple 2D, uniform mesh on a unit square\nmesh = Mesh.TensorMesh([10, 8])\nmesh.plotGrid()\n\n\"The mesh has {nC} cells and {nF} faces\".format(nC=mesh.nC, nF=mesh.nF)\n\n# Sometimes you need properties in each dimension\n(\"In the x dimension we have {vnCx} cells. This is because our mesh is {vnCx} x {vnCy}.\").format(\n vnCx=mesh.vnC[0],\n vnCy=mesh.vnC[1]\n)\n\n# Similarly, we need to keep track of the faces, we have face grids in both the x, and y \n# directions. \n\n(\"Faces are vectors so the number of faces pointing in the x direction is {nFx} = {vnFx0} x {vnFx1} \"\n\"In the y direction we have {nFy} = {vnFy0} x {vnFy1} faces\").format(\n nFx=mesh.nFx,\n vnFx0=mesh.vnFx[0],\n vnFx1=mesh.vnFx[1],\n nFy=mesh.nFy,\n vnFy0=mesh.vnFy[0],\n vnFy1=mesh.vnFy[1] \n)",
"Simple properties of the mesh\nThere are a few things that we will need to know about the mesh and each of it's cells, including the\n- cell volume: mesh.vol,\n- face area: mesh.area.\nFor consistency between 2D and 3D we refer to faces having area and cells having volume, regardless of their dimensionality.",
"# On a uniform mesh, not suprisingly, the cell volumes are all the same\nplt.colorbar(mesh.plotImage(mesh.vol, grid=True)[0])\nplt.title('Cell Volumes');\n\n# All cell volumes are defined by the product of the cell widths \n\nassert (np.all(mesh.vol == 1./mesh.vnC[0] * 1./mesh.vnC[1])) # all cells have the same volume on a uniform, unit cell mesh\n\nprint(\"The cell volume is the product of the cell widths in the x and y dimensions: \"\n \"{hx} x {hy} = {vol} \".format(\n hx = 1./mesh.vnC[0], # we are using a uniform, unit square mesh\n hy = 1./mesh.vnC[1],\n vol = mesh.vol[0]\n )\n)\n\n# Similarly, all x-faces should have the same area, equal to that of the length in the y-direction\nassert np.all(mesh.area[:mesh.nFx] == 1.0/mesh.nCy) # because our domain is a unit square\n\n# and all y-faces have an \"area\" equal to the length in the x-dimension\nassert np.all(mesh.area[mesh.nFx:] == 1.0/mesh.nCx)\n\nprint(\n \"The area of the x-faces is {xFaceArea} and the area of the y-faces is {yFaceArea}\".format(\n xFaceArea=mesh.area[0],\n yFaceArea=mesh.area[mesh.nFx]\n )\n)\n\nmesh.plotGrid(faces=True)\n\n# On a non-uniform tensor mesh, the first mesh we defined, the cell volumes vary\n\n# hx = np.r_[2., 1., 1., 2.] # cell widths in the x-direction\n# hy = np.r_[2., 1., 1., 1., 2.] # cell widths in the y-direction \n# mesh2D = Mesh.TensorMesh([hx,hy]) # construct a simple SimPEG mesh\n\nplt.colorbar(mesh2D.plotImage(mesh2D.vol, grid=True)[0])\nplt.title('Cell Volumes');",
"Grids and Putting things on a mesh\nWhen storing and working with features of the mesh such as cell volumes, face areas, in a linear algebra sense, it is useful to think of them as vectors... so the way we unwrap is super important. \nMost importantly we want some compatibility with <a href=\"https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Compatibility_with_Kronecker_products\">Kronecker products</a> as we will see later! This actually leads to us thinking about unwrapping our vectors column first. This column major ordering is inspired by linear algebra conventions which are the standard in Matlab, Fortran, Julia, but sadly not Python. To make your life a bit easier, you can use our MakeVector mkvc function from Utils.",
"from SimPEG.Utils import mkvc\n\nmesh = Mesh.TensorMesh([3,4])\n\nvec = np.arange(mesh.nC)\n\nrow_major = vec.reshape(mesh.vnC, order='C')\nprint('Row major ordering (standard python)')\nprint(row_major)\n\ncol_major = vec.reshape(mesh.vnC, order='F')\nprint('\\nColumn major ordering (what we want!)')\nprint(col_major)\n\n# mkvc unwraps using column major ordering, so we expect \nassert np.all(mkvc(col_major) == vec)\n\nprint('\\nWe get back the expected vector using mkvc: {vec}'.format(vec=mkvc(col_major)))",
"Grids on the Mesh\nWhen defining where things are located, we need the spatial locations of where we are discretizing different aspects of the mesh. A SimPEG Mesh has several grids. In particular, here it is handy to look at the \n- Cell centered grid: mesh.gridCC\n- x-Face grid: mesh.gridFx\n- y-Face grid: mesh.gridFy",
"# gridCC\n\"The cell centered grid is {gridCCshape0} x {gridCCshape1} since we have {nC} cells in the mesh and it is {dim} dimensions\".format(\n gridCCshape0=mesh.gridCC.shape[0],\n gridCCshape1=mesh.gridCC.shape[1],\n nC=mesh.nC,\n dim=mesh.dim\n)\n\n# The first column is the x-locations, and the second the y-locations\n\nmesh.plotGrid()\nplt.plot(mesh.gridCC[:,0], mesh.gridCC[:,1],'ro')\n\n# gridFx\n\"Similarly, the x-Face grid is {gridFxshape0} x {gridFxshape1} since we have {nFx} x-faces in the mesh and it is {dim} dimensions\".format(\n gridFxshape0=mesh.gridFx.shape[0],\n gridFxshape1=mesh.gridFx.shape[1],\n nFx=mesh.nFx,\n dim=mesh.dim\n)\n\nmesh.plotGrid()\nplt.plot(mesh.gridCC[:,0], mesh.gridCC[:,1],'ro')\nplt.plot(mesh.gridFx[:,0], mesh.gridFx[:,1],'g>')",
"Putting a Model on a Mesh\nIn index.ipynb, we constructed a model of a block in a whole-space, here we revisit it having defined the elements of the mesh we are using.",
"mesh = Mesh.TensorMesh([100, 80]) # setup a mesh on which to solve\n\n# model parameters\nsigma_background = 1. # Conductivity of the background, S/m\nsigma_block = 10. # Conductivity of the block, S/m\n\n# add a block to our model\nx_block = np.r_[0.4, 0.6]\ny_block = np.r_[0.4, 0.6]\n\n# assign them on the mesh\nsigma = sigma_background * np.ones(mesh.nC) # create a physical property model \n\nblock_indices = ((mesh.gridCC[:,0] >= x_block[0]) & # left boundary \n (mesh.gridCC[:,0] <= x_block[1]) & # right boudary\n (mesh.gridCC[:,1] >= y_block[0]) & # bottom boundary\n (mesh.gridCC[:,1] <= y_block[1])) # top boundary\n\n# add the block to the physical property model\nsigma[block_indices] = sigma_block\n\n# plot it!\nplt.colorbar(mesh.plotImage(sigma)[0])\nplt.title('electrical conductivity, $\\sigma$')",
"Next up ...\nIn the next notebook, we will work through defining the discrete divergence."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Wx1ng/Python4DataScience.CH | Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb | cc0-1.0 | [
"1 基本使用方法\n启动Python/Ipython Shell有几种方式:\n\n命令行窗口键入python并回车\n切换到你喜欢的目录,键入ipython notebook并回车\n终端中键入Ipython并回车,或者在IDE中打开一个IPython\n\n每次键入命令需要摁下Enter键(REPL中)或者Run Cell按钮(IPython)\n你会得到一个显示结果反馈。",
"8 * 5 + 2",
"世间真理42,以及",
"2 ** 100",
"超大整数,或者试试",
"\"The answer to life,the universe,and everything is ?\"",
"字符串。\n用print打印任意我们想要的内容:\n(一个对象在REPL中直接回车和被print分别输出的是他的__repr__和__str__方法对应的字符串)",
"print \"Scala|Python|C\"\nprint 42\nprint repr(42),str(42)",
"但是如果需要用户在电脑上输入一些字符怎么办?反复输入和打印字符串也并不方便。\nKeep your code DRY (Don't Repeat Yourself)",
"lang = \"Scala|Python|C\"\nprint lang",
"如果需要标识注释,请以# 符号开始一段语句(这与大部分脚本语言和unix-shell语言一样)。从# 开始直到一行结束的内容都是注释。",
"#Skip these comments.\nprint \"These lines can be run.\"\nprint '''\nWhatever.\n'''",
"2 Python变量入门\n2.0 Python变量的一些概念\n\n\n赋值时创建变量本身,绑定名字\n\n\n动态类型\n\n\n强类型\n\n\n可变类型与不可变类型\n\n\n2.1 Python变量介绍\n通常用等号(=)用来给变量赋值。等号(=)运算符左边是一个变量名,等号(=)运算符右边是存储在变量中的值或者一个复合的表达式。\n试试在python shell里键入以下内容,每打一行摁下Enter键:",
"a = 42\nb = 42*3\nprint type(a)\nprint id(b)\nprint a*b\nb = \"Hello,world!\"\nprint type(b)\nprint id(b)\nprint b * 2\nprint b + b\nhelp(id)\n#print dir(a),'\\n'*2,dir(b)\nprint isinstance(a,int)",
"2.2 Python变量赋值与操作\n多变量赋值(不推荐):",
"x = y = z = 42\nprint x,y,z",
"当然你也可以为多个对象指定多个变量,例如:",
"x,y,z = \"So long\",\"and thanks for all\",\"the fish\"\nprint x,y,z",
"赋值的语法糖(Syntax Sugar):",
"x,y = \"So long\",42\nx,y = y,x\nprint x\nprint y",
"Python同样拥有增量赋值方法:",
"a = 42\na += 1\nprint a\na -= 1\na *= 2\nprint a",
"类似的能够赋值的操作符还有很多,包括:\n\n*= 自乘\n/= 自除\n%= 自取模\n**= 自乘方\n<<= 自左移位\n>>= 自右移位\n&= 自按位与\n^= 自按位异或\n\n等等。\n注意:Python不支持x++ 或者 --x 这类操作。\n2.3 Python数值类型\n\nint: 42 126 -680 -0x92\nbool: True False\nfloat: 3.1415926 -90.00 6.022e23 \n\ncomplex: 6.23+1.5j -1.23-875j 0+1j\n\n\ntype: 判断类型\n\nisinstance: 判断属于特定类型(推荐)",
"a,b,c,d = 42,True,3.1415926,6.23+1.5j\nprint type(a),type(b),type(c),type(d)\nprint isinstance(a,int),isinstance(b,(float,int)),\\\n isinstance(c,float),isinstance(d,(int,bool))\n\n# 可以分别取得复数的实部和虚部\nprint d.real\nprint d.imag",
"工业级的计算器:\n基础操作符:\n+ - * / // % **\n比较操作符:\n< <= > >= == !=",
"d = 6.23+1.5j\ne = 0+1j\nprint d+e,d-e,d*e\n\nprint 7*3,7**2 # x**y 返回x的y次幂\n\nprint 8%3,float(10)/3,10.0/3,10//3\n\nprint 1==2, 1 != 2 #==表示判断是否相等 != 表示不相等\nprint 7%2 #返回除法的余数\n\nprint 1<42<100",
"是时候展现真正的除法了:",
"print 7/4\nfrom __future__ import division\nprint 7/4",
"普通floor除法:",
"print 7//4 ",
"布尔值:",
"print not True\nprint not False\nprint 40 < 42 <= 100\nprint not 40 > 30\nprint 40 > 30 and 40 < 42 <= 100 \nprint 40 < 30 or 40 < 42 <=100",
"2.4 Python字符串类型\n字符串的长度len与切片:",
"c = \"Hello world\"\nprint len(c)\nprint c[0],c[1:3]\nprint c[-1],c[:]\nprint c[::2],c[::-1]",
"复杂切片操作:[start:end:step]",
"c = \"Hello world\"\nprint c[::-1] #翻转字符串\nprint c[:] #原样复制字符串\nprint c[::2] #隔一个取一个\nprint c[:8] #前八个字母\nprint c[:8:2] #前八个字母,每两个取一个",
"复制:",
"c = \"Hello world\"\nd = c[:] #复制字符串,赋值给d\ndel c #删除原字符串\nprint d #d 字符串依然可用\ne = d[0:4]\nprint e #新构造一个截取d字符串部分所组成的串\nf = e\nprint id(f)\nprint id(e)\ndel e\nprint id(f),f #仍然可用",
"加号(+)用于字符串连接运算,星号(*)用来重复字符串。",
"teststr,stringback=\"Clojure is\",\"cool\"\nprint '-'*20\nprint teststr+stringback\nprint teststr*3\nprint '-'*20",
"美化打印:",
"pystr,pystring=\"Clojure is\",\"cool\"\nprint '-'*20\nprint pystr,pystring\nprint '-'*20\nprint pystr+'\\t'+pystring\nprint '-'*20\n\npystr = \"Clojure\"\npystring = \"cool\"\nyastr = \"LISP\"\nyastring = \"wonderful \"\nprint('Python\\'s Format I : {0} is {1}'.format(pystr,pystring))\nprint 'Python\\'s Format II: {language} is {description}'.\\\nformat(language='Scala',description='awesome') #使用\\接续换行\nprint 'C Style Print: %s is %s'%(yastr,yastring)",
"字符串复杂效果举例:",
"for i in range(0,5)+range(2,8)+range(3,12)+[2,2]:\n print' '*(40-2*i-i//2)+'*'*(4*i+1+i)",
"尾部换行\\:",
"\"A:What's your favorite language?\\\nB:C++.\"",
"\\t:水平制表符:",
"print \"A:What’s your favorite language?\\nB:C++\"\nprint \"A:What’s your favorite language?\\tB:C++\"",
"注意其他的转义字符(反斜线+字符):",
"print 'What\\'s your favorite language?'\nprint \"What's your favorite language?\"\nprint \"What\\\"s your favorite language?\\\\\"",
"其他操作 —— Join,Split,Strip, Upper, Lower",
"s = \"a\\tb\\tc\\td\"\nprint s\nl = s.split('\\t')\nprint l\nsnew = ','.join(l)\nprint snew\nline = '\\t Blabla \\t \\n'\nprint line.strip()\nprint line.lstrip()\nprint line.rstrip()\nsalpha = 'Abcdefg'\nprint salpha.upper()\nprint salpha.lower()\n\n#isupper islower isdigit isalpha",
"更多操作:\n\n查阅dir('s')\n查阅codecs\n查阅re(regexp,正则表达式)\n\n2.5 初识Python可变类型(Mutables)\n\n字符串(string):不可变\n字节数组(bytearray):可变",
"s = \"string\"\nprint type(s)\n\ns[3] = \"o\"\n\ns = \"String\"\nsba = bytearray(s)\nsba[3] = \"o\"\nprint sba"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
BenLangmead/comp-genomics-class | notebooks/FASTA.ipynb | gpl-2.0 | [
"FASTA\nThis notebook briefly explores the FASTA format, a very common format for storing DNA sequences. FASTA is the preferred format for storing reference genomes.\nFASTA and FASTQ are rather similar, but FASTQ is almost always used for storing sequencing reads (with associated quality values), whereas FASTA is used for storing all kinds of DNA, RNA or protein sequencines (without associated quality values).\nBefore delving into the format, I should mention that the BioPython project makes parsing and using many file formats, including FASTA, quite easy. See the BioPython SeqIO module in particular. As far as I know, though, SeqIO does not use FASTA indexes, discussed toward the bottom, which is a disadvantage.\nBasic format\nHere is the basic format:\n>sequence1_short_name with optional additional info after whitespace\nACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA\nGCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC\nAATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT\n>sequence2_short_name with optional additional info after whitespace\nGCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG\nATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA\nATATAG\n\nA line starting with a > (greater-than) sign indicates the beginning of a new sequence and specifies its name. Take the first line above. Everything after the > up to and excluding the first whitespace character (sequence1_short_name), is the \"short name.\" Everything after the > up to the end of the line (sequence1_short_name with optional additional info after whitespace) is the \"long name.\" We usually use the short name when referring to FASTA sequences.\nThe next three lines consists of several nucleotides. There is a maximum number of nucleotides permitted per line; in this case, it is 70. If the sequence is longer then 70 nucleotides, it \"wraps\" down to the next line. Not every FASTA file uses the same maximum, but a given FASTA file must use the same maximum throughout the file.\nThe sequences above are made up. Here's a real-world reference sequence (the human mitochondrial genome) in FASTA format:",
"import gzip\nimport urllib.request\nurl = 'ftp://ftp.ncbi.nlm.nih.gov/genomes/archive/old_genbank/Eukaryotes/vertebrates_mammals/Homo_sapiens/GRCh38/non-nuclear/assembled_chromosomes/FASTA/chrMT.fa.gz'\nresponse = urllib.request.urlopen(url)\nprint(gzip.decompress(response.read()).decode('UTF8'))",
"This FASTA file shown above has just one sequence in it. As we saw in the first example above, it's also possible for one FASTA file to contain multiple sequences. These are sometimes called multi-FASTA files. When you write code to interpret FASTA files, it's a good idea to always allow for the possibility that the FASTA file might contain multiple sequences.\nFASTA files are often stored with the .fa file name extension, but this is not a rule. .fasta is another popular extenson. You may also see .fas, .fna, .mfa (for multi-FASTA), and others.\nParsing FASTA\nHere is a simple function for parsing a FASTA file into a Python dictionary. The dictionary maps short names to corresponding nucleotide strings (with whitespace removed).",
"def parse_fasta(fh):\n fa = {}\n current_short_name = None\n # Part 1: compile list of lines per sequence\n for ln in fh:\n if ln[0] == '>':\n # new name line; remember current sequence's short name\n long_name = ln[1:].rstrip()\n current_short_name = long_name.split()[0]\n fa[current_short_name] = []\n else:\n # append nucleotides to current sequence\n fa[current_short_name].append(ln.rstrip())\n # Part 2: join lists into strings\n for short_name, nuc_list in fa.items():\n # join this sequence's lines into one long string\n fa[short_name] = ''.join(nuc_list)\n return fa",
"The first part accumulates a list of strings (one per line) for each sequence. The second part joins those lines together so that we end up with one long string per sequence. Why divide it up this way? Mainly to avoid the poor performance of repeatedly concatenating (immutable) Python strings.\nI'll test it by running it on the simple multi-FASTA file we saw before:",
"from io import StringIO\nfasta_example = StringIO(\n'''>sequence1_short_name with optional additional info after whitespace\nACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA\nGCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC\nAATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT\n>sequence2_short_name with optional additional info after whitespace\nGCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG\nATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA\nATATAG''')\nparsed_fa = parse_fasta(fasta_example)\nparsed_fa",
"Note that only the short names survive. This is usually fine, but it's not hard to modify the function so that information relating short names to long names is also retained.\nIndexed FASTA\nSay you have one or more big FASTA files (e.g. the entire human reference genome) and you'd like to access those files \"randomly,\" peeking at substrings here and there without any regular access pattern. Maybe you're mimicking a sequencing machine, reading snippets of DNA here and there.\nYou could start by using the parse_fasta function defined above to parse the FASTA files. Then, to access a substring, do as follows:",
"parsed_fa['sequence2_short_name'][100:130]",
"Accessing a substring in this way is very fast and simple. The downside is that you've stored all of the sequences in memory. If the FASTA files are really big, this takes lots of valuable memory. This may or may not be a good trade.\nAn alternative is to load only the portions of the FASTA files that you need, when you need them. For this to be practical, we have to have a way of \"jumping\" to the specific part of the specific FASTA file that you're intersted in.\nFortunately, there is a standard way of indexing a FASTA file, popularized by the faidx tool in SAMtools. When you have such an index, it's easy to calculate exactly where to jump to when you want to extract a specific substring. Here is some Python to create such an index:",
"def index_fasta(fh):\n index = []\n current_short_name = None\n current_byte_offset, running_seq_length, running_byte_offset = 0, 0, 0\n line_length_including_ws, line_length_excluding_ws = 0, 0\n for ln in fh:\n ln_stripped = ln.rstrip()\n running_byte_offset += len(ln)\n if ln[0] == '>':\n if current_short_name is not None:\n index.append((current_short_name, running_seq_length,\n current_byte_offset, line_length_excluding_ws,\n line_length_including_ws))\n long_name = ln_stripped[1:]\n current_short_name = long_name.split()[0]\n current_byte_offset = running_byte_offset\n running_seq_length = 0\n else:\n line_length_including_ws = max(line_length_including_ws, len(ln))\n line_length_excluding_ws = max(line_length_excluding_ws, len(ln_stripped))\n running_seq_length += len(ln_stripped)\n if current_short_name is not None:\n index.append((current_short_name, running_seq_length,\n current_byte_offset, line_length_excluding_ws,\n line_length_including_ws))\n return index",
"Here we use it to index a small multi-FASTA file. We print out the index at the end.",
"fasta_example = StringIO(\n'''>sequence1_short_name with optional additional info after whitespace\nACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA\nGCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC\nAATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT\n>sequence2_short_name with optional additional info after whitespace\nGCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG\nATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA\nATATAG''')\nidx = index_fasta(fasta_example)\nidx",
"What do the fields in those two records mean? Take the first record: ('sequence1_short_name', 194, 69, 70, 71). The fields from left to right are (1) the short name, (2) the length (in nucleotides), (3) the byte offset in the FASTA file of the first nucleotide of the sequence, (4) the maximum number of nucleotides per line, and (5) the maximum number of bytes per line, including whitespace. It's not hard to convince yourself that, if you know all these things, it's not hard to figure out the byte offset of any position in any of the sequences. (This is what the get member of the FastaIndexed class defined below does.)\nA typical way to build a FASTA index like this is to use SAMtools, specifically the samtools faidx command. This and all the other samtools commands are documented in its manual.\nWhen you use a tool like this to index a FASTA file, a new file containing the index is written with an additional .fai extension. E.g. if the FASTA file is named hg19.fa, then running samtools faidx hg19.fa will create a new file hg19.fa.fai containing the index.\nThe following Python class shows how you might use the FASTA file together with its index to extract arbitrary substrings without loading all of the sequences into memory:",
"import re\n\nclass FastaOOB(Exception):\n \"\"\" Out-of-bounds exception for FASTA sequences \"\"\"\n \n def __init__(self, value):\n self.value = value\n \n def __str__(self):\n return repr(self.value)\n\nclass FastaIndexed(object):\n \"\"\" Encapsulates a set of indexed FASTA files. Does not load the FASTA\n files into memory but still allows the user to extract arbitrary\n substrings, with the help of the index. \"\"\"\n \n __removeWs = re.compile(r'\\s+')\n \n def __init__(self, fafns):\n self.fafhs = {}\n self.faidxs = {}\n self.chr2fh = {}\n self.offset = {}\n self.lens = {}\n self.charsPerLine = {}\n self.bytesPerLine = {}\n \n for fafn in fafns:\n # Open FASTA file\n self.fafhs[fafn] = fh = open(fafn, 'r')\n # Parse corresponding .fai file\n with open(fafn + '.fai') as idxfh:\n for ln in idxfh:\n toks = ln.rstrip().split()\n if len(toks) == 0:\n continue\n assert len(toks) == 5\n # Parse and save the index line\n chr, ln, offset, charsPerLine, bytesPerLine = toks\n self.chr2fh[chr] = fh\n self.offset[chr] = int(offset) # 0-based\n self.lens[chr] = int(ln)\n self.charsPerLine[chr] = int(charsPerLine)\n self.bytesPerLine[chr] = int(bytesPerLine)\n \n def __enter__(self):\n return self\n \n def __exit__(self, type, value, traceback):\n # Close all the open FASTA files\n for fafh in self.fafhs.values():\n fafh.close()\n \n def has_name(self, refid):\n return refid in self.offset\n \n def name_iter(self):\n return self.offset.iterkeys()\n \n def length_of_ref(self, refid):\n return self.lens[refid]\n \n def get(self, refid, start, ln):\n ''' Return the specified substring of the reference. '''\n assert refid in self.offset\n if start + ln > self.lens[refid]:\n raise ReferenceOOB('\"%s\" has length %d; tried to get [%d, %d)' % (refid, self.lens[refid], start, start + ln))\n fh, offset, charsPerLine, bytesPerLine = \\\n self.chr2fh[refid], self.offset[refid], \\\n self.charsPerLine[refid], self.bytesPerLine[refid]\n byteOff = offset\n byteOff += (start // charsPerLine) * bytesPerLine\n into = start % charsPerLine\n byteOff += into\n fh.seek(byteOff)\n left = charsPerLine - into\n # Count the number of line breaks interrupting the rest of the\n # string we're trying to read\n if ln < left:\n return fh.read(ln)\n else:\n nbreaks = 1 + (ln - left) // charsPerLine\n res = fh.read(ln + nbreaks * (bytesPerLine - charsPerLine))\n res = re.sub(self.__removeWs, '', res)\n return res\n",
"Here's an example of how to use the class defined above.",
"# first we'll write a new FASTA file\nwith open('tmp.fa', 'w') as fh:\n fh.write('''>sequence1_short_name with optional additional info after whitespace\nACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA\nGCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC\nAATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT\n>sequence2_short_name with optional additional info after whitespace\nGCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG\nATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA\nATATAG''')\nwith open('tmp.fa') as fh:\n idx = index_fasta(fh)\nwith open('tmp.fa.fai', 'w') as fh:\n fh.write('\\n'.join(['\\t'.join(map(str, x)) for x in idx]))\nwith FastaIndexed(['tmp.fa']) as fa_idx:\n print(fa_idx.get('sequence2_short_name', 100, 30))",
"Other resources\n\nWikipedia page for FASTA format\nThe original FASTA paper by Bill Pearson. This is the software tool that made the format popular.\nBioPython, which has its own ways of parsing FASTA\nMany other libraries and [tools]((http://hannonlab.cshl.edu/fastx_toolkit/)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cmshobe/dem_analysis_with_gdal | dem_processing_with_gdal_python.ipynb | mit | [
"Short introduction to working with DEMs in Python GDAL\nGreg Tucker, CU Boulder, Feb 2016\nInstall GDAL library\nYou'll need to install the GDAL library. If you have Anaconda installed, you can do this from the command line by:\nconda install gdal\nDownload some DEM data to work with\nNavigate a browser to http://criticalzone.org/boulder/data/dataset/2915/\nSelect Betasso (Snow off - filtered) - 1m Filtered DSM\nSave the zip file, and double-click to unzip it. Inside the folder img you will see a file called czo_1m_bt1.img. This is a 1 m resolution lidar-derived DEM of a stretch of Boulder Creek Canyon, with the small Betasso tributary catchment located roughly in the center.\nImport the GDAL library\nHere we import GDAL and NumPy.",
"from osgeo import gdal\nimport numpy as np",
"Open and read data from the DEM\nChange the path name below to reflect your particular computer, then run the cell.",
"betasso_dem_name = '/Users/gtucker/Dev/dem_analysis_with_gdal/czo_1m_bt1.img'\n\ngeo = gdal.Open(betasso_dem_name)\nzb = geo.ReadAsArray()",
"If the previous two lines worked, zb should be a 2D numpy array that contains the DEM elevations. There are some cells along the edge of the grid with invalid data. Let's set their elevations to zero, using the numpy where function:",
"zb[np.where(zb<0.0)[0],np.where(zb<0.0)[1]] = 0.0",
"Now let's make a color image of the data. To do this, we'll need Pylab and a little \"magic\".",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.imshow(zb, vmin=1600.0, vmax=2350.0)",
"Questions:\n(Note: to answer the following, open Google Earth and enter Betasso Preserve in the search bar. Zoom out a bit to view the area around Betasso)\n(1) Use a screen shot to place a copy of this image in your lab document. Label Boulder Creek Canyon and draw an arrow to show its flow direction.\n(2) Indicate and label the confluence of Fourmile Creek and Boulder Canyon.\n(3) What is the mean altitude? What is the maximum altitude? (Hint: see numpy functions mean and amax)\nMake a slope map\nUse the numpy gradient function to make an image of absolute maximum slope angle at each cell:",
"def slope_gradient(z):\n \"\"\"\n Calculate absolute slope gradient elevation array.\n \"\"\"\n x, y = np.gradient(z) \n #slope = (np.pi/2. - np.arctan(np.sqrt(x*x + y*y)))\n slope = np.sqrt(x*x + y*y)\n return slope\n\nsb = slope_gradient(zb)",
"Let's see what it looks like:",
"plt.imshow(sb, vmin=0.0, vmax=1.0, cmap='pink')\nprint np.median(sb)",
"Questions:\n(1) Place a copy of this image in your lab document. Identify and label the Betasso Water Treatment plant.\n(2) How many degrees are in a slope gradient of 1.0 (or 100%)?\n(3) What areas have the steepest slopes? What areas have the gentlest slopes? What do you think the distribution of slopes might indicate about the distribution of erosion rates within this area?\n(4) What is the median slope gradient? What is this gradient in degrees? (Hint: numpy has a median function)\nMake a map of slope aspect",
"def aspect(z):\n \"\"\"Calculate aspect from DEM.\"\"\"\n x, y = np.gradient(z)\n return np.arctan2(-x, y)\n\nab = aspect(zb)\nplt.imshow(ab)",
"We can make a histogram (frequency diagram) of aspect. Here 0 degrees is east-facing, 90 is north-facing, 180 is west-facing, and 270 is south-facing.",
"abdeg = (180./np.pi)*ab # convert to degrees\nn, bins, patches = plt.hist(abdeg.flatten(), 50, normed=1, facecolor='green', alpha=0.75)",
"Questions:\n(1) Place a copy of this image in your lab notes.\n(2) Compare the aspect map to imagery in Google Earth. Is there any correlation aspect and vegetation? If so, what does it look like?\n(3) What is the most common aspect? (N, NE, E, SE, S, SW, W, or NW)\nShaded relief\nCreate a shaded relief image",
"def hillshade(z, azimuth=315.0, angle_altitude=45.0): \n \"\"\"Generate a hillshade image from DEM.\n \n Notes: adapted from example on GeoExamples blog,\n published March 24, 2014, by Roger Veciana i Rovira.\n \"\"\"\n x, y = np.gradient(z) \n slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y)) \n aspect = np.arctan2(-x, y) \n azimuthrad = azimuth*np.pi / 180. \n altituderad = angle_altitude*np.pi / 180.\n \n shaded = np.sin(altituderad) * np.sin(slope)\\\n + np.cos(altituderad) * np.cos(slope)\\\n * np.cos(azimuthrad - aspect) \n return 255*(shaded + 1)/2\n\nhb = hillshade(zb)\nplt.imshow(hb, cmap='gray')",
"Questions:\n(1) Place a copy of this image in your lab document. Label at least one area of relatively smooth terrain, and one area of relatively rough terrain.\n(2) Do the areas of smooth and rough topography bear any relation to other geomorphic features?\n(3) Note in your document any other observations, comments, or questions that have occurred to you while examining these data."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
surenkum/eecs_542 | text_features.ipynb | gpl-3.0 | [
"Tutorial Outline\n<ul>\n<li> Use NLTK to extract N-Gram features\n<li> Use Scikit-learn to explore some text datasets\n<li> Extract two different features, i) Bag of Words with TF-IDF weighting and ii) Document-Term Matrix \n<li> Visualize feature spaces\n</ul>",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Using NLTK to extract Unigram and Bigram\nRef: Chen Sun, Chuang Gan and Ram Nevatia, Automatic Concept Discovery from Parallel Text and Visual Corpora. ICCV 2015\n<img src=\"files/iccv_paper_concepts.png\">",
"from nltk.util import ngrams \nsentence = 'A black-dog and a spotted dog are fighting.'\nn = 2\nsixgrams = ngrams(sentence.split(), n)\nfor grams in sixgrams:\n print grams",
"Some of the bigrams are obviously not relevant. So we tokenize and exclude stop words to get some relevant classes.",
"from nltk.corpus import stopwords\nfrom nltk.tokenize import wordpunct_tokenize\n\nstop_words = set(stopwords.words('english'))\nstop_words.update(['.', ',', '\"', \"'\", '?', '!', ':', ';', '(', ')', '[', ']', '{', '}','-']) # remove it if you need punctuation \n\nlist_of_words = [i.lower() for i in wordpunct_tokenize(sentence) if i.lower() not in stop_words]\nbigrams = ngrams(list_of_words,2)\nfor grams in bigrams:\n print grams",
"Using Scikit-Learn to explore some text datasets\n20 Newsgroup dataset\nThe 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. This dataset is often used for text classification and text clustering. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g misc.forsale / soc.religion.christian). From: http://qwone.com/~jason/20Newsgroups/",
"%run fetch_data.py twenty_newsgroups\n\nfrom sklearn.datasets import load_files\nfrom sklearn.feature_extraction.text import TfidfVectorizer # Tf IDF feature extraction\nfrom sklearn.feature_extraction.text import CountVectorizer # Count and vectorize text feature\n# Load the text data\ncategories = [\n 'alt.atheism',\n 'talk.religion.misc',\n 'comp.graphics',\n 'sci.space',\n]\ntwenty_train_small = load_files('./datasets/20news-bydate-train/',\n categories=categories, encoding='latin-1')\ntwenty_test_small = load_files('./datasets/20news-bydate-test/',\n categories=categories, encoding='latin-1')\n\n# Lets display some of the data\ndef display_sample(i, dataset):\n target_id = dataset.target[i]\n print(\"Class id: %d\" % target_id)\n print(\"Class name: \" + dataset.target_names[target_id])\n print(\"Text content:\\n\")\n print(dataset.data[i])\n \ndisplay_sample(0,twenty_train_small)",
"Extracting features\nLets extract vector counts to convert text to a vector.",
"count_vect = CountVectorizer(min_df=2)\nX_train_counts = count_vect.fit_transform(twenty_train_small.data)\nprint X_train_counts.shape",
"Lets extract TF-IDF features from text data. min_df option is to put a lower bound to ignore terms that have a low document frequency.",
"# Extract features \n# Turn the text documents into vectors of word frequencies with tf-idf weighting\nvectorizer = TfidfVectorizer(min_df=2)\nX_train = vectorizer.fit_transform(twenty_train_small.data)\ny_train = twenty_train_small.target\nprint type(X_train)\nprint X_train.shape",
"As observed, X_train is a scipy sparse matrix consisting of 2034 rows (number of text files) and 17566 different features (unique words)",
"print type(vectorizer.vocabulary_) # Type of vocabulary\nprint len(vectorizer.vocabulary_) # Length of vocabulary\nprint vectorizer.get_feature_names()[:10] # Print first 10 elements of dictionary\nprint vectorizer.get_feature_names()[-10:] # Print last 10 elements of dictionary",
"Visualizing Feature Space\nObviously, its hard to make any sense of such high-dimensional feature space. A good technique to visualize such data is to project it to lower dimensions using PCA and then visualizing low-dimensional splace.",
"from sklearn.decomposition import TruncatedSVD\nX_train_pca = TruncatedSVD(n_components=2).fit_transform(X_train)\nfrom itertools import cycle\n\ncolors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']\nfor i, c in zip(np.unique(y_train), cycle(colors)):\n plt.scatter(X_train_pca[y_train == i, 0],\n X_train_pca[y_train == i, 1],\n c=c, label=twenty_train_small.target_names[i], alpha=0.8)\n \n_ = plt.legend(loc='best')",
"Obviously, this data is not linearly separable any more but there are some interesting patterns that can be observed, alt.atheism and talk.religion.misc overlap.\nReferences\n<ol>\n<li> https://github.com/ogrisel/parallel_ml_tutorial\n<li> http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html\n<li> http://fbkarsdorp.github.io/python-course/\n</ol>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kgrodzicki/machine-learning-specialization | course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb | mit | [
"Regression Week 4: Ridge Regression (interpretation)\nIn this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:\n* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression\n* Use matplotlib to visualize polynomial regressions\n* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty\n* Use matplotlib to visualize polynomial regressions under L2 regularization\n* Choose best L2 penalty using cross-validation.\n* Assess the final fit using test data.\nWe will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)\nFire up graphlab create",
"import graphlab",
"Polynomial regression, revisited\nWe build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:",
"def polynomial_sframe(feature, degree):\n # assume that degree >= 1\n # initialize the SFrame:\n poly_sframe = graphlab.SFrame()\n # and set poly_sframe['power_1'] equal to the passed feature\n poly_sframe['power_1'] = feature\n\n # first check if degree > 1\n if degree > 1:\n # then loop over the remaining degrees:\n # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree\n for power in range(2, degree + 1): \n # first we'll give the column a name:\n name = 'power_' + str(power)\n # then assign poly_sframe[name] to the appropriate power of feature\n poly_sframe[name] = feature.apply(lambda x: x**power)\n return poly_sframe",
"Let's use matplotlib to visualize what a polynomial regression looks like on the house data.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nsales = graphlab.SFrame('kc_house_data.gl/')",
"As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.",
"sales = sales.sort(['sqft_living','price'])",
"Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:",
"l2_small_penalty = 1e-5",
"Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)\nWith the L2 penalty specified above, fit the model and print out the learned weights.\nHint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.",
"poly1_data = polynomial_sframe(sales['sqft_living'], 1)\nfeatures = poly1_data.column_names()\npoly1_data['price'] = sales['price'] # add price to the data since it's the target\n\nmodel1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = features, l2_penalty=l2_small_penalty, validation_set = None)\n\n#let's take a look at the weights before we plot\nmodel1.get(\"coefficients\")",
"QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?\nObserve overfitting\nRecall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.\nFirst, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.",
"(semi_split1, semi_split2) = sales.random_split(.5,seed=0)\n(set_1, set_2) = semi_split1.random_split(0.5, seed=0)\n(set_3, set_4) = semi_split2.random_split(0.5, seed=0)",
"Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.\nHint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.",
"def print_coefficients(data_set, l2_penalty):\n ps = polynomial_sframe(data_set['sqft_living'], 15)\n my_features = ps.column_names()\n ps['price'] = data_set['price']\n model = graphlab.linear_regression.create(ps, target = 'price', features = my_features, validation_set = None, verbose = False, l2_penalty=l2_penalty)\n model.get(\"coefficients\").print_rows(num_rows = 16)\n\nfor i in [set_1, set_2, set_3, set_4]:\n print_coefficients(i, l2_small_penalty)",
"The four curves should differ from one another a lot, as should the coefficients you learned.\nQUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered \"smaller\" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)",
"smallest=-759.251854206\nlargest=1247.59034572",
"Ridge regression comes to rescue\nGenerally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing \"large\" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)\nWith the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.",
"l2_penalty = 1e5\n\nfor i in [set_1, set_2, set_3, set_4]:\n print_coefficients(i, l2_penalty)",
"These curves should vary a lot less, now that you applied a high degree of regularization.\nQUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered \"smaller\" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)",
"smallest=1.91040938244\nlargest=2.58738875673",
"Selecting an L2 penalty via cross-validation\nJust like the polynomial degree, the L2 penalty is a \"magic\" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.\nWe will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughtly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:\nSet aside segment 0 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>\nSet aside segment 1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>\n...<br>\nSet aside segment k-1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set\nAfter this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data. \nTo estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)",
"(train_valid, test) = sales.random_split(.9, seed=1)\ntrain_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)",
"Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.\nWith this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.",
"n = len(train_valid_shuffled)\nprint n - 7757\nk = 10 # 10-fold cross-validation\n\nfor i in xrange(k):\n start = (n*i)/k\n end = (n*(i+1))/k-1\n print i, (start, end)",
"Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.",
"train_valid_shuffled[0:10] # rows 0 to 9",
"Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.\nExtract the fourth segment (segment 3) and assign it to a variable called validation4.",
"validation4 = train_valid_shuffled[5818:7757]",
"To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.",
"print int(round(validation4['price'].mean(), 0))",
"After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.",
"n = len(train_valid_shuffled)\nfirst_two = train_valid_shuffled[0:2]\nlast_two = train_valid_shuffled[n-2:n]\nprint first_two.append(last_two)",
"Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.",
"p1 = train_valid_shuffled[0:5817]\np2 = train_valid_shuffled[n-11639:n]\ntrain4 = p1.append(p2)",
"To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.",
"print int(round(train4['price'].mean(), 0))",
"Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.\n\nFor each i in [0, 1, ..., k-1]:\nCompute starting and ending indices of segment i and call 'start' and 'end'\nForm validation set by taking a slice (start:end+1) from the data.\nForm training set by appending slice (end+1:n) to the end of slice (0:start).\nTrain a linear model using training set just formed, with a given l2_penalty\nCompute validation error using validation set just formed",
"def k_fold_cross_validation(k, l2_penalty, data, features_list):\n n = len(data)\n rss_k = list()\n for i in xrange(k):\n start = (n*i)/k\n end = (n*(i+1))/k-1\n validation_set = data[start:end + 1]\n training_set = data[end + 1:n].append(data[0:start])\n model = graphlab.linear_regression.create(training_set, target = 'price', features = features_list, validation_set = None, verbose = False, l2_penalty=l2_penalty)\n predictions = model.predict(validation_set)\n residuals = validation_set['price'] - predictions\n RSS = sum(residuals**2)\n rss_k.append(RSS)\n return sum(rss_k)/len(rss_k)",
"Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:\n* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input\n* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)\n * Run 10-fold cross-validation with l2_penalty\n* Report which L2 penalty produced the lowest average validation error.\nNote: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!",
"import numpy as np\n\nps = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)\nmy_features = ps.column_names()\nps['price'] = train_valid_shuffled['price']\n\nk=10\n\nresult = dict()\nfor l2_penalty in np.logspace(1, 7, num=13):\n result[l2_penalty] = k_fold_cross_validation(k, l2_penalty, ps, my_features)",
"QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?",
"best_penalty = min(result, key=result.get)\nprint \"the best value for the L2 penalty according to 10-fold validation:\", best_penalty",
"You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.",
"# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.\n# Using plt.xscale('log') will make your plot more intuitive.",
"Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.\nQUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?",
"ps = polynomial_sframe(train_valid['sqft_living'], 15)\nfeatures = ps.column_names()\nps['price'] = train_valid['price'] # add price to the data since it's the target\n\nbest_model = graphlab.linear_regression.create(ps, target = 'price', features = features, l2_penalty=best_penalty, validation_set = None)\n\npredictions = best_model.predict(test)\nresiduals = test['price'] - predictions\n\nprint \"RSS:\", sum(residuals**2)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amueller/sklearn_workshop | 10 - Working With Text Data.ipynb | bsd-2-clause | [
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Working with Text Data\n<img src=\"figures/bag_of_words.svg\" width=100%>",
"import pandas as pd\nimport os\n\ndata = pd.read_csv(os.path.join(\"data\", \"train.csv\"))\n\nlen(data)\n\ndata\n\ny_train = np.array(data.Insult)\n\ny_train\n\ntext_train = data.Comment.tolist()\n\ntext_train[6]\n\ndata_test = pd.read_csv(os.path.join(\"data\", \"test_with_solutions.csv\"))\n\ntext_test, y_test = data_test.Comment.tolist(), np.array(data_test.Insult)\n\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ncv = CountVectorizer()\ncv.fit(text_train)\n\nlen(cv.vocabulary_)\n\nprint(cv.get_feature_names()[:50])\nprint(cv.get_feature_names()[-50:])\n\nX_train = cv.transform(text_train)\n\nX_train\n\ntext_train[6]\n\nX_train[6, :].nonzero()[1]\n\nX_test = cv.transform(text_test)\n\nfrom sklearn.svm import LinearSVC\nsvm = LinearSVC()\n\nsvm.fit(X_train, y_train)\n\nsvm.score(X_train, y_train)\n\nsvm.score(X_test, y_test)\n\ndef visualize_coefficients(classifier, feature_names, n_top_features=25):\n # get coefficients with large absolute values \n coef = classifier.coef_.ravel()\n positive_coefficients = np.argsort(coef)[-n_top_features:]\n negative_coefficients = np.argsort(coef)[:n_top_features]\n interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])\n # plot them\n plt.figure(figsize=(15, 5))\n colors = [\"red\" if c < 0 else \"blue\" for c in coef[interesting_coefficients]]\n plt.bar(np.arange(50), coef[interesting_coefficients], color=colors)\n feature_names = np.array(feature_names)\n plt.xticks(np.arange(1, 51), feature_names[interesting_coefficients], rotation=60, ha=\"right\");\n\n\nvisualize_coefficients(svm, cv.get_feature_names())",
"Exercises\n\nCreate a pipeine using the count vectorizer and SVM (see 07). Train and score using the pipeline.\nVary the n_gram_range in the count vectorizer, visualize the changed coefficients.\nGrid search the C in the LinearSVC using the pipeline.\nGrid search the C in the LinearSVC together with the n_gram_range (try (1,1), (1, 2), (2, 2))",
"# %load solutions/text_pipeline.py\n"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
rabernat/pyqg | docs/examples/two-layer.ipynb | mit | [
"Two Layer QG Model Example\nHere is a quick overview of how to use the two-layer model. See the\n:py:class:pyqg.QGModel api documentation for further details.\nFirst import numpy, matplotlib, and pyqg:",
"import numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nimport pyqg",
"Initialize and Run the Model\nHere we set up a model which will run for 10 years and start averaging\nafter 5 years. There are lots of parameters that can be specified as\nkeyword arguments but we are just using the defaults.",
"year = 24*60*60*360.\nm = pyqg.QGModel(tmax=10*year, twrite=10000, tavestart=5*year)\nm.run()",
"Visualize Output\nWe access the actual pv values through the attribute m.q. The first axis\nof q corresponds with the layer number. (Remeber that in python, numbering\nstarts at 0.)",
"q_upper = m.q[0] + m.Qy[0]*m.y\nplt.contourf(m.x, m.y, q_upper, 12, cmap='RdBu_r')\nplt.xlabel('x'); plt.ylabel('y'); plt.title('Upper Layer PV')\nplt.colorbar();",
"Plot Diagnostics\nThe model automatically accumulates averages of certain diagnostics. We can \nfind out what diagnostics are available by calling",
"m.describe_diagnostics()",
"To look at the wavenumber energy spectrum, we plot the KEspec diagnostic.\n(Note that summing along the l-axis, as in this example, does not give us\na true isotropic wavenumber spectrum.)",
"kespec_u = m.get_diagnostic('KEspec')[0].sum(axis=0)\nkespec_l = m.get_diagnostic('KEspec')[1].sum(axis=0)\nplt.loglog( m.kk, kespec_u, '.-' )\nplt.loglog( m.kk, kespec_l, '.-' )\nplt.legend(['upper layer','lower layer'], loc='lower left')\nplt.ylim([1e-9,1e-3]); plt.xlim([m.kk.min(), m.kk.max()])\nplt.xlabel(r'k (m$^{-1}$)'); plt.grid()\nplt.title('Kinetic Energy Spectrum');",
"We can also plot the spectral fluxes of energy.",
"ebud = [ m.get_diagnostic('APEgenspec').sum(axis=0),\n m.get_diagnostic('APEflux').sum(axis=0),\n m.get_diagnostic('KEflux').sum(axis=0),\n -m.rek*m.del2*m.get_diagnostic('KEspec')[1].sum(axis=0)*m.M**2 ]\nebud.append(-np.vstack(ebud).sum(axis=0))\nebud_labels = ['APE gen','APE flux','KE flux','Diss.','Resid.']\n[plt.semilogx(m.kk, term) for term in ebud]\nplt.legend(ebud_labels, loc='upper right')\nplt.xlim([m.kk.min(), m.kk.max()])\nplt.xlabel(r'k (m$^{-1}$)'); plt.grid()\nplt.title('Spectral Energy Transfers');"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zerothi/sisl | docs/visualization/viz_module/showcase/FatbandsPlot.ipynb | mpl-2.0 | [
"FatbandsPlot",
"import sisl\nimport sisl.viz",
"For this notebook we will create a toy \"Boron nitride\" tight binding:",
"# First, we create the geometry\nBN = sisl.geom.graphene(atoms=[\"B\", \"N\"])\n\n# Create a hamiltonian with different on-site terms\nH = sisl.Hamiltonian(BN)\n\nH[0, 0] = 2\nH[1, 1] = -2\n\nH[0, 1] = -2.7\nH[1, 0] = -2.7\n\nH[0, 1, (-1, 0)] = -2.7\nH[0, 1, (0, -1)] = -2.7\nH[1, 0, (1, 0)] = -2.7\nH[1, 0, (0, 1)] = -2.7",
"Note that we could have obtained this hamiltonian from any other source. Then we generate a path for the band structure:",
"band = sisl.BandStructure(H, [[0., 0.], [2./3, 1./3],\n [1./2, 1./2], [1., 1.]], 301,\n [r'$\\Gamma$', 'K', 'M', r'$\\Gamma$'])",
"And finally we just ask for the fatbands plot:",
"fatbands = band.plot.fatbands()\nfatbands",
"We only see the bands here, but this is a fatbands plot, and it is ready to accept your requests on what to draw!\nRequesting specific weights\nThe fatbands that the plot draws are controlled by the groups setting.",
"print(fatbands.get_param(\"groups\").help)",
"This setting works exactly like the requests setting in PdosPlot, which is documented here. Therefore we won't give an extended description of it, but just quickly show that you can autogenerate the groups:",
"fatbands.split_groups(on=\"species\")",
"Or write them yourself if you want the maximum flexibility:",
"fatbands.update_settings(groups=[\n {\"species\": \"N\", \"color\": \"blue\", \"name\": \"Nitrogen\"},\n {\"species\": \"B\", \"color\": \"red\", \"name\": \"Boron\"}\n])",
"Scaling fatbands\nThe visual appeal of fatbands depends a lot on the size of your plot, therefore there's one global scale setting that scales all fatbands at the same time:",
"fatbands.update_settings(scale=2)",
"You can also use the scale_fatbands method, which additionally lets you choose if you want to rescale from the current size or just set the value of scale:",
"fatbands.scale_fatbands(0.5, from_current=True)",
"Use BandsPlot settings\nAll settings of BandsPlot work as well for FatbandsPlot. Even spin texture!\nWe hope you enjoyed what you learned!\n\nThis next cell is just to create the thumbnail for the notebook in the docs",
"thumbnail_plot = fatbands\n\nif thumbnail_plot:\n thumbnail_plot.show(\"png\")",
""
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rahulkgup/deep-learning-foundation | intro-to-tensorflow/intro_to_tensorflow.ipynb | mit | [
"<h1 align=\"center\">TensorFlow Neural Network Lab</h1>\n\n<img src=\"image/notmnist.png\">\nIn this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href=\"http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html\">notMNIST</a>, consists of images of a letter from A to J in different fonts.\nThe above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!\nTo start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print \"All modules imported\".",
"import hashlib\nimport os\nimport pickle\nfrom urllib.request import urlretrieve\n\nimport numpy as np\nfrom PIL import Image\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.utils import resample\nfrom tqdm import tqdm\nfrom zipfile import ZipFile\n\nprint('All modules imported.')",
"The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).",
"def download(url, file):\n \"\"\"\n Download file from <url>\n :param url: URL to file\n :param file: Local file path\n \"\"\"\n if not os.path.isfile(file):\n print('Downloading ' + file + '...')\n urlretrieve(url, file)\n print('Download Finished')\n\n# Download the training and test dataset.\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')\n\n# Make sure the files aren't corrupted\nassert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\\\n 'notMNIST_train.zip file is corrupted. Remove the file and try again.'\nassert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\\\n 'notMNIST_test.zip file is corrupted. Remove the file and try again.'\n\n# Wait until you see that all files have been downloaded.\nprint('All files downloaded.')\n\ndef uncompress_features_labels(file):\n \"\"\"\n Uncompress features and labels from a zip file\n :param file: The zip file to extract the data from\n \"\"\"\n features = []\n labels = []\n\n with ZipFile(file) as zipf:\n # Progress Bar\n filenames_pbar = tqdm(zipf.namelist(), unit='files')\n \n # Get features and labels from all files\n for filename in filenames_pbar:\n # Check if the file is a directory\n if not filename.endswith('/'):\n with zipf.open(filename) as image_file:\n image = Image.open(image_file)\n image.load()\n # Load image data as 1 dimensional array\n # We're using float32 to save on memory space\n feature = np.array(image, dtype=np.float32).flatten()\n\n # Get the the letter from the filename. This is the letter of the image.\n label = os.path.split(filename)[1][0]\n\n features.append(feature)\n labels.append(label)\n return np.array(features), np.array(labels)\n\n# Get the features and labels from the zip files\ntrain_features, train_labels = uncompress_features_labels('notMNIST_train.zip')\ntest_features, test_labels = uncompress_features_labels('notMNIST_test.zip')\n\n# Limit the amount of data to work with a docker container\ndocker_size_limit = 150000\ntrain_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)\n\n# Set flags for feature engineering. This will prevent you from skipping an important step.\nis_features_normal = False\nis_labels_encod = False\n\n# Wait until you see that all features and labels have been uncompressed.\nprint('All features and labels uncompressed.')",
"<img src=\"image/Mean_Variance_Image.png\" style=\"height: 75%;width: 75%; position: relative; right: 5%\">\nProblem 1\nThe first problem involves normalizing the features for your training and test data.\nImplement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.\nSince the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.\nMin-Max Scaling:\n$\nX'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}\n$\nIf you're having trouble solving problem 1, you can view the solution here.",
"# Problem 1 - Implement Min-Max scaling for grayscale image data\ndef normalize_grayscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # TODO: Implement Min-Max scaling for grayscale image data\n x = image_data\n xmin = 0\n xmax = 255\n a = 0.1\n b = 0.9 \n return a + ( ( (x - xmin)*(b - a) )/( xmax - xmin ) )\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Test Cases\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),\n [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,\n 0.125098039216, 0.128235294118, 0.13137254902, 0.9],\n decimal=3)\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),\n [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,\n 0.896862745098, 0.9])\n\nif not is_features_normal:\n train_features = normalize_grayscale(train_features)\n test_features = normalize_grayscale(test_features)\n is_features_normal = True\n\nprint('Tests Passed!')\n\nif not is_labels_encod:\n # Turn labels into numbers and apply One-Hot Encoding\n encoder = LabelBinarizer()\n encoder.fit(train_labels)\n train_labels = encoder.transform(train_labels)\n test_labels = encoder.transform(test_labels)\n\n # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32\n train_labels = train_labels.astype(np.float32)\n test_labels = test_labels.astype(np.float32)\n is_labels_encod = True\n\nprint('Labels One-Hot Encoded')\n\nassert is_features_normal, 'You skipped the step to normalize the features'\nassert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'\n\n# Get randomized datasets for training and validation\ntrain_features, valid_features, train_labels, valid_labels = train_test_split(\n train_features,\n train_labels,\n test_size=0.05,\n random_state=832289)\n\nprint('Training features and labels randomized and split.')\n\n# Save the data for easy access\npickle_file = 'notMNIST.pickle'\nif not os.path.isfile(pickle_file):\n print('Saving data to pickle file...')\n try:\n with open('notMNIST.pickle', 'wb') as pfile:\n pickle.dump(\n {\n 'train_dataset': train_features,\n 'train_labels': train_labels,\n 'valid_dataset': valid_features,\n 'valid_labels': valid_labels,\n 'test_dataset': test_features,\n 'test_labels': test_labels,\n },\n pfile, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nprint('Data cached in pickle file.')",
"Checkpoint\nAll your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.",
"%matplotlib inline\n\n# Load the modules\nimport pickle\nimport math\n\nimport numpy as np\nimport tensorflow as tf\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\n# Reload the data\npickle_file = 'notMNIST.pickle'\nwith open(pickle_file, 'rb') as f:\n pickle_data = pickle.load(f)\n train_features = pickle_data['train_dataset']\n train_labels = pickle_data['train_labels']\n valid_features = pickle_data['valid_dataset']\n valid_labels = pickle_data['valid_labels']\n test_features = pickle_data['test_dataset']\n test_labels = pickle_data['test_labels']\n del pickle_data # Free up memory\n\nprint('Data and modules loaded.')",
"Problem 2\nNow it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.\n<img src=\"image/network_diagram.png\" style=\"height: 40%;width: 40%; position: relative; right: 10%\">\nFor the input here the images have been flattened into a vector of $28 \\times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. \nFor the neural network to train on your data, you need the following <a href=\"https://www.tensorflow.org/resources/dims_types.html#data-types\">float32</a> tensors:\n - features\n - Placeholder tensor for feature data (train_features/valid_features/test_features)\n - labels\n - Placeholder tensor for label data (train_labels/valid_labels/test_labels)\n - weights\n - Variable Tensor with random numbers from a truncated normal distribution.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal\">tf.truncated_normal() documentation</a> for help.\n - biases\n - Variable Tensor with all zeros.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#zeros\"> tf.zeros() documentation</a> for help.\nIf you're having trouble solving problem 2, review \"TensorFlow Linear Function\" section of the class. If that doesn't help, the solution for this problem is available here.",
"# All the pixels in the image (28 * 28 = 784)\nfeatures_count = 784\n# All the labels\nlabels_count = 10\n\n# TODO: Set the features and labels tensors\nfeatures = tf.placeholder(tf.float32)\nlabels = tf.placeholder(tf.float32)\n\n# TODO: Set the weights and biases tensors\nweights = tf.Variable(tf.truncated_normal((features_count, labels_count)))\nbiases = tf.Variable(tf.zeros(labels_count))\n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n\n#Test Cases\nfrom tensorflow.python.ops.variables import Variable\n\nassert features._op.name.startswith('Placeholder'), 'features must be a placeholder'\nassert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'\nassert isinstance(weights, Variable), 'weights must be a TensorFlow variable'\nassert isinstance(biases, Variable), 'biases must be a TensorFlow variable'\n\nassert features._shape == None or (\\\n features._shape.dims[0].value is None and\\\n features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'\nassert labels._shape == None or (\\\n labels._shape.dims[0].value is None and\\\n labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'\nassert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'\nassert biases._variable._shape == (10), 'The shape of biases is incorrect'\n\nassert features._dtype == tf.float32, 'features must be type float32'\nassert labels._dtype == tf.float32, 'labels must be type float32'\n\n# Feed dicts for training, validation, and test session\ntrain_feed_dict = {features: train_features, labels: train_labels}\nvalid_feed_dict = {features: valid_features, labels: valid_labels}\ntest_feed_dict = {features: test_features, labels: test_labels}\n\n# Linear Function WX + b\nlogits = tf.matmul(features, weights) + biases\n\nprediction = tf.nn.softmax(logits)\n\n# Cross entropy\ncross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)\n\n# Training loss\nloss = tf.reduce_mean(cross_entropy)\n\n# Create an operation that initializes all variables\ninit = tf.global_variables_initializer()\n\n# Test Cases\nwith tf.Session() as session:\n session.run(init)\n session.run(loss, feed_dict=train_feed_dict)\n session.run(loss, feed_dict=valid_feed_dict)\n session.run(loss, feed_dict=test_feed_dict)\n biases_data = session.run(biases)\n\nassert not np.count_nonzero(biases_data), 'biases must be zeros'\n\nprint('Tests Passed!')\n\n# Determine if the predictions are correct\nis_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))\n# Calculate the accuracy of the predictions\naccuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))\n\nprint('Accuracy function created.')",
"<img src=\"image/Learn_Rate_Tune_Image.png\" style=\"height: 70%;width: 70%\">\nProblem 3\nBelow are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.\nParameter configurations:\nConfiguration 1\n* Epochs: 1\n* Learning Rate:\n * 0.8\n * 0.5\n * 0.1\n * 0.05\n * 0.01\nConfiguration 2\n* Epochs:\n * 1\n * 2\n * 3\n * 4\n * 5\n* Learning Rate: 0.2\nThe code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.\nIf you're having trouble solving problem 3, you can view the solution here.",
"# Change if you have memory restrictions\nbatch_size = 128\n\n# TODO: Find the best parameters for each configuration\nepochs = 5\nlearning_rate = 0.02\n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Gradient Descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) \n\n# The accuracy measured against the validation set\nvalidation_accuracy = 0.0\n\n# Measurements use for graphing loss and accuracy\nlog_batch_step = 50\nbatches = []\nloss_batch = []\ntrain_acc_batch = []\nvalid_acc_batch = []\n\nwith tf.Session() as session:\n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer and get loss\n _, l = session.run(\n [optimizer, loss],\n feed_dict={features: batch_features, labels: batch_labels})\n\n # Log every 50 batches\n if not batch_i % log_batch_step:\n # Calculate Training and Validation accuracy\n training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\n # Log batches\n previous_batch = batches[-1] if batches else 0\n batches.append(log_batch_step + previous_batch)\n loss_batch.append(l)\n train_acc_batch.append(training_accuracy)\n valid_acc_batch.append(validation_accuracy)\n\n # Check accuracy against Validation data\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\nloss_plot = plt.subplot(211)\nloss_plot.set_title('Loss')\nloss_plot.plot(batches, loss_batch, 'g')\nloss_plot.set_xlim([batches[0], batches[-1]])\nacc_plot = plt.subplot(212)\nacc_plot.set_title('Accuracy')\nacc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')\nacc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')\nacc_plot.set_ylim([0, 1.0])\nacc_plot.set_xlim([batches[0], batches[-1]])\nacc_plot.legend(loc=4)\nplt.tight_layout()\nplt.show()\n\nprint('Validation accuracy at {}'.format(validation_accuracy))",
"Test\nYou're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.",
"### DON'T MODIFY ANYTHING BELOW ###\n# The accuracy measured against the test set\ntest_accuracy = 0.0\n\nwith tf.Session() as session:\n \n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer\n _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})\n\n # Check accuracy against Test data\n test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)\n\n\nassert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)\nprint('Nice Job! Test Accuracy is {}'.format(test_accuracy))",
"Multiple layers\nGood job! You built a one layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
garth-wells/IA-maths-Jupyter | Lecture09.ipynb | mit | [
"Lecture 9 - change of basis\nThis lecture considered a change of basis for vectors and matrices (tensors)\nVectors and bases\nA vector $\\boldsymbol{a}$ is represented in terms of its components $a_{i}$with respect to a given basis, e.g.\n$$\n\\boldsymbol{a} = a_{1} \\boldsymbol{e}{1} + a{2} \\boldsymbol{e}{2} + a{3} \\boldsymbol{e}_{3}\n$$\n(in old-fashioned notation for the canonical basis $\\boldsymbol{i}$, $\\boldsymbol{j}$, $\\boldsymbol{k}$ is used). Using a suitable basis can sometimes simply construction and/or manipulations, but the 'physical' meaning to the vector is unchanged. With respect to a new basis, say ${\\boldsymbol{e}^{\\prime}_{i}}$, the coefficients will change:\n$$\n\\boldsymbol{a} = a^{\\prime}{1} \\boldsymbol{e}^{\\prime}{1} + a^{\\prime}{2} \\boldsymbol{e}^{\\prime}{2} \n+ a^{\\prime}{3} \\boldsymbol{e}^{\\prime}{3}\n$$\nIn an attempt at simplicity, in the notes we drop the vector basis and consider the somewhat imprecise concept of rotating the coordinate system.\nCoordinate rotations\nWhen considering the rotation of vectors, we looked at the rotation of the unit coordinate vectors, $[1, 0, 0]$, $[0, 1, 0]$ and $[0, 0, 1]$ to $\\boldsymbol{q}{1}$, $\\boldsymbol{q}{2}$ and $\\boldsymbol{q}_{3}$, respectively. From this, the matrix $\\boldsymbol{Q}$ can be formed:\n$$\n\\boldsymbol{Q} =\n\\begin{bmatrix}\n\\uparrow & \\uparrow & \\uparrow\n\\\n\\boldsymbol{q}{1} &\n\\boldsymbol{q}{2} &\n\\boldsymbol{q}_{3} \n\\\n\\downarrow & \\downarrow & \\downarrow\n\\end{bmatrix}\n$$\nWhen changing the coordinate system, the vector remains fixed but we rotate the basis. As a consequence, its components will change. If the basis $[1, 0, 0]$, $[0, 1, 0]$ and $[0, 0, 1]$ is rotated to $\\boldsymbol{q}{1}$, $\\boldsymbol{q}{2}$ and $\\boldsymbol{q}_{3}$, respectively, the coefficients of a vector $\\boldsymbol{a}$ in the rotated coordinate system are $\\boldsymbol{a}^{\\prime}$ \n$$\n\\boldsymbol{a}^{\\prime} = \\boldsymbol{Q}^{T} \\boldsymbol{a} = \\boldsymbol{R} \\boldsymbol{a}\n$$\nMatrices that operate on vectors with an associated basis are in fact tensors, and can also be rotated:\n$$\n\\boldsymbol{A}^{\\prime} = \\boldsymbol{R} \\boldsymbol{A} \\boldsymbol{R}^{T}\n$$\nExample\nWe illustrate now a transformation of an $n \\times n$ matrix $\\boldsymbol{A}$ that will make the matrix particularly simple. We first create a symmetric matrix $\\boldsymbol{A}$ (it will be become clearer later why we consider a symmetric matrix).",
"# Import NumPy and seed random number generator to make generated matrices deterministic\nimport numpy as np\nnp.random.seed(1)\n\n# Create a symmetric matrix with random entries\nA = np.random.rand(4, 4)\nA = A + A.T\nprint(A)",
"We need to create a rotation matrix $\\boldsymbol{R}$, which we will do via computation of the eigenvectors (eigenvectors are covered in the next lecture). The details are not so important here; we just need an orthogonal matrix. Computing the eigenvectors of $\\boldsymbol{A}$ and checking that the eigenvectors are orthonormal:",
"# Compute eigenvectors to generate a set of orthonormal vector\nevalues, evectors = np.linalg.eig(A)\n\n# Verify that eigenvectors R[i] are orthogonal (see Lecture 8 notebook)\nimport itertools\npairs = itertools.combinations_with_replacement(range(np.size(evectors, 0)), 2)\nfor p in pairs:\n e0, e1 = p[0], p[1]\n print(\"Dot product of eigenvectors vectors {}, {}: {}\".format(e0, e1, evectors[:, e0].dot(evectors[:, e1])))",
"We have verified that the eigenvectors form an orthonormal set, and hence can be used to construct a rotation transformation matrix $\\boldsymbol{R}$. For reasons that will become apparent later, we choose $\\boldsymbol{R}$ to be a matrix whose rows are the eigenvectors of $\\boldsymbol{A}$:",
"R = evectors.T",
"We now apply the transformation defined by $\\boldsymbol{R}$ to $\\boldsymbol{A}$:",
"Ap = (R).dot(A.dot(R.T))\nprint(Ap)",
"Note that the transformed matrix is diagonal. We will investigate this further in following lectures.\nWe can reverse the transformation by exploiting the fact that $\\boldsymbol{R}$ is an orthogonal matrix:",
"print((R.T).dot(Ap.dot(R)))",
"which is the same as the original matrix."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tylere/earthengine-api | python/examples/ipynb/TF_demo1_keras.ipynb | apache-2.0 | [
"#@title Copyright 2019 Google LLC. { display-mode: \"form\" }\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Introduction\nThis is an Earth Engine <> TensorFlow demonstration notebook. Specifically, this notebook shows:\n\nExporting training/testing data from Earth Engine in TFRecord format.\nPreparing the data for use in a TensorFlow model.\nTraining and validating a simple model (Keras Sequential neural network) in TensorFlow.\nMaking predictions on image data exported from Earth Engine in TFRecord format.\nIngesting classified image data to Earth Engine in TFRecord format.\n\nInstall the Earth Engine client library\nThis only needs to be done once per notebook.",
"!pip install earthengine-api",
"Authentication\nTo read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). You'll also need to authenticate as yourself with Earth Engine, so that you'll have access to your scripts, assets, etc.\nAuthenticate to Colab and Cloud\nIdentify yourself to Google Cloud, so you have access to storage and other resources. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. Copy the code from the permissions page back into this notebook and press return to complete the process.\n(You may need to run this again if you get a credentials error later.)",
"from google.colab import auth\n\nauth.authenticate_user()",
"Authenticate to Earth Engine\nAuthenticate to Earth Engine the same way you did to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process.",
"!earthengine authenticate",
"Initialize and test the software setup\nTest the Earth Engine installation",
"# Import the Earth Engine API and initialize it.\nimport ee\nee.Initialize()\n\n# Test the earthengine command by getting help on upload.\n!earthengine upload image -h",
"Test the TensorFlow installation\nThe default public runtime already has the tensorflow libraries we need installed. Before any operations from the TensorFlow API are used, import TensorFlow and enable eager execution. This provides an imperative interface that can help with debugging. See the TensorFlow eager execution guide or the tf.enable_eager_execution() docs for details.",
"import tensorflow as tf\n\ntf.enable_eager_execution()\nprint(tf.__version__)",
"Test the Folium installation\nThe default public runtime already has the Folium library we will use for visualization. Import the library, check the version, and define the URL where Folium will look for Earth Engine generated map tiles.",
"import folium\nprint(folium.__version__)\n\n# Define the URL format used for Earth Engine generated map tiles.\nEE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'",
"Get Training and Testing data from Earth Engine\nTo get data for a classification model of three classes (bare, vegetation, water), we need labels and the value of predictor variables for each labeled example. We've already generated some labels in Earth Engine. Specifically, these are visually interpreted points labeled \"bare,\" \"vegetation,\" or \"water\" for a very simple classification demo (Code Editor script). For predictor variables, we'll use Landsat 8 surface reflectance imagery, bands 2-7.\nPrepare Landsat 8 imagery\nFirst, make a cloud-masked median composite of Landsat 8 surface reflectance imagery from 2018. Check the composite by visualizing with folium.",
"# Use these bands for prediction.\nbands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']\n# Use Landsat 8 surface reflectance data.\nl8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\n\n# Cloud masking function.\ndef maskL8sr(image):\n cloudShadowBitMask = ee.Number(2).pow(3).int()\n cloudsBitMask = ee.Number(2).pow(5).int()\n qa = image.select('pixel_qa')\n mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(\n qa.bitwiseAnd(cloudsBitMask).eq(0))\n return image.updateMask(mask).select(bands).divide(10000)\n\n# The image input data is a 2018 cloud-masked median composite.\nimage = l8sr.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()\n\n# Use folium to visualize the imagery.\nmapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})\nmap = folium.Map(location=[38., -122.5])\nfolium.TileLayer(\n tiles=EE_TILES.format(**mapid),\n attr='Google Earth Engine',\n overlay=True,\n name='median composite',\n ).add_to(map)\nmap.add_child(folium.LayerControl())\nmap",
"Add pixel values of the composite to labeled points.\nSome training labels have already been collected for you. Load the labeled points from an existing Earth Engine asset. Each point in this table has a property called landcover that stores the label, encoded as an integer. Here we overlay the points on imagery to get predictor variables along with labels.",
"# Change the following two lines to use your own training data.\nlabels = ee.FeatureCollection('projects/google/demo_landcover_labels')\nlabel = 'landcover'\n\n# Sample the image at the points and add a random column.\nsample = image.sampleRegions(\n collection=labels, properties=[label], scale=30).randomColumn()\n\n# Partition the sample approximately 70-30.\ntraining = sample.filter(ee.Filter.lt('random', 0.7))\ntesting = sample.filter(ee.Filter.gte('random', 0.7))\n\nfrom pprint import pprint\n\n# Print the first couple points to verify.\npprint({'training': training.first().getInfo()})\npprint({'testing': testing.first().getInfo()})",
"Export the training and testing data\nNow that there's training and testing data in Earth Engine and you've inspected a couple examples to ensure that the information you need is present, it's time to materialize the datasets in a place where the TensorFlow model has access to them. You can do that by exporting the training and testing datasets to tables in TFRecord format (learn more about TFRecord format) in a Cloud Storage bucket (learn more about creating Cloud Storage buckets). Note that you need to have write access to the Cloud Storage bucket where the files will be output.",
"# REPLACE WITH YOUR BUCKET!\noutputBucket = 'ee-docs-demos'\n\n# Make sure the bucket exists.\nprint('Found Cloud Storage bucket.' if tf.gfile.Exists('gs://' + outputBucket) \n else 'Output Cloud Storage bucket does not exist.')",
"Once you've verified the existence of the intended output bucket, run the exports.",
"# Names for output files.\ntrainFilePrefix = 'Training_demo_'\ntestFilePrefix = 'Testing_demo_'\n\n# This is list of all the properties we want to export.\nfeatureNames = list(bands)\nfeatureNames.append(label)\n\n# Create the tasks.\ntrainingTask = ee.batch.Export.table.toCloudStorage(\n collection=training,\n description='Training Export',\n fileNamePrefix=trainFilePrefix,\n bucket=outputBucket,\n fileFormat='TFRecord',\n selectors=featureNames)\n\ntestingTask = ee.batch.Export.table.toCloudStorage(\n collection=testing,\n description='Testing Export',\n fileNamePrefix=testFilePrefix,\n bucket=outputBucket,\n fileFormat='TFRecord',\n selectors=featureNames)\n\n# Start the tasks.\ntrainingTask.start()\ntestingTask.start()",
"Monitor task progress\nYou can see all your Earth Engine tasks by listing them. It's also useful to repeatedly poll a task so you know when it's done. Here we can do that because this is a relatively quick export. Be careful when doing this with large exports because it will block the notebook from running other cells until this one completes.",
"# Print all tasks.\nprint(ee.batch.Task.list())\n\n# Poll the training task until it's done.\nimport time \nwhile trainingTask.active():\n print('Polling for task (id: {}).'.format(trainingTask.id))\n time.sleep(5)\nprint('Done with training export.')",
"Check existence of the exported files\nIf you've seen the status of the export tasks change to COMPLETED, then check for the existince of the files in the output Cloud Storage bucket.",
"fileNameSuffix = 'ee_export.tfrecord.gz'\ntrainFilePath = 'gs://' + outputBucket + '/' + trainFilePrefix + fileNameSuffix\ntestFilePath = 'gs://' + outputBucket + '/' + testFilePrefix + fileNameSuffix\n\nprint('Found training file.' if tf.gfile.Exists(trainFilePath) \n else 'No training file found.')\nprint('Found testing file.' if tf.gfile.Exists(testFilePath) \n else 'No testing file found.')",
"Export the imagery\nYou can also export imagery using TFRecord format. Specifically, export whatever imagery you want to be classified by the trained model into the output Cloud Storage bucket.",
"imageFilePrefix = 'Image_pixel_demo_'\n\n# Specify patch and file dimensions.\nimageExportFormatOptions = {\n 'patchDimensions': [256, 256],\n 'maxFileSize': 104857600,\n 'compressed': True\n}\n\n# Export imagery in this region.\nexportRegion = ee.Geometry.Rectangle([-122.7, 37.3, -121.8, 38.00])\n\n# Setup the task.\nimageTask = ee.batch.Export.image.toCloudStorage(\n image=image,\n description='Image Export',\n fileNamePrefix=imageFilePrefix,\n bucket=outputBucket,\n scale=30,\n fileFormat='TFRecord',\n region=exportRegion.toGeoJSON()['coordinates'],\n formatOptions=imageExportFormatOptions,\n)\n\n# Start the task.\nimageTask.start()",
"Monitor task progress\nBefore making predictions, we need the image export to finish, so block until it does. This might take a few minutes...",
"while imageTask.active():\n print('Polling for task (id: {}).'.format(imageTask.id))\n time.sleep(5)\nprint('Done with image export.')",
"Data preparation and pre-processing\nRead data from the TFRecord file into a tf.data.Dataset. Pre-process the dataset to get it into a suitable format for input to the model.\nRead into a tf.data.Dataset\nHere we are going to read a file in Cloud Storage into a tf.data.Dataset. (these TensorFlow docs explain more about reading data into a Dataset). Check that you can read examples from the file. The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable.",
"# Create a dataset from the TFRecord file in Cloud Storage.\ntrainDataset = tf.data.TFRecordDataset(trainFilePath, compression_type='GZIP')\n# Print the first record to check.\nprint(iter(trainDataset).next())",
"Define the structure of your data\nFor parsing the exported TFRecord files, featuresDict is a mapping between feature names (recall that featureNames contains the band and label names) and float32 tf.io.FixedLenFeature objects. This mapping is necessary for telling TensorFlow how to read data in a TFRecord file into tensors. Specifically, all numeric data exported from Earth Engine is exported as float32.\n(Note: features in the TensorFlow context (i.e. feature.proto) are not to be confused with Earth Engine features (i.e. ee.Feature), where the former is a protocol message type for serialized data input to the model and the latter is a geometry-based geographic data structure.)",
"# List of fixed-length features, all of which are float32.\ncolumns = [\n tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in featureNames\n]\n\n# Dictionary with names as keys, features as values.\nfeaturesDict = dict(zip(featureNames, columns))\n\npprint(featuresDict)",
"Parse the dataset\nNow we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized Example proto (i.e. example.proto) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. (Learn more about parsing Example protocol buffer messages).",
"def parse_tfrecord(example_proto):\n \"\"\"The parsing function.\n\n Read a serialized example into the structure defined by featuresDict.\n\n Args:\n example_proto: a serialized Example.\n \n Returns: \n A tuple of the predictors dictionary and the label, cast to an `int32`.\n \"\"\"\n parsed_features = tf.io.parse_single_example(example_proto, featuresDict)\n labels = parsed_features.pop(label)\n return parsed_features, tf.cast(labels, tf.int32)\n\n# Map the function over the dataset.\nparsedDataset = trainDataset.map(parse_tfrecord, num_parallel_calls=5)\n\n# Print the first parsed record to check.\npprint(iter(parsedDataset).next())",
"Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is a class label.\nCreate additional features\nAnother thing we might want to do as part of the input process is to create new features, for example NDVI, a vegetation index computed from reflectance in two spectral bands. Here are some helper functions for that.",
"def normalizedDifference(a, b):\n \"\"\"Compute normalized difference of two inputs.\n\n Compute (a - b) / (a + b). If the denomenator is zero, add a small delta. \n\n Args:\n a: an input tensor with shape=[1]\n b: an input tensor with shape=[1]\n\n Returns:\n The normalized difference as a tensor.\n \"\"\"\n nd = (a - b) / (a + b)\n nd_inf = (a - b) / (a + b + 0.000001)\n return tf.where(tf.is_finite(nd), nd, nd_inf)\n\ndef addNDVI(features, label):\n \"\"\"Add NDVI to the dataset.\n Args: \n features: a dictionary of input tensors keyed by feature name.\n label: the target label\n \n Returns:\n A tuple of the input dictionary with an NDVI tensor added and the label.\n \"\"\"\n features['NDVI'] = normalizedDifference(features['B5'], features['B4'])\n return features, label",
"Model setup\nThe basic workflow for classification in TensorFlow is:\n\nCreate the model.\nTrain the model (i.e. fit()).\nUse the trained model for inference (i.e. predict()).\n\nHere we'll create a Sequential neural network model using Keras. This simple model is inspired by examples in:\n\nThe TensorFlow Get Started tutorial\nThe TensorFlow Keras guide\nThe Keras Sequential model examples\n\nNote that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning.\nCreate the Keras model\nBefore we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See the Keras loss function docs, the TensorFlow categorical identity docs and the tf.one_hot docs for details). \nHere we will use a simple neural network model with a 64 node hidden layer, a dropout layer and an output layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See the Keras Sequential model guide for more details.",
"from tensorflow import keras\n\n# How many classes there are in the model.\nnClasses = 3\n\n# Add NDVI.\ninputDataset = parsedDataset.map(addNDVI)\n\n# Keras requires inputs as a tuple. Note that the inputs must be in the\n# right shape. Also note that to use the categorical_crossentropy loss,\n# the label needs to be turned into a one-hot vector.\ndef toTuple(dict, label):\n return tf.transpose(list(dict.values())), tf.one_hot(indices=label, depth=nClasses)\n\n# Repeat the input dataset as many times as necessary in batches of 10.\ninputDataset = inputDataset.map(toTuple).repeat().batch(10)\n\n# Define the layers in the model.\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Dense(64, activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(nClasses, activation=tf.nn.softmax)\n])\n\n# Compile the model with the specified loss function.\nmodel.compile(optimizer=tf.train.AdamOptimizer(),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Fit the model to the training data.\n# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.\nmodel.fit(x=inputDataset, epochs=3, steps_per_epoch=100)\n",
"Check model accuracy on the test set\nNow that we have a trained model, we can evaluate it using the test dataset. To do that, read and prepare the test dataset in the same way as the training dataset. Here we specify a batch sie of 1 so that each example in the test set is used exactly once to compute model accuracy. For model steps, just specify a number larger than the test dataset size (ignore the warning).",
"testDataset = (\n tf.data.TFRecordDataset(testFilePath, compression_type='GZIP')\n .map(parse_tfrecord, num_parallel_calls=5)\n .map(addNDVI)\n .map(toTuple)\n .batch(1)\n)\n\nmodel.evaluate(testDataset, steps=100)",
"Use the trained model to classify an image from Earth Engine\nNow it's time to classify the image that was exported from Earth Engine. If the exported image is large, it will be split into multiple TFRecord files in its destination folder. There will also be a JSON sidecar file called \"the mixer\" that describes the format and georeferencing of the image. Here we will find the image files and the mixer file, getting some info out of the mixer that will be useful during model inference.\nFind the image files and JSON mixer file in Cloud Storage\nUse gsutil to locate the files of interest in the output Cloud Storage bucket. Check to make sure your image export task finished before running the following.",
"# Get a list of all the files in the output bucket.\nfilesList = !gsutil ls 'gs://'{outputBucket}\n# Get only the files generated by the image export.\nexportFilesList = [s for s in filesList if imageFilePrefix in s]\n\n# Get the list of image files and the JSON mixer file.\nimageFilesList = []\njsonFile = None\nfor f in exportFilesList:\n if f.endswith('.tfrecord.gz'):\n imageFilesList.append(f)\n elif f.endswith('.json'):\n jsonFile = f\n\n# Make sure the files are in the right order.\nimageFilesList.sort()\n\npprint(imageFilesList)\nprint(jsonFile)",
"Read the JSON mixer file\nThe mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction.",
"import json\n\n# Load the contents of the mixer file to a JSON object.\njsonText = !gsutil cat {jsonFile}\n# Get a single string w/ newlines from the IPython.utils.text.SList\nmixer = json.loads(jsonText.nlstr)\npprint(mixer)",
"Read the image files into a dataset\nYou can feed the list of files (imageFilesList) directly to the TFRecordDataset constructor to make a combined dataset on which to perform inference. The input needs to be preprocessed differently than the training and testing. Mainly, this is because the pixels are written into records as patches, we need to read the patches in as one big tensor (one patch for each band), then flatten them into lots of little tensors.",
"# Get relevant info from the JSON mixer file.\nPATCH_WIDTH = mixer['patchDimensions'][0]\nPATCH_HEIGHT = mixer['patchDimensions'][1]\nPATCHES = mixer['totalPatches']\nPATCH_DIMENSIONS_FLAT = [PATCH_WIDTH * PATCH_HEIGHT, 1]\n\n# Note that the tensors are in the shape of a patch, one patch for each band.\nimageColumns = [\n tf.FixedLenFeature(shape=PATCH_DIMENSIONS_FLAT, dtype=tf.float32) \n for k in bands\n]\n\n# Parsing dictionary.\nimageFeaturesDict = dict(zip(bands, imageColumns))\n\n# Note that you can make one dataset from many files by specifying a list.\nimageDataset = tf.data.TFRecordDataset(imageFilesList, compression_type='GZIP')\n\n# Parsing function.\ndef parse_image(example_proto):\n return tf.parse_single_example(example_proto, imageFeaturesDict)\n\n# Parse the data into tensors, one long tensor per patch.\nimageDataset = imageDataset.map(parse_image, num_parallel_calls=5)\n\n# Break our long tensors into many little ones.\nimageDataset = imageDataset.flat_map(\n lambda features: tf.data.Dataset.from_tensor_slices(features)\n)\n\n# Add additional features (NDVI).\nimageDataset = imageDataset.map(\n # Add NDVI to a feature that doesn't have a label.\n lambda features: addNDVI(features, None)[0]\n)\n\n# Turn the dictionary in each record into a tuple with a dummy label.\nimageDataset = imageDataset.map(\n # Add a dummy target (-1), with a value that is obviously ridiculous.\n # This is because the model expects a tuple of (inputs, label).\n lambda dataDict: (tf.transpose(list(dataDict.values())), tf.constant(-1))\n)\n\n# Turn each patch into a batch.\nimageDataset = imageDataset.batch(PATCH_WIDTH * PATCH_HEIGHT)",
"Generate predictions for the image pixels\nTo get predictions in each pixel, run the image dataset through the trained model using model.predict(). Print the first prediction to see that the output is a list of the three class probabilities for each pixel. Running all predictions might take a while.",
"# Run prediction in batches, with as many steps as there are patches.\npredictions = model.predict(imageDataset, steps=PATCHES, verbose=1)\n\n# Note that the predictions come as a numpy array. Check the first one.\nprint(predictions[0])",
"Write the predictions to a TFRecord file\nNow that there's a list of class probabilities in predictions, it's time to write them back into a file, optionally including a class label which is simply the index of the maximum probability. We'll write directly from TensorFlow to a file in the output Cloud Storage bucket.\nIterate over the list, compute class label and write the class and the probabilities in patches. Specifically, we need to write the pixels into the file as patches in the same order they came out. The records are written as serialized tf.train.Example protos. This might take a while.",
"outputImageFile = 'gs://' + outputBucket + '/Classified_pixel_demo.TFRecord'\nprint('Writing to file ' + outputImageFile)\n\n# Instantiate the writer.\nwriter = tf.python_io.TFRecordWriter(outputImageFile)\n\n# Every patch-worth of predictions we'll dump an example into the output\n# file with a single feature that holds our predictions. Since our predictions\n# are already in the order of the exported data, the patches we create here\n# will also be in the right order.\npatch = [[], [], [], []]\ncurPatch = 1\nfor prediction in predictions:\n patch[0].append(tf.argmax(prediction, 1))\n patch[1].append(prediction[0][0])\n patch[2].append(prediction[0][1])\n patch[3].append(prediction[0][2])\n # Once we've seen a patches-worth of class_ids...\n if (len(patch[0]) == PATCH_WIDTH * PATCH_HEIGHT):\n print('Done with patch ' + str(curPatch) + ' of ' + str(PATCHES) + '...')\n # Create an example\n example = tf.train.Example(\n features=tf.train.Features(\n feature={\n 'prediction': tf.train.Feature(\n int64_list=tf.train.Int64List(\n value=patch[0])),\n 'bareProb': tf.train.Feature(\n float_list=tf.train.FloatList(\n value=patch[1])),\n 'vegProb': tf.train.Feature(\n float_list=tf.train.FloatList(\n value=patch[2])),\n 'waterProb': tf.train.Feature(\n float_list=tf.train.FloatList(\n value=patch[3])),\n }\n )\n )\n # Write the example to the file and clear our patch array so it's ready for\n # another batch of class ids\n writer.write(example.SerializeToString())\n patch = [[], [], [], []]\n curPatch += 1\n\nwriter.close()",
"Upload the classifications to an Earth Engine asset\nVerify the existence of the predictions file\nAt this stage, there should be a predictions TFRecord file sitting in the output Cloud Storage bucket. Use the gsutil command to verify that the predictions image (and associated mixer JSON) exist and have non-zero size.",
"!gsutil ls -l {outputImageFile}",
"Upload the classified image to Earth Engine\nUpload the image to Earth Engine directly from the Cloud Storage bucket with the earthengine command. Provide both the image TFRecord file and the JSON file as arguments to earthengine upload.",
"# REPLACE WITH YOUR USERNAME:\nUSER_NAME = 'nclinton'\noutputAssetID = 'users/' + USER_NAME + '/Classified_pixel_demo'\nprint('Writing to ' + outputAssetID)\n\n# Start the upload.\n!earthengine upload image --asset_id={outputAssetID} {outputImageFile} {jsonFile}",
"Check the status of the asset ingestion\nYou can also use the Earth Engine API to check the status of your asset upload. It might take a while. The upload of the image is an asset ingestion task.",
"ee.batch.Task.list()",
"View the ingested asset\nDisplay the vector of class probabilities as an RGB image with colors corresponding to the probability of bare, vegetation, water in a pixel. Also display the winning class using the same color palette.",
"predictionsImage = ee.Image(outputAssetID)\n\npredictionVis = {\n 'bands': 'prediction',\n 'min': 0,\n 'max': 2,\n 'palette': ['red', 'green', 'blue']\n}\nprobabilityVis = {'bands': ['bareProb', 'vegProb', 'waterProb']}\n\npredictionMapid = predictionsImage.getMapId(predictionVis)\nprobabilityMapid = predictionsImage.getMapId(probabilityVis)\n\nmap = folium.Map(location=[38., -122.5])\nfolium.TileLayer(\n tiles=EE_TILES.format(**predictionMapid),\n attr='Google Earth Engine',\n overlay=True,\n name='prediction',\n).add_to(map)\nfolium.TileLayer(\n tiles=EE_TILES.format(**probabilityMapid),\n attr='Google Earth Engine',\n overlay=True,\n name='probability',\n).add_to(map)\nmap.add_child(folium.LayerControl())\nmap"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
probml/pyprobml | notebooks/misc/splines_numpyro.ipynb | mit | [
"<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/splines_numpyro.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n1d regression splines\nWe illustrate 1d regression splines using the cherry blossom example in sec 4.5 of Statistical Rethinking ed 2. \nThe numpyro code is from Du Phan's site.",
"!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro\n!pip install -q arviz\n\nimport numpy as np\n\nnp.set_printoptions(precision=3)\nimport matplotlib.pyplot as plt\nimport math\nimport os\nimport warnings\nimport pandas as pd\n\nfrom scipy.interpolate import BSpline\nfrom scipy.stats import gaussian_kde\n\nimport jax\n\nprint(\"jax version {}\".format(jax.__version__))\nprint(\"jax backend {}\".format(jax.lib.xla_bridge.get_backend().platform))\n\nimport jax.numpy as jnp\nfrom jax import random, vmap\n\nrng_key = random.PRNGKey(0)\nrng_key, rng_key_ = random.split(rng_key)\n\nimport numpyro\nimport numpyro.distributions as dist\nfrom numpyro.distributions import constraints\nfrom numpyro.distributions.transforms import AffineTransform\nfrom numpyro.diagnostics import hpdi, print_summary\nfrom numpyro.infer import Predictive\nfrom numpyro.infer import MCMC, NUTS\nfrom numpyro.infer import SVI, Trace_ELBO, init_to_value\nfrom numpyro.infer.autoguide import AutoLaplaceApproximation\nimport numpyro.optim as optim\n\n\nimport arviz as az",
"Data",
"url = \"https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/cherry_blossoms.csv\"\ncherry_blossoms = pd.read_csv(url, sep=\";\")\ndf = cherry_blossoms\n\ndisplay(df.sample(n=5, random_state=1))\ndisplay(df.describe())\n\ndf2 = df[df.doy.notna()] # complete cases on doy (day of year)\nx = df2.year.values.astype(float)\ny = df2.doy.values.astype(float)\nxlabel = \"year\"\nylabel = \"doy\"",
"B-splines",
"def make_splines(x, num_knots, degree=3):\n knot_list = jnp.quantile(x, q=jnp.linspace(0, 1, num=num_knots))\n knots = jnp.pad(knot_list, (3, 3), mode=\"edge\")\n B = BSpline(knots, jnp.identity(num_knots + 2), k=degree)(x)\n return B\n\n\ndef plot_basis(x, B, w=None):\n if w is None:\n w = jnp.ones((B.shape[1]))\n fig, ax = plt.subplots()\n ax.set_xlim(np.min(x), np.max(x))\n ax.set_xlabel(xlabel)\n ax.set_ylabel(\"basis value\")\n for i in range(B.shape[1]):\n ax.plot(x, (w[i] * B[:, i]), \"k\", alpha=0.5)\n return ax\n\n\nnknots = 15\nB = make_splines(x, nknots)\nax = plot_basis(x, B)\nplt.savefig(f\"splines_basis_{nknots}_{ylabel}.pdf\", dpi=300)\n\nnum_knots = 15\ndegree = 3\n\nknot_list = jnp.quantile(x, q=jnp.linspace(0, 1, num=num_knots))\nprint(knot_list)\nprint(knot_list.shape)\n\nknots = jnp.pad(knot_list, (3, 3), mode=\"edge\")\nprint(knots)\nprint(knots.shape)\n\nB = BSpline(knots, jnp.identity(num_knots + 2), k=degree)(x)\nprint(B.shape)\n\ndef plot_basis_with_vertical_line(x, B, xstar):\n ax = plot_basis(x, B)\n num_knots = B.shape[1]\n ndx = np.where(x == xstar)[0][0]\n for i in range(num_knots):\n yy = B[ndx, i]\n if yy > 0:\n ax.scatter(xstar, yy, s=40)\n ax.axvline(x=xstar)\n return ax\n\n\nplot_basis_with_vertical_line(x, B, 1200)\nplt.savefig(f\"splines_basis_{nknots}_vertical_{ylabel}.pdf\", dpi=300)\n\ndef model(B, y, offset=100):\n a = numpyro.sample(\"a\", dist.Normal(offset, 10))\n w = numpyro.sample(\"w\", dist.Normal(0, 10).expand(B.shape[1:]))\n sigma = numpyro.sample(\"sigma\", dist.Exponential(1))\n mu = numpyro.deterministic(\"mu\", a + B @ w)\n # mu = numpyro.deterministic(\"mu\", a + jnp.sum(B * w, axis=-1)) # equivalent\n numpyro.sample(\"y\", dist.Normal(mu, sigma), obs=y)\n\n\ndef fit_model(B, y, offset=100):\n start = {\"w\": jnp.zeros(B.shape[1])}\n guide = AutoLaplaceApproximation(model, init_loc_fn=init_to_value(values=start))\n svi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), B=B, y=y, offset=offset)\n params, losses = svi.run(random.PRNGKey(0), 20000) # needs 20k iterations\n post = guide.sample_posterior(random.PRNGKey(1), params, (1000,))\n return post\n\n\npost = fit_model(B, y)\nw = jnp.mean(post[\"w\"], 0)\nplot_basis(x, B, w)\nplt.savefig(f\"splines_basis_weighted_{nknots}_{ylabel}.pdf\", dpi=300)\n\ndef plot_post_pred(post, x, y):\n mu = post[\"mu\"]\n mu_PI = jnp.percentile(mu, q=(1.5, 98.5), axis=0)\n plt.figure()\n plt.scatter(x, y)\n plt.fill_between(x, mu_PI[0], mu_PI[1], color=\"k\", alpha=0.5)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n plt.show()\n\n\nplot_post_pred(post, x, y)\nplt.savefig(f\"splines_post_pred_{nknots}_{ylabel}.pdf\", dpi=300)\n\na = jnp.mean(post[\"a\"], 0)\nw = jnp.mean(post[\"w\"], 0)\nmu = a + B @ w\n\n\ndef plot_pred(mu, x, y):\n plt.figure()\n plt.scatter(x, y, alpha=0.5)\n plt.plot(x, mu, \"k-\", linewidth=4)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n\n\nplot_pred(mu, x, y)\nplt.savefig(f\"splines_point_pred_{nknots}_{ylabel}.pdf\", dpi=300)",
"Repeat with temperature as target variable",
"df2 = df[df.temp.notna()] # complete cases\nx = df2.year.values.astype(float)\ny = df2.temp.values.astype(float)\nxlabel = \"year\"\nylabel = \"temp\"\n\nnknots = 15\n\nB = make_splines(x, nknots)\nplot_basis_with_vertical_line(x, B, 1200)\nplt.savefig(f\"splines_basis_{nknots}_vertical_{ylabel}.pdf\", dpi=300)\n\n\npost = fit_model(B, y, offset=6)\nw = jnp.mean(post[\"w\"], 0)\nplot_basis(x, B, w)\nplt.savefig(f\"splines_basis_weighted_{nknots}_{ylabel}.pdf\", dpi=300)\n\nplot_post_pred(post, x, y)\nplt.savefig(f\"splines_post_pred_{nknots}_{ylabel}.pdf\", dpi=300)\n\na = jnp.mean(post[\"a\"], 0)\nw = jnp.mean(post[\"w\"], 0)\nmu = a + B @ w\nplot_pred(mu, x, y)\nplt.savefig(f\"splines_point_pred_{nknots}_{ylabel}.pdf\", dpi=300)",
"Maximum likelihood estimation",
"from sklearn.linear_model import LinearRegression, Ridge\n\n# reg = LinearRegression().fit(B, y)\nreg = Ridge().fit(B, y)\nw = reg.coef_\na = reg.intercept_\nprint(w)\nprint(a)\n\nmu = a + B @ w\nplot_pred(mu, x, y)\nplt.savefig(f\"splines_MLE_{nknots}_{ylabel}.pdf\", dpi=300)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jornvdent/WUR-Geo-Scripting-Course | Lesson 14/Lesson 14 - Assignment.ipynb | gpl-3.0 | [
"Import modules",
"from twython import TwythonStreamer\nimport string, json, pprint\nimport urllib\nfrom datetime import datetime\nfrom datetime import date\nfrom time import *\nimport string, os, sys, subprocess, time\nimport psycopg2\nimport re\nfrom osgeo import ogr",
"Enter your details for twitter API",
"# get access to the twitter API\nAPP_KEY = 'fQCYxyQmFDUE6aty0JEhDoZj7'\nAPP_SECRET = 'ZwVIgnWMpuEEVd1Tlg6TWMuyRwd3k90W3oWyLR2Ek1tnjnRvEG'\nOAUTH_TOKEN = '824520596293820419-f4uGwMV6O7PSWUvbPQYGpsz5fMSVMct'\nOAUTH_TOKEN_SECRET = '1wq51Im5HQDoSM0Fb5OzAttoP3otToJtRFeltg68B8krh'",
"Set up details for PostGIS DB, run in terminal:\nWe are going to use a PostGis database, which requires you to have an empty database. Enter these steps into the terminal to set up you databse.\nIn this example we use \"demo\" as the name of our database. Feel free to give you database another name, but replace \"demo\" with the name you have chosen. \nConnect to postgres\npsql -d postgres\"\nCreate database\npostgres=# CREATE DATABASE demo;\nSwitch to new DB\npostgres=# \\c demo\nAdd PostGIS extension to new DB\ndemo=# create extension postgis;\nAdd Table\ndemo=# CREATE TABLE tweets (id serial primary key, tweet_id BIGINT, text varchar(140), date DATE, time TIME, geom geometry(POINT,4326) );\nEnter your database connection details:",
"dbname = \"demo\"\nuser = \"user\"\npassword = \"user\"\ntable = \"tweets\"",
"Function which connects to PostGis database and inserts data",
"def insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon):\n try:\n conn = psycopg2.connect(dbname = dbname, user = user, password = password)\n cur = conn.cursor()\n # enter stuff in database\n sql = \"INSERT INTO \" + str(table) + \" (tweet_id, text, date, time, geom) \\\n VALUES (\" + str(tweet_id) + \", '\" + str(tweet_text) + \"', '\" + str(tweet_date) + \"', '\" + str(tweet_time) + \"', \\\n ST_GeomFromText('POINT(\" + str(tweet_lon) + \" \" + str(tweet_lat) + \")', 4326))\"\n cur.execute(sql)\n conn.commit()\n conn.close()\n\n except psycopg2.DatabaseError, e:\n print 'Error %s' % e ",
"Function to remove the hyperlinks from the text",
"def remove_link(text):\n pattern = r'(https://)'\n matcher = re.compile(pattern)\n match = matcher.search(text)\n if match != None:\n text = text[:match.start(1)]\n return text",
"Process JSON twitter streamd data",
"#Class to process JSON data comming from the twitter stream API. Extract relevant fields\nclass MyStreamer(TwythonStreamer):\n def on_success(self, data):\n tweet_lat = 0.0\n tweet_lon = 0.0\n tweet_name = \"\"\n retweet_count = 0\n\n if 'id' in data:\n tweet_id = data['id']\n if 'text' in data:\n tweet_text = data['text'].encode('utf-8').replace(\"'\",\"''\").replace(';','')\n tweet_text = remove_link(tweet_text)\n if 'coordinates' in data: \n geo = data['coordinates']\n if geo is not None:\n latlon = geo['coordinates']\n tweet_lon = latlon[0]\n tweet_lat = latlon[1]\n if 'created_at' in data:\n dt = data['created_at']\n tweet_datetime = datetime.strptime(dt, '%a %b %d %H:%M:%S +0000 %Y')\n tweet_date = str(tweet_datetime)[:11]\n tweet_time = str(tweet_datetime)[11:]\n\n if 'user' in data:\n users = data['user']\n tweet_name = users['screen_name']\n\n if 'retweet_count' in data:\n retweet_count = data['retweet_count']\n \n if tweet_lat != 0:\n # call function to write to DB\n insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon)\n \n def on_error(self, status_code, data):\n print \"OOPS FOUTJE: \" +str(status_code)\n #self.disconnect",
"Main procedure",
"def main():\n try:\n stream = MyStreamer(APP_KEY, APP_SECRET,OAUTH_TOKEN, OAUTH_TOKEN_SECRET)\n print 'Connecting to twitter: will take a minute'\n except ValueError:\n print 'OOPS! that hurts, something went wrong while making connection with Twitter: '+str(ValueError)\n \n \n # Filter based on bounding box see twitter api documentation for more info\n try:\n stream.statuses.filter(locations='-0.351468, 51.38494, 0.148271, 51.672343')\n except ValueError:\n print 'OOPS! that hurts, something went wrong while getting the stream from Twitter: '+str(ValueError)\n\n\n \nif __name__ == '__main__':\n main()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbwhit/svds-jupyter | deliver/01-Tips-and-tricks.ipynb | mit | [
"Best practices\nLet's start with pep8 (https://www.python.org/dev/peps/pep-0008/)\n\nImports should be grouped in the following order:\n\nstandard library imports\nrelated third party imports\nlocal application/library specific imports\n\nYou should put a blank line between each group of imports.\nPut any relevant all specification after the imports.",
"# Best practice for loading libraries?\n# Couldn't find what to do with 'magic' imports at the top\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format='retina' \n\nfrom __future__ import division\n\nfrom itertools import combinations\nimport string\n\nfrom IPython.display import IFrame, HTML, YouTubeVideo\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport scipy as sp\nimport seaborn as sns; sns.set();\n\nplt.rcParams['figure.figsize'] = (12, 8)\nsns.set_style(\"darkgrid\")\nsns.set_context(\"poster\", font_scale=1.3)",
"Pivot Tables w/ pandas\nhttp://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/",
"YouTubeVideo(\"ZbrRrXiWBKc\", width=800, height=600)\n\n!pip install pivottablejs\n\ndf = pd.read_csv(\"../data/mps.csv\")\n\ndf.head()\n\nfrom pivottablejs import pivot_ui\npivot_ui(df)\n# Province, Party, Average, Age, Heatmap",
"Keyboard shortcuts",
"# in select mode, shift j/k (to select multiple cells at once)\n# split cell with ctrl shift -\n\nfirst = 1\n\nsecond = 2\n\nthird = 3",
"Floating Table of Contents\nCreates a new button on the toolbar that pops up a table of contents that you can navigate by.\nIn your documentation if you indent by 4 spaces, you get monospaced code-style code so you can embed in a Markdown cell:\n$ mkdir toc\n$ cd toc\n\n$ wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js\n\n$ wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css\n$ cd ..\n\n$ jupyter-nbextension install --user toc\n\n$ jupyter-nbextension enable toc/toc\n\nYou can also get syntax highlighting if you tell it the language that you're including: \n```bash\nmkdir toc\ncd toc\nwget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js\nwget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css\ncd ..\njupyter-nbextension install --user toc\njupyter-nbextension enable toc/toc\n```\nR\n\npyRserve\nrpy2",
"import rpy2\n\n%load_ext rpy2.ipython\n\nX = np.array([0,1,2,3,4])\nY = np.array([3,5,4,6,7])\n\n%%R -i X,Y -o XYcoef\nXYlm = lm(Y~X)\nXYcoef = coef(XYlm)\nprint(summary(XYlm))\npar(mfrow=c(2,2))\nplot(XYlm)",
"Tech_Vault additions\nMiniconda + conda environments\nProbably the best way to go -- there should be an updated document in tech_vault that describes the way to setup py2, and py3 environments.\nSVDS Template\n\ncolor/seaborn template\nstandards of organization (imports all at the top)\n\nR&D Project: Projects that would be great to support\n\nhttp://nbdiff.org/\nstatsmodels"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IanHawke/maths-with-python | 04-basic-plotting.ipynb | mit | [
"Plotting\nThere are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.",
"from matplotlib import pyplot\n\n%matplotlib inline\nfrom matplotlib import rcParams\nrcParams['figure.figsize']=(12,9)\n\nfrom math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x_point = 0.01*i\n x.append(x_point)\n y.append(sin(pi*x_point)**2)\n\npyplot.plot(x, y)\npyplot.show()",
"We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.\nIf using the notebook you can include the command %matplotlib inline or %matplotlib notebook before plotting to make the plots appear automatically inside the notebook. If code is included in a program which is run inside spyder through an IPython console, the figures may appear in the console automatically. Either way, it is good practice to always include the show command to explicitly display the plot.\nThis plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:",
"from math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x_point = 0.01*i\n x.append(x_point)\n y.append(sin(pi*x_point)**2)\n\npyplot.plot(x, y, marker='+', markersize=8, linestyle=':', \n linewidth=3, color='b', label=r'$\\sin^2(\\pi x)$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic plot')\npyplot.show()",
"Whilst most of the commands are self-explanatory, a note should be made of the strings line r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be \"raw\": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \\pi and \\sin. Most basic symbols can be easily guessed (eg \\theta or \\int), but there are useful lists of symbols, and a reverse search site available. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms.\nBy combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly.\nHere are some more examples:",
"from math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x_point = 1.0 + 0.01*i\n x.append(x_point)\n y1.append(exp(sin(pi*x_point)))\n y2.append(log(pi+x_point*sin(x_point)))\n\npyplot.loglog(x, y1, linestyle='--', linewidth=4, \n color='k', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.loglog(x, y2, linestyle='-.', linewidth=4, \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic logarithmic plot')\npyplot.show()\n\nfrom math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x_point = 1.0 + 0.01*i\n x.append(x_point)\n y1.append(exp(sin(pi*x_point)))\n y2.append(log(pi+x_point*sin(x_point)))\n\npyplot.semilogy(x, y1, linestyle='None', marker='o', \n color='g', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.semilogy(x, y2, linestyle='None', marker='^', \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A different logarithmic plot')\npyplot.show()",
"We will look at more complex plots later, but the matplotlib documentation contains a lot of details, and the gallery contains a lot of examples that can be adapted to fit. There is also an extremely useful document as part of Johansson's lectures on scientific Python, and an introduction by Nicolas Rougier.\nExercise: Logistic map\nThe logistic map builds a sequence of numbers ${ x_n }$ using the relation\n$$ x_{n+1} = r x_n \\left( 1 - x_n \\right), $$\nwhere $0 \\le x_0 \\le 1$.\nExercise 1\nWrite a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).\nExercise 2\nFix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases.\nWhat does this suggest about the long-term behaviour of the sequence?\nExercise 3\nFix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).\nExercise 4\nFor iterative maps such as the logistic map, one of three things can occur:\n\nThe sequence settles down to a fixed point.\nThe sequence rotates through a finite number of values. This is called a limit cycle.\nThe sequence generates an infinite number of values. This is called deterministic chaos.\n\nUsing just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
shubham0704/ATR-FNN | MAMs discussion.ipynb | mit | [
"Morphological Associative memories",
"# dependencies\nimport matplotlib.pyplot as plt\nimport pickle\nimport numpy as np\n\nf = open('final_dataset.pickle','rb')\ndataset = pickle.load(f)\n\nsample_image = dataset['train_dataset'][0]\nsample_label = dataset['train_labels'][0]\nprint(sample_label)\nplt.figure()\nplt.imshow(sample_image)\nplt.show()\n\n# lets make Wxx and Mxx for this images\n# Wxx\nx = np.array([0,0,0])\ny = np.array([0,1,0])\nfinal = np.subtract.outer(y,x)\nprint(final)\n\n# 1. flatten the whole image into n-pixels\nx_vectors = sample_image.flatten()\n# dimensions must be of the form img_len,1\n# for this x_vector the weights must be of the order 1,num_perceptrons\n# but this gives me a sparse matrix of the order img_len,num_perceptrons\n# what we do here is take sum row wise we will get like [1,0,0,1] will become [2]\n# and therefore we will have outputs as img_len,1\n\n# so here to get the weights for x_vectors we have to multiply matrices of the order\n# img_len,1 and 1,img_len \n\n#x_vectors = np.array([1,2,5,1])\nweights = np.subtract.outer(x_vectors, x_vectors)\nprint(weights)\n\nadd_individual = np.add(weights, x_vectors)\n# pg-6 now perform row wise max\nresult = [max(row) for row in add_individual]\nnp.testing.assert_array_almost_equal(x_vectors, result)\nprint('done')\n# for k=1 dimesions of Mxx and Wxx are same",
"Now lets add some erosive noise to the image and then lets see the recall",
"import cv2\n\nerode_img = sample_image\n# kernel is a pixel set like a cross( or any shape) which convolves and erodes\nkernel = np.ones((5,5),np.uint8)\nerosion = cv2.erode(erode_img,kernel,iterations = 1)\nplt.figure()\nplt.imshow(erosion)\nplt.show()\n\n# Now lets try to do some recall\nx_eroded = erosion\nx_eroded_vector = x_eroded.flatten()\n\nadd_individual = np.add(weights, x_eroded_vector)\nresult = np.array([max(row) for row in add_individual])\n# now lets reshape the result to 128 x 128\nresult.shape = (128, 128)\nplt.figure()\nplt.imshow(result)\nplt.show()\n\n# now lets see the amount of recall error\nresult = result.flatten()\nnp.testing.assert_array_almost_equal(result, x_vectors)\nprint('done 0%')",
"Further investigation will be done on obtaining kernel matrix and creating a Neural Network"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rainyear/pytips | Tips/2016-03-23-With-Context-Manager.ipynb | mit | [
"Python 上下文管理器\nPython 2.5 引入了 with 语句(PEP 343)与上下文管理器类型(Context Manager Types),其主要作用包括:\n\n保存、重置各种全局状态,锁住或解锁资源,关闭打开的文件等。With Statement Context Managers\n\n一种最普遍的用法是对文件的操作:",
"with open(\"utf8.txt\", \"r\") as f:\n print(f.read())",
"上面的例子也可以用 try...finally... 实现,它们的效果是相同(或者说上下文管理器就是封装、简化了错误捕捉的过程):",
"try:\n f = open(\"utf8.txt\", \"r\")\n print(f.read())\nfinally:\n f.close()",
"除了文件对象之外,我们也可以自己创建上下文管理器,与 0x01 中介绍的迭代器类似,只要定义了 __enter__() 和 __exit__() 方法就成为了上下文管理器类型。with 语句的执行过程如下:\n\n执行 with 后的语句获取上下文管理器,例如 open('utf8.txt', 'r') 就是返回一个 file object;\n加载 __exit__() 方法备用;\n执行 __enter__(),该方法的返回值将传递给 as 后的变量(如果有的话);\n执行 with 语法块的子句;\n执行 __exit__() 方法,如果 with 语法块子句中出现异常,将会传递 type, value, traceback 给 __exit__(),否则将默认为 None;如果 __exit__() 方法返回 False,将会抛出异常给外层处理;如果返回 True,则忽略异常。\n\n了解了 with 语句的执行过程,我们可以编写自己的上下文管理器。假设我们需要一个引用计数器,而出于某些特殊的原因需要多个计数器共享全局状态并且可以相互影响,而且在计数器使用完毕之后需要恢复初始的全局状态:",
"_G = {\"counter\": 99, \"user\": \"admin\"}\n\nclass Refs():\n def __init__(self, name = None):\n self.name = name\n self._G = _G\n self.init = self._G['counter']\n def __enter__(self):\n return self\n def __exit__(self, *args):\n self._G[\"counter\"] = self.init\n return False\n def acc(self, n = 1):\n self._G[\"counter\"] += n\n def dec(self, n = 1):\n self._G[\"counter\"] -= n\n def __str__(self):\n return \"COUNTER #{name}: {counter}\".format(**self._G, name=self.name)\n \nwith Refs(\"ref1\") as ref1, Refs(\"ref2\") as ref2: # Python 3.1 加入了多个并列上下文管理器\n for _ in range(3):\n ref1.dec()\n print(ref1)\n ref2.acc(2)\n print(ref2)\nprint(_G)",
"上面的例子很别扭但是可以很好地说明 with 语句的执行顺序,只是每次定义两个方法看起来并不是很简洁,一如既往地,Python 提供了 @contextlib.contextmanager + generator 的方式来简化这一过程(正如 0x01 中 yield 简化迭代器一样):",
"from contextlib import contextmanager as cm\n_G = {\"counter\": 99, \"user\": \"admin\"}\n\n@cm\ndef ref():\n counter = _G[\"counter\"]\n yield _G\n _G[\"counter\"] = counter\n\nwith ref() as r1, ref() as r2:\n for _ in range(3):\n r1[\"counter\"] -= 1\n print(\"COUNTER #ref1: {}\".format(_G[\"counter\"]))\n r2[\"counter\"] += 2\n print(\"COUNTER #ref2: {}\".format(_G[\"counter\"]))\nprint(\"*\"*20)\nprint(_G)",
"这里对生成器的要求是必须只能返回一个值(只有一次 yield),返回的值相当于 __enter__() 的返回值;而 yield 后的语句相当于 __exit__()。\n生成器的写法更简洁,适合快速生成一个简单的上下文管理器。\n除了上面两种方式,Python 3.2 中新增了 contextlib.ContextDecorator,可以允许我们自己在 class 层面定义新的”上下文管理修饰器“,有兴趣可以到官方文档查看。至少在我目前看来好像并没有带来更多方便(除了可以省掉一层缩进之外:()。\n上下文管理器的概念与修饰器有很多相似之处,但是要记住的是 with 语句的目的是为了更优雅地收拾残局而不是替代 try...finally...,毕竟在 The Zen of Python 中,\n\nExplicit is better than implicit.\n\n比\n\nSimple is better than complex.\n\n更重要:P。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | dev/_downloads/a786781c3c54739f0be7add5b76a068f/50_ssvep.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Frequency-tagging: Basic analysis of an SSVEP/vSSR dataset\nIn this tutorial we compute the frequency spectrum and quantify signal-to-noise\nratio (SNR) at a target frequency in EEG data recorded during fast periodic\nvisual stimulation (FPVS) at 12 Hz and 15 Hz in different trials.\nExtracting SNR at stimulation frequency is a simple way to quantify frequency\ntagged responses in MEEG (a.k.a. steady state visually evoked potentials,\nSSVEP, or visual steady-state responses, vSSR in the visual domain,\nor auditory steady-state responses, ASSR in the auditory domain).\nFor a general introduction to the method see\nNorcia et al. (2015) for the visual domain,\nand Picton et al. (2003) for\nthe auditory domain.\nData and outline:\nWe use a simple example dataset with frequency tagged visual stimulation:\nN=2 participants observed checkerboard patterns inverting with a constant\nfrequency of either 12.0 Hz of 15.0 Hz.\n32 channels wet EEG was recorded.\n(see ssvep-dataset for more information).\nWe will visualize both the power-spectral density (PSD) and the SNR\nspectrum of the epoched data,\nextract SNR at stimulation frequency,\nplot the topography of the response,\nand statistically separate 12 Hz and 15 Hz responses in the different trials.\nSince the evoked response is mainly generated in early visual areas of the\nbrain the statistical analysis will be carried out on an occipital\nROI.\n :depth: 2",
"# Authors: Dominik Welke <[email protected]>\n# Evgenii Kalenkovich <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport mne\nimport numpy as np\nfrom scipy.stats import ttest_rel",
"Data preprocessing\nDue to a generally high SNR in SSVEP/vSSR, typical preprocessing steps\nare considered optional. This doesn't mean, that a proper cleaning would not\nincrease your signal quality!\n\n\nRaw data have FCz reference, so we will apply common-average rereferencing.\n\n\nWe will apply a 0.1 highpass filter.\n\n\nLastly, we will cut the data in 20 s epochs corresponding to the trials.",
"# Load raw data\ndata_path = mne.datasets.ssvep.data_path()\nbids_fname = (data_path / 'sub-02' / 'ses-01' / 'eeg' /\n 'sub-02_ses-01_task-ssvep_eeg.vhdr')\n\nraw = mne.io.read_raw_brainvision(bids_fname, preload=True, verbose=False)\nraw.info['line_freq'] = 50.\n\n# Set montage\nmontage = mne.channels.make_standard_montage('easycap-M1')\nraw.set_montage(montage, verbose=False)\n\n# Set common average reference\nraw.set_eeg_reference('average', projection=False, verbose=False)\n\n# Apply bandpass filter\nraw.filter(l_freq=0.1, h_freq=None, fir_design='firwin', verbose=False)\n\n# Construct epochs\nevent_id = {\n '12hz': 255,\n '15hz': 155\n}\nevents, _ = mne.events_from_annotations(raw, verbose=False)\ntmin, tmax = -1., 20. # in s\nbaseline = None\nepochs = mne.Epochs(\n raw, events=events,\n event_id=[event_id['12hz'], event_id['15hz']], tmin=tmin,\n tmax=tmax, baseline=baseline, verbose=False)",
"Frequency analysis\nNow we compute the frequency spectrum of the EEG data.\nYou will already see the peaks at the stimulation frequencies and some of\ntheir harmonics, without any further processing.\nThe 'classical' PSD plot will be compared to a plot of the SNR spectrum.\nSNR will be computed as a ratio of the power in a given frequency bin\nto the average power in its neighboring bins.\nThis procedure has two advantages over using the raw PSD:\n\n\nit normalizes the spectrum and accounts for 1/f power decay.\n\n\npower modulations which are not very narrow band will disappear.\n\n\nCalculate power spectral density (PSD)\nThe frequency spectrum will be computed using Fast Fourier transform (FFT).\nThis seems to be common practice in the steady-state literature and is\nbased on the exact knowledge of the stimulus and the assumed response -\nespecially in terms of it's stability over time.\nFor a discussion see e.g.\nBach & Meigen (1999)\nWe will exclude the first second of each trial from the analysis:\n\n\nsteady-state response often take a while to stabilize, and the\n transient phase in the beginning can distort the signal estimate.\n\n\nthis section of data is expected to be dominated by responses related to\n the stimulus onset, and we are not interested in this.\n\n\nIn MNE we call plain FFT as a special case of Welch's method, with only a\nsingle Welch window spanning the entire trial and no specific windowing\nfunction (i.e. applying a boxcar window).",
"tmin = 1.\ntmax = 20.\nfmin = 1.\nfmax = 90.\nsfreq = epochs.info['sfreq']\n\npsds, freqs = mne.time_frequency.psd_welch(\n epochs,\n n_fft=int(sfreq * (tmax - tmin)),\n n_overlap=0, n_per_seg=None,\n tmin=tmin, tmax=tmax,\n fmin=fmin, fmax=fmax,\n window='boxcar',\n verbose=False)",
"Calculate signal to noise ratio (SNR)\nSNR - as we define it here - is a measure of relative power:\nit's the ratio of power in a given frequency bin - the 'signal' -\nto a 'noise' baseline - the average power in the surrounding frequency bins.\nThis approach was initially proposed by\nMeigen & Bach (1999)\nHence, we need to set some parameters for this baseline - how many\nneighboring bins should be taken for this computation, and do we want to skip\nthe direct neighbors (this can make sense if the stimulation frequency is not\nsuper constant, or frequency bands are very narrow).\nThe function below does what we want.",
"def snr_spectrum(psd, noise_n_neighbor_freqs=1, noise_skip_neighbor_freqs=1):\n \"\"\"Compute SNR spectrum from PSD spectrum using convolution.\n\n Parameters\n ----------\n psd : ndarray, shape ([n_trials, n_channels,] n_frequency_bins)\n Data object containing PSD values. Works with arrays as produced by\n MNE's PSD functions or channel/trial subsets.\n noise_n_neighbor_freqs : int\n Number of neighboring frequencies used to compute noise level.\n increment by one to add one frequency bin ON BOTH SIDES\n noise_skip_neighbor_freqs : int\n set this >=1 if you want to exclude the immediately neighboring\n frequency bins in noise level calculation\n\n Returns\n -------\n snr : ndarray, shape ([n_trials, n_channels,] n_frequency_bins)\n Array containing SNR for all epochs, channels, frequency bins.\n NaN for frequencies on the edges, that do not have enough neighbors on\n one side to calculate SNR.\n \"\"\"\n # Construct a kernel that calculates the mean of the neighboring\n # frequencies\n averaging_kernel = np.concatenate((\n np.ones(noise_n_neighbor_freqs),\n np.zeros(2 * noise_skip_neighbor_freqs + 1),\n np.ones(noise_n_neighbor_freqs)))\n averaging_kernel /= averaging_kernel.sum()\n\n # Calculate the mean of the neighboring frequencies by convolving with the\n # averaging kernel.\n mean_noise = np.apply_along_axis(\n lambda psd_: np.convolve(psd_, averaging_kernel, mode='valid'),\n axis=-1, arr=psd\n )\n\n # The mean is not defined on the edges so we will pad it with nas. The\n # padding needs to be done for the last dimension only so we set it to\n # (0, 0) for the other ones.\n edge_width = noise_n_neighbor_freqs + noise_skip_neighbor_freqs\n pad_width = [(0, 0)] * (mean_noise.ndim - 1) + [(edge_width, edge_width)]\n mean_noise = np.pad(\n mean_noise, pad_width=pad_width, constant_values=np.nan\n )\n\n return psd / mean_noise",
"Now we call the function to compute our SNR spectrum.\nAs described above, we have to define two parameters.\n\n\nhow many noise bins do we want?\n\n\ndo we want to skip the n bins directly next to the target bin?\n\n\nTweaking these parameters can drastically impact the resulting spectrum,\nbut mainly if you choose extremes.\nE.g. if you'd skip very many neighboring bins, broad band power modulations\n(such as the alpha peak) should reappear in the SNR spectrum.\nOn the other hand, if you skip none you might miss or smear peaks if the\ninduced power is distributed over two or more frequency bins (e.g. if the\nstimulation frequency isn't perfectly constant, or you have very narrow\nbins).\nHere, we want to compare power at each bin with average power of the\nthree neighboring bins (on each side) and skip one bin directly next\nto it.",
"snrs = snr_spectrum(psds, noise_n_neighbor_freqs=3,\n noise_skip_neighbor_freqs=1)",
"Plot PSD and SNR spectra\nNow we will plot grand average PSD (in blue) and SNR (in red) ± sd\nfor every frequency bin.\nPSD is plotted on a log scale.",
"fig, axes = plt.subplots(2, 1, sharex='all', sharey='none', figsize=(8, 5))\nfreq_range = range(np.where(np.floor(freqs) == 1.)[0][0],\n np.where(np.ceil(freqs) == fmax - 1)[0][0])\n\npsds_plot = 10 * np.log10(psds)\npsds_mean = psds_plot.mean(axis=(0, 1))[freq_range]\npsds_std = psds_plot.std(axis=(0, 1))[freq_range]\naxes[0].plot(freqs[freq_range], psds_mean, color='b')\naxes[0].fill_between(\n freqs[freq_range], psds_mean - psds_std, psds_mean + psds_std,\n color='b', alpha=.2)\naxes[0].set(title=\"PSD spectrum\", ylabel='Power Spectral Density [dB]')\n\n# SNR spectrum\nsnr_mean = snrs.mean(axis=(0, 1))[freq_range]\nsnr_std = snrs.std(axis=(0, 1))[freq_range]\n\naxes[1].plot(freqs[freq_range], snr_mean, color='r')\naxes[1].fill_between(\n freqs[freq_range], snr_mean - snr_std, snr_mean + snr_std,\n color='r', alpha=.2)\naxes[1].set(\n title=\"SNR spectrum\", xlabel='Frequency [Hz]',\n ylabel='SNR', ylim=[-2, 30], xlim=[fmin, fmax])\nfig.show()",
"You can see that the peaks at the stimulation frequencies (12 Hz, 15 Hz)\nand their harmonics are visible in both plots (just as the line noise at\n50 Hz).\nYet, the SNR spectrum shows them more prominently as peaks from a\nnoisy but more or less constant baseline of SNR = 1.\nYou can further see that the SNR processing removes any broad-band power\ndifferences (such as the increased power in alpha band around 10 Hz),\nand also removes the 1/f decay in the PSD.\nNote, that while the SNR plot implies the possibility of values below 0\n(mean minus sd) such values do not make sense.\nEach SNR value is a ratio of positive PSD values, and the lowest possible PSD\nvalue is 0 (negative Y-axis values in the upper panel only result from\nplotting PSD on a log scale).\nHence SNR values must be positive and can minimally go towards 0.\nExtract SNR values at the stimulation frequency\nOur processing yielded a large array of many SNR values for each trial ×\nchannel × frequency-bin of the PSD array.\nFor statistical analysis we obviously need to define specific subsets of this\narray. First of all, we are only interested in SNR at the stimulation\nfrequency, but we also want to restrict the analysis to a spatial ROI.\nLastly, answering your interesting research questions will probably rely on\ncomparing SNR in different trials.\nTherefore we will have to find the indices of trials, channels, etc.\nAlternatively, one could subselect the trials already at the epoching step,\nusing MNE's event information, and process different epoch structures\nseparately.\nLet's only have a look at the trials with 12 Hz stimulation, for now.",
"# define stimulation frequency\nstim_freq = 12.",
"Get index for the stimulation frequency (12 Hz)\nIdeally, there would be a bin with the stimulation frequency exactly in its\ncenter. However, depending on your Spectral decomposition this is not\nalways the case. We will find the bin closest to it - this one should contain\nour frequency tagged response.",
"# find index of frequency bin closest to stimulation frequency\ni_bin_12hz = np.argmin(abs(freqs - stim_freq))\n# could be updated to support multiple frequencies\n\n# for later, we will already find the 15 Hz bin and the 1st and 2nd harmonic\n# for both.\ni_bin_24hz = np.argmin(abs(freqs - 24))\ni_bin_36hz = np.argmin(abs(freqs - 36))\ni_bin_15hz = np.argmin(abs(freqs - 15))\ni_bin_30hz = np.argmin(abs(freqs - 30))\ni_bin_45hz = np.argmin(abs(freqs - 45))",
"Get indices for the different trial types",
"i_trial_12hz = np.where(epochs.events[:, 2] == event_id['12hz'])[0]\ni_trial_15hz = np.where(epochs.events[:, 2] == event_id['15hz'])[0]",
"Get indices of EEG channels forming the ROI",
"# Define different ROIs\nroi_vis = ['POz', 'Oz', 'O1', 'O2', 'PO3', 'PO4', 'PO7',\n 'PO8', 'PO9', 'PO10', 'O9', 'O10'] # visual roi\n\n# Find corresponding indices using mne.pick_types()\npicks_roi_vis = mne.pick_types(epochs.info, eeg=True, stim=False,\n exclude='bads', selection=roi_vis)",
"Apply the subset, and check the result\nNow we simply need to apply our selection and yield a result. Therefore,\nwe typically report grand average SNR over the subselection.\nIn this tutorial we don't verify the presence of a neural response.\nThis is commonly done in the ASSR literature where SNR is\noften lower. An F-test or Hotelling T² would be\nappropriate for this purpose.",
"snrs_target = snrs[i_trial_12hz, :, i_bin_12hz][:, picks_roi_vis]\nprint(\"sub 2, 12 Hz trials, SNR at 12 Hz\")\nprint(f'average SNR (occipital ROI): {snrs_target.mean()}')",
"Topography of the vSSR\nBut wait...\nAs described in the intro, we have decided a priori to work with average\nSNR over a subset of occipital channels - a visual region of interest (ROI)\n- because we expect SNR to be higher on these channels than in other\nchannels.\nLet's check out, whether this was a good decision!\nHere we will plot average SNR for each channel location as a topoplot.\nThen we will do a simple paired T-test to check, whether average SNRs over\nthe two sets of channels are significantly different.",
"# get average SNR at 12 Hz for ALL channels\nsnrs_12hz = snrs[i_trial_12hz, :, i_bin_12hz]\nsnrs_12hz_chaverage = snrs_12hz.mean(axis=0)\n\n# plot SNR topography\nfig, ax = plt.subplots(1)\nmne.viz.plot_topomap(snrs_12hz_chaverage, epochs.info, vmin=1., axes=ax)\n\nprint(\"sub 2, 12 Hz trials, SNR at 12 Hz\")\nprint(\"average SNR (all channels): %f\" % snrs_12hz_chaverage.mean())\nprint(\"average SNR (occipital ROI): %f\" % snrs_target.mean())\n\ntstat_roi_vs_scalp = \\\n ttest_rel(snrs_target.mean(axis=1), snrs_12hz.mean(axis=1))\nprint(\"12 Hz SNR in occipital ROI is significantly larger than 12 Hz SNR over \"\n \"all channels: t = %.3f, p = %f\" % tstat_roi_vs_scalp)",
"We can see, that 1) this participant indeed exhibits a cluster of channels\nwith high SNR in the occipital region and 2) that the average SNR over all\nchannels is smaller than the average of the visual ROI computed above.\nThe difference is statistically significant. Great!\nSuch a topography plot can be a nice tool to explore and play with your data\n- e.g. you could try how changing the reference will affect the spatial\ndistribution of SNR values.\nHowever, we also wanted to show this plot to point at a potential\nproblem with frequency-tagged (or any other brain imaging) data:\nthere are many channels and somewhere you will likely find some\nstatistically significant effect.\nIt is very easy - even unintended - to end up double-dipping or p-hacking.\nSo if you want to work with an ROI or individual channels, ideally select\nthem a priori - before collecting or looking at the data - and preregister\nthis decision so people will believe you.\nIf you end up selecting an ROI or individual channel for reporting because\nthis channel or ROI shows an effect, e.g. in an explorative analysis, this\nis also fine but make it transparently and correct for multiple comparison.\nStatistical separation of 12 Hz and 15 Hz vSSR\nAfter this little detour into open science, let's move on and\ndo the analyses we actually wanted to do:\nWe will show that we can easily detect and discriminate the brains responses\nin the trials with different stimulation frequencies.\nIn the frequency and SNR spectrum plot above, we had all trials mixed up.\nNow we will extract 12 and 15 Hz SNR in both types of trials individually,\nand compare the values with a simple t-test.\nWe will also extract SNR of the 1st and 2nd harmonic for both stimulation\nfrequencies. These are often reported as well and can show interesting\ninteractions.",
"snrs_roi = snrs[:, picks_roi_vis, :].mean(axis=1)\n\nfreq_plot = [12, 15, 24, 30, 36, 45]\ncolor_plot = [\n 'darkblue', 'darkgreen', 'mediumblue', 'green', 'blue', 'seagreen'\n]\nxpos_plot = [-5. / 12, -3. / 12, -1. / 12, 1. / 12, 3. / 12, 5. / 12]\nfig, ax = plt.subplots()\nlabels = ['12 Hz trials', '15 Hz trials']\nx = np.arange(len(labels)) # the label locations\nwidth = 0.6 # the width of the bars\nres = dict()\n\n# loop to plot SNRs at stimulation frequencies and harmonics\nfor i, f in enumerate(freq_plot):\n # extract snrs\n stim_12hz_tmp = \\\n snrs_roi[i_trial_12hz, np.argmin(abs(freqs - f))]\n stim_15hz_tmp = \\\n snrs_roi[i_trial_15hz, np.argmin(abs(freqs - f))]\n SNR_tmp = [stim_12hz_tmp.mean(), stim_15hz_tmp.mean()]\n # plot (with std)\n ax.bar(\n x + width * xpos_plot[i], SNR_tmp, width / len(freq_plot),\n yerr=np.std(SNR_tmp),\n label='%i Hz SNR' % f, color=color_plot[i])\n # store results for statistical comparison\n res['stim_12hz_snrs_%ihz' % f] = stim_12hz_tmp\n res['stim_15hz_snrs_%ihz' % f] = stim_15hz_tmp\n\n# Add some text for labels, title and custom x-axis tick labels, etc.\nax.set_ylabel('SNR')\nax.set_title('Average SNR at target frequencies')\nax.set_xticks(x)\nax.set_xticklabels(labels)\nax.legend(['%i Hz' % f for f in freq_plot], title='SNR at:')\nax.set_ylim([0, 70])\nax.axhline(1, ls='--', c='r')\nfig.show()",
"As you can easily see there are striking differences between the trials.\nLet's verify this using a series of two-tailed paired T-Tests.",
"# Compare 12 Hz and 15 Hz SNR in trials after averaging over channels\n\ntstat_12hz_trial_stim = \\\n ttest_rel(res['stim_12hz_snrs_12hz'], res['stim_12hz_snrs_15hz'])\nprint(\"12 Hz Trials: 12 Hz SNR is significantly higher than 15 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_12hz_trial_stim)\n\ntstat_12hz_trial_1st_harmonic = \\\n ttest_rel(res['stim_12hz_snrs_24hz'], res['stim_12hz_snrs_30hz'])\nprint(\"12 Hz Trials: 24 Hz SNR is significantly higher than 30 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_12hz_trial_1st_harmonic)\n\ntstat_12hz_trial_2nd_harmonic = \\\n ttest_rel(res['stim_12hz_snrs_36hz'], res['stim_12hz_snrs_45hz'])\nprint(\"12 Hz Trials: 36 Hz SNR is significantly higher than 45 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_12hz_trial_2nd_harmonic)\n\nprint()\ntstat_15hz_trial_stim = \\\n ttest_rel(res['stim_15hz_snrs_12hz'], res['stim_15hz_snrs_15hz'])\nprint(\"15 Hz trials: 12 Hz SNR is significantly lower than 15 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_15hz_trial_stim)\n\ntstat_15hz_trial_1st_harmonic = \\\n ttest_rel(res['stim_15hz_snrs_24hz'], res['stim_15hz_snrs_30hz'])\nprint(\"15 Hz trials: 24 Hz SNR is significantly lower than 30 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_15hz_trial_1st_harmonic)\n\ntstat_15hz_trial_2nd_harmonic = \\\n ttest_rel(res['stim_15hz_snrs_36hz'], res['stim_15hz_snrs_45hz'])\nprint(\"15 Hz trials: 36 Hz SNR is significantly lower than 45 Hz SNR\"\n \": t = %.3f, p = %f\" % tstat_15hz_trial_2nd_harmonic)",
"Debriefing\nSo that's it, we hope you enjoyed our little tour through this example\ndataset.\nAs you could see, frequency-tagging is a very powerful tool that can yield\nvery high signal to noise ratios and effect sizes that enable you to detect\nbrain responses even within a single participant and single trials of only\na few seconds duration.\nBonus exercises\nFor the overly motivated amongst you, let's see what else we can show with\nthese data.\nUsing the PSD function as implemented in MNE makes it very easy to change\nthe amount of data that is actually used in the spectrum\nestimation.\nHere we employ this to show you some features of frequency\ntagging data that you might or might not have already intuitively expected:\nEffect of trial duration on SNR\nFirst we will simulate shorter trials by taking only the first x s of our 20s\ntrials (2, 4, 6, 8, ..., 20 s), and compute the SNR using a FFT window\nthat covers the entire epoch:",
"stim_bandwidth = .5\n\n# shorten data and welch window\nwindow_lengths = [i for i in range(2, 21, 2)]\nwindow_snrs = [[]] * len(window_lengths)\nfor i_win, win in enumerate(window_lengths):\n # compute spectrogram\n windowed_psd, windowed_freqs = mne.time_frequency.psd_welch(\n epochs[str(event_id['12hz'])],\n n_fft=int(sfreq * win),\n n_overlap=0, n_per_seg=None,\n tmin=0, tmax=win,\n window='boxcar',\n fmin=fmin, fmax=fmax, verbose=False)\n # define a bandwidth of 1 Hz around stimfreq for SNR computation\n bin_width = windowed_freqs[1] - windowed_freqs[0]\n skip_neighbor_freqs = \\\n round((stim_bandwidth / 2) / bin_width - bin_width / 2. - .5) if (\n bin_width < stim_bandwidth) else 0\n n_neighbor_freqs = \\\n int((sum((windowed_freqs <= 13) & (windowed_freqs >= 11)\n ) - 1 - 2 * skip_neighbor_freqs) / 2)\n # compute snr\n windowed_snrs = \\\n snr_spectrum(\n windowed_psd,\n noise_n_neighbor_freqs=n_neighbor_freqs if (\n n_neighbor_freqs > 0\n ) else 1,\n noise_skip_neighbor_freqs=skip_neighbor_freqs)\n window_snrs[i_win] = \\\n windowed_snrs[\n :, picks_roi_vis,\n np.argmin(\n abs(windowed_freqs - 12.))].mean(axis=1)\n\nfig, ax = plt.subplots(1)\nax.boxplot(window_snrs, labels=window_lengths, vert=True)\nax.set(title='Effect of trial duration on 12 Hz SNR',\n ylabel='Average SNR', xlabel='Trial duration [s]')\nax.axhline(1, ls='--', c='r')\nfig.show()",
"You can see that the signal estimate / our SNR measure increases with the\ntrial duration.\nThis should be easy to understand: in longer recordings there is simply\nmore signal (one second of additional stimulation adds, in our case, 12\ncycles of signal) while the noise is (hopefully) stochastic and not locked\nto the stimulation frequency.\nIn other words: with more data the signal term grows faster than the noise\nterm.\nWe can further see that the very short trials with FFT windows < 2-3s are not\ngreat - here we've either hit the noise floor and/or the transient response\nat the trial onset covers too much of the trial.\nAgain, this tutorial doesn't statistically test for the presence of a neural\nresponse, but an F-test or Hotelling T² would be appropriate for this\npurpose.\nTime resolved SNR\n..and finally we can trick MNE's PSD implementation to make it a\nsliding window analysis and come up with a time resolved SNR measure.\nThis will reveal whether a participant blinked or scratched their head..\nEach of the ten trials is coded with a different color in the plot below.",
"# 3s sliding window\nwindow_length = 4\nwindow_starts = [i for i in range(20 - window_length)]\nwindow_snrs = [[]] * len(window_starts)\n\nfor i_win, win in enumerate(window_starts):\n # compute spectrogram\n windowed_psd, windowed_freqs = mne.time_frequency.psd_welch(\n epochs[str(event_id['12hz'])],\n n_fft=int(sfreq * window_length) - 1,\n n_overlap=0, n_per_seg=None,\n window='boxcar',\n tmin=win, tmax=win + window_length,\n fmin=fmin, fmax=fmax,\n verbose=False)\n # define a bandwidth of 1 Hz around stimfreq for SNR computation\n bin_width = windowed_freqs[1] - windowed_freqs[0]\n skip_neighbor_freqs = \\\n round((stim_bandwidth / 2) / bin_width - bin_width / 2. - .5) if (\n bin_width < stim_bandwidth) else 0\n n_neighbor_freqs = \\\n int((sum((windowed_freqs <= 13) & (windowed_freqs >= 11)\n ) - 1 - 2 * skip_neighbor_freqs) / 2)\n # compute snr\n windowed_snrs = snr_spectrum(\n windowed_psd,\n noise_n_neighbor_freqs=n_neighbor_freqs if (\n n_neighbor_freqs > 0) else 1,\n noise_skip_neighbor_freqs=skip_neighbor_freqs)\n window_snrs[i_win] = \\\n windowed_snrs[:, picks_roi_vis, np.argmin(\n abs(windowed_freqs - 12.))].mean(axis=1)\n\nfig, ax = plt.subplots(1)\ncolors = plt.get_cmap('Greys')(np.linspace(0, 1, 10))\nfor i in range(10):\n ax.plot(window_starts, np.array(window_snrs)[:, i], color=colors[i])\nax.set(title='Time resolved 12 Hz SNR - %is sliding window' % window_length,\n ylabel='Average SNR', xlabel='t0 of analysis window [s]')\nax.axhline(1, ls='--', c='r')\nax.legend(['individual trials in greyscale'])\nfig.show()",
"Well.. turns out this was a bit too optimistic ;)\nBut seriously: this was a nice idea, but we've reached the limit of\nwhat's possible with this single-subject example dataset.\nHowever, there might be data, applications, or research questions\nwhere such an analysis makes sense."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rainyear/pytips | Tips/2016-03-18-String-Format.ipynb | mit | [
"Python 字符串的格式化\n相信很多人在格式化字符串的时候都用\"%s\" % v的语法,PEP 3101 提出一种更先进的格式化方法 str.format() 并成为 Python 3 的标准用来替换旧的 %s 格式化语法,CPython 从 2.6 开始已经实现了这一方法(其它解释器未考证)。\nformat()\n新的 format() 方法其实更像是一个简略版的模板引起(Template Engine),功能非常丰富,官方文档对其语法的描述如下:",
"\"\"\"\nreplacement_field ::= \"{\" [field_name] [\"!\" conversion] [\":\" format_spec] \"}\"\nfield_name ::= arg_name (\".\" attribute_name | \"[\" element_index \"]\")*\narg_name ::= [identifier | integer]\nattribute_name ::= identifier\nelement_index ::= integer | index_string\nindex_string ::= <any source character except \"]\"> +\nconversion ::= \"r\" | \"s\" | \"a\"\nformat_spec ::= <described in the next section>\n\"\"\"\npass # Donot output",
"我将其准换成铁路图的形式,(可能)更直观一些:\n\n模板中替换变量用 {} 包围,且由 : 分为两部分,其中后半部分 format_spec 在后面会单独讨论。前半部分有三种用法:\n\n空\n代表位置的数字\n代表keyword的标识符\n\n这与函数调用的参数类别是一致的:",
"print(\"{} {}\".format(\"Hello\", \"World\"))\n# is equal to...\nprint(\"{0} {1}\".format(\"Hello\", \"World\"))\nprint(\"{hello} {world}\".format(hello=\"Hello\", world=\"World\"))\n\nprint(\"{0}{1}{0}\".format(\"H\", \"e\"))",
"除此之外,就像在0x05 函数参数与解包中提到的一样,format() 中也可以直接使用解包操作:",
"print(\"{lang}.{suffix}\".format(**{\"lang\": \"Python\", \"suffix\": \"py\"}))\nprint(\"{} {}\".format(*[\"Python\", \"Rocks\"]))",
"在模板中还可以通过 .identifier 和 [key] 的方式获取变量内的属性或值(需要注意的是 \"{}{}\" 相当于 \"{0}{1}\"):",
"data = {'name': 'Python', 'score': 100}\nprint(\"Name: {0[name]}, Score: {0[score]}\".format(data)) # 不需要引号\n\nlangs = [\"Python\", \"Ruby\"]\nprint(\"{0[0]} vs {0[1]}\".format(langs))\n\nprint(\"\\n====\\nHelp(format):\\n {.__doc__}\".format(str.format))",
"强制转换\n可以通过 ! + r|s|a 的方式对替换的变量进行强制转换:\n\n\"{!r}\" 对变量调用 repr()\n\"{!s}\" 对变量调用 str()\n\"{!a}\" 对变量调用 ascii()\n\n格式\n最后 : 之后的部分定义输出的样式:\n\nalign 代表对齐方向,通常要配合 width 使用,而 fill 则是填充的字符(默认为空白):",
"for align, text in zip(\"<^>\", [\"left\", \"center\", \"right\"]):\n print(\"{:{fill}{align}16}\".format(text, fill=align, align=align))\n \nprint(\"{:0=10}\".format(100)) # = 只允许数字",
"同时可以看出,样式设置里面可以嵌套 {} ,但是必须通过 keyword 指定,且只能嵌套一层。\n接下来是符号样式:+|-|' ' 分别指定数字是否需要强制符号(其中空格是指在正数的时候不显示 + 但保留一位空格):",
"print(\"{0:+}\\n{1:-}\\n{0: }\".format(3.14, -3.14))",
"# 用于表示特殊格式的数字(二进制、十六进制等)是否需要前缀符号;, 也是用于表示数字时是否需要在千位处进行分隔;0 相当于前面的 {:0=} 右对齐并用 0 补充空位:",
"print(\"Binary: {0:b} => {0:#b}\".format(3))\n\nprint(\"Large Number: {0:} => {0:,}\".format(1.25e6))\n\nprint(\"Padding: {0:16} => {0:016}\".format(3))",
"最后两个就是我们熟悉的小数点精度 .n 和格式化类型了,这里仅给出一些示例,详细内容可以查阅文档:",
"from math import pi\nprint(\"pi = {pi:.2}, also = {pi:.7}\".format(pi=pi))",
"Integer",
"for t in \"b c d #o #x #X n\".split():\n print(\"Type {0:>2} of {1} shows: {1:{t}}\".format(t, 97, t=t))",
"Float",
"for t, n in zip(\"eEfFgGn%\", [12345, 12345, 1.3, 1.3, 1, 2, 3.14, 0.985]):\n print(\"Type {} shows: {:.2{t}}\".format(t, n, t=t))",
"String (default)",
"try:\n print(\"{:s}\".format(123))\nexcept:\n print(\"{}\".format(456))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/ncc/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:25\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmhsi/justin_tinker | data_science/courses/deeplearning2/translate-pytorch.ipynb | apache-2.0 | [
"Translating French to English with Pytorch",
"%matplotlib inline\nimport re, pickle, collections, bcolz, numpy as np, keras, sklearn, math, operator\n\nfrom gensim.models import word2vec\n\nimport torch, torch.nn as nn\nfrom torch.autograd import Variable\nfrom torch import optim\nimport torch.nn.functional as F\n\npath='/data/datasets/fr-en-109-corpus/'\ndpath = 'data/translate/'",
"Prepare corpus\nThe French-English parallel corpus can be downloaded from http://www.statmt.org/wmt10/training-giga-fren.tar. It was created by Chris Callison-Burch, who crawled millions of web pages and then used 'a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing \"fr\" with \"en\" and about 40 other hand-written rules), and assume that these documents are translations of each other'.",
"fname=path+'giga-fren.release2.fixed'\nen_fname = fname+'.en'\nfr_fname = fname+'.fr'",
"To make this problem a little simpler so we can train our model more quickly, we'll just learn to translate questions that begin with 'Wh' (e.g. what, why, where which). Here are our regexps that filter the sentences we want.",
"re_eq = re.compile('^(Wh[^?.!]+\\?)')\nre_fq = re.compile('^([^?.!]+\\?)')\n\nlines = ((re_eq.search(eq), re_fq.search(fq)) \n for eq, fq in zip(open(en_fname), open(fr_fname)))\n\nqs = [(e.group(), f.group()) for e,f in lines if e and f]; len(qs)\n\nqs[:6]",
"Because it takes a while to load the data, we save the results to make it easier to load in later.",
"pickle.dump(qs, open(dpath+'fr-en-qs.pkl', 'wb'))\n\nqs = pickle.load(open(dpath+'fr-en-qs.pkl', 'rb'))\n\nen_qs, fr_qs = zip(*qs)",
"Because we are translating at word level, we need to tokenize the text first. (Note that it is also possible to translate at character level, which doesn't require tokenizing.) There are many tokenizers available, but we found we got best results using these simple heuristics.",
"re_apos = re.compile(r\"(\\w)'s\\b\") # make 's a separate word\nre_mw_punc = re.compile(r\"(\\w[’'])(\\w)\") # other ' in a word creates 2 words\nre_punc = re.compile(\"([\\\"().,;:/_?!—])\") # add spaces around punctuation\nre_mult_space = re.compile(r\" *\") # replace multiple spaces with just one\n\ndef simple_toks(sent):\n sent = re_apos.sub(r\"\\1 's\", sent)\n sent = re_mw_punc.sub(r\"\\1 \\2\", sent)\n sent = re_punc.sub(r\" \\1 \", sent).replace('-', ' ')\n sent = re_mult_space.sub(' ', sent)\n return sent.lower().split()\n\nfr_qtoks = list(map(simple_toks, fr_qs)); fr_qtoks[:4]\n\nen_qtoks = list(map(simple_toks, en_qs)); en_qtoks[:4]\n\nsimple_toks(\"Rachel's baby is cuter than other's.\")",
"Special tokens used to pad the end of sentences, and to mark the start of a sentence.",
"PAD = 0; SOS = 1",
"Enumerate the unique words (vocab) in the corpus, and also create the reverse map (word->index). Then use this mapping to encode every sentence as a list of int indices.",
"def toks2ids(sents):\n voc_cnt = collections.Counter(t for sent in sents for t in sent)\n vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)\n vocab.insert(PAD, \"<PAD>\")\n vocab.insert(SOS, \"<SOS>\")\n w2id = {w:i for i,w in enumerate(vocab)}\n ids = [[w2id[t] for t in sent] for sent in sents]\n return ids, vocab, w2id, voc_cnt\n\nfr_ids, fr_vocab, fr_w2id, fr_counts = toks2ids(fr_qtoks)\nen_ids, en_vocab, en_w2id, en_counts = toks2ids(en_qtoks)",
"Word vectors\nStanford's GloVe word vectors can be downloaded from https://nlp.stanford.edu/projects/glove/ (in the code below we have preprocessed them into a bcolz array). We use these because each individual word has a single word vector, which is what we need for translation. Word2vec, on the other hand, often uses multi-word phrases.",
"def load_glove(loc):\n return (bcolz.open(loc+'.dat')[:],\n pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'),\n pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1'))\n\nen_vecs, en_wv_word, en_wv_idx = load_glove('/data/datasets/nlp/glove/results/6B.100d')\nen_w2v = {w: en_vecs[en_wv_idx[w]] for w in en_wv_word}\nn_en_vec, dim_en_vec = en_vecs.shape\n\nen_w2v['king']",
"For French word vectors, we're using those from http://fauconnier.github.io/index.html",
"w2v_path='/data/datasets/nlp/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin'\nfr_model = word2vec.Word2Vec.load_word2vec_format(w2v_path, binary=True)\nfr_voc = fr_model.vocab\ndim_fr_vec = 200",
"We need to map each word index in our vocabs to their word vector. Not every word in our vocabs will be in our word vectors, since our tokenization approach won't be identical to the word vector creators - in these cases we simply create a random vector.",
"def create_emb(w2v, targ_vocab, dim_vec):\n vocab_size = len(targ_vocab)\n emb = np.zeros((vocab_size, dim_vec))\n found=0\n\n for i, word in enumerate(targ_vocab):\n try: emb[i] = w2v[word]; found+=1\n except KeyError: emb[i] = np.random.normal(scale=0.6, size=(dim_vec,))\n\n return emb, found\n\nen_embs, found = create_emb(en_w2v, en_vocab, dim_en_vec); en_embs.shape, found\n\nfr_embs, found = create_emb(fr_model, fr_vocab, dim_fr_vec); fr_embs.shape, found",
"Prep data\nEach sentence has to be of equal length. Keras has a convenient function pad_sequences to truncate and/or pad each sentence as required - even although we're not using keras for the neural net, we can still use any functions from it we need!",
"from keras.preprocessing.sequence import pad_sequences\n\nmaxlen = 30\nen_padded = pad_sequences(en_ids, maxlen, 'int64', \"post\", \"post\")\nfr_padded = pad_sequences(fr_ids, maxlen, 'int64', \"post\", \"post\")\nen_padded.shape, fr_padded.shape, en_embs.shape",
"And of course we need to separate our training and test sets...",
"from sklearn import model_selection\nfr_train, fr_test, en_train, en_test = model_selection.train_test_split(\n fr_padded, en_padded, test_size=0.1)\n\n[o.shape for o in (fr_train, fr_test, en_train, en_test)]",
"Here's an example of a French and English sentence, after encoding and padding.",
"fr_train[0], en_train[0]",
"Model\nBasic encoder-decoder",
"def long_t(arr): return Variable(torch.LongTensor(arr)).cuda()\n\nfr_emb_t = torch.FloatTensor(fr_embs).cuda()\nen_emb_t = torch.FloatTensor(en_embs).cuda()\n\ndef create_emb(emb_mat, non_trainable=False):\n output_size, emb_size = emb_mat.size()\n emb = nn.Embedding(output_size, emb_size)\n emb.load_state_dict({'weight': emb_mat})\n if non_trainable:\n for param in emb.parameters(): \n param.requires_grad = False\n return emb, emb_size, output_size",
"Turning a sequence into a representation can be done using an RNN (called the 'encoder'. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a sentence.\n* bidirectional=True passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards.\n* We do this because in language things that happen later often influence what came before (i.e. in Spanish, \"el chico, la chica\" means the boy, the girl; the word for \"the\" is determined by the gender of the subject, which comes after).",
"class EncoderRNN(nn.Module):\n def __init__(self, embs, hidden_size, n_layers=2):\n super(EncoderRNN, self).__init__()\n self.emb, emb_size, output_size = create_emb(embs, True)\n self.n_layers = n_layers\n self.hidden_size = hidden_size\n self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)\n# ,bidirectional=True)\n \n def forward(self, input, hidden):\n return self.gru(self.emb(input), hidden)\n\n def initHidden(self, batch_size):\n return Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size))\n\ndef encode(inp, encoder):\n batch_size, input_length = inp.size()\n hidden = encoder.initHidden(batch_size).cuda()\n enc_outputs, hidden = encoder(inp, hidden)\n return long_t([SOS]*batch_size), enc_outputs, hidden ",
"Finally, we arrive at a vector representation of the sequence which captures everything we need to translate it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each word is in the output sequence.",
"class DecoderRNN(nn.Module):\n def __init__(self, embs, hidden_size, n_layers=2):\n super(DecoderRNN, self).__init__()\n self.emb, emb_size, output_size = create_emb(embs)\n self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)\n self.out = nn.Linear(hidden_size, output_size)\n \n def forward(self, inp, hidden):\n emb = self.emb(inp).unsqueeze(1)\n res, hidden = self.gru(emb, hidden)\n res = F.log_softmax(self.out(res[:,0]))\n return res, hidden",
"This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.\n<img src=\"https://smerity.com/media/images/articles/2016/bahdanau_attn.png\" width=\"600\">\nThis can be mitigated using an attentional model.\nAdding broadcasting to Pytorch\nUsing broadcasting makes a lot of numerical programming far simpler. Here's a couple of examples, using numpy:",
"v=np.array([1,2,3]); v, v.shape\n\nm=np.array([v,v*2,v*3]); m, m.shape\n\nm+v\n\nv1=np.expand_dims(v,-1); v1, v1.shape\n\nm+v1",
"But Pytorch doesn't support broadcasting. So let's add it to the basic operators, and to a general tensor dot product:",
"def unit_prefix(x, n=1):\n for i in range(n): x = x.unsqueeze(0)\n return x\n\ndef align(x, y, start_dim=2):\n xd, yd = x.dim(), y.dim()\n if xd > yd: y = unit_prefix(y, xd - yd)\n elif yd > xd: x = unit_prefix(x, yd - xd)\n\n xs, ys = list(x.size()), list(y.size())\n nd = len(ys)\n for i in range(start_dim, nd):\n td = nd-i-1\n if ys[td]==1: ys[td] = xs[td]\n elif xs[td]==1: xs[td] = ys[td]\n return x.expand(*xs), y.expand(*ys)\n\ndef aligned_op(x,y,f): return f(*align(x,y,0))\n\ndef add(x, y): return aligned_op(x, y, operator.add)\ndef sub(x, y): return aligned_op(x, y, operator.sub)\ndef mul(x, y): return aligned_op(x, y, operator.mul)\ndef div(x, y): return aligned_op(x, y, operator.truediv)\n\ndef dot(x, y):\n assert(1<y.dim()<5)\n x, y = align(x, y)\n \n if y.dim() == 2: return x.mm(y)\n elif y.dim() == 3: return x.bmm(y)\n else:\n xs,ys = x.size(), y.size()\n res = torch.zeros(*(xs[:-1] + (ys[-1],)))\n for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i]))\n return res",
"Let's test!",
"def Arr(*sz): return torch.randn(sz)/math.sqrt(sz[0])\n\nm = Arr(3, 2); m2 = Arr(4, 3)\nv = Arr(2)\nb = Arr(4,3,2); t = Arr(5,4,3,2)\n\nmt,bt,tt = m.transpose(0,1), b.transpose(1,2), t.transpose(2,3)\n\ndef check_eq(x,y): assert(torch.equal(x,y))\n\ncheck_eq(dot(m,mt),m.mm(mt))\ncheck_eq(dot(v,mt), v.unsqueeze(0).mm(mt))\ncheck_eq(dot(b,bt),b.bmm(bt))\ncheck_eq(dot(b,mt),b.bmm(unit_prefix(mt).expand_as(bt)))\n\nexp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3)\ncheck_eq(dot(t,tt),exp)\n\ncheck_eq(add(m,v),m+unit_prefix(v).expand_as(m))\ncheck_eq(add(v,m),m+unit_prefix(v).expand_as(m))\ncheck_eq(add(m,t),t+unit_prefix(m,2).expand_as(t))\ncheck_eq(sub(m,v),m-unit_prefix(v).expand_as(m))\ncheck_eq(mul(m,v),m*unit_prefix(v).expand_as(m))\ncheck_eq(div(m,v),m/unit_prefix(v).expand_as(m))",
"Attentional model",
"def Var(*sz): return Parameter(Arr(*sz)).cuda()\n\nclass AttnDecoderRNN(nn.Module):\n def __init__(self, embs, hidden_size, n_layers=2, p=0.1):\n super(AttnDecoderRNN, self).__init__()\n self.emb, emb_size, output_size = create_emb(embs)\n self.W1 = Var(hidden_size, hidden_size)\n self.W2 = Var(hidden_size, hidden_size)\n self.W3 = Var(emb_size+hidden_size, hidden_size)\n self.b2 = Var(hidden_size)\n self.b3 = Var(hidden_size)\n self.V = Var(hidden_size)\n self.gru = nn.GRU(hidden_size, hidden_size, num_layers=2)\n self.out = nn.Linear(hidden_size, output_size)\n\n def forward(self, inp, hidden, enc_outputs):\n emb_inp = self.emb(inp)\n w1e = dot(enc_outputs, self.W1)\n w2h = add(dot(hidden[-1], self.W2), self.b2).unsqueeze(1)\n u = F.tanh(add(w1e, w2h))\n a = mul(self.V,u).sum(2).squeeze(2)\n a = F.softmax(a).unsqueeze(2)\n Xa = mul(a, enc_outputs).sum(1)\n res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), self.W3)\n res = add(res, self.b3).unsqueeze(0)\n res, hidden = self.gru(res, hidden)\n res = F.log_softmax(self.out(res.squeeze(0)))\n return res, hidden",
"Attention testing\nPytorch makes it easy to check intermediate results, when creating a custom architecture such as this one, since you can interactively run each function.",
"def get_batch(x, y, batch_size=16):\n idxs = np.random.permutation(len(x))[:batch_size]\n return x[idxs], y[idxs]\n\nhidden_size = 128\nfra, eng = get_batch(fr_train, en_train, 4)\ninp = long_t(fra)\ntarg = long_t(eng)\nemb, emb_size, output_size = create_emb(en_emb_t)\nemb.cuda()\ninp.size()\n\nW1 = Var(hidden_size, hidden_size)\nW2 = Var(hidden_size, hidden_size)\nW3 = Var(emb_size+hidden_size, hidden_size)\nb2 = Var(1,hidden_size)\nb3 = Var(1,hidden_size)\nV = Var(1,1,hidden_size)\ngru = nn.GRU(hidden_size, hidden_size, num_layers=2).cuda()\nout = nn.Linear(hidden_size, output_size).cuda()\n\ndec_inputs, enc_outputs, hidden = encode(inp, encoder)\nenc_outputs.size(), hidden.size()\n\nemb_inp = emb(dec_inputs); emb_inp.size()\n\nw1e = dot(enc_outputs, W1); w1e.size()\n\nw2h = dot(hidden[-1], W2)\nw2h = (w2h+b2.expand_as(w2h)).unsqueeze(1); w2h.size()\n\nu = F.tanh(w1e + w2h.expand_as(w1e))\na = (V.expand_as(u)*u).sum(2).squeeze(2)\na = F.softmax(a).unsqueeze(2); a.size(),a.sum(1).squeeze(1)\n\nXa = (a.expand_as(enc_outputs) * enc_outputs).sum(1); Xa.size()\n\nres = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), W3)\nres = (res+b3.expand_as(res)).unsqueeze(0); res.size()\n\nres, hidden = gru(res, hidden); res.size(), hidden.size()\n\nres = F.log_softmax(out(res.squeeze(0))); res.size()",
"Train\nPytorch has limited functionality for training models automatically - you will generally have to write your own training loops. However, Pytorch makes it far easier to customize how this training is done, such as using teacher forcing.",
"def train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit):\n decoder_input, encoder_outputs, hidden = encode(inp, encoder)\n target_length = targ.size()[1]\n \n enc_opt.zero_grad(); dec_opt.zero_grad()\n loss = 0\n\n for di in range(target_length):\n decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)\n decoder_input = targ[:, di]\n loss += crit(decoder_output, decoder_input)\n\n loss.backward()\n enc_opt.step(); dec_opt.step()\n return loss.data[0] / target_length\n\ndef req_grad_params(o):\n return (p for p in o.parameters() if p.requires_grad)\n\ndef trainEpochs(encoder, decoder, n_epochs, print_every=1000, lr=0.01):\n loss_total = 0 # Reset every print_every\n \n enc_opt = optim.RMSprop(req_grad_params(encoder), lr=lr)\n dec_opt = optim.RMSprop(decoder.parameters(), lr=lr)\n crit = nn.NLLLoss().cuda()\n \n for epoch in range(n_epochs):\n fra, eng = get_batch(fr_train, en_train, 64)\n inp = long_t(fra)\n targ = long_t(eng)\n loss = train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit)\n loss_total += loss\n\n if epoch % print_every == print_every-1:\n print('%d %d%% %.4f' % (epoch, epoch / n_epochs * 100, loss_total / print_every))\n loss_total = 0",
"Run",
"hidden_size = 128\nencoder = EncoderRNN(fr_emb_t, hidden_size).cuda()\ndecoder = AttnDecoderRNN(en_emb_t, hidden_size).cuda()\n\ntrainEpochs(encoder, decoder, 10000, print_every=500, lr=0.005)",
"Testing",
"def evaluate(inp):\n decoder_input, encoder_outputs, hidden = encode(inp, encoder)\n target_length = maxlen\n\n decoded_words = []\n for di in range(target_length):\n decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)\n topv, topi = decoder_output.data.topk(1)\n ni = topi[0][0]\n if ni==PAD: break\n decoded_words.append(en_vocab[ni])\n decoder_input = long_t([ni])\n \n return decoded_words\n\ndef sent2ids(sent):\n ids = [fr_w2id[t] for t in simple_toks(sent)]\n return pad_sequences([ids], maxlen, 'int64', \"post\", \"post\")\n\ndef fr2en(sent): \n ids = long_t(sent2ids(sent))\n trans = evaluate(ids)\n return ' '.join(trans)\n\ni=8\nprint(en_qs[i],fr_qs[i])\nfr2en(fr_qs[i])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keylime1/courses_12-752 | lectures/Lecture5_Assignment2-2014-ReDo.ipynb | mit | [
"Assignment #2 from 2014 - ReDo\nThis is an attempt to complete the tasks laid out on Assignment #2 from this class in 2014.\nWe begin by importing all of the libraries that are necessary, and setting up the plotting environment:",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport datetime as dt\n\n%matplotlib inline",
"Task #1\nThen, for this first task, we import the csv file into variable called data. We leverage a new lambda function that will allow the importer to convert the timestamp strings into datetime objects:",
"anewdate = '2014/11/10 17:34:28'\n\ndateConverter = lambda d : dt.datetime.strptime(d,'%Y/%m/%d %H:%M:%S')\n\ndata = np.genfromtxt('../../../data/campusDemand.csv',delimiter=\",\",names=True,dtype=('a255',type(dt),float,),converters={1: dateConverter})\n\ndata[0]\n\ndata['Point_name']",
"To make sure that the import succeeded, we print the contents of the variable. Also, because we wan't to make sure the full meter names appear in the printed output, we modify Numpy's printoptions by using the method np.set_printoptions:",
"np.set_printoptions(threshold=8) # make sure all the power meter names will be printed\n",
"Task #2\nTo find the unique number of point names, we use the unique function from Numpy, and apply it to the 'Point_name' column in data:",
"pointNames = np.unique(data['Point_name'])\nprint \"There are {} unique meters.\".format(pointNames.shape[0])",
"Task #3\nWe now print the contents of the pointNames array:",
"print pointNames\n\n#extractedData = np.extract(data['Point_name']==pointNames[6],data)\nplt.plot(data['Time'][np.where(data['Point_name']==pointNames[0])],'rd')",
"Task #4\nTo count the numer of samples present on each power meter, there are many ways to achieve it. For instance, we can use an iterator to loop over all pointNames and create a list of tuples in the process (this is formally called a List Comprehension). Every tuple will then contain two elements: the meter name, and the number of samples in it:\nTask #5\nFirst, we can use another List Comprehension to iterate over the point names and create a new list whose elements are in turn tuples with the indeces for the samples corresponding to this meter:",
"idx = [np.where(data['Point_name']==meter) for meter in pointNames]\n\nprint \"idx is now a {0:s} of {1:d} items.\".format(type(idx),len(idx))\nprint \"Each item in idx is of {0:s}.\".format(type(idx[0]))\n\n[(meter,(data[idxItem]['Time'][-1]-data[idxItem]['Time'][0]).days) for meter,idxItem in zip(pointNames,idx)]",
"And then use yet another list comprehension to calculate the differences between the first and last timestamp:",
"help(zip)",
"Task #6\nFor this task, we are going to directly take the difference between any two consecutive datetime objects and display the result in terms of, say, number of seconds elapsed between these timestamps. \nBefore we do this, though, it is useful to plot the timestamps to figure out if there are discontinuities that we can visually see:",
"fig = plt.figure(figsize=(20,30)) # A 20 inch x 20 inch figure box\n\n### What else?",
"As you may have seen, gaps were easily identifiable as discontinuities in the lines that were plotted. If no gaps existed, the plot would be a straight line.\nBut now let's get back to solving this using exact numbers...\nFirst, you need to know that applying the difference operator (-) on two datetime objects results in a timedelta object. These objects (timedelta) describe time differences in terms of number of days, seconds and microseconds (see the link above for more details). Because of this, we can quickly convert any timedelta object (say dt) into the number of seconds by doing:\n<pre>\ndt.days*3600*24+dt.seconds+dt.microseconds/1000000\n</pre>\nIn this case, however, our timestamps do not contain information about the microseconds, so we will skip that part of the converstion.\nUsing this knowledge, we can create a list of lists (a nested list) in a similar manner as we've done before (i.e. using list comprehensions), and in it store the timedeltas in seconds for each meter. In other words, the outer list is a list of the same length as pointNames, and each element is a list of timedeltas for the corresponding meter.\nOne more thing comes in handy for this task: the np.diff function, which takes an array (or a list) and returns the difference between any two consecutive items of the list.\nNow, in a single line of code we can get the nested list we talked about:",
"delta_t = ",
"Now we need to be able to print out the exact times during which there are gaps. We will define gaps to be any timedelta that is longer than the median timedelta for a meter.\nWe will achieve this as follows: \n\nfirst we will create a for loop to iterate over every item in the list delta_t (which means we will iterate over all meters).\nthen, inside the loop, we will calculate the median value for the delta_t that corresponds to each meter\nfollowing this, we will find the indeces of delta_t where its value is greater than the median\nlastly, we will iterate over all the indeces found in the previous step and print out their values",
"np.set_printoptions(threshold=np.nan)\n\n",
"Task #7\nFirst, we will define a new variable containing the weekday for each of the timestamps.",
"wd = lambda d : d.weekday()\nweekDays = np.array(map(wd,data['Time']))\n\nMonday = data[np.where(weekDays==0)]\nTuesday = data[np.where(weekDays==1)]\nWednesday = data[np.where(weekDays==2)]\nThursday = data[np.where(weekDays==3)]\nFriday = data[np.where(weekDays==4)]\nSaturday = data[np.where(weekDays==5)]\nSunday = data[np.where(weekDays==6)]\n",
"Then we can do logical indexing to segment the data:",
"plt.plot(Sunday['Time'][np.where(Sunday['Point_name']==pointNames[0])],Sunday['Value'][np.where(Sunday['Point_name']==pointNames[0])],'rd')",
"Task #8\nIn this task we basically use two for loops and a the subplot functionality of PyPlot to do visualize the data contained in the variables we declared above.\nThe main trick is that we need to create a time index that only contains information about the hours, minutes and seconds (i.e. it completely disregards the exact day of the measurement) so that all of the measurements can be displayed within a single 24-hour period.",
"Days = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']\n\nfig = plt.figure(figsize=(20,20))\nfor i in range(len(pointNames)): # iterate over meters\n for j in range(7): # iterate over days of the week\n plt.subplot(7,7,i*7+j+1)\n # Data from the day being plotted = All[j]\n # Data from the meter being plotted = All[j][All[j]['Point_name']==pointNames[i]]\n time = np.array([t.hour*3600+t.minute*60+t.second for t in All[j][All[j]['Point_name']==pointNames[i]]['Time']])\n # plot the power vs the hours in a day\n plt.plot(time/3600.,All[j][All[j]['Point_name']==pointNames[i]]['Value'],'.')\n if i==6:\n plt.xlabel('hours in a day')\n if j==0: \n plt.ylabel(pointNames[i].split('-')[0]+'\\n'+pointNames[i].split('-')[1])\n if i==0:\n plt.title(Days[j])\nfig.tight_layout()\nplt.show()",
"Task #9\nServeral findings: (more to be added)\n- Campus consume more energy during weekdays than weekends.\n- Higher energy consumption during working hours.\n- Many meters report a bi-modal distribution of the measurements, possibly due to seasonal effects.\n- Some meters (e.g., Porter Hall) show more erratic behavior during weekends."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/word_embeddings.ipynb | apache-2.0 | [
"Word Embeddings\nLearning Objectives\nYou will learn:\n\nHow to use Embedding layer\nHow to create a classification model\nCompile and train the model\nHow to retrieve the trained word embeddings, save them to disk and visualize it.\n\nIntroduction\nThis notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the Embedding Projector (shown in the image below). \n\nRepresenting text as numbers\nMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to \"vectorize\" the text) before feeding it to the model. In this section, you will look at three strategies for doing so.\nOne-hot encodings\nAs a first idea, you might \"one-hot\" encode each word in your vocabulary. Consider the sentence \"The cat sat on the mat\". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.\n\nTo create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.\nKey point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero.\nEncode each word with a unique number\nA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to \"cat\", 2 to \"mat\", and so on. You could then encode the sentence \"The cat sat on the mat\" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).\nThere are two downsides to this approach, however:\n\n\nThe integer-encoding is arbitrary (it does not capture any relationship between words).\n\n\nAn integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.\n\n\nWord embeddings\nWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.\n\nAbove is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as \"lookup table\". 
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference. \nSetup",
"# Use the chown command to change the ownership of repository to user.\n!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\nimport io\nimport os\nimport re\nimport shutil\nimport string\nimport tensorflow as tf\n\nfrom datetime import datetime\nfrom tensorflow.keras import Model, Sequential\nfrom tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D\nfrom tensorflow.keras.layers.experimental.preprocessing import TextVectorization",
"This notebook uses TF2.x.\nPlease check your tensorflow version using the cell below.",
"# Show the currently installed version of TensorFlow\nprint(\"TensorFlow version: \",tf.version.VERSION)",
"Download the IMDb Dataset\nYou will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial. \nDownload the dataset using Keras file utility and take a look at the directories.",
"url = \"https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n\ndataset = tf.keras.utils.get_file(\"aclImdb_v1.tar.gz\", url,\n untar=True, cache_dir='.',\n cache_subdir='')\n\ndataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')\nos.listdir(dataset_dir)",
"Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.",
"train_dir = os.path.join(dataset_dir, 'train')\nos.listdir(train_dir)",
"The train directory also has additional folders which should be removed before creating training dataset.",
"remove_dir = os.path.join(train_dir, 'unsup')\nshutil.rmtree(remove_dir)",
"Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial. \nUse the train directory to create both train and validation datasets with a split of 20% for validation.",
"batch_size = 1024\nseed = 123\ntrain_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train', batch_size=batch_size, validation_split=0.2, \n subset='training', seed=seed)\nval_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train', batch_size=batch_size, validation_split=0.2, \n subset='validation', seed=seed)",
"Take a look at a few movie reviews and their labels (1: positive, 0: negative) from the train dataset.",
"for text_batch, label_batch in train_ds.take(1):\n for i in range(5):\n print(label_batch[i].numpy(), text_batch.numpy()[i])",
"Configure the dataset for performance\nThese are two important methods you should use when loading data to make sure that I/O does not become blocking.\n.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.\n.prefetch() overlaps data preprocessing and model execution while training. \nYou can learn more about both methods, as well as how to cache data to disk in the data performance guide.",
"AUTOTUNE = tf.data.experimental.AUTOTUNE\n\ntrain_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)",
"Using the Embedding layer\nKeras makes it easy to use word embeddings. Take a look at the Embedding layer.\nThe Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.",
"# Embed a 1,000 word vocabulary into 5 dimensions.\n# TODO: Your code goes here\n",
"When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).\nIf you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:",
"result = embedding_layer(tf.constant([1,2,3]))\nresult.numpy()",
"For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).\nThe returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)",
"result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))\nresult.shape",
"When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.\nText preprocessing\nNext, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial.",
"# Create a custom standardization function to strip HTML break tags '<br />'.\ndef custom_standardization(input_data):\n lowercase = tf.strings.lower(input_data)\n stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')\n return tf.strings.regex_replace(stripped_html,\n '[%s]' % re.escape(string.punctuation), '')\n\n# Vocabulary size and number of words in a sequence.\nvocab_size = 10000\nsequence_length = 100\n\n# Use the text vectorization layer to normalize, split, and map strings to \n# integers. Note that the layer uses the custom standardization defined above. \n# Set maximum_sequence length as all samples are not of the same length.\nvectorize_layer = TextVectorization(\n standardize=custom_standardization,\n max_tokens=vocab_size,\n output_mode='int',\n output_sequence_length=sequence_length)\n\n# Make a text-only dataset (no labels) and call adapt to build the vocabulary.\ntext_ds = train_ds.map(lambda x, y: x)\nvectorize_layer.adapt(text_ds)",
"Create a classification model\nUse the Keras Sequential API to define the sentiment classification model. In this case it is a \"Continuous bag of words\" style model.\n* The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built it's vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.\n* The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).\n\n\nThe GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n\n\nThe fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.\n\n\nThe last layer is densely connected with a single output node. \n\n\nCaution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the masking and padding guide.",
"embedding_dim=16\n\n# TODO: Your code goes here\n\n",
"Compile and train the model\nCreate a tf.keras.callbacks.TensorBoard.",
"# TODO: Your code goes here\n",
"Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.",
"# TODO: Your code goes here\n\n\nmodel.fit(\n train_ds,\n validation_data=val_ds, \n epochs=10,\n callbacks=[tensorboard_callback])",
"With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).\nNote: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. \nYou can look into the model summary to learn more about each layer of the model.",
"model.summary()",
"Visualize the model metrics in TensorBoard.",
"!tensorboard --bind_all --port=8081 --logdir logs",
"Run the following command in Cloud Shell:\n<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code> \nMake sure to replace <instance-zone>, <notebook-instance-name> and <project-id>.\nIn Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.\n\nTo quit the TensorBoard, click Kernel > Interrupt kernel.\nRetrieve the trained word embeddings and save them to disk\nNext, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape (vocab_size, embedding_dimension).\nObtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.",
"weights = # TODO: Your code goes here\nvocab = # TODO: Your code goes here\n",
"Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).",
"out_v = io.open('vectors.tsv', 'w', encoding='utf-8')\nout_m = io.open('metadata.tsv', 'w', encoding='utf-8')\n\nfor index, word in enumerate(vocab):\n if index == 0: continue # skip 0, it's padding.\n vec = weights[index] \n out_v.write('\\t'.join([str(x) for x in vec]) + \"\\n\")\n out_m.write(word + \"\\n\")\nout_v.close()\nout_m.close()",
"Two files will created as vectors.tsv and metadata.tsv. Download both files.",
"try:\n from google.colab import files\n files.download('vectors.tsv')\n files.download('metadata.tsv')\nexcept Exception as e:\n pass",
"Visualize the embeddings\nTo visualize the embeddings, upload them to the embedding projector.\nOpen the Embedding Projector.\n\n\nClick on \"Load\".\n\n\nUpload the two files you created above: vectors.tsv and metadata.tsv.\n\n\nThe embeddings you have trained will now be displayed. You can search for words to find their closest neighbors. For example, try searching for \"beautiful\". You may see neighbors like \"wonderful\". \nNote: Experimentally, you may be able to produce more interpretable embeddings by using a simpler model. Try deleting the Dense(16) layer, retraining the model, and visualizing the embeddings again.\nNote: Typically, a much larger dataset is needed to train more interpretable word embeddings. This tutorial uses a small IMDb dataset for the purpose of demonstration.\nNext Steps\nThis tutorial has shown you how to train and visualize word embeddings from scratch on a small dataset.\n\n\nTo train word embeddings using Word2Vec algorithm, try the Word2Vec tutorial. \n\n\nTo learn more about advanced text processing, read the Transformer model for language understanding."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NREL/bifacial_radiance | docs/tutorials/7 - Advanced topics - Multiple SceneObjects Example.ipynb | bsd-3-clause | [
"7 - Advanced topics - Multiple SceneObjects Example\nThis journal shows how to:\n<ul>\n <li> Create multiple scene objects in the same scene. </li>\n <li> Analyze multiple scene objects in the same scene </li>\n <li> Add a marker to find the origin (0,0) on a scene (for sanity-checks/visualization). </li>\n\nA scene Object is defined as an array of modules, with whatever parameters you want to give it. In this case, we are modeling one array of 2 rows of 5 modules in landscape, and one array of 1 row of 5 modules in 2-UP, portrait configuration, as the image below:\n\n\n\n\n### Steps:\n\n<ol>\n <li> <a href='#step1'> Generating the setups</a></li>\n <ol type='A'>\n <li> <a href='#step1a'> Generating the firt scene object</a></li>\n <li> <a href='#step1b'> Generating the second scene object.</a></li>\n </ol>\n <li> <a href='#step2'> Add a Marker at the Origin (coordinates 0,0) for help with visualization </a></li> \n <li> <a href='#step3'> Combine all scene Objects into one OCT file & Visualize </a></li>\n <li> <a href='#step4'> Analysis for Each sceneObject </a></li>\n</ol>\n\n<a id='step1'></a>\n\n### 1. Generating the Setups",
"import os\nimport numpy as np\nimport pandas as pd\nfrom pathlib import Path\n\ntestfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_07')\nif not os.path.exists(testfolder):\n os.makedirs(testfolder)\n \nprint (\"Your simulation will be stored in %s\" % testfolder)\n \nfrom bifacial_radiance import RadianceObj, AnalysisObj ",
"<a id='step1a'></a>\nA. Generating the first scene object\nThis is a standard fixed-tilt setup for one hour. Gencumsky could be used too for the whole year.\nThe key here is that we are setting in sceneDict the variable appendRadfile to true.",
"demo = RadianceObj(\"tutorial_7\", path = testfolder) \ndemo.setGround(0.62)\nepwfile = demo.getEPW(lat = 37.5, lon = -77.6) \nmetdata = demo.readWeatherFile(epwfile, coerce_year=2001) \nfullYear = True\ntimestamp = metdata.datetime.index(pd.to_datetime('2001-06-17 13:0:0 -5')) # Noon, June 17th \ndemo.gendaylit(timestamp) \nmodule_type = 'test-moduleA' \nmymodule = demo.makeModule(name=module_type,y=1,x=1.7)\nsceneDict = {'tilt':10,'pitch':1.5,'clearance_height':0.2,'azimuth':180, 'nMods': 5, 'nRows': 2, 'appendRadfile':True} \nsceneObj1 = demo.makeScene(mymodule, sceneDict) ",
"Checking values after Scene for the scene Object created",
"print (\"SceneObj1 modulefile: %s\" % sceneObj1.modulefile)\nprint (\"SceneObj1 SceneFile: %s\" %sceneObj1.radfiles)\nprint (\"SceneObj1 GCR: %s\" % round(sceneObj1.gcr,2))\nprint (\"FileLists: \\n %s\" % demo.getfilelist())",
"<a id='step1b'></a>\nB. Generating the second scene object.\nCreating a different Scene. Same Module, different values.\nNotice we are passing a different originx and originy to displace the center of this new sceneObj to that location.",
"sceneDict2 = {'tilt':30,'pitch':5,'clearance_height':1,'azimuth':180, \n 'nMods': 5, 'nRows': 1, 'originx': 0, 'originy': 3.5, 'appendRadfile':True} \nmodule_type2='test-moduleB'\nmymodule2 = demo.makeModule(name=module_type2,x=1,y=1.6, numpanels=2, ygap=0.15)\nsceneObj2 = demo.makeScene(mymodule2, sceneDict2) \n\n\n# Checking values for both scenes after creating new SceneObj\nprint (\"SceneObj1 modulefile: %s\" % sceneObj1.modulefile)\nprint (\"SceneObj1 SceneFile: %s\" %sceneObj1.radfiles)\nprint (\"SceneObj1 GCR: %s\" % round(sceneObj1.gcr,2))\n\nprint (\"\\nSceneObj2 modulefile: %s\" % sceneObj2.modulefile)\nprint (\"SceneObj2 SceneFile: %s\" %sceneObj2.radfiles)\nprint (\"SceneObj2 GCR: %s\" % round(sceneObj2.gcr,2))\n\n#getfilelist should have info for the rad file created by BOTH scene objects.\nprint (\"NEW FileLists: \\n %s\" % demo.getfilelist())",
"<a id='step2'></a>\n2. Add a Marker at the Origin (coordinates 0,0) for help with visualization\nCreating a \"markers\" for the geometry is useful to orient one-self when doing sanity-checks (for example, marke where 0,0 is, or where 5,0 coordinate is).\n<div class=\"alert alert-warning\">\nNote that if you analyze the module that intersects with the marker, some of the sensors will be wrong. To perform valid analysis, do so without markers, as they are 'real' objects on your scene. \n</div>",
"# NOTE: offsetting translation by 0.1 so the center of the marker (with sides of 0.2) is at the desired coordinate.\nname='Post1'\ntext='! genbox black originMarker 0.2 0.2 1 | xform -t -0.1 -0.1 0'\ncustomObject = demo.makeCustomObject(name,text)\ndemo.appendtoScene(sceneObj1.radfiles, customObject, '!xform -rz 0')",
"<a id='step3'></a>\n3. Combine all scene Objects into one OCT file & Visualize\nMarking this as its own steps because this is the step that joins our Scene Objects 1, 2 and the appended Post.\nRun makeOCT to make the scene with both scene objects AND the marker in it, the ground and the skies.",
"octfile = demo.makeOct(demo.getfilelist()) ",
"At this point you should be able to go into a command window (cmd.exe) and check the geometry. Example:\nrvu -vf views\\front.vp -e .01 -pe 0.3 -vp 1 -7.5 12 tutorial_7.oct",
"\n## Comment the ! line below to run rvu from the Jupyter notebook instead of your terminal.\n## Simulation will stop until you close the rvu window\n\n#!rvu -vf views\\front.vp -e .01 -pe 0.3 -vp 1 -7.5 12 tutorial_7.oct\n",
"It should look something like this:\n\n<a id='step4'></a>\n4. Analysis for Each sceneObject\na sceneDict is saved for each scene. When calling the Analysis, you should reference the scene object you want.",
"sceneObj1.sceneDict\n\nsceneObj2.sceneDict\n\nanalysis = AnalysisObj(octfile, demo.basename) \nfrontscan, backscan = analysis.moduleAnalysis(sceneObj1)\nfrontdict, backdict = analysis.analysis(octfile, \"FirstObj\", frontscan, backscan) # compare the back vs front irradiance \nprint('Annual bifacial ratio First Set of Panels: %0.3f ' %( np.mean(analysis.Wm2Back) / np.mean(analysis.Wm2Front)) )",
"Let's do a Sanity check for first object:\nSince we didn't pass any desired module, it should grab the center module of the center row (rounding down). For 2 rows and 5 modules, that is row 1, module 3 ~ indexed at 0, a2.0.a0.PVmodule.....\"\"",
"print (frontdict['x'])\nprint (\"\")\nprint (frontdict['y'])\nprint (\"\")\nprint (frontdict['mattype'])",
"Let's analyze a module in sceneobject 2 now. Remember we can specify which module/row we want. We only have one row in this Object though.",
"analysis2 = AnalysisObj(octfile, demo.basename) \nmodWanted = 4\nrowWanted = 1\nsensorsy=4\nfrontscan, backscan = analysis2.moduleAnalysis(sceneObj2, modWanted = modWanted, rowWanted = rowWanted, sensorsy=sensorsy)\nfrontdict2, backdict2 = analysis2.analysis(octfile, \"SecondObj\", frontscan, backscan) \nprint('Annual bifacial ratio Second Set of Panels: %0.3f ' %( np.mean(analysis2.Wm2Back) / np.mean(analysis2.Wm2Front)) )",
"Sanity check for first object. Since we didn't pass any desired module, it should grab the center module of the center row (rounding down). For 1 rows, that is row 0, module 4 ~ indexed at 0, a3.0.a0.Longi... and a3.0.a1.Longi since it is a 2-UP system.",
"print (\"x coordinate points:\" , frontdict2['x'])\nprint (\"\")\nprint (\"y coordinate points:\", frontdict2['y'])\nprint (\"\")\nprint (\"Elements intersected at each point: \", frontdict2['mattype'])",
"Visualizing the coordinates and module analyzed with an image:"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmhsi/justin_tinker | data_science/courses/temp/tutorials/linalg_pytorch.ipynb | apache-2.0 | [
"All the Linear Algebra You Need for AI\nThe purpose of this notebook is to serve as an explanation of two crucial linear algebra operations used when coding neural networks: matrix multiplication and broadcasting.\nIntroduction\nMatrix multiplication is a way of combining two matrices (involving multiplying and summing their entries in a particular way). Broadcasting refers to how libraries such as Numpy and PyTorch can perform operations on matrices/vectors with mismatched dimensions (in particular cases, with set rules). We will use broadcasting to show an alternative way of thinking about matrix multiplication from, different from the way it is standardly taught.\nIn keeping with the fast.ai teaching philosophy of \"the whole game\", we will:\n\nfirst use a pre-defined class for our neural network\nthen define the net ourselves to see where it uses matrix multiplication & broadcasting\nand finally dig into the details of how those operations work\n\nThis is different from how most math courses are taught, where you have to learn all the individual elements before you can combine them (Harvard professor David Perkins call this elementitis), but it is similar to how topics like driving and baseball are taught. That is, you can start driving without knowing how an internal combustion engine works, and children begin playing baseball before they learn all the formal rules.\n<img src=\"images/demba_combustion_engine.png\" alt=\"\" style=\"width: 50%\"/>\n<center>\n(source: Demba Ba and Arvind Nagaraj)\n</center>\nMore linear algebra resources\nThis notebook was originally created for a 40 minute talk I gave at the O'Reilly AI conference in San Francisco. If you want further resources for linear algebra, here are a few recommendations:\n\n3Blue1Brown Essence of Linear Algebra videos about geometric intuition, which are gorgeous and great for visual learners\nKhan Academy Linear Algebra videos covering traditional linear algebra material\nImmersive linear algebra free online textbook with interactive graphics\nChapter 2 of Ian Goodfellow's Deep Learning Book for a fairly academic take\nComputational Linear Algebra: a free, online fast.ai course, originally taught in the University of San Francisco's Masters in Analytics program. It includes a free online textbook and series of videos. This course is very different from standard linear algebra (which often focuses on how humans do matrix calculations), because it is about how to get computers to do matrix computations with speed and accuracy, and incorporates modern tools and algorithms. All the material is taught in Python and centered around solving practical problems such as removing the background from a surveillance video or implementing Google's PageRank search algorithm on Wikipedia pages.\n\nOur Tools\nWe will be using the open source deep learning library, fastai, which provides high level abstractions and best practices on top of PyTorch. This is the highest level, simplest way to get started with deep learning. Please note that fastai requires Python 3 to function. It is currently in pre-alpha, so items may move around and more documentation will be added in the future.\nImports",
"%load_ext autoreload\n%autoreload 2\n\nfrom fastai.imports import *\nfrom fastai.torch_imports import *\nfrom fastai.io import *",
"PyTorch\nThe fastai deep learning library uses PyTorch, a Python framework for dynamic neural networks with GPU acceleration, which was released by Facebook's AI team.\nPyTorch has two overlapping, yet distinct, purposes. As described in the PyTorch documentation:\n<img src=\"images/what_is_pytorch.png\" alt=\"pytorch\" style=\"width: 80%\"/>\nThe neural network functionality of PyTorch is built on top of the Numpy-like functionality for fast matrix computations on a GPU. Although the neural network purpose receives way more attention, both are very useful. We'll implement a neural net from scratch today using PyTorch.\nFurther learning: If you are curious to learn what dynamic neural networks are, you may want to watch this talk by Soumith Chintala, Facebook AI researcher and core PyTorch contributor.\nIf you want to learn more PyTorch, you can try this introductory tutorial or this tutorial to learn by examples.\nAbout GPUs\nGraphical processing units (GPUs) allow for matrix computations to be done with much greater speed, as long as you have a library such as PyTorch that takes advantage of them. Advances in GPU technology in the last 10-20 years have been a key part of why neural networks are proving so much more powerful now than they did a few decades ago. \nYou may own a computer that has a GPU which can be used. For the many people that either don't have a GPU (or have a GPU which can't be easily accessed by Python), there are a few differnt options:\n\nDon't use a GPU: For the sake of this tutorial, you don't have to use a GPU, although some computations will be slower. The only change needed to the code is to remove .cuda() wherever it appears.\nUse crestle, through your browser: Crestle is a service that gives you an already set up cloud service with all the popular scientific and deep learning frameworks already pre-installed and configured to run on a GPU in the cloud. It is easily accessed through your browser. New users get 10 hours and 1 GB of storage for free. After this, GPU usage is 34 cents per hour. I recommend this option to those who are new to AWS or new to using the console.\nSet up an AWS instance through your console: You can create an AWS instance with a GPU by following the steps in this fast.ai setup lesson.] AWS charges 90 cents per hour for this.\n\nData\nAbout The Data\nToday we will be working with MNIST, a classic data set of hand-written digits. Solutions to this problem are used by banks to automatically recognize the amounts on checks, and by the postal service to automatically recognize zip codes on mail.\n<img src=\"images/mnist.png\" alt=\"\" style=\"width: 60%\"/>\nA matrix can represent an image, by creating a grid where each entry corresponds to a different pixel.\n<img src=\"images/digit.gif\" alt=\"digit\" style=\"width: 55%\"/>\n (Source: Adam Geitgey\n)\nDownload\nLet's download, unzip, and format the data.",
"path = '../data/'\n\nimport os\nos.makedirs(path, exist_ok=True)\n\nURL='http://deeplearning.net/data/mnist/'\nFILENAME='mnist.pkl.gz'\n\ndef load_mnist(filename):\n return pickle.load(gzip.open(filename, 'rb'), encoding='latin-1')\n\nget_data(URL+FILENAME, path+FILENAME)\n((x, y), (x_valid, y_valid), _) = load_mnist(path+FILENAME)",
"Normalize\nMany machine learning algorithms behave better when the data is normalized, that is when the mean is 0 and the standard deviation is 1. We will subtract off the mean and standard deviation from our training set in order to normalize the data:",
"mean = x.mean()\nstd = x.std()\n\nx=(x-mean)/std\nx.mean(), x.std()",
"Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set.",
"x_valid = (x_valid-mean)/std\nx_valid.mean(), x_valid.std()",
"Look at the data\nIn any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. To make it easier to work with, let's reshape it into 2d images from the flattened 1d format.\nHelper methods",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef show(img, title=None):\n plt.imshow(img, interpolation='none', cmap=\"gray\")\n if title is not None: plt.title(title)\n\ndef plots(ims, figsize=(12,6), rows=2, titles=None):\n f = plt.figure(figsize=figsize)\n cols = len(ims)//rows\n for i in range(len(ims)):\n sp = f.add_subplot(rows, cols, i+1)\n sp.axis('Off')\n if titles is not None: sp.set_title(titles[i], fontsize=16)\n plt.imshow(ims[i], interpolation='none', cmap='gray')",
"Plots",
"x_valid.shape\n\nx_imgs = np.reshape(x_valid, (-1,28,28)); x_imgs.shape\n\nshow(x_imgs[0], y_valid[0])\n\ny_valid.shape",
"It's the digit 3! And that's stored in the y value:",
"y_valid[0]",
"We can look at part of an image:",
"x_imgs[0,10:15,10:15]\n\nshow(x_imgs[0,10:15,10:15])\n\nplots(x_imgs[:8], titles=y_valid[:8])",
"The Most Important Machine Learning Concepts\nFunctions, parameters, and training\nA function takes inputs and returns outputs. For instance, $f(x) = 3x + 5$ is an example of a function. If we input $2$, the output is $3\\times 2 + 5 = 11$, or if we input $-1$, the output is $3\\times -1 + 5 = 2$\nFunctions have parameters. The above function $f$ is $ax + b$, with parameters a and b set to $a=3$ and $b=5$.\nMachine learning is often about learning the best values for those parameters. For instance, suppose we have the data points on the chart below. What values should we choose for $a$ and $b$?\n<img src=\"images/sgd2.gif\" alt=\"\" style=\"width: 70%\"/>\nIn the above gif fast.ai Practical Deep Learning for Coders course, intro to SGD notebook), an algorithm called stochastic gradient descent is being used to learn the best parameters to fit the line to the data (note: in the gif, the algorithm is stopping before the absolute best parameters are found). This process is called training or fitting.\nMost datasets will not be well-represented by a line. We could use a more complicated function, such as $g(x) = ax^2 + bx + c + \\sin d$. Now we have 4 parameters to learn: $a$, $b$, $c$, and $d$. This function is more flexible than $f(x) = ax + b$ and will be able to accurately model more datasets.\nNeural networks take this to an extreme, and are infinitely flexible. They often have thousands, or even hundreds of thousands of parameters. However the core idea is the same as above. The neural network is a function, and we will learn the best parameters for modeling our data.\nTraining & Validation data sets\nPossibly the most important idea in machine learning is that of having separate training & validation data sets.\nAs motivation, suppose you don't divide up your data, but instead use all of it. And suppose you have lots of parameters:\nThis is called over-fitting. A validation set helps prevent this problem.\n<img src=\"images/overfitting2.png\" alt=\"\" style=\"width: 70%\"/>\n<center>\nUnderfitting and Overfitting\n</center>\nThe error for the pictured data points is lowest for the model on the far right (the blue curve passes through the red points almost perfectly), yet it's not the best choice. Why is that? If you were to gather some new data points, they most likely would not be on that curve in the graph on the right, but would be closer to the curve in the middle graph.\nThis illustrates how using all our data can lead to overfitting.\nNeural Net (with nn.torch)\nImports",
"from fastai.metrics import *\nfrom fastai.model import *\nfrom fastai.dataset import *\nfrom fastai.core import *\n\nimport torch.nn as nn",
"Neural networks\nWe will use fastai's ImageClassifierData, which holds our training and validation sets and will provide batches of that data in a form ready for use by a PyTorch model.",
"md = ImageClassifierData.from_arrays(path, (x,y), (x_valid, y_valid))",
"We will begin with the highest level abstraction: using a neural net defined by PyTorch's Sequential class.",
"net = nn.Sequential(\n nn.Linear(28*28, 256),\n nn.ReLU(),\n nn.Linear(256, 10)\n).cuda()",
"Each input is a vector of size $28\\times 28$ pixels and our output is of size $10$ (since there are 10 digits: 0, 1, ..., 9). \nWe use the output of the final layer to generate our predictions. Often for classification problems (like MNIST digit classification), the final layer has the same number of outputs as there are classes. In that case, this is 10: one for each digit from 0 to 9. These can be converted to comparative probabilities. For instance, it may be determined that a particular hand-written image is 80% likely to be a 4, 18% likely to be a 9, and 2% likely to be a 3. In our case, we are not interested in viewing the probabilites, and just want to see what the most likely guess is.\nLayers\nSequential defines layers of our network, so let's talk about layers. Neural networks consist of linear layers alternating with non-linear layers. This creates functions which are incredibly flexible. Deeper layers are able to capture more complex patterns.\nLayer 1 of a convolutional neural network:\n<img src=\"images/zeiler1.png\" alt=\"pytorch\" style=\"width: 40%\"/>\n<center>\nMatthew Zeiler and Rob Fergus\n</center>\nLayer 2:\n<img src=\"images/zeiler2.png\" alt=\"pytorch\" style=\"width: 90%\"/>\n<center>\nMatthew Zeiler and Rob Fergus\n</center>\nDeeper layers can learn about more complicated shapes (although we are only using 2 layers in our network):\n<img src=\"images/zeiler4.png\" alt=\"pytorch\" style=\"width: 90%\"/>\n<center>\nMatthew Zeiler and Rob Fergus\n</center>\nTraining the network\nNext we will set a few inputs for our fit method:\n- Optimizer: algorithm for finding the minimum. typically these are variations on stochastic gradient descent, involve taking a step that appears to be the right direction based on the change in the function.\n- Loss: what function is the optimizer trying to minimize? We need to say how we're defining the error.\n- Metrics: other calculations you want printed out as you train",
"loss=F.cross_entropy\nmetrics=[accuracy]\nopt=optim.Adam(net.parameters())",
"Fitting is the process by which the neural net learns the best parameters for the dataset.",
"fit(net, md, epochs=1, crit=loss, opt=opt, metrics=metrics)",
"GPUs are great at handling lots of data at once (otherwise don't get performance benefit). We break the data up into batches, and that specifies how many samples from our dataset we want to send to the GPU at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that.\nAn epoch is completed once each data sample has been used once in the training loop.\nNow that we have the parameters for our model, we can make predictions on our validation set.",
"preds = predict(net, md.val_dl)\n\npreds = preds.max(1)[1]",
"Let's see how some of our preditions look!",
"plots(x_imgs[:8], titles=preds[:8])",
"These predictions are pretty good!\nCoding the Neural Net ourselves\nRecall that above we used PyTorch's Sequential to define a neural network with a linear layer, a non-linear layer (ReLU), and then another linear layer.",
"# Our code from above\nnet = nn.Sequential(\n nn.Linear(28*28, 256),\n nn.ReLU(),\n nn.Linear(256, 10)\n).cuda()",
"It turns out that Linear is defined by a matrix multiplication and then an addition. Let's try defining this ourselves. This will allow us to see exactly where matrix multiplication is used (we will dive in to how matrix multiplication works in teh next section). \nJust as Numpy has np.matmul for matrix multiplication (in Python 3, this is equivalent to the @ operator), PyTorch has torch.matmul. \nPyTorch class has two things: constructor (says parameters) and a forward method (how to calculate prediction using those parameters) The method forward describes how the neural net converts inputs to outputs.\nIn PyTorch, the optimizer knows to try to optimize any attribute of type Parameter.",
"def get_weights(*dims): return nn.Parameter(torch.randn(*dims)/dims[0])\n\nclass SimpleMnist(nn.Module):\n def __init__(self):\n super().__init__()\n self.l1_w = get_weights(28*28, 256) # Layer 1 weights\n self.l1_b = get_weights(256) # Layer 1 bias\n self.l2_w = get_weights(256, 10) # Layer 2 weights\n self.l2_b = get_weights(10) # Layer 2 bias\n\n def forward(self, x):\n x = x.view(x.size(0), -1)\n x = torch.matmul(x, self.l1_w) + self.l1_b # Linear Layer\n x = x * (x > 0).float() # Non-linear Layer\n x = torch.matmul(x, self.l2_w) + self.l2_b # Linear Layer\n return x",
"We create our neural net and the optimizer. (We will use the same loss and metrics from above).",
"net2 = SimpleMnist().cuda()\nopt=optim.Adam(net2.parameters())\n\nfit(net2, md, epochs=1, crit=loss, opt=opt, metrics=metrics)",
"Now we can check our predictions:",
"preds = predict(net2, md.val_dl).max(1)[1]\nplots(x_imgs[:8], titles=preds[:8])",
"what torch.matmul (matrix multiplication) is doing\nNow let's dig in to what we were doing with torch.matmul: matrix multiplication. First, let's start with a simpler building block: broadcasting.\nElement-wise operations\nBroadcasting and element-wise operations are supported in the same way by both numpy and pytorch.\nOperators (+,-,*,/,>,<,==) are usually element-wise.\nExamples of element-wise operations:",
"a = np.array([10, 6, -4])\nb = np.array([2, 8, 7])\n\na + b\n\na < b",
"Broadcasting\nThe term broadcasting describes how arrays with different shapes are treated during arithmetic operations. The term broadcasting was first used by Numpy, although is now used in other libraries such as Tensorflow and Matlab; the rules can vary by library.\nFrom the Numpy Documentation:\nThe term broadcasting describes how numpy treats arrays with \ndifferent shapes during arithmetic operations. Subject to certain \nconstraints, the smaller array is “broadcast” across the larger \narray so that they have compatible shapes. Broadcasting provides a \nmeans of vectorizing array operations so that looping occurs in C\ninstead of Python. It does this without making needless copies of \ndata and usually leads to efficient algorithm implementations.\n\nIn addition to the efficiency of broadcasting, it allows developers to write less code, which typically leads to fewer errors.\nThis section was adapted from Chapter 4 of the fast.ai Computational Linear Algebra course.\nBroadcasting with a scalar",
"a\n\na > 0",
"How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a.\nRemember above when we normalized our dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar)? We were using broadcasting!\nOther examples of broadcasting with a scalar:",
"a + 1\n\nm = np.array([[1, 2, 3], [4,5,6], [7,8,9]]); m\n\nm * 2",
"Broadcasting a vector to a matrix\nWe can also broadcast a vector to a matrix:",
"c = np.array([10,20,30]); c\n\nm + c",
"Although numpy does this automatically, you can also use the broadcast_to method:",
"np.broadcast_to(c, (3,3))\n\nc.shape",
"The numpy expand_dims method lets us convert the 1-dimensional array c into a 2-dimensional array (although one of those dimensions has value 1).",
"np.expand_dims(c,0).shape\n\nm + np.expand_dims(c,0)\n\nnp.expand_dims(c,1).shape\n\nm + np.expand_dims(c,1)\n\nnp.broadcast_to(np.expand_dims(c,1), (3,3))",
"Broadcasting Rules\nWhen operating on two arrays, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when\n\nthey are equal, or\none of them is 1\n\nArrays do not need to have the same number of dimensions. For example, if you have a $256 \\times 256 \\times 3$ array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:\nImage (3d array): 256 x 256 x 3\nScale (1d array): 3\nResult (3d array): 256 x 256 x 3\n\nThe numpy documentation includes several examples of what dimensions can and can not be broadcast together.\nMatrix Multiplication\nWe are going to use broadcasting to define matrix multiplication.\nMatrix-Vector Multiplication",
"m, c\n\nm @ c # np.matmul(m, c)",
"We get the same answer using torch.matmul:",
"torch.matmul(torch.from_numpy(m), torch.from_numpy(c))",
"The following is NOT matrix multiplication. What is it?",
"m * c\n\n(m * c).sum(axis=1)\n\nc\n\nnp.broadcast_to(c, (3,3))",
"From a machine learning perspective, matrix multiplication is a way of creating features by saying how much we want to weight each input column. Different features are different weighted averages of the input columns. \nThe website matrixmultiplication.xyz provides a nice visualization of matrix multiplcation\nDraw a picture",
"n = np.array([[10,40],[20,0],[30,-5]]); n\n\nm @ n\n\n(m * n[:,0]).sum(axis=1)\n\n(m * n[:,1]).sum(axis=1)",
"Homework: another use of broadcasting\nIf you want to test your understanding of the above tutorial. I encourage you to work through it again, only this time use CIFAR 10, a dataset that consists of 32x32 color images in 10 different categories. Color images have an extra dimension, containing RGB values, compared to black & white images.\n<img src=\"images/cifar10.png\" alt=\"\" style=\"width: 70%\"/>\n<center>\n(source: Cifar 10)\n</center>\nFortunately, broadcasting will make it relatively easy to add this extra dimension (for color RGB), but you will have to make some changes to the code.\nOther applications of Matrix and Tensor Products\nHere are some other examples of where matrix multiplication arises. This material is taken from Chapter 1 of my Computational Linear Algebra course. \nMatrix-Vector Products:\nThe matrix below gives the probabilities of moving from 1 health state to another in 1 year. If the current health states for a group are:\n- 85% asymptomatic\n- 10% symptomatic\n- 5% AIDS\n- 0% death\nwhat will be the % in each health state in 1 year?\n<img src=\"images/markov_health.jpg\" alt=\"floating point\" style=\"width: 80%\"/>(Source: Concepts of Markov Chains)\nAnswer",
"import numpy as np\n\n#Exercise: Use Numpy to compute the answer to the above\n",
"Matrix-Matrix Products\n<img src=\"images/shop.png\" alt=\"floating point\" style=\"width: 100%\"/>(Source: Several Simple Real-world Applications of Linear Algebra Tools)\nAnswer",
"#Exercise: Use Numpy to compute the answer to the above\n",
"End\nA Tensor is a multi-dimensional matrix containing elements of a single data type: a group of data, all with the same type (e.g. A Tensor could store a 4 x 4 x 6 matrix of 32-bit signed integers)."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
KshitijT/fundamentals_of_interferometry | 4_Visibility_Space/4_5_1_uv_coverage_uv_tracks.ipynb | gpl-2.0 | [
"<a id='beginning'></a> <!--\\label{beginning}-->\n* Outline\n* Glossary\n* 4. The Visibility Space\n * Previous: 4.4 The Visibility Function\n * Next: 4.5.2 UV Coverage: Improving Your Coverage\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"from mpl_toolkits.mplot3d import Axes3D\nimport plotBL\n\nHTML('../style/code_toggle.html')",
"4.5.1 UV coverage : UV tracks\nThe objective of $\\S$ 4.5.1 ⤵ and $\\S$ 4.5.2 ➞ is to give you a glimpse into the process of aperture synthesis. <span style=\"background-color:cyan\">TLG:GM: Check if the italic words are in the glossary. </span> An interferometer measures components of the Fourier Transform of the sky by sampling the visibility function, $\\mathcal{V}$. This collection of samples lives in ($u$, $v$, $w$) space, and are often projected onto the so-called $uv$-plane.\nIn $\\S$ 4.5.1 ⤵, we will focus on the way the visibility function is sampled. This sampling is a function of the interferometer's configuration, the direction of the source and the observation time.\nIn $\\S$ 4.5.2 ➞, we will see how this sampling can be improved by using certain observing techniques.\n4.5.1.1 The projected baseline with time: the $uv$ track\nA projected baseline is obtained via a baseline and a direction in the sky. It corresponds to the baseline as seen from the source. The projected baseline is associated with the measurement of a spatial frequency of the source. <span style=\"background-color:red\">TLG:RC: Rewrite previous sentence.</span> As the Earth rotates, the projected baseline and its corresponding spatial frequency (defined by the baseline's ($u$, $v$)-coordinates) vary slowly in time, generating a path in the $uv$-plane.\nWe will now generate test cases to see what locus the path takes, and how it can be predicted depending on the baseline's geometry.\n4.5.1.1.1 Baseline projection as seen from the source\nLet's generate one baseline from two antennas Ant$_1$ and Ant$_2$.",
"ant1 = np.array([-500e3,500e3,0]) # in m\nant2 = np.array([500e3,-500e3,+10]) # in m",
"Let's express the corresponding physical baseline in ENU coordinates.",
"b_ENU = ant2-ant1 # baseline \nD = np.sqrt(np.sum((b_ENU)**2)) # |b|\nprint str(D/1000)+\" km\"",
"Let's place the interferometer at a latitude $L_a=+45^\\circ00'00''$.",
"L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians\n\nA = np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Baseline Azimuth=\"+str(np.degrees(A))+\"°\"\n\nE = np.arcsin(b_ENU[2]/D)\nprint \"Baseline Elevation=\"+str(np.degrees(E))+\"°\"\n\n%matplotlib nbagg\nplotBL.sphere(ant1,ant2,A,E,D,L)",
"Figure 4.5.1: A baseline located at +45$^\\circ$ as seen from the sky. This plot is interactive and can be rotated in 3D to see different baseline projections, depending on the position of the source w.r.t. the physical baseline.\nOn the interactive plot above, we represent a baseline located at +45$^\\circ$, aligned with the local south-west/north-east as seen from the celestial sphere. <span style=\"background-color:red\">TLG:RC: Rewrite previous sentence.</span> By rotating the sphere westward, you can simulate the variation of the projected baseline as seen from a source in apparent motion on the celestial sphere.\n4.5.1.1.2 Coordinates of the baseline in the ($u$,$v$,$w$) plane\nWe will now simulate an observation to study how a projected baseline will change with time. We will position this baseline at a South African latitude. We first need the expression of the physical baseline in a convenient reference frame, attached to the source in the sky.\nIn $\\S$ 4.2 ➞, we linked the equatorial coordinates of the baseline to the ($u$,$v$,$w$) coordinates through the transformation matrix:\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\n\\end{pmatrix}\n=\n\\frac{1}{\\lambda}\n\\begin{pmatrix}\n\\sin H_0 & \\cos H_0 & 0\\ \n-\\sin \\delta_0 \\cos H_0 & \\sin\\delta_0\\sin H_0 & \\cos\\delta_0\\\n\\cos \\delta_0 \\cos H_0 & -\\cos\\delta_0\\sin H_0 & \\sin\\delta_0\\\n\\end{pmatrix} \n\\begin{pmatrix}\nX\\\nY\\\nZ\n\\end{pmatrix}\n\\end{equation}\n<a id=\"vis:eq:451\"></a> <!---\\label{vis:eq:451}--->\n\\begin{equation}\n\\begin{bmatrix}\nX\\\nY\\\nZ\n\\end{bmatrix}\n=|\\mathbf{b}|\n\\begin{bmatrix}\n\\cos L_a \\sin \\mathcal{E} - \\sin L_a \\cos \\mathcal{E} \\cos \\mathcal{A}\\nonumber\\ \n\\cos \\mathcal{E} \\sin \\mathcal{A} \\nonumber\\\n\\sin L_a \\sin \\mathcal{E} + \\cos L_a \\cos \\mathcal{E} \\cos \\mathcal{A}\\\n\\end{bmatrix}\n\\end{equation}\nEquation 4.5.1 \nThis expression of $\\mathcal{b}$ is a function of ($\\mathcal{A}$,$\\mathcal{E}$) in the equatorial ($X$,$Y$,$Z$) systems. <span style=\"background-color:red\">TLG:RC: Rewrite previous sentence as the meaning is unclear. Notice the unbold $b$.</span>\n4.5.1.1.2 Observation parameters\nLet's define an arbitrary set of observation parameters to mimic a real observation.\n\nLatitude of the baseline: $L_a=-30^\\circ43'17.34''$\nDeclination of the observation: $\\delta=-74^\\circ39'37.481''$\nDuration of the observation: $\\Delta \\text{HA}=[-4^\\text{h},4^\\text{h}]$\nTime steps: 600\nFrequency: 1420 MHz",
"# Observation parameters\nc = 3e8 # Speed of light\nf = 1420e9 # Frequency\nlam = c/f # Wavelength \ndec = (np.pi/180)*(-30-43.0/60-17.34/3600) # Declination\n\ntime_steps = 600 # Time Steps\nh = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window",
"4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time\nAs seen previously, we convert the baseline coordinates using the previous matrix transformation.",
"ant1 = np.array([25.095,-9.095,0.045])\nant2 = np.array([90.284,26.380,-0.226])\nb_ENU = ant2-ant1\nD = np.sqrt(np.sum((b_ENU)**2))\nL = (np.pi/180)*(-30-43.0/60-17.34/3600)\n\nA=np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Azimuth=\",A*(180/np.pi)\nE=np.arcsin(b_ENU[2]/D)\nprint \"Elevation=\",E*(180/np.pi)\n\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))",
"As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\\S$ 4.2.2 ➞:\n\n$\\lambda u = X \\sin H + Y \\cos H$\n$\\lambda v= -X \\sin \\delta \\cos H + Y \\sin\\delta\\sin H + Z \\cos\\delta$\n$\\lambda w= X \\cos \\delta \\cos H -Y \\cos\\delta\\sin H + Z \\sin\\delta$",
"u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3",
"We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.",
"%matplotlib nbagg\nplotBL.UV(u,v,w)",
"Figure 4.5.2: $uvw$ track derived from the simulation and projection in the $uv$-plane.\nThe track in $uvw$ space are curves and the projection in the $uv$ plane are arcs. Let us focus on the track's projection in this plane. To get observation-independent knowledge of the track we can try to combine the three equations of $u$, $v$ and $w$, the aim being to eliminate $H$ from the equation. We end up with an equation linking $u$, $v$, $X$ and $Y$ (the full derivation can be found in $\\S$ A.3 ➞):\n$$\\boxed{u^2 + \\left[ \\frac{v -\\frac{Z}{\\lambda} \\cos \\delta}{\\sin \\delta} \\right]^2 = \\left[ \\frac{X}{\\lambda} \\right]^2 + \\left[ \\frac{Y}{\\lambda} \\right]^2}$$\nOne can note that in this particular case, the $uv$ track takes on the form of an ellipse.\n<span style=\"background-color:cyan\">TLG:GM: Check if the italic words are in the glossary. </span>\nThis ellipse is centered at $(0,\\frac{Z}{\\lambda} \\cos \\delta)$ in the ($u$,$v$) plane.\nThe major axis is $a=\\frac{\\sqrt{X^2 + Y^2}}{\\lambda}$.\nThe minor axis (along the axis $v$) will be a function of $Z$, $\\delta$ and $a$.\nWe can check this by plotting the theoretical ellipse over the observed portion of the track. (You can fall back to the duration of the observation to see that the track is mapping this ellipse exactly).",
"%matplotlib inline\nfrom matplotlib.patches import Ellipse\n\n# parameters of the UVtrack as an ellipse\na=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb=a*np.sin(dec) # minor axis\nv0=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\nplotBL.UVellipse(u,v,w,a,b,v0)",
"Figure 4.5.3: The blue (resp. the red) curve is the $uv$ track of the baseline $\\mathbf{b}{12}$ (resp. $\\mathbf{b}{21}$). As $I_\\nu$ is real, the real part of the visibility $\\mathcal{V}$ is even and the imaginary part is odd making $\\mathcal{V}(-u,-v)=\\mathcal{V}^*$. It implies that one baseline automatically provides a measurement of a visibility and its complex conjugate at ($-u$,$-v$).\n4.5.1.2 Special cases\n4.5.1.2.1 The Polar interferometer\nLet settle one baseline at the North pole. The local zenith corresponds to the North Celestial Pole (NCP) at $\\delta=90^\\circ$. As seen from the NCP, the baseline will rotate and the projected baseline will correspond to the physical baseline. This configuration is the only case where this happens.\nIf $\\mathbf{b}$ rotates, we can guess that the $uv$ tracks will be perfect circles. Let's check:",
"L=np.radians(90.)\nant1 = np.array([25.095,-9.095,0.045])\nant2 = np.array([90.284,26.380,-0.226])\nb_ENU = ant2-ant1\nD = np.sqrt(np.sum((b_ENU)**2))\n\nA=np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Azimuth=\",A*(180/np.pi)\nE=np.arcsin(b_ENU[2]/D)\nprint \"Elevation=\",E*(180/np.pi)\n\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))",
"Let's compute the $uv$ tracks of an observation of the NCP ($\\delta=90^\\circ$):",
"dec=np.radians(90.)\n\nuNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nvNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nwNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\n# parameters of the UVtrack as an ellipse\naNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nbNCP=aNCP*np.sin(dec) # minor axi\nv0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse",
"Let's compute the uv tracks when observing a source at $\\delta=30^\\circ$:",
"dec=np.radians(30.)\n\nu30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\na30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb30=a*np.sin(dec) # minor axi\nv030=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n%matplotlib inline\nplotBL.UVellipse(u30,v30,w30,a30,b30,v030)\nplotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)",
"Figure 4.5.4: $uv$ track for a baseline at the pole observing at $\\delta=90^\\circ$ (NCP) and at $\\delta=30^\\circ$ with the same color conventions as the previous figure.\nWhen observing a source at declination $\\delta$, we still have an elliptical shape but centered at (0,0). In the case of a polar interferometer, the full $uv$ track can be covered in 12 hours only due to the symmetry of the baseline.\n4.5.1.2.2 The Equatorial interferometer\nLet's consider the other extreme scenario: this time, we position the interferometer at the equator. The local zenith is crossed by the Celestial Equator at $\\delta=0^\\circ$. As seen from the celestial equator, the baseline will not rotate and the projected baseline will no longer correspond to the physical baseline. This configuration is the only case where this happens.\nIf $\\mathbf{b}$ is not rotating, we can intuitively guess that the $uv$ tracks will be straight lines.",
"L=np.radians(90.)\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))\n\n# At local zenith == Celestial Equator\ndec=np.radians(0.)\n\nuEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nvEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nwEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\n# parameters of the UVtrack as an ellipse\naEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nbEQ=aEQ*np.sin(dec) # minor axi\nv0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n# Close to Zenith\ndec=np.radians(10.)\n\nu10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\na10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb10=a*np.sin(dec) # minor axi\nv010=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n%matplotlib inline\nplotBL.UVellipse(u10,v10,w10,a10,b10,v010)\nplotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)",
"Figure 4.5.5: $uv$ track for a baseline at the equator observing at $\\delta=0^\\circ$ and at $\\delta=10^\\circ$, with the same color conventions as the previous figure.\nAn equatorial interferometer observing its zenith will see radio sources crossing the sky on straight, linear paths. Therefore, they will produce straight $uv$ coordinates.\n4.5.1.1.3 The East-West array <a id='vis:sec:ew'></a> <!--\\label{vis:sec:ew}-->\nThe East-West array is the special case of an interferometer with physical baselines aligned with the East-West direction in the ground-based frame of reference. They have the convenient property of giving a $uv$ coverage which lies entirely on a plane.\nIf the baseline is aligned with the East-West direction, then the Elevation $\\mathcal{E}$ of the baseline is zero and the Azimuth $\\mathcal{A}$ is $\\frac{\\pi}{2}$. Eq. 4.5.1 ⤵ then simplifies considerably:\nThe only non-zero component of the baseline will be its $Y$-component.\n\\begin{equation}\n\\frac{1}{\\lambda}\n\\begin{bmatrix}\nX\\\nY\\\nZ\n\\end{bmatrix}\n=\n|\\mathbf{b_\\lambda}|\n\\begin{bmatrix}\n\\cos L_a \\sin 0 - \\sin L_a \\cos 0 \\cos \\frac{\\pi}{2}\\nonumber\\ \n\\cos 0 \\sin \\frac{\\pi}{2} \\nonumber\\\n\\sin L_a \\sin 0 + \\cos L_a \\cos 0 \\cos \\frac{\\pi}{2}\\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0\\\n|\\mathbf{b_\\lambda}|\\\n0 \\\n\\end{bmatrix}\n\\end{equation}\nIf we observe a source at declination $\\delta_0$ with varying Hour Angle, $H$, we obtain:\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n\\sin H & \\cos H & 0\\ \n-\\sin \\delta_0 \\cos H & \\sin\\delta_0\\sin H & \\cos\\delta_0\\\n\\cos \\delta_0 \\cos H & -\\cos\\delta_0\\sin H & \\sin\\delta_0\\\n\\end{pmatrix} \n\\begin{pmatrix}\n0\\\n|\\mathbf{b_\\lambda}| \\\n0\n\\end{pmatrix}\n\\end{equation}\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n|\\mathbf{b_\\lambda}| \\cos H \\ \n|\\mathbf{b_\\lambda}| \\sin\\delta_0 \\sin H\\\n-|\\mathbf{b_\\lambda}|\\cos\\delta_0\\sin H\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = 6^\\text{h}$ (West)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n0 \\ \n|\\mathbf{b_\\lambda}|\\sin\\delta_0\\\n|\\mathbf{b_\\lambda}|\\cos\\delta_0\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = 0^\\text{h}$ (South)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n|\\mathbf{b_\\lambda}| \\ \n0\\\n0\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = -6^\\text{h}$ (East)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n0 \\ \n-|\\mathbf{b_\\lambda}|\\sin\\delta_0\\\n-|\\mathbf{b_\\lambda}|\\cos\\delta_0\n\\end{pmatrix} \n\\end{equation}\nIn this case, one can notice that we always have a relationship between $u$, $v$ and $|\\mathbf{b_\\lambda}|$:\n$$ u^2+\\left( \\frac{v}{\\sin\\delta_0}\\right) ^2=|\\mathbf{b_\\lambda}|^2$$ \n<div class=warn>\n<b>Warning:</b> The $\\sin\\delta_0$ factor, appearing in the previous equation, can be interpreted as a compression factor.\n</div>\n\n4.5.1.3 Sampling the visibility plane with $uv$-tracks\n4.5.1.3.1 Simulating a baseline\nWhen we have an EW baseline, some equations simplify.\nFirstly, $XYZ = [0~d~0]^T$, where $d$ is the baseline length measured in wavelengths.\nSecondly, we have the following relationships: $u = d\\cos(H)$, $v = d\\sin(H)\\sin(\\delta)$,\nwhere $H$ is the hour angle of the field center and $\\delta$ its declination.\nIn this section, we will plot the 
$uv$-coverage of an EW-baseline whose field center is at two different declinations.",
"H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians\nd = 100 #We assume that we have already divided by wavelength\n\ndelta = 60*(np.pi/180) #Declination in degrees\nu_60 = d*np.cos(H)\nv_60 = d*np.sin(H)*np.sin(delta)",
"<span style=\"background-color:red\">TLG:AC: Add the following figures. This is specifically for an EW array. They will add some more insight. </span>\n<img src='figures/EW_1_d.svg' width=40%>\n<img src='figures/EW_2_d.svg' width=40%>\n<img src='figures/EW_3_d.svg' width=40%>\n4.5.1.3.2 Simulating the sky\nLet us populate our sky with three sources, with positions given in RA ($\\alpha$) and DEC ($\\delta$):\n* Source 1: (5h 32m 0.4s,60$^{\\circ}$-17' 57'') - 1 Jy\n* Source 2: (5h 36m 12.8s,-61$^{\\circ}$ 12' 6.9'') - 0.5 Jy\n* Source 3: (5h 40m 45.5s,-61$^{\\circ}$ 56' 34'') - 0.2 Jy\nWe place the field center at $(\\alpha_0,\\delta_0) = $ (5h 30m,60$^{\\circ}$).",
"RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])\nDEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])\nFlux_sources_labels = np.array([\"\",\"1 Jy\",\"0.5 Jy\",\"0.2 Jy\"])\nFlux_sources = np.array([1,0.5,0.1]) #in Jy\nstep_size = 200\nprint \"Phase center Source 1 Source 2 Source3\"\nprint repr(\"RA=\"+str(RA_sources)).ljust(2)\nprint \"DEC=\"+str(DEC_sources)",
"We then convert the ($\\alpha$,$\\delta$) to $l,m$: <span style=\"background-color:red\">TLG:AC:Point to Chapter 3.</span>\n* $l = \\cos \\delta \\sin \\Delta \\alpha$\n* $m = \\sin \\delta\\cos\\delta_0 -\\cos \\delta\\sin\\delta_0\\cos\\Delta \\alpha$\n* $\\Delta \\alpha = \\alpha - \\alpha_0$",
"RA_rad = np.array(RA_sources)*(np.pi/12)\nDEC_rad = np.array(DEC_sources)*(np.pi/180)\nRA_delta_rad = RA_rad-RA_rad[0]\n\nl = np.cos(DEC_rad)*np.sin(RA_delta_rad)\nm = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))\nprint \"l=\",l*(180/np.pi)\nprint \"m=\",m*(180/np.pi)\n\npoint_sources = np.zeros((len(RA_sources)-1,3))\npoint_sources[:,0] = Flux_sources\npoint_sources[:,1] = l[1:]\npoint_sources[:,2] = m[1:]",
"The source and phase centre coordinates are now given in degrees.",
"%matplotlib inline\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nplt.xlim([-4,4])\nplt.ylim([-4,4])\nplt.xlabel(\"$l$ [degrees]\")\nplt.ylabel(\"$m$ [degrees]\")\nplt.plot(l[0],m[0],\"bx\")\nplt.hold(\"on\")\nplt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),\"ro\") \ncounter = 1\nfor xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25): \n ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',\n verticalalignment='bottom') \n counter = counter + 1\n \nplt.grid()",
"Figure 4.5.6: Distribution of the simulated sky in the $l$,$m$ plane.\n4.5.1.3.3 Simulating an observation\nWe will now create a fully-filled $uv$-plane, and sample it using the EW-baseline track we created in the first section. We will be ignoring the $w$-term for the sake of simplicity.",
"u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)\nv = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True) \nuu, vv = np.meshgrid(u, v)\nzz = np.zeros(uu.shape).astype(complex)",
"We create the dimensions of our visibility plane.",
"s = point_sources.shape\nfor counter in xrange(1, s[0]+1):\n A_i = point_sources[counter-1,0]\n l_i = point_sources[counter-1,1]\n m_i = point_sources[counter-1,2]\n zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))\nzz = zz[:,::-1]",
"We create our fully-filled visibility plane. With a \"perfect\" interferometer, we could sample the entire $uv$-plane. Since we only have a finite amount of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilites $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write:\n$$V(u,v)=\\mathcal{F}{I(l,m)} = \\mathcal{F}{\\sum_k A_k \\delta(l-l_k,m-m_k)} = \\sum_k A_k e^{-2\\pi i (ul_i+vm_i)}$$\nLet's compute the total visibilities for our simulated sky.",
"u_track = u_60\nv_track = v_60\nz = np.zeros(u_track.shape).astype(complex) \n\ns = point_sources.shape\nfor counter in xrange(1, s[0]+1):\n A_i = point_sources[counter-1,0]\n l_i = point_sources[counter-1,1]\n m_i = point_sources[counter-1,2]\n z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))",
"Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.\n<span style=\"background-color:red\">TLG:RC: The graphs below intersect. Axis labels inside other\n graphs.</span>",
"plt.subplot(121)\nplt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \\\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Real part of visibilities\")\n\nplt.subplot(122)\nplt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \\\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Imaginary part of visibilities\")",
"Figure 4.5.7: Real and imaginary parts of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.\nWe now plot the sampled visibilites as a function of time-slots, i.e $V(u_t(t_s),v_t(t_s))$.",
"plt.subplot(121)\nplt.plot(z.real)\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Real: sampled visibilities\")\n\nplt.subplot(122)\nplt.plot(z.imag)\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Imag: sampled visibilities\")",
"Figure 4.5.8: Real and imaginary parts of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.",
"plt.subplot(121)\nplt.imshow(abs(zz),\n extent=[-1*(np.amax(np.abs(u_60)))-10,\n np.amax(np.abs(u_60))+10,\n -1*(np.amax(abs(v_60)))-10,\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Amplitude of visibilities\")\n\nplt.subplot(122)\nplt.imshow(np.angle(zz),\n extent=[-1*(np.amax(np.abs(u_60)))-10,\n np.amax(np.abs(u_60))+10,\n -1*(np.amax(abs(v_60)))-10,\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Phase of visibilities\")",
"Figure 4.5.9: Amplitude and Phase of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.",
"plt.subplot(121)\nplt.plot(abs(z))\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Abs: sampled visibilities\")\n\nplt.subplot(122)\nplt.plot(np.angle(z))\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Phase: sampled visibilities\")",
"Figure 4.5.10: Amplitude and Phase of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.\n4.5.1.3.4 \"Real-life\" visibility\nIn the following figure, we present a collection of visibility measurements taken with different baselines, as a function of time. These measurements come from a real LOFAR dataset observing Cygnus A (Fig. 4.4.11 ⤵), a powerful radiosource.\nEach color corresponds to a different baseline measurement, and consequently, a different sampling of the same visibility function along different uv-track.\n<a id=\"vis:fig:4411\"></a> <!---\\label{vis:eq:4411}--->\n<img src='figures/cygnusA.jpg' width=30%>\nFigure 4.5.11: Cygnus A at 21 cm.\n<a id=\"vis:fig:4412\"></a> <!---\\label{vis:eq:4412}--->\n<img src='figures/baselines.jpg' width=70%>\nFigure 4.5.12: Visibility amplitude as a function of time.\nFig. 4.5.12 ⤵ shows a plot of the amplitudes of all the visibility samples from our observation of Cygnus A. The large number of antennas makes its interpretation difficult. Even the inspection of single visibility's amplitude (i.e. a single $uv$ track) is hard to interpret due to the source's intrinsic complexity. Let us see what happens if we plot the same information as a function of the $uv$-distance, $r_{uv}$.\n<a id=\"vis:fig:4413\"></a> <!---\\label{vis:eq:4413}--->\n<img src='figures/baseline-uvdist.jpg' width=70%>\nFigure 4.5.13: Visibility amplitude as a function of $r_{uv}$.\nFig. 4.5.13 ⤵ display the same information as Fig. 4.5.12 ⤵ this time as a function of $r_{uv}$. It should be quite clear that, as in $\\S$ 4.4 ➞, we are stacking the radial plots of the visibility function. The interpretation of these radial plots provides us with information about the size of the source. For Fig. 4.5.13 ⤵ in particular, when the amplitude of the visibility goes to zero, one characteristic size of the source has been resolved.\nFrom these plots, it is clear that the more baselines we have, the better the sampling of the visibility function.\nIn the next section, we discuss how astronomers improve their $uv$ coverage.\n<p class=conclusion>\n <font size=4><b>Important things to remember</b></font>\n <br>\n <br>\n\n• Each individual baseline samples the visibility function along a single $uv$ track.<br>\n• The $uv$ tracks are ellipses whose parameters depends on the latitude and declination of observation.<br>\n• The polar (resp. equatorial) interferometer gives circular (linear) $uv$ tracks.<br>\n• Accumulating samples over time enhances the sampling of the visibility function, thus improving our knowledge of the source.<br>\n\n</p>\n\n\n\nNext: 4.5.2 UV Coverage: Improving Your Coverage\n\nFormat status:\n\n<span style=\"background-color:green\"> </span> : LF: 09/02/2017\n<span style=\"background-color:green\"> </span> : NC: 09/02/2017\n<span style=\"background-color:green\"> </span> : RF: 09/02/2017\n<span style=\"background-color:green\"> </span> : HF: 09/02/2017\n<span style=\"background-color:green\"> </span> : GM: 09/02/2017\n<span style=\"background-color:green\"> </span> : CC: 09/02/2017\n<span style=\"background-color:green\"> </span> : CL: 09/02/2017\n<span style=\"background-color:green\"> </span> : ST: 09/02/2017\n<span style=\"background-color:green\"> </span> : FN: 09/02/2017\n<span style=\"background-color:green\"> </span> : TC: 09/02/2017\n<span style=\"background-color:green\"> </span> : XX: 09/02/2017\n\n<div class=warn><b>Future Additions:</b></div>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WNoxchi/Kaukasos | FAI02_old/Lesson9/neural_sr_attempt2.ipynb | mit | [
"01 SEP 2017",
"%matplotlib inline\nimport importlib\n\nimport os, sys; sys.path.insert(1, os.path.join('../utils'))\n\nimport utils2; importlib.reload(utils2)\nfrom utils2 import *\n\nfrom scipy.optimize import fmin_l_bfgs_b\nfrom scipy.misc import imsave\nfrom keras import metrics\n\nfrom vgg16_avg import VGG16_Avg\n\nfrom bcolz_array_iterator import BcolzArrayIterator\n\nlimit_mem()\n\npath = '../data/'\ndpath = path\n\nrn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)\npreproc = lambda x: (x - rn_mean)[:, :, :, ::-1]\ndeproc = lambda x,s: np.clip(x.reshape(s)[:, :, :, ::-1] + rn_mean, 0, 255)\n\narr_lr = bcolz.open(dpath+'trn_resized_72.bc')\narr_hr = bcolz.open(path+'trn_resized_288.bc')\nparms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}\n\nparms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}\n\ndef conv_block(x, filters, size, stride=(2,2), mode='same', act=True):\n x = Convolution2D(filters, size, size, subsample=stride, border_mode=mode)(x)\n x = BatchNormalization(mode=2)(x)\n return Activation('relu')(x) if act else x\ndef res_block(ip, nf=64):\n x = conv_block(ip, nf, 3, (1,1))\n x = conv_block(x, nf, 3, (1,1), act=False)\n return merge([x, ip], mode='sum')\ndef up_block(x, filters, size):\n x = keras.layers.UpSampling2D()(x)\n x = Convolution2D(filters, size, size, border_mode='same')(x)\n x = BatchNormalization(mode=2)(x)\n return Activation('relu')(x)\ndef get_model(arr):\n inp=Input(arr.shape[1:])\n x=conv_block(inp, 64, 9, (1,1))\n for i in range(4): x=res_block(x)\n x=up_block(x, 64, 3)\n x=up_block(x, 64, 3)\n x=Convolution2D(3, 9, 9, activation='tanh', border_mode='same')(x)\n outp=Lambda(lambda x: (x+1)*127.5)(x)\n return inp,outp\n\ninp,outp=get_model(arr_lr)\n\nshp = arr_hr.shape[1:]\n\nvgg_inp=Input(shp)\nvgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))\nfor l in vgg.layers: l.trainable=False\n\ndef get_outp(m, ln): return m.get_layer(f'block{ln}_conv2').output\nvgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])\nvgg1 = vgg_content(vgg_inp)\nvgg2 = vgg_content(outp)\n\ndef mean_sqr_b(diff): \n dims = list(range(1,K.ndim(diff)))\n return K.expand_dims(K.sqrt(K.mean(diff**2, dims)), 0)\n\nw=[0.1, 0.8, 0.1]\ndef content_fn(x): \n res = 0; n=len(w)\n for i in range(n): res += mean_sqr_b(x[i]-x[i+n]) * w[i]\n return res\n\nm_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))\nm_sr.compile('adam', 'mae')\n\ndef train(bs, niter=10):\n targ = np.zeros((bs, 1))\n bc = BcolzArrayIterator(arr_hr, arr_lr, batch_size=bs)\n for i in range(niter):\n hr,lr = next(bc)\n m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)\n\nits = len(arr_hr)//16; its\n\narr_lr.chunklen, arr_hr.chunklen\n\n%time train(64, 18000)",
"Finally starting to understand this problem. So ResourceExhaustedError isn't system memory (or at least not only) but graphics memory. The card (obviously) cannot handle a batch size of 64. But batch size must be a multiple of chunk length, which here is 64.. so I have to find a way to reduce the chunk length down to something my system can handle: no more than 8.",
"arr_lr_c8 = bcolz.carray(arr_lr, chunklen=8, rootdir=path+'trn_resized_72_c8.bc')\narr_lr_c8.flush()\n\narr_hr_c8 = bcolz.carray(arr_hr, chunklen=8, rootdir=path+'trn_resized_288_c8.bc')\narr_hr_c8.flush()\n\narr_lr_c8.chunklen, arr_hr_c8.chunklen",
"That looks successful, now to redo the whole thing with the _c8 versions:",
"arr_lr_c8 = bcolz.open(path+'trn_resized_72_c8.bc')\narr_hr_c8 = bcolz.open(path+'trn_resized_288_c8.bc')\n\ninp,outp=get_model(arr_lr_c8)\n\nshp = arr_hr_c8.shape[1:]\n\nvgg_inp=Input(shp)\nvgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))\nfor l in vgg.layers: l.trainable=False\n \nvgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])\nvgg1 = vgg_content(vgg_inp)\nvgg2 = vgg_content(outp)\n\nm_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))\nm_sr.compile('adam', 'mae')\n\ndef train(bs, niter=10):\n targ = np.zeros((bs, 1))\n bc = BcolzArrayIterator(arr_hr_c8, arr_lr_c8, batch_size=bs)\n for i in range(niter):\n hr,lr = next(bc)\n m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)\n\n%time train(8, 18000) # not sure what exactly the '18000' is for\n\narr_lr.shape, arr_hr.shape, arr_lr_c8.shape, arr_hr_c8.shape\n\n# 19439//8 = 2429\n%time train(8, 2430)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ffmmjj/intro_to_data_science_workshop | 03-Delimitação de grupos de flores.ipynb | apache-2.0 | [
"Suponha que não soubéssemos quantas espécies diferentes estão presentes no dataset iris. Como poderíamos descobrir essa informação aproximadamente a partir dos dados presentes ali?\nUma solução possível seria plotar os dados em um scatterplot e tentar identificar visualmente a existência de grupos distintos. O datase Iris, no entanto, possui quatro dimensões de dados então não é possível visualizá-lo inteiramente (apenas um par de features por vez).\nPara visualizar o dataset completo como um scatterplot 2D, é possível usar técnicas de redução de dimensionalidade para comprimir o dataset para duas dimensões perdendo pouca informação estrutural.\nLeitura dos dados",
"import pandas as pd\n\niris = # Carregue o arquivo 'datasets/iris_without_classes.csv' \n\n# Exiba as primeiras cinco linhas usando o método head() para checar que não existe mais a coluna \"Class\"\n",
"Redução de dimensões\nUsaremos o algoritmo PCA do scikit-learn para reduzir o número de dimenSões para dois no dataset.",
"from sklearn.decomposition import PCA\n\nRANDOM_STATE=1234\npca_model = # Crie um objeto PCA com dois componentes\niris_2d = # Use o método fit_transform() para reduzir o dataset para duas dimensões\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Crie um scatterplot do dataset reduzido\n\n# Exiba o gráfico\n",
"Quantos grupos distintos você consegue identificar?\nDescoberta de clusters com K-Means\nO problem descrito anteriormente pode ser descrito como um problema de Clusterização. Clusterização permite encontrar grupos de exemplos que sejam semelhantes a outros exemplos no mesmo grupo mas diferentes de exemplos pertencentes a outros grupos.\nNeste exemplo, usaremos o algoritmo KMeans do scikit-learn para encontrar cluster no dataset.\nUma limitação do KMeans é que ele precisa receber o número esperado de clusters como argumento, então é necessário que se tenha algum conhecimento daquele domínio para chutar um número razoável de grupos ou pode-se testar diferentes números de clusters e ver qual deles apresenta o melhor resultado.",
"# Crie dois modelos KMeans: um com dois clusters e outro com três clusters\n# Armazene os identificadores previstos pelos modelos usando dois e três clusters\nfrom sklearn.cluster import KMeans\n\nmodel2 = # Crie um objeto KMeans que espere dois clusters\nlabels2 = # Infira o identificador de cluster de cada exemplo no dataset usando predict()\n\nmodel3 = # Crie um objeto KMeans que espere três clusters\nlabels3 = # Infira o identificador de cluster de cada exemplo no dataset usando predict()\n\n# Crie um scatterplot usando o dataset reduzido colorindo cada ponto de acordo com o cluster\n# ao qual ele pertence segundo o KMeans de dois clusters\n\n# Exiba o scatterplot\n\n\n# Crie um scatterplot usando o dataset reduzido colorindo cada ponto de acordo com o cluster\n# ao qual ele pertence segundo o KMeans de três clusters\n\n# Exiba o scatterplot\n",
"Recursos adicionais\nExistem técnicas como Análise de Silhueta para inferir automaticamente o número ótimo de clusters em um dataset. Este link ilustra com um exemplo como essa técnica pode ser implementada usando o scikit-learn.\nEm relação a redução de dimensionalidade, PCA é uma das técnicas mais usadas em experimentos iniciais. Algumas alternativas comuns ao KMeans e PCA são, respectivamente, DBSCAN e t-SNE. Para uma excelente explicação interativa sobre o t-SNE, veja esse link."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Featuretools/featuretools | docs/source/guides/advanced_custom_primitives.ipynb | bsd-3-clause | [
"Advanced Custom Primitives Guide",
"from featuretools.primitives import TransformPrimitive\nfrom featuretools.tests.testing_utils import make_ecommerce_entityset\nfrom woodwork.column_schema import ColumnSchema\nfrom woodwork.logical_types import Datetime, NaturalLanguage\nimport featuretools as ft\nimport numpy as np\nimport re",
"Primitives with Additional Arguments\nSome features require more advanced calculations than others. Advanced features usually entail additional arguments to help output the desired value. With custom primitives, you can use primitive arguments to help you create advanced features.\nString Count Example\nIn this example, you will learn how to make custom primitives that take in additional arguments. You will create a primitive to count the number of times a specific string value occurs inside a text.\nFirst, derive a new transform primitive class using TransformPrimitive as a base. The primitive will take in a text column as the input and return a numeric column as the output, so set the input type to a Woodwork ColumnSchema with logical type NaturalLanguage and the return type to a Woodwork ColumnSchema with the semantic tag 'numeric'. The specific string value is the additional argument, so define it as a keyword argument inside __init__. Then, override get_function to return a primitive function that will calculate the feature.\nFeaturetools' primitives use Woodwork's ColumnSchema to control the input and return types of columns for the primitive. For more information about using the Woodwork typing system in Featuretools, see the Woodwork Typing in Featuretools guide.",
"class StringCount(TransformPrimitive):\n '''Count the number of times the string value occurs.'''\n name = 'string_count'\n input_types = [ColumnSchema(logical_type=NaturalLanguage)]\n return_type = ColumnSchema(semantic_tags={'numeric'})\n\n def __init__(self, string=None):\n self.string = string\n\n def get_function(self):\n def string_count(column):\n assert self.string is not None, \"string to count needs to be defined\"\n # this is a naive implementation used for clarity\n counts = [text.lower().count(self.string) for text in column]\n return counts\n\n return string_count",
"Now you have a primitive that is reusable for different string values. For example, you can create features based on the number of times the word \"the\" appears in a text. Create an instance of the primitive where the string value is \"the\" and pass the primitive into DFS to generate the features. The feature name will automatically reflect the string value of the primitive.",
"es = make_ecommerce_entityset()\n\nfeature_matrix, features = ft.dfs(\n entityset=es,\n target_dataframe_name=\"sessions\",\n agg_primitives=[\"sum\", \"mean\", \"std\"],\n trans_primitives=[StringCount(string=\"the\")],\n)\n\nfeature_matrix[[\n 'STD(log.STRING_COUNT(comments, string=the))',\n 'SUM(log.STRING_COUNT(comments, string=the))',\n 'MEAN(log.STRING_COUNT(comments, string=the))',\n]]",
"Features with Multiple Outputs\nSome calculations output more than a single value. With custom primitives, you can make the most of these calculations by creating a feature for each output value.\nCase Count Example\nIn this example, you will learn how to make custom primitives that output multiple features. You will create a primitive that outputs the count of upper case and lower case letters of a text.\nFirst, derive a new transform primitive class using TransformPrimitive as a base. The primitive will take in a text column as the input and return two numeric columns as the output, so set the input type to a Woodwork ColumnSchema with logical type NaturalLanguage and the return type to a Woodwork ColumnSchema with semantic tag 'numeric'. Since this primitive returns two columns, also set number_output_features to two. Then, override get_function to return a primitive function that will calculate the feature and return a list of columns.",
"class CaseCount(TransformPrimitive):\n '''Return the count of upper case and lower case letters of a text.'''\n name = 'case_count'\n input_types = [ColumnSchema(logical_type=NaturalLanguage)]\n return_type = ColumnSchema(semantic_tags={'numeric'})\n number_output_features = 2\n\n def get_function(self):\n def case_count(array):\n # this is a naive implementation used for clarity\n upper = np.array([len(re.findall('[A-Z]', i)) for i in array])\n lower = np.array([len(re.findall('[a-z]', i)) for i in array])\n return upper, lower\n\n return case_count",
"Now you have a primitive that outputs two columns. One column contains the count for the upper case letters. The other column contains the count for the lower case letters. Pass the primitive into DFS to generate features. By default, the feature name will reflect the index of the output.",
"feature_matrix, features = ft.dfs(\n entityset=es,\n target_dataframe_name=\"sessions\",\n agg_primitives=[],\n trans_primitives=[CaseCount],\n)\n\nfeature_matrix[[\n 'customers.CASE_COUNT(favorite_quote)[0]',\n 'customers.CASE_COUNT(favorite_quote)[1]',\n]]",
"Custom Naming for Multiple Outputs\nWhen you create a primitive that outputs multiple features, you can also define custom naming for each of those features.\nHourly Sine and Cosine Example\nIn this example, you will learn how to apply custom naming for multiple outputs. You will create a primitive that outputs the sine and cosine of the hour.\nFirst, derive a new transform primitive class using TransformPrimitive as a base. The primitive will take in the time index as the input and return two numeric columns as the output. Set the input type to a Woodwork ColumnSchema with a logical type of Datetime and the semantic tag 'time_index'. Next, set the return type to a Woodwork ColumnSchema with semantic tag 'numeric' and set number_output_features to two. Then, override get_function to return a primitive function that will calculate the feature and return a list of columns. Also, override generate_names to return a list of the feature names that you define.",
"class HourlySineAndCosine(TransformPrimitive):\n '''Returns the sine and cosine of the hour.'''\n name = 'hourly_sine_and_cosine'\n input_types = [ColumnSchema(logical_type=Datetime, semantic_tags={'time_index'})]\n return_type = ColumnSchema(semantic_tags={'numeric'})\n\n number_output_features = 2\n\n def get_function(self):\n def hourly_sine_and_cosine(column):\n sine = np.sin(column.dt.hour)\n cosine = np.cos(column.dt.hour)\n return sine, cosine\n\n return hourly_sine_and_cosine\n\n def generate_names(self, base_feature_names):\n name = self.generate_name(base_feature_names)\n return f'{name}[sine]', f'{name}[cosine]'",
"Now you have a primitive that outputs two columns. One column contains the sine of the hour. The other column contains the cosine of the hour. Pass the primitive into DFS to generate features. The feature name will reflect the custom naming you defined.",
"feature_matrix, features = ft.dfs(\n entityset=es,\n target_dataframe_name=\"log\",\n agg_primitives=[],\n trans_primitives=[HourlySineAndCosine],\n)\n\nfeature_matrix.head()[[\n 'HOURLY_SINE_AND_COSINE(datetime)[sine]',\n 'HOURLY_SINE_AND_COSINE(datetime)[cosine]',\n]]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
norsween/data-science | springboard-answers-to-exercises/sliderule_dsi_inferential_statistics_exercise_3_answers.ipynb | gpl-3.0 | [
"Hospital Readmissions Data Analysis and Recommendations for Reduction\nBackground\nIn October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.\nExercise Directions\nIn this exercise, you will:\n+ critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate\n+ construct a statistically sound analysis and make recommendations of your own \nMore instructions provided below. Include your work in this notebook and submit to your Github account. \nResources\n\nData source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3\nMore information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html\nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport bokeh.plotting as bkp\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# read in readmissions data provided\nhospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')",
"Preliminary Analysis",
"# deal with missing and inconvenient portions of data \nclean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']\nclean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)\nclean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')\n\n# generate a scatterplot for number of discharges vs. excess rate of readmissions\n# lists work better with matplotlib scatterplot function\nx = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]\ny = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])\n\nfig, ax = plt.subplots(figsize=(8,5))\nax.scatter(x, y,alpha=0.2)\n\nax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)\nax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)\n\nax.set_xlim([0, max(x)])\nax.set_xlabel('Number of discharges', fontsize=12)\nax.set_ylabel('Excess rate of readmissions', fontsize=12)\nax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)\n\nax.grid(True)\nfig.tight_layout()",
"Preliminary Report\nRead the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.\nA. Initial observations based on the plot above\n+ Overall, rate of readmissions is trending down with increasing number of discharges\n+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)\n+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) \nB. Statistics\n+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 \n+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 \nC. Conclusions\n+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates. \n+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.\nD. Regulatory policy recommendations\n+ Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.\n+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.\nANSWERS to Exercise 3:\nQuestion A. Do you agree with the above analysis and recommendations? Why or why not?\n At first glance, it appears that the analysis hold weight but one problem it\n didn't address is whether there is enough evidence to conclude is true.\n So, at this point I can't categorically say that I agree with the conclusion\n until I conduct a hypothesis test and test the p-value of the sample population.\n After this test, then I can answer this question.\n\nQuestion B. Provide support for your arguments and your own recommendations with \n a statistically sound analysis:\n\n\nSetup an appropriate hypothesis test.\nThe hypothesis test is basically comprised of a NULL HYPOTHESIS and an \n ALTERNATIVE HYPOTHESIS. The NULL HYPOTHESIS is usually a statement of 'no effect' or \n 'no difference' and is the statement being tested based on the p-value of the sample \n data. If the p-value is less than or equal to the level of significance then the NULL \n HYPOTHESIS can be neglected, which in turn signifies that there is enough evidence \n in the data to support the ALTERNATIVE HYPOTHESIS.\nFor this particular set of data, looking at the scatter plot, it appears that there are\n a lot more hospitals with a relatively small number of discharges compared to hospitals\n with a large number of discharges. We can equate this arbitrarily to small hospitals \n vs. 
large hospitals.\nSince the original conclusion correlate hospital capacity (number of discharges) with \n readmission rate, an appropriate null hypothesis should involve these two:\nNULL HYPOTHESIS:\n Ho:μ1=μ2 where μ1 is the average rate of readmission of hospitals with < 100 discharges\n and μ2 is the average rate of readmission of hospitals with > 1000 discharges.\n In other words, the null hypothesis states that there is no difference in the average rate\n of readmissions between hospitals with less than 100 discharges or hospitals with greater \n than 1000 discharges.\nALTERNATIVE HYPOTHESIS:\n Ho:μ1≠μ2 where μ1 is the average rate of readmission of hospitals with < 100 discharges\n and μ2 is the average rate of readmission of hospitals with > 1000 discharges.\n In other words, the alternative hypothesis states that there is a significant difference\n in average hospital readmission rates in hospitals with less than 100 discharges and hospitals\n with greater than 1000 discharges.",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport bokeh.plotting as bkp\nimport scipy.stats as st\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# read in readmissions data provided\nhospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')\n\n# Set-up the hypothesis test. \n# Get the two groups of hospitals, one with < 100 discharges and the other with > 1000 discharges.\n# Get the hospitals with small discharges first. \n# First statement deals with missing data. \nclean_hospital_read_df = hospital_read_df[(hospital_read_df['Number of Discharges'] != 'Not Available')]\nhosp_with_small_discharges = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'].astype(int) < 100]\nhosp_with_small_discharges = hosp_with_small_discharges[hosp_with_small_discharges['Number of Discharges'].astype(int) != 0]\nhosp_with_small_discharges.sort_values(by = 'Number of Discharges', ascending = False)\n\n# Now get the hospitals with relatively large discharges.\nhosp_with_large_discharges = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'].astype(int) > 1000]\nhosp_with_large_discharges = hosp_with_large_discharges[hosp_with_large_discharges['Number of Discharges'].astype(int) != 0]\nhosp_with_large_discharges.sort_values(by = 'Number of Discharges', ascending = False)\n\n# Now calculate the statistical significance and p-value\nsmall_hospitals = hosp_with_small_discharges['Excess Readmission Ratio']\nlarge_hospitals = hosp_with_large_discharges['Excess Readmission Ratio']\nresult = st.ttest_ind(small_hospitals,large_hospitals, equal_var=False)\nprint(\"Statistical significance is equal to : %6.4F, P-value is equal to: %5.14F\" % (result[0],result[1]))",
"Report statistical significance for α = .01:\nSince the P-value < 0.01, we can reject the null hypothesis that states \nthat there are no significant differences between the two hospital groups\noriginally mentioned in conclusion.\n\n\n\nDiscuss statistical significance and practical significance:\nThe hypothesis test has shown that there is a difference \nbetween the two groups being compared in the preliminary report: of hospitals\nwith readmissions rate < 100 and hospitals with readmissions rate > 1000.\nIt may be that the difference between the two groups is not practically\nsignificant since the samples we used are quite large and \nlarge sample sizes can make hypothesis testing very sensitive to even slight \ndifferences in the data. The hypothesis test prove that there is a strong \nlevel of confidence that the samples are not statistically identical.\n\n\n\nLook at the scatterplot above. What are the advantages and disadvantages of\n using this plot to convey information?\nTo me, the main advantage of a scatterplot is the range of data flow,i.e., the maximum\nand minimum values can be easily determined. And also, one can easily see the relationship\nbetween two variables. But the one drawback to it is that one can not qualitatively visualize\nthe significance in differences.\n\n\n\nConstruct another plot that conveys the same information in a more direct manner:\nBelow I've constructed a hexabgon binning plot that can easily show the relative counts\nof a combination of data points for readmission rate and number of discharges.",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport bokeh.plotting as bkp\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# read in readmissions data provided\nhospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')\n\n# deal with missing and inconvenient portions of data \nclean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']\nclean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')\n\n# generate a scatterplot for number of discharges vs. excess rate of readmissions\n# lists work better with matplotlib scatterplot function\nx = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]\ny = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])\n\nfig, ax = plt.subplots(figsize=(8,5))\nim = ax.hexbin(x, y,gridsize=20)\nfig.colorbar(im, ax=ax)\n\nax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)\nax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)\n\nax.set_xlabel('Number of discharges', fontsize=10)\nax.set_ylabel('Excess rate of readmissions', fontsize=10)\nax.set_title('Hexagon Bin Plot of number of discharges vs. excess rate of readmissions', fontsize=12, fontweight='bold')\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ninadhw/ninadhw.github.io | notebooks/getting_started_with_keras.ipynb | cc0-1.0 | [
"Getting started with keras\nThis tutorial is inspired from https://keras.io\nSequential model\nKeras uses slightly different approach for initializing and defining layers. This approach is called Sequential model. Sequential model is a linear stack of several layers of neural network to be designed. So to defining each and every layer in the neural network we use Sequential class. This can be done in two different ways as shown below.",
"#\n# Import required packages\n#\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\n\nfrom IPython.display import display, Image\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport random",
"Either define entire neural network inside the constructor of the Sequential class as below,",
"#\n# Network model can be initialized using following syntax in the constructor itself\n#\nmodel1 = Sequential([\n Dense(32,input_dim=784),\n Activation(\"relu\"),\n Dense(10),\n Activation(\"softmax\")\n])",
"Or add layers to the network one by one as per convinience.",
"#\n# Layers to the network can be added dynamically\n#\nmodel2 = Sequential()\nmodel2.add(Dense(32, input_dim=784))\nmodel2.add(Activation('relu'))\nmodel2.add(Dense(10))\nmodel2.add(Activation('softmax'))",
"The model needs to know what input shape it should expect i.e whether input is 28x28 (746 pixels) image or some numeric text or some other size features. \nFor this reason, the first layer in a <span style=\"color:red;font-weight:bold\">Sequential model</span> (and only the first, because following layers can do automatic shape inference from the shape of previous layers) needs to receive information about its input shape hence first <span style=\"color:red;font-weight:bold\">model.add</span> function has extra argument of <span style=\"color:red;font-weight:bold\">input_dim</span>. \nThere are several possible ways to do this:\n-- pass an <span style=\"color:red;font-weight:bold\">input_shape</span> argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In <span style=\"color:red;font-weight:bold\">input_shape</span>, the batch dimension is not included. \ne.g. input_shape=(784,10) -> neural network shall have 10 inputs of 784 length each\n input_shape=(784,) or input_shape=(784,None) -> neural network shall have any positive number of inputs with 784 length each\n\n-- pass instead a batch_input_shape argument, where the batch dimension is included. This is useful for specifying a fixed batch size (e.g. with stateful RNNs).\n-- some 2D layers, such as Dense, support the specification of their input shape via the argument input_dim, and some 3D temporal layers support the arguments input_dim and input_length.\nAs such, the following three snippets are strictly equivalent:",
"model1 = Sequential()\nmodel1.add(Dense(32, input_shape=(784,)))\n\nmodel2 = Sequential()\nmodel2.add(Dense(32, batch_input_shape=(None, 784)))\n# note that batch dimension is \"None\" here,\n# so the model will be able to process batches of any size with each input of length 784.\n\nmodel3 = Sequential()\nmodel3.add(Dense(32, input_dim=784))",
"Note that <span style=\"font-weight:bold\">input_dim=784 is same as input_shape=(784,)</span>\nThe Merge layer\nMultiple Sequential instances can be merged into a single output via a Merge layer. The output is a layer that can be added as first layer in a new Sequential model. For instance, here's a model with two separate input branches getting merged:",
"Image(\"keras_examples/keras_merge.png\")\n\nfrom keras.layers import Merge\n\nleft_branch = Sequential()\nleft_branch.add(Dense(32, input_dim=784))\n\nright_branch = Sequential()\nright_branch.add(Dense(32, input_dim=784))\n\nmerged = Merge([left_branch, right_branch], mode='concat')\n\nfinal_model = Sequential()\nfinal_model.add(merged)\nfinal_model.add(Dense(10, activation='softmax'))",
"Such a two-branch model can then be trained via e.g.:",
"final_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nfinal_model.fit([input_data_1, input_data_2], targets) # we pass one data array per model input",
"The Merge layer supports a number of pre-defined modes:\n<ul>\n<li>sum (default): element-wise sum</li>\n<li>concat: tensor concatenation. You can specify the concatenation axis via the argument concat_axis.</li>\n<li>mul: element-wise multiplication</li>\n<li>ave: tensor average</li>\n<li>dot: dot product. You can specify which axes to reduce along via the argument dot_axes.</li>\n<li>cos: cosine proximity between vectors in 2D tensors.</li>\n</ul>\n\nYou can also pass a function as the mode argument, allowing for arbitrary transformations:",
"merged = Merge([left_branch, right_branch], mode=lambda x: x[0] - x[1])",
"Now you know enough to be able to define almost any model with Keras. For complex models that cannot be expressed via Sequential and Merge, you can use the functional API.\nCompilation\nBefore training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments:\n<ul>\n<li>an optimizer, it is a type of optimizer to be used e.g. gradient descent. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. <a href=\"https://keras.io/optimizers\" target=\"_blank\">See: optimizers.</a> </li>\n<li>a loss function, it is an error function to be optimized e.g. squered error function or cross-entropy function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as categorical_crossentropy or mse), or it can be an objective function. <a href=\"https://keras.io/objectives\" target=\"_blank\">See: objectives.</a></li>\n<li>a list of metrics, to evaluate performance of the network. For any classification problem you will want to set this to metrics=['accuracy']. A metric could be the string identifier of an existing metric or a custom metric function. Custom metric function should return either a single tensor value or a dict metric_name -> metric_value. <a href=\"https://keras.io/metrics\" target=\"_blank\">See: metrics.</a></li>",
"# for a multi-class classification problem\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# for a binary classification problem\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n# for a mean squared error regression problem\nmodel.compile(optimizer='rmsprop',\n loss='mse')\n\n# for custom metrics\nimport keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n return K.mean(y_pred)\n\ndef false_rates(y_true, y_pred):\n false_neg = ...\n false_pos = ...\n return {\n 'false_neg': false_neg,\n 'false_pos': false_pos,\n }\n\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy', mean_pred, false_rates])",
"Training\nKeras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function. <a href=\"https://keras.io/models/sequential\" target=\"_blank\">Read its documentation here.</a>",
"# for a single-input model with 2 classes (binary):\n\nmodel = Sequential()\nmodel.add(Dense(1, input_dim=784, activation='sigmoid'))\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n# generate dummy data\nimport numpy as np\ndata = np.random.random((1000, 784))\nlabels = np.random.randint(2, size=(1000, 1))\n\n# train the model, iterating on the data in batches\n# of 32 samples\nmodel.fit(data, labels, nb_epoch=10, batch_size=32)\n\n# for a multi-input model with 10 classes:\n\nleft_branch = Sequential()\nleft_branch.add(Dense(32, input_dim=784))\n\nright_branch = Sequential()\nright_branch.add(Dense(32, input_dim=784))\n\nmerged = Merge([left_branch, right_branch], mode='concat')\n\nmodel = Sequential()\nmodel.add(merged)\nmodel.add(Dense(10, activation='softmax'))\n\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# generate dummy data\nimport numpy as np\nfrom keras.utils.np_utils import to_categorical\ndata_1 = np.random.random((1000, 784))\ndata_2 = np.random.random((1000, 784))\n\n# these are integers between 0 and 9\nlabels = np.random.randint(10, size=(1000, 1))\n# we convert the labels to a binary matrix of size (1000, 10)\n# for use with categorical_crossentropy\nlabels = to_categorical(labels, 10)\n\n# train the model\n# note that we are passing a list of Numpy arrays as training data\n# since the model has 2 inputs\nmodel.fit([data_1, data_2], labels, nb_epoch=10, batch_size=32)",
"Example\nFollowing is an example implementation of multi-layer perceptron on MNIST data set\nFirst initialize all the libraries rerquired",
"# %load mnist_mlp.py\n'''Trains a simple deep NN on the MNIST dataset.\n\nGets to 98.40% test accuracy after 20 epochs\n(there is *a lot* of margin for parameter tuning).\n2 seconds per epoch on a K520 GPU.\n'''\n\nfrom __future__ import print_function\nimport numpy as np\nnp.random.seed(1337) # for reproducibility\n\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import RMSprop\nfrom keras.utils import np_utils",
"Simple function to display testdata with prediction results on the test dataset",
"def show_prediction_results(X_test,predicted_labels):\n for i,j in enumerate(random.sample(range(len(X_test)),10)):\n plt.subplot(5,2,i+1)\n plt.axis(\"off\")\n plt.title(\"Predicted labels is \"+str(np.argmax(predicted_labels[j])))\n plt.imshow(X_test[j].reshape(28,28))",
"Generating and structuring dataset for training and testing. We will be using 28x28 images from MNIST dataset of about 60000 for training and 10000 for testing. We will use batch size of 128, for classifying 10 numbers in the images. For small computations 20 epochs are used to these can be increased for more accuracy.",
"batch_size = 128\nnb_classes = 10\nnb_epoch = 20\n\n# the data, shuffled and split between train and test sets\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_train /= 255\nX_test /= 255\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices\nY_train = np_utils.to_categorical(y_train, nb_classes)\nY_test = np_utils.to_categorical(y_test, nb_classes)",
"Start building Sequiential model in keras. We will use 3 layer MLP model for modelling the dataset.",
"model = Sequential()\nmodel.add(Dense(512, input_shape=(784,)))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\nmodel.summary()",
"Compiling model is configuring model with performance parameters such as loss function. metric and optimizer",
"model.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])",
"<span style=\"color:red;font-weight:bold\">fit function for model</span> fits the training data to neural network configured before",
"history = model.fit(X_train, Y_train,\n batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=0, validation_data=(X_test, Y_test))\n\n# Let's save the model in local file to fetch at later point in time to skip computations\n# and directly start testing if need be\nmodel.save_weights('mnist_mlp.hdf5')\nwith open('mnist_mlp.json', 'w') as f:\n f.write(model.to_json())",
"<span style=\"color:red;font-weight:bold\">predict function for model</span> predicts labels or values for the testing data provided",
"predicted_labels = model.predict(X_test,verbose=0)\nscore = model.evaluate(X_test, Y_test, verbose=0)\nprint('Test score:', score[0])\nprint('Test accuracy:', score[1])\n\n# Let's visualize some results randomly picked from testdata set and predicted labels for them\n#\nshow_prediction_results(X_test,predicted_labels)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mohanprasath/Course-Work | coursera/applied_data_science_capstone/Week 3 Applied Data Science Capstone.ipynb | gpl-3.0 | [
"Part 1 - Extracting Table from Wiki Page",
"import requests\nimport lxml\n\nimport pandas as pd\nimport numpy as np\n\nfrom bs4 import BeautifulSoup\n\nwiki_page = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text\nsoup = BeautifulSoup(wiki_page, 'lxml')\ntable = soup.find('table')\n# table\n\ntoronto_table = soup.find('table',{'class':'wikitable sortable'})\nlinks = toronto_table.findAll('td')\n\npincodes = []\ncount = 0\nfor x in links:\n if count == 0:\n x1 = x.text\n count += 1\n elif count == 1:\n x2 = x.text\n count +=1\n elif count == 2:\n x3 = x.text\n x3 = x3.replace('\\n','')\n count = 0\n if x3 == 'Not assigned':\n x3 = x2\n if x2 != 'Not assigned': \n pincodes.append((x1,x2,x3))\n# print (pincodes)\n\nresult = {}\nfor x in pincodes:\n if x[0] in result:\n result[x[0]] = [x[0], x[1], result[x[0]][1] + ', ' + x[2]]\n else:\n result[x[0]] = [x[0], x[1], x[2]]\n \nresults = {}\nfor count, x in enumerate(result):\n results[count] = [x, result[x][1], result[x][2]]\n \n# print(results)\n\ntoronto_data = pd.DataFrame.from_dict(results, orient='index', columns=['PostalCode', 'Borough', 'Neighborhood'])\ntoronto_data\n\n# Trail - Not WOrking or taking too long time\nimport geocoder # import geocoder\n\nupdate_results = {}\nfor postal_code in toronto_data['PostalCode']:\n\n lat_lng_coords = None\n while(lat_lng_coords is None):\n geo_info = geocoder.google('{}, Toronto, Ontario'.format(postal_code))\n lat_lng_coords = geo_info.latlng\n\n latitude = lat_lng_coords[0]\n longitude = lat_lng_coords[1]\n update_results[postal_code] = {\"latitude\":latitude, \"longitude\":longitude}\n\ntoronto_data['PostalCode']",
"Part 2 - Adding Latitude and Longitude",
"coordinates = pd.read_csv('http://cocl.us/Geospatial_data')\ncoordinates.rename(columns={'Postal Code': 'PostalCode'}, inplace=True)\nfinal_result = pd.merge(toronto_data, coordinates, on='PostalCode')\nfinal_result",
"Part 3 - Clustering",
"import matplotlib.pyplot as plt\n\nlat_lons = []\nlats = []\nlons = []\nfor index, row in final_result.iterrows():\n lat_lons.append([row['Longitude'], row['Latitude']])\n lats.append(row['Latitude'])\n lons.append(row['Longitude'])\n\nplt.scatter(lons, lats)\nplt.xlabel(\"Longitude\")\nplt.ylabel(\"Latitude\")\nplt.title(\"Toronto Postal Codes Geo Location\")\nplt.show()",
"Above plots shows the regions in Toronto. However the clusters are not visible\nclearly through visual analysis. It requires detailes Clusteing algorithms like\nk-Means for a good analysis. Please refer the following code for more info.",
"# I have Referred some clustering examples from Kaggle\n# https://www.kaggle.com/xxing9703/kmean-clustering-of-latitude-and-longitude\n\nimport folium \n\ntoronto_latitude = 43.6532; toronto_longitude = -79.3832\nmap_toronto = folium.Map(location = [toronto_latitude, toronto_longitude], zoom_start = 10.7)\n\n# adding markers to map\nfor lat, lng, borough, neighborhood in zip(final_result['Latitude'], final_result['Longitude'], final_result['Borough'], final_result['Neighborhood']):\n label = '{}, {}'.format(neighborhood, borough)\n label = folium.Popup(label, parse_html=True)\n folium.CircleMarker(\n [lat, lng],\n radius=5,\n popup=label,\n color='red',\n fill=True,\n fill_color='#110000',\n fill_opacity=0.7).add_to(map_toronto) \n \n\nmap_toronto",
""
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bretthandrews/marvin | docs/sphinx/jupyter/dap_maps.ipynb | bsd-3-clause | [
"Marvin Maps\nMarvin Maps is how you deal with the DAP MAPS FITS files easily. You can retrieve maps in several ways. Let's take a took. \nFrom a Marvin Maps\nMarvin Maps takes the same inputs as cube: filename, plateifu, or mangaid. It also accepts keywords bintype and template_kin. These uniquely define a DAP MAPS file. By default, Marvin will load a MAPS file of bintype=SPX and template_kin=GAU-MILESHC for MPL-5. For MPL-4, the defaults are bintype=NONE, and template_kin=MIUSCAT-THIN.",
"# import the maps\nfrom marvin.tools.maps import Maps\n\n# Load a MPL-5 map\nmapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'\n# Let's get a default map of\n\nmaps = Maps(filename=mapfile)\nprint(maps)",
"Once you have a maps object, you can access the raw maps file and header and extensions via maps.header and maps.data. Alternatively, you can access individual maps using the getMap method. getMap works by specifying a parameter and a channel. The parameter and channels names are equivalent to those found in the MAPS FITS extensions and headers, albeit lowercased.",
"# Let's grab the H-alpha flux emission line map\nhaflux = maps.getMap('emline_gflux', channel='ha_6564')\nprint(haflux)",
"We can easily plot the map using the internal plot function. Currently maps are plotted using some default Matplotlib color schemes and scaling.",
"# turn on interactive plotting\n%matplotlib notebook\n\n# let's plot it\nhaflux.plot()",
"Try Yourself Now try grabbing and plotting the map for stellar velocity in the cell below.\nYou can access the individual values, ivar, and mask for your map via the .value, .ivar, and .mask attributes. These are 2d-array numpy arrays.",
"haflux.value, haflux.mask",
"Let's replot the Halpha flux map but exclude all regions that have a non-zero mask. We need the numpy Python package for this.",
"import numpy as np\n# select the locations where the mask is non-zero\nbadvals = np.where(haflux.mask > 0)\n# set those values to a numpy nan. \nhaflux.value[badvals] = np.nan\n# check the min and max\nprint('min', np.nanmin(haflux.value), 'max', np.nanmax(haflux.value))\nhaflux.plot()",
"From the maps object, we can also easily plot the ratio between two maps, e.g. emission-line ratios, using the getMapRatio method. Map ratios are Map objects the same as any other, so you can access their array values or plot them",
"# Let's look at the NII-to-Halpha emission-line ratio map\nniiha = maps.getMapRatio('emline_gflux', 'nii_6585', 'ha_6564')\nprint(niiha)\nniiha.plot()",
"Try Yourself Modify the above to display the map for the emission-line ratio OIII/Hbeta\nFrom a Marvin Cube",
"# import the Cube tool\nfrom marvin.tools.cube import Cube\n\n# point to your file\nfilename ='/Users/Brian/Work/Manga/redux/v2_0_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'\n\n# get a cube\ncube = Cube(filename=filename)\nprint(cube)",
"Once we have a cube, we can get its maps using the getMaps method. getMaps is just a wrapper to the Marvin Maps Tool. Once we have the maps, we can do all the same things as before.",
"maps = cube.getMaps()\nprint(maps)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
machinelearningnanodegree/stanford-cs231 | solutions/vijendra/assignment1/two_layer_net.ipynb | mit | [
"Implementing a Neural Network\nIn this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.",
"# A bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))",
"We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.",
"# Create a small net and some toy data to check your implementations.\n# Note that we set the random seed for repeatable experiments.\n\ninput_size = 4\nhidden_size = 10\nnum_classes = 3\nnum_inputs = 5\n\ndef init_toy_model():\n np.random.seed(0)\n return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)\n\ndef init_toy_data():\n np.random.seed(1)\n X = 10 * np.random.randn(num_inputs, input_size)\n y = np.array([0, 1, 2, 2, 1])\n return X, y\n\nnet = init_toy_model()\nX, y = init_toy_data()",
"Forward pass: compute scores\nOpen the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. \nImplement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.",
"scores = net.loss(X)\nprint 'Your scores:'\nprint scores\nprint\nprint 'correct scores:'\ncorrect_scores = np.asarray([\n [-0.81233741, -1.27654624, -0.70335995],\n [-0.17129677, -1.18803311, -0.47310444],\n [-0.51590475, -1.01354314, -0.8504215 ],\n [-0.15419291, -0.48629638, -0.52901952],\n [-0.00618733, -0.12435261, -0.15226949]])\nprint correct_scores\nprint\n\n# The difference should be very small. We get < 1e-7\nprint 'Difference between your scores and correct scores:'\nprint np.sum(np.abs(scores - correct_scores))",
"Forward pass: compute loss\nIn the same function, implement the second part that computes the data and regularizaion loss.",
"loss, _ = net.loss(X, y, reg=0.1)\ncorrect_loss = 1.30378789133\n\n# should be very small, we get < 1e-12\nprint 'Difference between your loss and correct loss:'\nprint np.sum(np.abs(loss - correct_loss))",
"Backward pass\nImplement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:",
"from cs231n.gradient_check import eval_numerical_gradient\n\n# Use numeric gradient checking to check your implementation of the backward pass.\n# If your implementation is correct, the difference between the numeric and\n# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.\n\nloss, grads = net.loss(X, y, reg=0.1)\n\n# these should all be less than 1e-8 or so\nfor param_name in grads:\n f = lambda W: net.loss(X, y, reg=0.1)[0]\n param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)\n print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))",
"Train the network\nTo train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.\nOnce you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.",
"net = init_toy_model()\nstats = net.train(X, y, X, y,\n learning_rate=1e-1, reg=1e-5,\n num_iters=100, verbose=False)\n\nprint 'Final training loss: ', stats['loss_history'][-1]\n\n# plot the loss history\nplt.plot(stats['loss_history'])\nplt.xlabel('iteration')\nplt.ylabel('training loss')\nplt.title('Training Loss history')\nplt.show()",
"Load the data\nNow that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.",
"from cs231n.data_utils import load_CIFAR10\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n \"\"\"\n Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n it for the two-layer neural net classifier. These are the same steps as\n we used for the SVM, but condensed to a single function. \n \"\"\"\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n # Normalize the data: subtract the mean image\n mean_image = np.mean(X_train, axis=0)\n X_train -= mean_image\n X_val -= mean_image\n X_test -= mean_image\n\n # Reshape data to rows\n X_train = X_train.reshape(num_training, -1)\n X_val = X_val.reshape(num_validation, -1)\n X_test = X_test.reshape(num_test, -1)\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\n\n# Invoke the above function to get our data.\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()\nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Validation labels shape: ', y_val.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape",
"Train a network\nTo train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.",
"input_size = 32 * 32 * 3\nhidden_size = 50\nnum_classes = 10\nnet = TwoLayerNet(input_size, hidden_size, num_classes)\n\n# Train the network\nstats = net.train(X_train, y_train, X_val, y_val,\n num_iters=1000, batch_size=200,\n learning_rate=1e-4, learning_rate_decay=0.95,\n reg=0.5, verbose=True)\n\n# Predict on the validation set\nval_acc = (net.predict(X_val) == y_val).mean()\nprint 'Validation accuracy: ', val_acc\n\n",
"Debug the training\nWith the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.\nOne strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.\nAnother strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.",
"# Plot the loss function and train / validation accuracies\nplt.subplot(2, 1, 1)\nplt.plot(stats['loss_history'])\nplt.title('Loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(stats['train_acc_history'], label='train')\nplt.plot(stats['val_acc_history'], label='val')\nplt.title('Classification accuracy history')\nplt.xlabel('Epoch')\nplt.ylabel('Clasification accuracy')\nplt.show()\n\nfrom cs231n.vis_utils import visualize_grid\n\n# Visualize the weights of the network\n\ndef show_net_weights(net):\n W1 = net.params['W1']\n W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)\n plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))\n plt.gca().axis('off')\n plt.show()\n\nshow_net_weights(net)",
"Tune your hyperparameters\nWhat's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.\nTuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, numer of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.\nApproximate results. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.\nExperiment: You goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).",
"best_net = None # store the best model into this \n\n#################################################################################\n# TODO: Tune hyperparameters using the validation set. Store your best trained #\n# model in best_net. #\n# #\n# To help debug your network, it may help to use visualizations similar to the #\n# ones we used above; these visualizations will have significant qualitative #\n# differences from the ones we saw above for the poorly tuned network. #\n# #\n# Tweaking hyperparameters by hand can be fun, but you might find it useful to #\n# write code to sweep through possible combinations of hyperparameters #\n# automatically like we did on the previous exercises. #\n\n\nlearning_rates = [1e-4, 2e-4]\nregularization_strengths = [1,1e4]\n\n# results is dictionary mapping tuples of the form\n# (learning_rate, regularization_strength) to tuples of the form\n# (training_accuracy, validation_accuracy). The accuracy is simply the fraction\n# of data points that are correctly classified.\nresults = {}\nbest_val = -1 # The highest validation accuracy that we have seen so far.\n\nfor learning_rate in learning_rates:\n for regularization_strength in regularization_strengths:\n net = TwoLayerNet(input_size,hidden_size,num_classes)\n net.train(X_train, y_train,X_val,y_val, learning_rate= learning_rate, reg=regularization_strength,\n num_iters=1500)\n y_train_predict = net.predict(X_train)\n y_val_predict = net.predict(X_val)\n accuracy_train = np.mean(y_train_predict == y_train)\n accuracy_validation = np.mean(y_val_predict == y_val)\n results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation)\n if accuracy_validation > best_val:\n best_val = accuracy_validation\n best_net = net\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n \n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n\n\n\n# visualize the weights of the best network\nshow_net_weights(best_net)",
"Run on the test set\nWhen you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.\nWe will give you extra bonus point for every 1% of accuracy above 52%.",
"test_acc = (best_net.predict(X_test) == y_test).mean()\nprint 'Test accuracy: ', test_acc"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
enakai00/jupyter_ml4se_commentary | Solutions/03-Random Numbers-solution.ipynb | apache-2.0 | [
"確率分布と乱数の取得",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pandas import Series, DataFrame",
"練習問題\n(1) 2個のサイコロを振った結果をシュミレーションします。次の例のように、1〜6の整数のペアを含むarrayを乱数で生成してください。",
"from numpy.random import randint\nrandint(1,7,2)",
"(2) 2個のサイコロを振った結果を10回分用意します。次の例のように、1〜6の整数のペア(リスト)を10組含むarrayを生成して、変数 dice に保存してください。",
"dice = randint(1,7,[10,2])\ndice",
"(3) 変数 dice に保存されたそれぞれの結果に対して、次の例のように、2個のサイコロの目の合計を計算してください。(計算結果はリストに保存すること。)",
"[a+b for (a,b) in dice]",
"(4) 2個のサイコロの目の合計を1000回分用意して、2〜12のそれぞれの回数をヒストグラムに表示してください。\nヒント:オプション bins=11, range=(1.5, 12.5) を指定するときれいに描けます。",
"dice = randint(1,7,[1000,2])\nsums = [a+b for (a,b) in dice]\nplt.hist(sums, bins=11, range=(1.5, 12.5))",
"(5) 0≦x≦1 の範囲を等分した10個の点 data_x = np.linspace(0,1,10) に対して、sin(2πx) の値を格納したarrayを作成して、変数 data_y に保存しなさい。\nさらに、data_y に含まれるそれぞれの値に、標準偏差 0.3 の正規分布に従う乱数を加えたarrayを作成して、変数 data_t に保存した後、(data_x, data_t) を散布図に表示しなさい。",
"from numpy.random import normal\n\ndata_x = np.linspace(0,1,10)\ndata_y = np.sin(2*np.pi*data_x)\ndata_t = data_y + normal(loc=0, scale=0.3, size=len(data_y))\nplt.scatter(data_x, data_t)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AaronCWong/phys202-2015-work | assignments/assignment04/MatplotlibEx02.ipynb | mit | [
"Matplotlib Exercise 2\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Exoplanet properties\nOver the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.\nhttp://iopscience.iop.org/1402-4896/2008/T130/014001\nYour job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:\nhttps://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue\nA text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:",
"!head -n 30 open_exoplanet_catalogue.txt",
"Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:",
"data = np.genfromtxt('open_exoplanet_catalogue.txt' , delimiter = \",\")\n\nassert data.shape==(1993,24)",
"Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nCustomize the box, grid, spines and ticks to match the requirements of this data.\nPick the number of bins for the histogram appropriately.",
"mass = data[:2]\n\nassert True # leave for grading",
"Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave for grading"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cmmorrow/sci-analysis | docs/using_sci_analysis.ipynb | mit | [
"Using sci-analysis\nFrom the python interpreter or in the first cell of a Jupyter notebook, type:",
"import warnings\nwarnings.filterwarnings(\"ignore\")\nimport numpy as np\nimport scipy.stats as st\nfrom sci_analysis import analyze",
"This will tell python to import the sci-analysis function analyze().\n\nNote: Alternatively, the function analyse() can be imported instead, as it is an alias for analyze(). For the case of this documentation, analyze() will be used for consistency.\n\nIf you are using sci-analysis in a Jupyter notebook, you need to use the following code instead to enable inline plots:",
"%matplotlib inline\nimport numpy as np\nimport scipy.stats as st\nfrom sci_analysis import analyze",
"Now, sci-analysis should be ready to use. Try the following code:",
"np.random.seed(987654321)\ndata = st.norm.rvs(size=1000)\nanalyze(xdata=data)",
"A histogram, box plot, summary stats, and test for normality of the data should appear above. \n\nNote: numpy and scipy.stats were only imported for the purpose of the above example. sci-analysis uses numpy and scipy internally, so it isn't necessary to import them unless you want to explicitly use them. \n\nA histogram and statistics for categorical data can be performed with the following command:",
"pets = ['dog', 'cat', 'rat', 'cat', 'rabbit', 'dog', 'hamster', 'cat', 'rabbit', 'dog', 'dog']\nanalyze(pets)",
"Let's examine the analyze() function in more detail. Here's the signature for the analyze() function:",
"from inspect import signature\nprint(analyze.__name__, signature(analyze))\nprint(analyze.__doc__)",
"analyze() will detect the desired type of data analysis to perform based on whether the ydata argument is supplied, and whether the xdata argument is a two-dimensional array-like object. \nThe xdata and ydata arguments can accept most python array-like objects, with the exception of strings. For example, xdata will accept a python list, tuple, numpy array, or a pandas Series object. Internally, iterable objects are converted to a Vector object, which is a pandas Series of type float64.\n\nNote: A one-dimensional list, tuple, numpy array, or pandas Series object will all be referred to as a vector throughout the documentation.\n\nIf only the xdata argument is passed and it is a one-dimensional vector of numeric values, the analysis performed will be a histogram of the vector with basic statistics and Shapiro-Wilk normality test. This is useful for visualizing the distribution of the vector. If only the xdata argument is passed and it is a one-dimensional vector of categorical (string) values, the analysis performed will be a histogram of categories with rank, frequencies and percentages displayed.\nIf xdata and ydata are supplied and are both equal length one-dimensional vectors of numeric data, an x/y scatter plot with line fit will be graphed and the correlation between the two vectors will be calculated. If there are non-numeric or missing values in either vector, they will be ignored. Only values that are numeric in each vector, at the same index will be included in the correlation. For example, the two following two vectors will yield:",
"example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6]\nexample2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58]\nanalyze(example1, example2)",
"If xdata is a sequence or dictionary of vectors, a location test and summary statistics for each vector will be performed. If each vector is normally distributed and they all have equal variance, a one-way ANOVA is performed. If the data is not normally distributed or the vectors do not have equal variance, a non-parametric Kruskal-Wallis test will be performed instead of a one-way ANOVA.\n\nNote: Vectors should be independent from one another --- that is to say, there shouldn't be values in one vector that are derived from or some how related to a value in another vector. These dependencies can lead to weird and often unpredictable results. \n\nA proper use case for a location test would be if you had a table with measurement data for multiple groups, such as test scores per class, average height per country or measurements per trial run, where the classes, countries, and trials are the groups. In this case, each group should be represented by it's own vector, which are then all wrapped in a dictionary or sequence. \nIf xdata is supplied as a dictionary, the keys are the names of the groups and the values are the array-like objects that represent the vectors. Alternatively, xdata can be a python sequence of the vectors and the groups argument a list of strings of the group names. The order of the group names should match the order of the vectors passed to xdata. \n\nNote: Passing the data for each group into xdata as a sequence or dictionary is often referred to as \"unstacked\" data. With unstacked data, the values for each group are in their own vector. Alternatively, if values are in one vector and group names in another vector of equal length, this format is referred to as \"stacked\" data. The analyze() function can handle either stacked or unstacked data depending on which is most convenient.\n\nFor example:",
"np.random.seed(987654321)\ngroup_a = st.norm.rvs(size=50)\ngroup_b = st.norm.rvs(size=25)\ngroup_c = st.norm.rvs(size=30)\ngroup_d = st.norm.rvs(size=40)\nanalyze({\"Group A\": group_a, \"Group B\": group_b, \"Group C\": group_c, \"Group D\": group_d})",
"In the example above, sci-analysis is telling us the four groups are normally distributed (by use of the Bartlett Test, Oneway ANOVA and the near straight line fit on the quantile plot), the groups have equal variance and the groups have matching means. The only significant difference between the four groups is the sample size we specified. Let's try another example, but this time change the variance of group B:",
"np.random.seed(987654321)\ngroup_a = st.norm.rvs(0.0, 1, size=50)\ngroup_b = st.norm.rvs(0.0, 3, size=25)\ngroup_c = st.norm.rvs(0.1, 1, size=30)\ngroup_d = st.norm.rvs(0.0, 1, size=40)\nanalyze({\"Group A\": group_a, \"Group B\": group_b, \"Group C\": group_c, \"Group D\": group_d})",
"In the example above, group B has a standard deviation of 2.75 compared to the other groups that are approximately 1. The quantile plot on the right also shows group B has a much steeper slope compared to the other groups, implying a larger variance. Also, the Kruskal-Wallis test was used instead of the Oneway ANOVA because the pre-requisite of equal variance was not met.\nIn another example, let's compare groups that have different distributions and different means:",
"np.random.seed(987654321)\ngroup_a = st.norm.rvs(0.0, 1, size=50)\ngroup_b = st.norm.rvs(0.0, 3, size=25)\ngroup_c = st.weibull_max.rvs(1.2, size=30)\ngroup_d = st.norm.rvs(0.0, 1, size=40)\nanalyze({\"Group A\": group_a, \"Group B\": group_b, \"Group C\": group_c, \"Group D\": group_d})",
"The above example models group C as a Weibull distribution, while the other groups are normally distributed. You can see the difference in the distributions by the one-sided tail on the group C boxplot, and the curved shape of group C on the quantile plot. Group C also has significantly the lowest mean as indicated by the Tukey-Kramer circles and the Kruskal-Wallis test."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/ai-notebooks-extended | dataproc-hub-example/build/infrastructure-builder/mig/files/gcs_working_folder/examples/Python/storage/Storage command-line tool.ipynb | apache-2.0 | [
"Storage command-line tool\nThe Google Cloud SDK provides a set of commands for working with data stored in Cloud Storage. This notebook introduces several gsutil commands for interacting with Cloud Storage. Note that shell commands in a notebook must be prepended with a !.\nList available commands\nThe gsutil command can be used to perform a wide array of tasks. Run the help command to view a list of available commands:",
"!gsutil help",
"Create a storage bucket\nBuckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data.\nStart by defining a globally unique name.\nFor more information about naming buckets, see Bucket name requirements.",
"# Replace the string below with a unique name for the new bucket\nbucket_name = \"your-new-bucket\"",
"NOTE: In the examples below, the bucket_name and project_id variables are referenced in the commands using {} and $. If you want to avoid creating and using variables, replace these interpolated variables with literal values and remove the {} and $ characters.\nNext, create the new bucket with the gsutil mb command:",
"!gsutil mb gs://{bucket_name}/",
"List buckets in a project\nReplace 'your-project-id' in the cell below with your project ID and run the cell to list the storage buckets in your project.",
"# Replace the string below with your project ID\nproject_id = \"your-project-id\"\n\n!gsutil ls -p $project_id",
"The response should look like the following:\ngs://your-new-bucket/\nGet bucket metadata\nThe next cell shows how to get information on metadata of your Cloud Storage buckets.\nTo learn more about specific bucket properties, see Bucket locations and Storage classes.",
"!gsutil ls -L -b gs://{bucket_name}/",
"The response should look like the following:\ngs://your-new-bucket/ :\n Storage class: MULTI_REGIONAL\n Location constraint: US\n ...\nUpload a local file to a bucket\nObjects are the individual pieces of data that you store in Cloud Storage. Objects are referred to as \"blobs\" in the Python client library. There is no limit on the number of objects that you can create in a bucket.\nAn object's name is treated as a piece of object metadata in Cloud Storage. Object names can contain any combination of Unicode characters (UTF-8 encoded) and must be less than 1024 bytes in length.\nFor more information, including how to rename an object, see the Object name requirements.",
"!gsutil cp resources/us-states.txt gs://{bucket_name}/",
"List blobs in a bucket",
"!gsutil ls -r gs://{bucket_name}/**",
"The response should look like the following:\ngs://your-new-bucket/us-states.txt\nGet a blob and display metadata\nSee Viewing and editing object metadata for more information about object metadata.",
"!gsutil ls -L gs://{bucket_name}/us-states.txt",
"The response should look like the following:\ngs://your-new-bucket/us-states.txt:\n Creation time: Fri, 08 Feb 2019 05:23:28 GMT\n Update time: Fri, 08 Feb 2019 05:23:28 GMT\n Storage class: STANDARD\n Content-Language: en\n Content-Length: 637\n Content-Type: text/plain\n...\nDownload a blob to a local directory",
"!gsutil cp gs://{bucket_name}/us-states.txt resources/downloaded-us-states.txt",
"Cleaning up\nDelete a blob",
"!gsutil rm gs://{bucket_name}/us-states.txt",
"Delete a bucket\nThe following command deletes all objects in the bucket before deleting the bucket itself.",
"!gsutil rm -r gs://{bucket_name}/",
"Next Steps\nRead more about Cloud Storage in the documentation:\n+ Storage key terms\n+ How-to guides\n+ Pricing"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
metpy/MetPy | v0.5/_downloads/Station_Plot_with_Layout.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Station Plot with Layout\nMake a station plot, complete with sky cover and weather symbols, using a\nstation plot layout built into MetPy.\nThe station plot itself is straightforward, but there is a bit of code to perform the\ndata-wrangling (hopefully that situation will improve in the future). Certainly, if you have\nexisting point data in a format you can work with trivially, the station plot will be simple.\nThe StationPlotLayout class is used to standardize the plotting various parameters\n(i.e. temperature), keeping track of the location, formatting, and even the units for use in\nthe station plot. This makes it easy (if using standardized names) to re-use a given layout\nof a station plot.",
"import cartopy.crs as ccrs\nimport cartopy.feature as feat\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom metpy.calc import get_wind_components\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import simple_layout, StationPlot, StationPlotLayout\nfrom metpy.units import units",
"The setup\nFirst read in the data. We use numpy.loadtxt to read in the data and use a structured\nnumpy.dtype to allow different types for the various columns. This allows us to handle\nthe columns with string data.",
"f = get_test_data('station_data.txt')\nall_data = np.loadtxt(f, skiprows=1, delimiter=',',\n usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),\n dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),\n ('slp', 'f'), ('air_temperature', 'f'),\n ('cloud_fraction', 'f'), ('dew_point_temperature', 'f'),\n ('weather', '16S'),\n ('wind_dir', 'f'), ('wind_speed', 'f')]))",
"This sample data has way too many stations to plot all of them. Instead, we just select\na few from around the U.S. and pull those out of the data file.",
"# Get the full list of stations in the data\nall_stids = [s.decode('ascii') for s in all_data['stid']]\n\n# Pull out these specific stations\nwhitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',\n 'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',\n 'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',\n 'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',\n 'SAT', 'BUY', '0CO', 'ZPC', 'VIH']\n\n# Loop over all the whitelisted sites, grab the first data, and concatenate them\ndata_arr = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist])\n\n# First, look at the names of variables that the layout is expecting:\nsimple_layout.names()",
"Next grab the simple variables out of the data we have (attaching correct units), and\nput them into a dictionary that we will hand the plotting function later:",
"# This is our container for the data\ndata = dict()\n\n# Copy out to stage everything together. In an ideal world, this would happen on\n# the data reading side of things, but we're not there yet.\ndata['longitude'] = data_arr['lon']\ndata['latitude'] = data_arr['lat']\ndata['air_temperature'] = data_arr['air_temperature'] * units.degC\ndata['dew_point_temperature'] = data_arr['dew_point_temperature'] * units.degC\ndata['air_pressure_at_sea_level'] = data_arr['slp'] * units('mbar')",
"Notice that the names (the keys) in the dictionary are the same as those that the\nlayout is expecting.\nNow perform a few conversions:\n\nGet wind components from speed and direction\nConvert cloud fraction values to integer codes [0 - 8]\nMap METAR weather codes to WMO codes for weather symbols",
"# Get the wind components, converting from m/s to knots as will be appropriate\n# for the station plot\nu, v = get_wind_components(data_arr['wind_speed'] * units('m/s'),\n data_arr['wind_dir'] * units.degree)\ndata['eastward_wind'], data['northward_wind'] = u, v\n\n# Convert the fraction value into a code of 0-8, which can be used to pull out\n# the appropriate symbol\ndata['cloud_coverage'] = (8 * data_arr['cloud_fraction']).astype(int)\n\n# Map weather strings to WMO codes, which we can use to convert to symbols\n# Only use the first symbol if there are multiple\nwx_text = [s.decode('ascii') for s in data_arr['weather']]\nwx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,\n '-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}\ndata['present_weather'] = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text]",
"All the data wrangling is finished, just need to set up plotting and go:\nSet up the map projection and set up a cartopy feature for state borders",
"proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,\n standard_parallels=[35])\nstate_boundaries = feat.NaturalEarthFeature(category='cultural',\n name='admin_1_states_provinces_lines',\n scale='110m', facecolor='none')",
"The payoff",
"# Change the DPI of the resulting figure. Higher DPI drastically improves the\n# look of the text rendering\nplt.rcParams['savefig.dpi'] = 255\n\n# Create the figure and an axes set to the projection\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(1, 1, 1, projection=proj)\n\n# Add some various map elements to the plot to make it recognizable\nax.add_feature(feat.LAND, zorder=-1)\nax.add_feature(feat.OCEAN, zorder=-1)\nax.add_feature(feat.LAKES, zorder=-1)\nax.coastlines(resolution='110m', zorder=2, color='black')\nax.add_feature(state_boundaries, edgecolor='black')\nax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')\n\n# Set plot bounds\nax.set_extent((-118, -73, 23, 50))\n\n#\n# Here's the actual station plot\n#\n\n# Start the station plot by specifying the axes to draw on, as well as the\n# lon/lat of the stations (with transform). We also the fontsize to 12 pt.\nstationplot = StationPlot(ax, data['longitude'], data['latitude'],\n transform=ccrs.PlateCarree(), fontsize=12)\n\n# The layout knows where everything should go, and things are standardized using\n# the names of variables. So the layout pulls arrays out of `data` and plots them\n# using `stationplot`.\nsimple_layout.plot(stationplot, data)\n\nplt.show()",
"or instead, a custom layout can be used:",
"# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted\n# out to Farenheit tenths. Extra data will be ignored\ncustom_layout = StationPlotLayout()\ncustom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')\ncustom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')\ncustom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',\n color='darkgreen')\n\n# Also, we'll add a field that we don't have in our dataset. This will be ignored\ncustom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')\n\n# Create the figure and an axes set to the projection\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(1, 1, 1, projection=proj)\n\n# Add some various map elements to the plot to make it recognizable\nax.add_feature(feat.LAND, zorder=-1)\nax.add_feature(feat.OCEAN, zorder=-1)\nax.add_feature(feat.LAKES, zorder=-1)\nax.coastlines(resolution='110m', zorder=2, color='black')\nax.add_feature(state_boundaries, edgecolor='black')\nax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')\n\n# Set plot bounds\nax.set_extent((-118, -73, 23, 50))\n\n#\n# Here's the actual station plot\n#\n\n# Start the station plot by specifying the axes to draw on, as well as the\n# lon/lat of the stations (with transform). We also the fontsize to 12 pt.\nstationplot = StationPlot(ax, data['longitude'], data['latitude'],\n transform=ccrs.PlateCarree(), fontsize=12)\n\n# The layout knows where everything should go, and things are standardized using\n# the names of variables. So the layout pulls arrays out of `data` and plots them\n# using `stationplot`.\ncustom_layout.plot(stationplot, data)\n\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google-research/google-research | group_agnostic_fairness/data_utils/CreateCompasDatasetFiles.ipynb | apache-2.0 | [
"Copyright 2020 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"from __future__ import division\nimport pandas as pd\nimport numpy as np\nimport json\nimport os,sys\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nimport numpy as np",
"Overview\nPre-processes COMPAS dataset:\nDownload the COMPAS dataset from:\nhttps://github.com/propublica/compas-analysis/blob/master/compas-scores-two-years.csv\nand save it in the ./group_agnostic_fairness/data/compas folder.\nInput: ./group_agnostic_fairness/data/compas/compas-scores-two-years.csv\nOutputs: train.csv, test.csv, mean_std.json, vocabulary.json, IPS_exampleweights_with_label.json, IPS_exampleweights_without_label.json",
"pd.options.display.float_format = '{:,.2f}'.format\ndataset_base_dir = './group_agnostic_fairness/data/compas/'\ndataset_file_name = 'compas-scores-two-years.csv'",
"Processing original dataset",
"file_path = os.path.join(dataset_base_dir,dataset_file_name)\nwith open(file_path, \"r\") as file_name:\n temp_df = pd.read_csv(file_name)\n\n# Columns of interest\ncolumns = ['juv_fel_count', 'juv_misd_count', 'juv_other_count', 'priors_count',\n 'age', \n 'c_charge_degree', \n 'c_charge_desc',\n 'age_cat',\n 'sex', 'race', 'is_recid']\ntarget_variable = 'is_recid'\ntarget_value = 'Yes'\n\n# Drop duplicates\ntemp_df = temp_df[['id']+columns].drop_duplicates()\ndf = temp_df[columns].copy()\n\n# Convert columns of type ``object`` to ``category`` \ndf = pd.concat([\n df.select_dtypes(include=[], exclude=['object']),\n df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category')\n ], axis=1).reindex_axis(df.columns, axis=1)\n\n# Binarize target_variable\ndf['is_recid'] = df.apply(lambda x: 'Yes' if x['is_recid']==1.0 else 'No', axis=1).astype('category')\n\n# Process protected-column values\nrace_dict = {'African-American':'Black','Caucasian':'White'}\ndf['race'] = df.apply(lambda x: race_dict[x['race']] if x['race'] in race_dict.keys() else 'Other', axis=1).astype('category')\n\ndf.head()",
"Shuffle and Split into Train (70%) and Test set (30%)",
"train_df, test_df = train_test_split(df, test_size=0.30, random_state=42)\n\noutput_file_path = os.path.join(dataset_base_dir,'train.csv')\nwith open(output_file_path, mode=\"w\") as output_file:\n train_df.to_csv(output_file,index=False,columns=columns,header=False)\n output_file.close()\n\noutput_file_path = os.path.join(dataset_base_dir,'test.csv')\nwith open(output_file_path, mode=\"w\") as output_file:\n test_df.to_csv(output_file,index=False,columns=columns,header=False)\n output_file.close()",
"Computing Invese propensity weights for each subgroup, and writes to directory.\nIPS_example_weights_with_label.json: json dictionary of the format\n {subgroup_id : inverse_propensity_score,...}. Used by IPS_reweighting_model approach.",
"IPS_example_weights_without_label = {\n 0: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex != 'Female')])), # 00: White Male\n 1: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex == 'Female')])), # 01: White Female\n 2: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex != 'Female')])), # 10: Black Male\n 3: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex == 'Female')])) # 11: Black Female\n}\n \noutput_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_without_label.json')\nwith open(output_file_path, mode=\"w\") as output_file:\n output_file.write(json.dumps(IPS_example_weights_without_label))\n output_file.close()\n\nprint(IPS_example_weights_without_label)\n\nIPS_example_weights_with_label = {\n0: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 000: Negative White Male\n1: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 001: Negative White Female\n2: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 010: Negative Black Male\n3: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 011: Negative Black Female\n4: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 100: Positive White Male\n5: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 101: Positive White Female\n6: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 110: Positive Black Male\n7: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 111: Positive Black Female\n}\n \noutput_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_with_label.json')\nwith open(output_file_path, mode=\"w\") as output_file:\n output_file.write(json.dumps(IPS_example_weights_with_label))\n output_file.close()\n\nprint(IPS_example_weights_with_label)",
"Construct vocabulary.json, and write to directory.\nvocabulary.json: json dictionary of the format {feature_name: [feature_vocabulary]}, containing vocabulary for categorical features.",
"cat_cols = train_df.select_dtypes(include='category').columns\nvocab_dict = {}\nfor col in cat_cols:\n vocab_dict[col] = list(set(train_df[col].cat.categories))\n \noutput_file_path = os.path.join(dataset_base_dir,'vocabulary.json')\nwith open(output_file_path, mode=\"w\") as output_file:\n output_file.write(json.dumps(vocab_dict))\n output_file.close()\nprint(vocab_dict)",
"Construct mean_std.json, and write to directory\nmean_std.json: json dictionary of the format feature_name: [mean, std]},\ncontaining mean and std for numerical features.",
"temp_dict = train_df.describe().to_dict()\nmean_std_dict = {}\nfor key, value in temp_dict.items():\n mean_std_dict[key] = [value['mean'],value['std']]\n\noutput_file_path = os.path.join(dataset_base_dir,'mean_std.json')\nwith open(output_file_path, mode=\"w\") as output_file:\n output_file.write(json.dumps(mean_std_dict))\n output_file.close()\nprint(mean_std_dict)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sympy/scipy-2017-codegen-tutorial | notebooks/08-cythonizing.ipynb | bsd-3-clause | [
"The Easy \"Hard\" Way: Cythonizing\nIn this notebook, we'll build on the previous work where we used SymPy's code printers to generate code for evaluating expressions numerically. As a layer of abstraction on top of C code printers, which generate snippets of code we can copy into a C program, we can generate a fully compilable C library. On top of this, we will see how to use Cython to compile such a library into a Python extension module so its computational routines can be called directly from Python.\nLearning Objectives\nAfter this lesson, you will be able to:\n\nwrite a simple Cython function and run it a Jupyter notebook using the %%cython magic command\nuse the SymPy codegen function to output compilable C code\nwrap codegen-generated code with Cython, compile it into an extension module, and call it from Python\nuse SymPy's autowrap function to do all of this behind the scenes\npass a custom code printer to autowrap to make use of an external C library",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sympy as sym\nsym.init_printing()",
"1. A Quick Introduction to Cython\nCython is a compiler and a programming language used to generate C extension modules for Python.\nThe Cython language is a Python/C creole which is essentially Python with some additional keywords for specifying static data types. It looks something like this:\ncython\ndef cython_sum(int n):\n cdef float s = 0.0\n cdef int i\n for i in range(n):\n s += i\n return s\nThe Cython compiler transforms this code into a \"flavor\" of C specific to Python extension modules. This C code is then compiled into a binary file that can be imported and used just like a regular Python module -- the difference being that the functions you use from that module can potentially be much faster and more efficient than an equivalent pure Python implementation.\nAside from writing Cython code for computations, Cython is commonly used for writing wrappers around existing C code so that the functions therein can be made available in an extension module as described above. We will use this technique to make the SymPy-generated C code accessible to Python for use in SciPy's odeint.\nExample\nAs a quick demonstration of what Cython can offer, we'll walk through a simple example of generating numbers in the Fibonacci sequence. If you're not familiar with it already, the sequence is initialized with $F_0 = 0$ and $F_1 = 1$, then the remaining terms are defined recursively by:\n$$\nF_i = F_{i-1} + F_{i-2}\n$$\nOur objective is to write a function that computes the $n$-th Fibonacci number. Let's start by writing a simple iterative solution in pure Python.",
"def python_fib(n):\n a = 0.0\n b = 1.0\n for i in range(n):\n tmp = a\n a = a + b\n b = tmp\n return a\n\n[python_fib(i) for i in range(10)]",
"Let's see how long it takes to compute the 100th Fibonacci number.",
"%timeit python_fib(100)",
"Now let's implement the same thing with Cython. Since Cython is essentially \"Python with types,\" it is often fairly easy to make the move and see improvements in speed. It does come at the cost, however, of a separate compilation step.\nThere are several ways to ways to go about the compilation process, and in many cases, Cython's tooling makes it fairly simple. For example, Jupyter notebooks can make use of a %%cython magic command that will do all of compilation in the background for us. To make use of it, we need to load the cython extension.",
"%load_ext cython",
"Now we can write a Cython function.\nNote: the --annotate (or -a) flag of the %%cython magic command will produce an interactive annotated printout of the Cython code, allowing us to see the C code that is generated.",
"%%cython\ndef cython_fib(int n):\n cdef double a = 0.0\n cdef double b = 1.0\n cdef double tmp\n for i in range(n):\n tmp = a\n a = a + b\n b = tmp\n return a\n\n%timeit cython_fib(100)",
"To see a bit more about writing Cython and its potential performance benefits, see this Cython examples notebook.\nEven better, check out Kurt Smith's Cython tutorial which is happening at the same time as this tutorial.\n2. Generating C Code with SymPy's codegen()\nOur main goal in using Cython is to wrap SymPy-generated C code into a Python extension module so that we can call the fast compiled numerical routines from Python.\nSymPy's codegen function takes code printing a step further: it wraps a snippet of code that numerically evaluates an expression with a function, and puts that function into the context of a file that is fully ready-to-compile code.\nHere we'll revisit the water radiolysis system, with the aim of numerically computing the right hand side of the system of ODEs and integrating using SciPy's odeint.\nRecall that this system looks like:\n$$\n\\begin{align}\n\\frac{dy_0(t)}{dt} &= f_0\\left(y_0,\\,y_1,\\,\\dots,\\,y_{13},\\,t\\right) \\\n&\\vdots \\\n\\frac{dy_{13}(t)}{dt} &= f_{13}\\left(y_0,\\,y_1,\\,\\dots,\\,y_{13},\\,t\\right)\n\\end{align}\n$$\nwhere we are representing our state variables $y_0,\\,y_1,\\dots,y_{13}$ as a vector $\\mathbf{y}(t)$ that we called states in our code, and the collection of functions on the right hand side $\\mathbf{f}(\\mathbf{y}(t))$ we called rhs_of_odes.\nStart by importing the system of ODEs and the matrix of state variables.",
"from scipy2017codegen.chem import load_large_ode\nrhs_of_odes, states = load_large_ode()\nrhs_of_odes[0]",
"Now we'll use codegen (under sympy.utilities.codegen) to output C source and header files which can compute the right hand side (RHS) of the ODEs numerically, given the current values of our state variables. Here we'll import it and show the documentation:",
"from sympy.utilities.codegen import codegen\n#codegen?",
"We just have one expression we're interested in computing, and that is the matrix expression representing the derivatives of our state variables with respect to time: rhs_of_odes. What we want codegen to do is create a C function that takes in the current values of the state variables and gives us back each of the derivatives.",
"[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes), language='c')",
"Note that we've just expanded the outputs into individual variables so we can access the generated code easily. codegen gives us back the .c filename and its source code in a tuple, and the .h filename and its source in another tuple. Let's print the source code.",
"print(cs)",
"There are several things here worth noting:\n\nthe state variables are passed in individually\nthe state variables in the function signature are out of order\nthe output array is passed in as a pointer like in our Fibonacci sequence example, but it has an auto-generated name\n\nLet's address the first issue first. Similarly to what we did in the C printing exercises, let's use a MatrixSymbol to represent our state vector instead of a matrix of individual state variable symbols (i.e. y[0] instead of y0). First, create the MatrixSymbol object that is the same shape as our states matrix.",
"y = sym.MatrixSymbol('y', *states.shape)",
"Now we need to replace the use of y0, y1, etc. in our rhs_of_odes matrix with the elements of our new state vector (e.g. y[0], y[1], etc.). We saw how to do this already in the previous notebook. Start by forming a mapping from y0 -> y[0, 0], y1 -> y[1, 0], etc.",
"state_array_map = dict(zip(states, y))\nstate_array_map",
"Now replace the symbols in rhs_of_odes according to the mapping. We'll call it rhs_of_odes_ind and use that from now on.",
"rhs_of_odes_ind = rhs_of_odes.xreplace(state_array_map)\nrhs_of_odes_ind[0]",
"Exercise: use codegen again, but this time with rhs_of_odes_ind which makes use of a state vector rather than a container of symbols. Check out the resulting code. What is different about the function signature?\npython\n[(cf, cs), (hf, hs)] = codegen(???)\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes_ind), language='c')\nprint(cs)",
"So by re-writing our expression in terms of a MatrixSymbol rather than individual symbols, the function signature of the generated code is cleaned up greatly.\nHowever, we still have the issue of the auto-generated output variable name. To fix this, we can form a matrix equation rather than an expression. The name given to the symbol on the left hand side of the equation will then be used for our output variable name.\nWe'll start by defining a new MatrixSymbol that will represent the left hand side of our equation -- the derivatives of each state variable.",
"dY = sym.MatrixSymbol('dY', *y.shape)",
"Exercise: form an equation using sym.Eq to equate the two sides of our system of differential equations, then use this as the expression in codegen. Print out just the header source to see the function signature. What is the output argument called now?\npython\node_eq = sym.Eq(???)\n[(cf, cs), (hf, hs)] = codegen(???)\nprint(???)\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"ode_eq = sym.Eq(dY, rhs_of_odes_ind)\n[(cf, cs), (hf, hs)] = codegen(('c_odes', ode_eq), language='c')\nprint(hs)",
"Now we see that the c_odes function signature is nice and clean. We pass it a pointer to an array representing the current values of all of our state variables and a pointer to an array that we want to fill with the derivatives of each of those state variables.\nIf you're not familiar with C and pointers, you just need to know that it is idiomatic in C to preallocate a block of memory representing an array, then pass the location of that memory (and usually the number of elements it can hold), rather than passing the array itself to/from a function. For our purposes, this is as complicated as pointers will get.\nJust so we can compile this code and use it, we'll re-use the codegen call above with to_files=True so the .c and .h files are actually written to the filesystem, rather than having their contents returned in a string.",
"codegen(('c_odes', ode_eq), language='c', to_files=True)",
"3. Wrapping the Generated Code with Cython\nNow we want to wrap the function that was generated c_odes with a Cython function so we can generate an extension module and call that function from Python. Wrapping a set of C functions involves writing a Cython script that specifies the Python interface to the C functions. This script must do two things:\n\nspecify the function signatures as found in the C source\nimplement the Python interface to the C functions by wrapping them\n\nThe build system of Cython is able to take the Cython wrapper source code as well as the C library source code and compile/link things together into a Python extension module. We will write our wrapper code in a cell making use of the magic command %%cython_pyximport, which does a few things for us:\n\nwrites the contents of the cell to a Cython source file (modname.pyx)\nlooks for a modname.pyxbld file for instructions on how to build things\nbuilds everything into an extension module\nimports the extension module, making the functions declared there available in the notebook\n\nSo, it works similarly to the %%cython magic command we saw at the very beginning, but things are a bit more complicated now because we have this external library c_odes that needs to be compiled as well.\nNote: The pyxbld file contains code similar to what would be found in the setup.py file of a package making use of Cython code for wrapping C libraries.\nIn either case, all that's needed is to tell setuptools/Cython:\n\nthe name of the extension module we want to make\nthe location of the Cython and C source files to be built\nthe location of headers needed during compilation -- both our C library's headers as well as NumPy's headers\n\nWe will call our extension module cy_odes, so here we'll generate a cy_odes.pyxbld file to specify how to build the module.",
"%%writefile cy_odes.pyxbld\nimport numpy\n\n# module name specified by `%%cython_pyximport` magic\n# | just `modname + \".pyx\"`\n# | |\ndef make_ext(modname, pyxfilename):\n from setuptools.extension import Extension\n return Extension(modname,\n sources=[pyxfilename, 'c_odes.c'],\n include_dirs=['.', numpy.get_include()])",
"Now we can write our wrapper code.\nTo write the wrapper, we first write the function signature as specified by the C library. Then, we create a wrapper function that makes use of the C implementation and returns the result. This wrapper function becomes the interface to the compiled code, and it does not need to be identical to the C function signature. In fact, we'll make our wrapper function compliant with the odeint interface (i.e. takes a 1-dimensional array of state variable values and the current time).",
"%%cython_pyximport cy_odes\nimport numpy as np\ncimport numpy as cnp # cimport gives us access to NumPy's C API\n\n# here we just replicate the function signature from the header\ncdef extern from \"c_odes.h\":\n void c_odes(double *y, double *dY)\n\n# here is the \"wrapper\" signature that conforms to the odeint interface\ndef cy_odes(cnp.ndarray[cnp.double_t, ndim=1] y, double t):\n # preallocate our output array\n cdef cnp.ndarray[cnp.double_t, ndim=1] dY = np.empty(y.size, dtype=np.double)\n # now call the C function\n c_odes(<double *> y.data, <double *> dY.data)\n # return the result\n return dY",
"Exercise: use np.random.randn to generate random state variable values and evaluate the right-hand-side of our ODEs with those values.\npython\nrandom_vals = np.random.randn(???)\n???\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"random_vals = np.random.randn(14)\ncy_odes(random_vals, 0) # note: any time value will do",
"Now we can use odeint to integrate the equations and plot the results to check that it worked. First we need to import odeint.",
"from scipy.integrate import odeint",
"A couple convenience functions are provided in the scipy2017codegen package which give some reasonable initial conditions for the system and plot the state trajectories, respectively. Start by grabbing some initial values for our state variables and time values.",
"from scipy2017codegen.chem import watrad_init, watrad_plot\ny_init, t_vals = watrad_init()",
"Finally we can integrate the equations using our Cython-wrapped C function and plot the results.",
"y_vals = odeint(cy_odes, y_init, t_vals)\nwatrad_plot(t_vals, y_vals)",
"4. Generating and Compiling a C Extension Module Automatically\nAs yet another layer of abstraction on top of codegen, SymPy provides an autowrap function that can automatically generate a Cython wrapper for the generated C code. This greatly simplifies the process of going from a SymPy expression to numerical computation, but as we'll see, we lose a bit of flexibility compared to manually creating the Cython wrapper.\nLet's start by importing the autowrap function and checking out its documentation.",
"from sympy.utilities.autowrap import autowrap\n#autowrap?",
"So autowrap takes in a SymPy expression and gives us back a binary callable which evaluates the expression numerically. Let's use the Equality formed earlier to generate a function we can call to evaluate the right hand side of our system of ODEs.",
"auto_odes = autowrap(ode_eq, backend='cython', tempdir='./autowraptmp')",
"Exercise: use the main Jupyter notebook tab to head to the temporary directory autowrap just created. Take a look at some of the files it contains. Can you map everything we did manually to the files generated?\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv\nautowrap generates quite a few files, but we'll explicitly list a few here:\n\nwrapped_code_#.c: the same thing codegen generated\nwrapper_module_#.pyx: the Cython wrapper code\nwrapper_module_#.c: the cythonized code\nsetup.py: specification of the Extension for how to build the extension module\n\nExercise: just like we did before, generate some random values for the state variables and use auto_odes to compute the derivatives. Did it work like before?\nHint: take a look at wrapper_module_#.pyx to see the types of the arrays being passed in / created.\npython\nrandom_vals = np.random.randn(???)\nauto_odes(???)\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"random_vals = np.random.randn(14, 1) # need a 2-dimensional vector\nauto_odes(random_vals)",
"One advantage to wrapping the generated C code manually is that we get fine control over how the function is used from Python. That is, in our hand-written Cython wrapper we were able to specify that from the Python side, the input to our wrapper function and its return value are both 1-dimensional ndarray objects. We were also able to add in the extra argument t for the current time, making the wrapper function fully compatible with odeint.\nHowever, autowrap just sees that we have a matrix equation where each side is a 2-dimensional array with shape (14, 1). The function returned then expects the input array to be 2-dimensional and it returns a 2-dimensional array.\nThis won't work with odeint, so we can write a simple wrapper that massages the input and output and adds an extra argument for t.",
"def auto_odes_wrapper(y, t):\n dY = auto_odes(y[:, np.newaxis])\n return dY.squeeze()",
"Now a 1-dimensional input works.",
"random_vals = np.random.randn(14)\nauto_odes_wrapper(random_vals, 0)",
"Exercise: As we have seen previously, we can analytically evaluate the Jacobian of our system of ODEs, which can be helpful in numerical integration. Compute the Jacobian of rhs_of_odes_ind with respect to y, then use autowrap to generate a function that evaluates the Jacobian numerically. Finally, write a Python wrapper called auto_jac_wrapper to make it compatible with odeint.\n```python\ncompute jacobian of rhs_of_odes_ind with respect to y\n???\ngenerate a function that computes the jacobian\nauto_jac = autowrap(???)\ndef auto_jac_wrapper(y, t):\n return ???\n```\nTest your wrapper by passing in the random_vals array from above. The shape of the result should be (14, 14).\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"jac = rhs_of_odes_ind.jacobian(y)\n\nauto_jac = autowrap(jac, backend='cython', tempdir='./autowraptmp')\n\ndef auto_jac_wrapper(y, t):\n return auto_jac(y[:, np.newaxis])\n\nauto_jac_wrapper(random_vals, 2).shape",
"Finally, we can use our two wrapped functions in odeint and compare to our manually-written Cython wrapper result.",
"y_vals = odeint(auto_odes_wrapper, y_init, t_vals, Dfun=auto_jac_wrapper)\nwatrad_plot(t_vals, y_vals)",
"5. Using a Custom Printer and an External Library with autowrap\nAs of SymPy 1.1, autowrap accepts a custom CodeGen object, which is responsible for generating the code. The CodeGen object in turn accepts a custom CodePrinter object, meaning we can use these two points of flexibility to make use of customized code printing in an autowrapped function. The following example is somewhat contrived, but the concept in general is powerful.\nIn our set of ODEs, there are quite a few instances of $y_i^2$, where $y_i$ is one of the 14 state variables. As an example, here's the equation for $\\frac{dy_3(t)}{dt}$:",
"rhs_of_odes[3]",
"There is a library called fastapprox that provides computational routines things like powers, exponentials, logarithms, and a few others. These routines provide limited precision with respect to something like math.h's equivalent functions, but they offer potentially faster computation.\nThe fastapprox library provides a function called fastpow, with the signature fastpow(float x, float p). It it follows the interface of pow from math.h. In the previous notebook, we saw how to turn instances of $x^3$ into x*x*x, which is potentially quicker than pow(x, 3). Here, let's just use fastpow instead.\nExercise: implement a CustomPrinter class that inherits from C99CodePrinter and overrides the _print_Pow function to make use of fastpow. Test it by instantiating the custom printer and printing a SymPy expression $x^3$.\nHint: it may be helpful to run C99CodePrinter._print_Pow?? to see how it works\n```python\nfrom sympy.printing.ccode import C99CodePrinter\nclass CustomPrinter(C99CodePrinter):\n def _print_Pow(self, expr):\n ???\nprinter = CustomPrinter()\nx = sym.symbols('x')\nprint x**3 using the custom printer\n???\n```\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"from sympy.printing.ccode import C99CodePrinter\n\nclass CustomPrinter(C99CodePrinter):\n def _print_Pow(self, expr):\n return \"fastpow({}, {})\".format(self._print(expr.base),\n self._print(expr.exp))\n\nprinter = CustomPrinter()\nx = sym.symbols('x')\nprinter.doprint(x**3)",
"Now we can create a C99CodeGen object that will make use of this printer. This object will be passed in to autowrap with the code_gen keyword argument, and autowrap will use it in the code generation process.",
"from sympy.utilities.codegen import C99CodeGen\ngen = C99CodeGen(printer=printer)",
"However, for our generated code to use the fastpow function, it needs to have a #include \"fastpow.h\" preprocessor statement at the top. The code gen object supports this by allowing us to append preprocessor statements to its preprocessor_statements attribute.",
"gen.preprocessor_statements.append('#include \"fastpow.h\"')",
"One final issue remains, and that is telling autowrap where to find the fastapprox library headers. These header files have just been downloaded from GitHub and placed in the scipy2017codegen package, so it should be installed with the conda environment. We can find it by looking for the package directory.",
"import os\nimport scipy2017codegen\n\npackage_dir = os.path.dirname(scipy2017codegen.__file__)\nfastapprox_dir = os.path.join(package_dir, 'fastapprox')",
"Finally we're ready to call autowrap. We'll just use ode_eq, the Equality we created before, pass in the custom CodeGen object, and tell autowrap where the fastapprox headers are located.",
"auto_odes_fastpow = autowrap(ode_eq,\n code_gen=gen,\n backend='cython',\n include_dirs=[fastapprox_dir],\n tempdir='autowraptmp_custom')",
"If we navigate to the tmp directory created, we can view the wrapped_code_#.c to see our custom printing in action.\nAs before, we need a wrapper function for use with odeint, but aside from that, everything should be in place.",
"def auto_odes_fastpow_wrapper(y, t):\n dY = auto_odes_fastpow(y[:, np.newaxis])\n return dY.squeeze()\n\ny_vals, info = odeint(auto_odes_fastpow_wrapper, y_init, t_vals, full_output=True)\nwatrad_plot(t_vals, y_vals)",
"Exercise: generate an array of random state variable values, then use this array in the auto_odes_wrapper and auto_odes_fastpow_wrapper functions. Compare their outputs.\nSolution\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nv",
"random_vals = np.random.randn(14)\ndY1 = auto_odes_wrapper(random_vals, 0)\ndY2 = auto_odes_fastpow_wrapper(random_vals, 0)\ndY1 - dY2"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dcavar/python-tutorial-for-ipython | notebooks/Flair Training Sequence Labeling Models.ipynb | apache-2.0 | [
"Flair Training Sequence Labeling Models\n(C) 2019-2020 by Damir Cavar\nVersion: 0.3, February 2020\nDownload: This and various other Jupyter notebooks are available from my GitHub repo.\nFor the Flair tutorial 7 license and copyright restrictions, see the website below. For all the parts that I added, consider the license to be Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0).\nBased on the Flair Tutorial 7 Training a Model.\nThis tutorial is using the CoNLL-03 Named Entity Recognition data set. See this website for more details and to download an independent copy of the data set.\nTraining a Sequence Labeling Model\nWe will need the following modules:",
"from flair.data import Corpus\nfrom flair.datasets import WNUT_17\nfrom flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings\nfrom typing import List",
"If you want to use the CoNLL-03 corpus, you need to download it and unpack it in your Flair data and model folder. This folder should be in your home-directory and it is named .flair. Once you have downloaded the corpus, unpack it into a folder .flair/datasets/conll_03. If you do not want to use the CoNLL-03 corpus, but rather the free W-NUT 17 corpus, you can use the Flair command: WNUT_17()\nIf you decide to download the CoNLL-03 corpus, adapt the following code. We load the W-NUT17 corpus and down-sample it to 10% of its size:",
"corpus: Corpus = WNUT_17().downsample(0.1)\nprint(corpus)",
"Declare the tag type to be predicted:",
"tag_type = 'ner'",
"Create the tag-dictionary for the tag-type:",
"tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)\nprint(tag_dictionary)",
"Load the embeddings:",
"embedding_types: List[TokenEmbeddings] = [\n\n WordEmbeddings('glove'),\n\n # comment in this line to use character embeddings\n # CharacterEmbeddings(),\n\n # comment in these lines to use flair embeddings\n # FlairEmbeddings('news-forward'),\n # FlairEmbeddings('news-backward'),\n]\nembeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)",
"Load and initialize the sequence tagger:",
"from flair.models import SequenceTagger\n\ntagger: SequenceTagger = SequenceTagger(hidden_size=256,\n embeddings=embeddings,\n tag_dictionary=tag_dictionary,\n tag_type=tag_type,\n use_crf=True)",
"Load and initialize the trainer:",
"from flair.trainers import ModelTrainer\n\ntrainer: ModelTrainer = ModelTrainer(tagger, corpus)",
"If you have a GPU (otherwise maybe tweak the batch size, etc.), run the training with 150 epochs:",
"trainer.train('resources/taggers/example-ner',\n learning_rate=0.1,\n mini_batch_size=32,\n max_epochs=150)",
"Plot the training curves and results:",
"from flair.visual.training_curves import Plotter\nplotter = Plotter()\nplotter.plot_training_curves('resources/taggers/example-ner/loss.tsv')\nplotter.plot_weights('resources/taggers/example-ner/weights.txt')",
"Use the model via the predict method:",
"from flair.data import Sentence\nmodel = SequenceTagger.load('resources/taggers/example-ner/final-model.pt')\nsentence = Sentence('John lives in the Empire State Building .')\nmodel.predict(sentence)\nprint(sentence.to_tagged_string())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ireapps/cfj-2017 | completed/12. Web scraping (Part 2).ipynb | mit | [
"Let's scrape a practice table\nThe latest Mountain Goats album is called Goths. (It's good!) I made a simple HTML table with the track listing -- let's scrape it into a CSV.\nImport the modules we'll need",
"from bs4 import BeautifulSoup\nimport csv",
"Read in the file, see what we're working with\nWe'll use the read() method to get the contents of the file.",
"# in a with block, open the HTML file\nwith open('mountain-goats.html', 'r') as html_file:\n \n # .read() in the contents of a file -- it'll be a string\n html_code = html_file.read()\n\n # print the string to see what's there\n print(html_code)",
"Parse the table with BeautifulSoup\nRight now, Python isn't interpreting our table as data -- it's just a string. We need to use BeautifulSoup to parse that string into data objects that Python can understand. Once the string is parsed, we'll be working with a \"tree\" of data that we can navigate.",
"with open('mountain-goats.html', 'r') as html_file:\n html_code = html_file.read()\n \n # use the type() function to see what kind of object `html_code` is\n print(type(html_code))\n \n # feed the file's contents (the string of HTML) to BeautifulSoup\n # will complain if you don't specify the parser\n soup = BeautifulSoup(html_code, 'html.parser')\n\n # use the type() function to see what kind of object `soup` is\n print(type(soup))",
"Decide how to target the table\nBeautifulSoup has several methods for targeting elements -- by position on the page, by attribute, etc. Right now we just want to find the correct table.",
"with open('mountain-goats.html', 'r') as html_file:\n html_code = html_file.read()\n soup = BeautifulSoup(html_code, 'html.parser')\n \n # by position on the page\n # find_all returns a list of matching elements, and we want the second ([1]) one\n # song_table = soup.find_all('table')[1]\n \n # by class name\n # => with `find`, you can pass a dictionary of element attributes to match on\n # song_table = soup.find('table', {'class': 'song-table'})\n \n # by ID\n # song_table = soup.find('table', {'id': 'my-cool-table'})\n \n # by style\n song_table = soup.find('table', {'style': 'width: 95%;'})\n \n print(song_table)",
"Looping over the table rows\nLet's print a list of track numbers and song titles. Look at the structure of the table -- a table has rows represented by the tag tr, and within each row there are cells represented by td tags. The find_all() method returns a list. And we know how to iterate over lists: with a for loop. Let's do that.",
"with open('mountain-goats.html', 'r') as html_file:\n html_code = html_file.read()\n soup = BeautifulSoup(html_code, 'html.parser')\n song_table = soup.find('table', {'style': 'width: 95%;'})\n \n # find the rows in the table\n # slice to skip the header row\n song_rows = song_table.find_all('tr')[1:]\n \n # loop over the rows\n for row in song_rows:\n\n # get the table cells in the row\n song = row.find_all('td')\n \n # assign them to variables\n track, title, duration, artist, album = song\n \n # use the .string attribute to get the text in the cell\n print(track.string, title.string)",
"Write data to file\nLet's put it all together and open a file to write the data to.",
"with open('mountain-goats.html', 'r') as html_file, open('mountain-goats.csv', 'w') as outfile:\n html_code = html_file.read()\n soup = BeautifulSoup(html_code, 'html.parser')\n song_table = soup.find('table', {'style': 'width: 95%;'})\n \n song_rows = song_table.find_all('tr')[1:]\n \n # set up a writer object\n writer = csv.DictWriter(outfile, fieldnames=['track', 'title', 'duration', 'artist', 'album'])\n \n writer.writeheader()\n \n for row in song_rows:\n\n # get the table cells in the row\n song = row.find_all('td')\n \n # assign them to variables\n track, title, duration, artist, album = song\n \n # write out the dictionary to file\n writer.writerow({\n 'track': track.string,\n 'title': title.string,\n 'duration': duration.string,\n 'artist': artist.string,\n 'album': album.string\n })"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ZuckermanLab/NMpathAnalysis | test/Clustering_test.ipynb | gpl-3.0 | [
"Testing clustering based on the MFPT",
"import sys\nsys.path.append(\"../\")\nsys.path.append(\"../nmpath/\")\nfrom test.tools_for_notebook import *\n%matplotlib inline\nfrom nmpath.auxfunctions import *\nfrom nmpath.mfpt import *\nfrom nmpath.mappers import rectilinear_mapper\nfrom nmpath.clustering import *\n#from nmpath.mappers import voronoi_mapper",
"Toy model with two basins",
"plot_traj([],[],figsize=(6,5))",
"Generating MC trajectory\nContinuos Ensemble",
"mc_traj = mc_simulation2D(500000)\nmy_ensemble = Ensemble([mc_traj])",
"Discrete Ensemble and Transition Matrix\nThe mapping funcion divides each dimension in 12. The total number of bins is 144.",
"discrete_ens = DiscreteEnsemble.from_ensemble(my_ensemble, mapping_function2D)\n\n# Transition Matrix\nK = discrete_ens._mle_transition_matrix(N*N,prior_counts=1e-6)",
"Agglomerative Clustering\nThe points with the same color belong to the same cluster, only the clusters with size > 1 are shown.",
"t_min_list=[]\nt_max_list=[]\nt_AB_list=[]\nn_clusters = [135, 130, 125, 120, 115, 110, 105, 100, 95, 90, 85, 80, 75, 70]\n\nfor n in n_clusters:\n big_clusters=[]\n big_clusters_index =[]\n clusters, t_min, t_max, clustered_tmatrix = kinetic_clustering_from_tmatrix(K, n, verbose=False)\n t_min_list.append(t_min)\n t_max_list.append(t_max)\n \n for i, cluster in enumerate(clusters):\n if len(cluster) > 1:\n big_clusters.append(cluster)\n big_clusters_index.append(i)\n \n n_big = len(big_clusters)\n \n if n_big > 1:\n tAB = markov_commute_time(clustered_tmatrix,[big_clusters_index[0]],[big_clusters_index[1]] )\n else:\n tAB = 0.0\n t_AB_list.append(tAB)\n \n discrete = [True for i in range(n_big)]\n \n print(\"{} Clusters, t_cut: {:.2f}tau, t_max: {:.2e}tau, tAB: {:.2f}tau\".format(n, t_min, t_max, tAB))\n plot_traj([ [big_clusters[i],[]] for i in range(n_big) ], \n discrete, std = 0.00002, alpha=0.3, justpoints=True, figsize=(3,3))\n\nplt.plot(n_clusters, t_min_list, label=\"t_cut\")\nplt.plot(n_clusters, t_AB_list, label=\"t_AB\")\nplt.xlabel(\"Number of Clusters\")\nplt.ylabel(\"t (tau)\")\n#plt.text(110, 4000,\"Clustering\", fontsize=14)\nplt.axis([70,135,0,9000])\n#plt.arrow(125, 3600, -30, 0,shape='left', lw=2, length_includes_head=True)\nplt.title(\"Commute times vs Number of Clusters\")\nplt.legend()\nplt.show()\n\nm_ratio = [t_AB_list[i]/t_min_list[i] for i in range(len(t_min_list))]\n\nplt.plot(n_clusters, m_ratio, label=\"t_AB / t_cut\", color=\"red\")\nplt.xlabel(\"Number of Clusters\")\nplt.ylabel(\"t_AB / t_cut\")\nplt.axis([70,135,0,65])\nplt.legend()\nplt.show()\n\nm_ratio2 = [t_max_list[i]/t_min_list[i] for i in range(len(t_min_list))]\n\nplt.plot(n_clusters, m_ratio2, label=\"t_max / t_cut\", color=\"green\")\nplt.xlabel(\"Number of Clusters\")\nplt.ylabel(\"t_max / t_cut\")\n#plt.axis([70,135,0,1000])\nplt.legend()\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
physion/ovation-python | examples/qc-activity-example.ipynb | gpl-3.0 | [
"Quality Check API Example",
"import urllib\nimport ovation.lab.workflows as workflows\nimport ovation.session as session",
"Create a session. Note the api endpoint, lab-services.ovation.io for Ovation Service Lab.",
"s = session.connect(input('Email: '), api='https://lab-services.ovation.io')",
"Create a Quality Check (QC) activity\nA QC activity determines the status of results for each Sample in a Workflow. Normally, QC activities are handled in the web application, but you can submit a new activity with the necessary information to complete the QC programaticallly.\nFirst, we'll need a workflow and the label of the QC activity WorkflowActivity:",
"workflow_id = input('Workflow ID: ')\n\nqc_activity_label = input('QC activity label: ')",
"Next, we'll get the WorkflowSampleResults for the batch. Each WorkflowSampleResult contains the parsed data for a single Sample within the batch. Each WorkflowSampleResult has a result_type that distinguishes each kind of data.",
"result_type = input('Result type: ')\n\nworkflow_sample_results = s.get(s.path('workflow_sample_results'), params={'workflow_id': workflow_id, \n 'result_type': result_type})\nworkflow_sample_results",
"Within each WorkflowSampleResult you should see a result object containing records for each assay. In most cases, the results parser created a record for each line in an uploaded tabular (csv or tab-delimited) file. When that record has an entry identifiying the sample and an entry identifying the assay, the parser places that record into the WorkflowSampleResult for the corresponding Workflow Sample, result type, and assay. If more than one record matches this Sample > Result type > Assay, it will be appended to the records for that sample, result type, and assay.\nA QC activity updates the status of assays and entire Workflow Sample Results. Each assay may recieve a status (\"accepted\", \"rejected\", or \"repeat\") indicating the QC outcome of that assay for a particular sample. In addition, the WorkflowSampleResult has a global status indicating the overall QC outcome for that sample and result type. Individual assay statuses may be used on repeat to determine which assays need to be repeated. The global status determines how the sample is routed following QC. In fact, there can be multiple routing options for each status (e.g. an \"Accept and process for workflow A\" and \"Accept and process for workflow B\" options). Ovation internally uses a routing value to indicate (uniquely) which routing option to chose from the configuration. In many cases routing is the same as status (but not always).\nWorkflowSampleResult and assay statuses are set (overriding any existing status) by creating a QC activity, passing the updated status for each workflow sample result and contained assay(s).\nIn this example, we'll randomly choose statuses for each of the workflow samples above:",
"import random\nWSR_STATUS = [\"accepted\", \"rejected\", \"repeat\"]\nASSAY_STATUS = [\"accepted\", \"rejected\"]\n\nqc_results = []\nfor wsr in workflow_sample_results:\n assay_results = {}\n for assay_name, assay in wsr.result.items():\n assay_results[assay_name] = {\"status\": random.choice(ASSAY_STATUS)}\n \n wsr_status = random.choice(WSR_STATUS)\n \n result = {'id': wsr.id,\n 'result_type': wsr.result_type,\n 'status': wsr_status,\n 'routing': wsr_status,\n 'result': assay_results}\n \n qc_results.append(result)\n",
"The activity data we POST will look like this:\n{\"workflow_sample_results\": [{\"id\": WORKFLOW_SAMPLE_RESULT_ID,\n \"result_type\": RESULT_TYPE,\n \"status\":\"accepted\"|\"rejected\"|\"repeat\",\n \"routing\":\"accepted\",\n \"result\":{ASSAY:{\"status\":\"accepted\"|\"rejected\"}}},\n ...]}}",
"qc = workflows.create_activity(s, workflow_id, qc_activity_label,\n activity={'workflow_sample_results': qc_results,\n 'custom_attributes': {} # Always an empty dictionary for QC activities\n })"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/ko/tutorials/estimator/keras_model_to_estimator.ipynb | apache-2.0 | [
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"케라스 모델로 추정기 생성하기\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/estimator/keras_model_to_estimator\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">구글 코랩(Colab)에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">깃허브(GitHub) 소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\n\n경고: 추정기는 새 코드에 권장되지 않습니다. Estimator는 v1.Session 스타일 코드를 실행합니다. 이 코드는 올바르게 작성하기가 더 어렵고 특히 TF 2 코드와 결합될 때 예기치 않게 작동할 수 있습니다. 에스티메이터는 호환성 보장 이 적용되지만 보안 취약점 외에는 수정 사항이 제공되지 않습니다. 자세한 내용은 마이그레이션 가이드 를 참조하세요.\n\n개요\nTensorFlow Estimator는 TensorFlow에서 지원되며 신규 및 기존 tf.keras 모델에서 생성할 수 있습니다. 이 자습서에는 해당 프로세스의 완전하고 최소한의 예가 포함되어 있습니다.\n주의: 케라스 모델을 사용한다면, 추정량을 변환하지 않고 tf.distribute strategies과 함께 직접 사용할 수 있습니다. 따라서, model_to_estimators는 더 이상 권장되지 않습니다.\n설정",
"import tensorflow as tf\n\nimport numpy as np\nimport tensorflow_datasets as tfds",
"간단한 케라스 모델 만들기\n케라스에서는 여러 겹의 층을 쌓아 모델을 만들 수 있습니다. 일반적으로 모델은 층의 그래프로 구성됩니다. 이 중 가장 흔한 형태는 적층형 구조를 갖고 있는 tf.keras.Sequential 모델입니다.\n간단한 완전히 연결 네트워크(다층 퍼셉트론)를 만들어봅시다:",
"model = tf.keras.models.Sequential([\n tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(3)\n])",
"모델을 컴파일한 후, 모델 구조를 요약해 출력할 수 있습니다.",
"model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer='adam')\nmodel.summary()",
"입력 함수 만들기\n데이터셋 API를 사용해 대규모 데이터셋을 다루거나 여러 장치에서 훈련할 수 있습니다.\n텐서플로 추정기는 입력 파이프라인(input pipeline)이 언제 어떻게 생성되었는지 제어해야 합니다. 이를 위해서는 \"입력 함수\", 즉 input_fn이 필요합니다. 추정기는 이 함수를 별도의 매개변수 설정 없이 호출하게 됩니다. 이때 input_fn은 tf.data.Dataset 객체를 반환해야 합니다.",
"def input_fn():\n split = tfds.Split.TRAIN\n dataset = tfds.load('iris', split=split, as_supervised=True)\n dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels))\n dataset = dataset.batch(32).repeat()\n return dataset",
"input_fn이 잘 구현되었는지 확인해봅니다.",
"for features_batch, labels_batch in input_fn().take(1):\n print(features_batch)\n print(labels_batch)",
"tf.keras.model을 추정기로 변환하기\ntf.keras.model은 tf.keras.estimator.model_to_estimator 함수를 이용해 tf.estimator.Estimator 객체로 변환함으로써 tf.estimator API를 통해 훈련할 수 있습니다.",
"import tempfile\nmodel_dir = tempfile.mkdtemp()\nkeras_estimator = tf.keras.estimator.model_to_estimator(\n keras_model=model, model_dir=model_dir)",
"추정기를 훈련한 후 평가합니다.",
"keras_estimator.train(input_fn=input_fn, steps=500)\neval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)\nprint('Eval result: {}'.format(eval_result))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/analytics-componentized-patterns | retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb | apache-2.0 | [
"Low-latency item-to-item recommendation system - Orchestrating with TFX\nOverview\nThis notebook is a part of the series that describes the process of implementing a Low-latency item-to-item recommendation system.\nThis notebook demonstrates how to use TFX and AI Platform Pipelines (Unified) to operationalize the workflow that creates embeddings and builds and deploys an ANN Service index. \nIn the notebook you go through the following steps.\n\nCreating TFX custom components that encapsulate operations on BQ, BQML and ANN Service.\nCreating a TFX pipeline that automates the processes of creating embeddings and deploying an ANN Index \nTesting the pipeline locally using Beam runner.\nCompiling the pipeline to the TFX IR format for execution on AI Platform Pipelines (Unified).\nSubmitting pipeline runs.\n\nThis notebook was designed to run on AI Platform Notebooks. Before running the notebook make sure that you have completed the setup steps as described in the README file.\nTFX Pipeline Design\nThe below diagram depicts the TFX pipeline that you will implement in this notebook. Each step of the pipeline is implemented as a TFX Custom Python function component. The components track the relevant metadata in AI Platform (Unfied) ML Metadata using both standard and custom metadata types. \n\n\nThe first step of the pipeline is to compute item co-occurence. This is done by calling the sp_ComputePMI stored procedure created in the preceeding notebooks. \nNext, the BQML Matrix Factorization model is created. The model training code is encapsulated in the sp_TrainItemMatchingModel stored procedure.\nItem embeddings are extracted from the trained model weights and stored in a BQ table. The component calls the sp_ExtractEmbeddings stored procedure that implements the extraction logic.\nThe embeddings are exported in the JSONL format to the GCS location using the BigQuery extract job.\nThe embeddings in the JSONL format are used to create an ANN index by calling the ANN Service Control Plane REST API.\nFinally, the ANN index is deployed to an ANN endpoint.\n\nAll steps and their inputs and outputs are tracked in the AI Platform (Unified) ML Metadata service.",
"%load_ext autoreload\n%autoreload 2",
"Setting up the notebook's environment\nInstall AI Platform Pipelines client library\nFor AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.",
"AIP_CLIENT_WHEEL = 'aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl'\nAIP_CLIENT_WHEEL_GCS_LOCATION = f'gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}'\n\n!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}\n\n%pip install {AIP_CLIENT_WHEEL}",
"Restart the kernel.",
"import IPython\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True)",
"Import notebook dependencies",
"import logging\nimport tfx\nimport tensorflow as tf\n\nfrom aiplatform.pipelines import client\nfrom tfx.orchestration.beam.beam_dag_runner import BeamDagRunner\n\nprint('TFX Version: ', tfx.__version__)",
"Configure GCP environment\n\nIf you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section, by running\nsh\ngcloud auth login\nin the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance.\nSet the following constants to the values reflecting your environment:\n\nPROJECT_ID - your GCP project ID\nPROJECT_NUMBER - your GCP project number\nBUCKET_NAME - a name of the GCS bucket that will be used to host artifacts created by the pipeline\nPIPELINE_NAME_SUFFIX - a suffix appended to the standard pipeline name. You can change to differentiate between pipelines from different users in a classroom environment\nAPI_KEY - a GCP API key\nVPC_NAME - a name of the GCP VPC to use for the index deployments. \nREGION - a compute region. Don't change the default - us-central - while the ANN Service is in the experimental stage",
"PROJECT_ID = 'jk-mlops-dev' # <---CHANGE THIS\nPROJECT_NUMBER = '895222332033' # <---CHANGE THIS\nAPI_KEY = 'AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8' # <---CHANGE THIS\nUSER = 'user' # <---CHANGE THIS\nBUCKET_NAME = 'jk-ann-staging' # <---CHANGE THIS\nVPC_NAME = 'default' # <---CHANGE THIS IF USING A DIFFERENT VPC\n\nREGION = 'us-central1'\nPIPELINE_NAME = \"ann-pipeline-{}\".format(USER)\nPIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, PIPELINE_NAME)\nPATH=%env PATH\n%env PATH={PATH}:/home/jupyter/.local/bin\n \nprint('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))",
"Defining custom components\nIn this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components. \nEach component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classess used across the custom components. \nRemove files created in the previous executions of the notebook",
"component_folder = 'bq_components'\n\nif tf.io.gfile.exists(component_folder):\n print('Removing older file')\n tf.io.gfile.rmtree(component_folder)\nprint('Creating component folder')\ntf.io.gfile.mkdir(component_folder)\n\n%cd {component_folder}",
"Define custom types for ANN service artifacts\nThis module defines a couple of custom TFX artifacts to track ANN Service indexes and index deployments.",
"%%writefile ann_types.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Custom types for managing ANN artifacts.\"\"\"\n\nfrom tfx.types import artifact\n\nclass ANNIndex(artifact.Artifact):\n TYPE_NAME = 'ANNIndex'\n \nclass DeployedANNIndex(artifact.Artifact):\n TYPE_NAME = 'DeployedANNIndex'\n",
"Create a wrapper around ANN Service REST API\nThis module provides a convenience wrapper around ANN Service REST API. In the experimental stage, the ANN Service does not have an \"official\" Python client SDK nor it is supported by the Google Discovery API.",
"%%writefile ann_service.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Helper classes encapsulating ANN Service REST API.\"\"\"\n\nimport datetime\nimport logging\nimport json\nimport time\n\nimport google.auth\n\nclass ANNClient(object):\n \"\"\"Base ANN Service client.\"\"\"\n \n def __init__(self, project_id, project_number, region):\n credentials, _ = google.auth.default()\n self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)\n self.ann_endpoint = f'{region}-aiplatform.googleapis.com'\n self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'\n self.project_id = project_id\n self.project_number = project_number\n self.region = region\n \n def wait_for_completion(self, operation_id, message, sleep_time):\n \"\"\"Waits for a completion of a long running operation.\"\"\"\n \n api_url = f'{self.ann_parent}/operations/{operation_id}'\n\n start_time = datetime.datetime.utcnow()\n while True:\n response = self.authed_session.get(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.json())\n if 'done' in response.json().keys():\n logging.info('Operation completed!')\n break\n elapsed_time = datetime.datetime.utcnow() - start_time\n logging.info('{}. 
Elapsed time since start: {}.'.format(\n message, str(elapsed_time)))\n time.sleep(sleep_time)\n \n return response.json()['response']\n\n\nclass IndexClient(ANNClient):\n \"\"\"Encapsulates a subset of control plane APIs \n that manage ANN indexes.\"\"\"\n\n def __init__(self, project_id, project_number, region):\n super().__init__(project_id, project_number, region)\n\n def create_index(self, display_name, description, metadata):\n \"\"\"Creates an ANN Index.\"\"\"\n \n api_url = f'{self.ann_parent}/indexes'\n \n request_body = {\n 'display_name': display_name,\n 'description': description,\n 'metadata': metadata\n }\n \n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n\n def list_indexes(self, display_name=None):\n \"\"\"Lists all indexes with a given display name or\n all indexes if the display_name is not provided.\"\"\"\n \n if display_name:\n api_url = f'{self.ann_parent}/indexes?filter=display_name=\"{display_name}\"'\n else:\n api_url = f'{self.ann_parent}/indexes'\n\n response = self.authed_session.get(api_url).json()\n\n return response['indexes'] if response else []\n \n def delete_index(self, index_id):\n \"\"\"Deletes an ANN index.\"\"\"\n \n api_url = f'{self.ann_parent}/indexes/{index_id}'\n response = self.authed_session.delete(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n\n\nclass IndexDeploymentClient(ANNClient):\n \"\"\"Encapsulates a subset of control plane APIs \n that manage ANN endpoints and deployments.\"\"\"\n \n def __init__(self, project_id, project_number, region):\n super().__init__(project_id, project_number, region)\n\n def create_endpoint(self, display_name, vpc_name):\n \"\"\"Creates an ANN endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints'\n network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'\n\n request_body = {\n 'display_name': display_name,\n 'network': network_name\n }\n\n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n \n def list_endpoints(self, display_name=None):\n \"\"\"Lists all ANN endpoints with a given display name or\n all endpoints in the project if the display_name is not provided.\"\"\"\n \n if display_name:\n api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name=\"{display_name}\"'\n else:\n api_url = f'{self.ann_parent}/indexEndpoints'\n\n response = self.authed_session.get(api_url).json()\n \n return response['indexEndpoints'] if response else []\n \n def delete_endpoint(self, endpoint_id):\n \"\"\"Deletes an ANN endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'\n \n response = self.authed_session.delete(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n return response.json()\n \n def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):\n \"\"\"Deploys an ANN index to an endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'\n index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'\n\n request_body = {\n 'deployed_index': {\n 'id': deployment_id,\n 'index': index_name,\n 'display_name': display_name\n }\n }\n\n response = 
self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n \n def get_deployment_grpc_ip(self, endpoint_id, deployment_id):\n \"\"\"Returns a private IP address for a gRPC interface to \n an Index deployment.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'\n\n response = self.authed_session.get(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n endpoint_ip = None\n if 'deployedIndexes' in response.json().keys():\n for deployment in response.json()['deployedIndexes']:\n if deployment['id'] == deployment_id:\n endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']\n \n return endpoint_ip\n\n \n def delete_deployment(self, endpoint_id, deployment_id):\n \"\"\"Undeployes an index from an endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'\n \n request_body = {\n 'deployed_index_id': deployment_id\n }\n \n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n return response\n ",
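Once the module above has been written, the wrappers can be exercised interactively. A hedged usage sketch, assuming the GCP constants defined earlier and that the project has access to the experimental ANN Service (the calls below are read-only):

```python
from ann_service import IndexClient, IndexDeploymentClient

index_client = IndexClient(PROJECT_ID, PROJECT_NUMBER, REGION)
deployment_client = IndexDeploymentClient(PROJECT_ID, PROJECT_NUMBER, REGION)

# List any existing ANN indexes and index endpoints in the project/region
print(index_client.list_indexes())
print(deployment_client.list_endpoints())
```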
"Create Compute PMI component\nThis component encapsulates a call to the BigQuery stored procedure that calculates item cooccurence. Refer to the preceeding notebooks for more details about item coocurrent calculations.\nThe component tracks the output item_cooc table created by the stored procedure using the TFX (simple) Dataset artifact.",
"%%writefile compute_pmi.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"BigQuery compute PMI component.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset\n\n\n@component\ndef compute_pmi(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n min_item_frequency: Parameter[int],\n max_group_size: Parameter[int],\n item_cooc: OutputArtifact[BQDataset]):\n \n stored_proc = f'{bq_dataset}.sp_ComputePMI'\n query = f'''\n DECLARE min_item_frequency INT64;\n DECLARE max_group_size INT64;\n\n SET min_item_frequency = {min_item_frequency};\n SET max_group_size = {max_group_size};\n\n CALL {stored_proc}(min_item_frequency, max_group_size);\n '''\n result_table = 'item_cooc'\n\n logging.info(f'Starting computing PMI...')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result() # Wait for the job to complete\n \n logging.info(f'Items PMI computation completed. Output in {bq_dataset}.{result_table}.')\n \n # Write the location of the output table to metadata. \n item_cooc.set_string_custom_property('table_name',\n f'{project_id}:{bq_dataset}.{result_table}')\n",
"Create Train Item Matching Model component\nThis component encapsulates a call to the BigQuery stored procedure that trains the BQML Matrix Factorization model. Refer to the preceeding notebooks for more details about model training.\nThe component tracks the output item_matching_model BQML model created by the stored procedure using the TFX (simple) Model artifact.",
"%%writefile train_item_matching.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"BigQuery compute PMI component.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset\nfrom tfx.types.standard_artifacts import Model as BQModel\n\n\n@component\ndef train_item_matching_model(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n dimensions: Parameter[int],\n item_cooc: InputArtifact[BQDataset],\n bq_model: OutputArtifact[BQModel]):\n \n item_cooc_table = item_cooc.get_string_custom_property('table_name')\n stored_proc = f'{bq_dataset}.sp_TrainItemMatchingModel'\n query = f'''\n DECLARE dimensions INT64 DEFAULT {dimensions};\n CALL {stored_proc}(dimensions);\n '''\n model_name = 'item_matching_model'\n \n logging.info(f'Using item co-occurrence table: item_cooc_table')\n logging.info(f'Starting training of the model...')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result()\n \n logging.info(f'Model training completed. Output in {bq_dataset}.{model_name}.')\n \n # Write the location of the model to metadata. \n bq_model.set_string_custom_property('model_name',\n f'{project_id}:{bq_dataset}.{model_name}')\n \n ",
"Create Extract Embeddings component\nThis component encapsulates a call to the BigQuery stored procedure that extracts embdeddings from the model to the staging table. Refer to the preceeding notebooks for more details about embeddings extraction.\nThe component tracks the output item_embeddings table created by the stored procedure using the TFX (simple) Dataset artifact.",
"%%writefile extract_embeddings.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Extracts embeddings to a BQ table.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset \nfrom tfx.types.standard_artifacts import Model as BQModel\n\n\n@component\ndef extract_embeddings(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n bq_model: InputArtifact[BQModel],\n item_embeddings: OutputArtifact[BQDataset]):\n \n embedding_model_name = bq_model.get_string_custom_property('model_name')\n stored_proc = f'{bq_dataset}.sp_ExractEmbeddings'\n query = f'''\n CALL {stored_proc}();\n '''\n embeddings_table = 'item_embeddings'\n\n logging.info(f'Extracting item embeddings from: {embedding_model_name}')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result() # Wait for the job to complete\n \n logging.info(f'Embeddings extraction completed. Output in {bq_dataset}.{embeddings_table}')\n \n # Write the location of the output table to metadata.\n item_embeddings.set_string_custom_property('table_name', \n f'{project_id}:{bq_dataset}.{embeddings_table}')\n \n\n ",
"Create Export Embeddings component\nThis component encapsulates a BigQuery table extraction job that extracts the item_embeddings table to a GCS location as files in the JSONL format. The format of the extracted files is compatible with the ingestion schema for the ANN Service.\nThe component tracks the output files location in the TFX (simple) Dataset artifact.",
"%%writefile export_embeddings.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Exports embeddings from a BQ table to a GCS location.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nBQDataset = Dataset\n\n@component\ndef export_embeddings(\n project_id: Parameter[str],\n gcs_location: Parameter[str],\n item_embeddings_bq: InputArtifact[BQDataset],\n item_embeddings_gcs: OutputArtifact[Dataset]):\n \n filename_pattern = 'embedding-*.json'\n gcs_location = gcs_location.rstrip('/')\n destination_uri = f'{gcs_location}/{filename_pattern}'\n \n _, table_name = item_embeddings_bq.get_string_custom_property('table_name').split(':')\n \n logging.info(f'Exporting item embeddings from: {table_name}')\n \n bq_dataset, table_id = table_name.split('.')\n client = bigquery.Client(project=project_id)\n dataset_ref = bigquery.DatasetReference(project_id, bq_dataset)\n table_ref = dataset_ref.table(table_id)\n job_config = bigquery.job.ExtractJobConfig()\n job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON\n\n extract_job = client.extract_table(\n table_ref,\n destination_uris=destination_uri,\n job_config=job_config\n ) \n extract_job.result() # Wait for resuls\n \n logging.info(f'Embeddings export completed. Output in {gcs_location}')\n \n # Write the location of the embeddings to metadata.\n item_embeddings_gcs.uri = gcs_location\n\n ",
"Create ANN index component\nThis component encapsulats the calls to the ANN Service to create an ANN Index. \nThe component tracks the created index int the TFX custom ANNIndex artifact.",
"%%writefile create_index.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Creates an ANN index.\"\"\"\n\nimport logging\n\nimport google.auth\nimport numpy as np\nimport tfx\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nfrom ann_service import IndexClient\nfrom ann_types import ANNIndex\n\nNUM_NEIGHBOURS = 10\nMAX_LEAVES_TO_SEARCH = 200\nMETRIC = 'DOT_PRODUCT_DISTANCE'\nFEATURE_NORM_TYPE = 'UNIT_L2_NORM'\nCHILD_NODE_COUNT = 1000\nAPPROXIMATE_NEIGHBORS_COUNT = 50\n\n@component\ndef create_index(\n project_id: Parameter[str],\n project_number: Parameter[str],\n region: Parameter[str],\n display_name: Parameter[str],\n dimensions: Parameter[int],\n item_embeddings: InputArtifact[Dataset],\n ann_index: OutputArtifact[ANNIndex]):\n \n index_client = IndexClient(project_id, project_number, region)\n \n logging.info('Creating index:')\n logging.info(f' Index display name: {display_name}')\n logging.info(f' Embeddings location: {item_embeddings.uri}')\n \n index_description = display_name\n index_metadata = {\n 'contents_delta_uri': item_embeddings.uri,\n 'config': {\n 'dimensions': dimensions,\n 'approximate_neighbors_count': APPROXIMATE_NEIGHBORS_COUNT,\n 'distance_measure_type': METRIC,\n 'feature_norm_type': FEATURE_NORM_TYPE,\n 'tree_ah_config': {\n 'child_node_count': CHILD_NODE_COUNT,\n 'max_leaves_to_search': MAX_LEAVES_TO_SEARCH\n }\n }\n }\n \n operation_id = index_client.create_index(display_name, \n index_description,\n index_metadata)\n response = index_client.wait_for_completion(operation_id, 'Waiting for ANN index', 45)\n index_name = response['name']\n \n logging.info('Index {} created.'.format(index_name))\n \n # Write the index name to metadata.\n ann_index.set_string_custom_property('index_name', \n index_name)\n ann_index.set_string_custom_property('index_display_name', \n display_name)\n",
"Deploy ANN index component\nThis component deploys an ANN index to an ANN Endpoint. \nThe componet tracks the deployed index in the TFX custom DeployedANNIndex artifact.",
"%%writefile deploy_index.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Deploys an ANN index.\"\"\"\n\nimport logging\n\nimport numpy as np\nimport uuid\nimport tfx\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nfrom ann_service import IndexDeploymentClient\nfrom ann_types import ANNIndex\nfrom ann_types import DeployedANNIndex\n\n\n@component\ndef deploy_index(\n project_id: Parameter[str],\n project_number: Parameter[str],\n region: Parameter[str],\n vpc_name: Parameter[str],\n deployed_index_id_prefix: Parameter[str],\n ann_index: InputArtifact[ANNIndex],\n deployed_ann_index: OutputArtifact[DeployedANNIndex]\n ):\n \n deployment_client = IndexDeploymentClient(project_id, \n project_number,\n region)\n \n index_name = ann_index.get_string_custom_property('index_name')\n index_display_name = ann_index.get_string_custom_property('index_display_name')\n endpoint_display_name = f'Endpoint for {index_display_name}'\n \n logging.info(f'Creating endpoint: {endpoint_display_name}')\n operation_id = deployment_client.create_endpoint(endpoint_display_name, vpc_name)\n response = deployment_client.wait_for_completion(operation_id, 'Waiting for endpoint', 30)\n endpoint_name = response['name']\n logging.info(f'Endpoint created: {endpoint_name}')\n \n endpoint_id = endpoint_name.split('/')[-1]\n index_id = index_name.split('/')[-1]\n deployed_index_display_name = f'Deployed {index_display_name}'\n deployed_index_id = deployed_index_id_prefix + str(uuid.uuid4())\n \n logging.info(f'Creating deployed index: {deployed_index_id}')\n logging.info(f' from: {index_name}')\n operation_id = deployment_client.create_deployment(\n deployed_index_display_name, \n deployed_index_id,\n endpoint_id,\n index_id)\n response = deployment_client.wait_for_completion(operation_id, 'Waiting for deployment', 60)\n logging.info('Index deployed!')\n \n deployed_index_ip = deployment_client.get_deployment_grpc_ip(\n endpoint_id, deployed_index_id\n )\n # Write the deployed index properties to metadata.\n deployed_ann_index.set_string_custom_property('endpoint_name', \n endpoint_name)\n deployed_ann_index.set_string_custom_property('deployed_index_id', \n deployed_index_id)\n deployed_ann_index.set_string_custom_property('index_name', \n index_name)\n deployed_ann_index.set_string_custom_property('deployed_index_grpc_ip', \n deployed_index_ip)\n",
"Creating a TFX pipeline\nThe pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index.\nThe pipeline has a simple sequential flow. The pipeline accepts a set of runtime parameters that define GCP environment settings and embeddings and index assembly parameters.",
"import os\n\n# Only required for local run.\nfrom tfx.orchestration.metadata import sqlite_metadata_connection_config\n\nfrom tfx.orchestration.pipeline import Pipeline\nfrom tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner\n\nfrom compute_pmi import compute_pmi\nfrom export_embeddings import export_embeddings\nfrom extract_embeddings import extract_embeddings\nfrom train_item_matching import train_item_matching_model\nfrom create_index import create_index\nfrom deploy_index import deploy_index\n\ndef ann_pipeline(\n pipeline_name,\n pipeline_root,\n metadata_connection_config,\n project_id,\n project_number,\n region,\n vpc_name,\n bq_dataset_name,\n min_item_frequency,\n max_group_size,\n dimensions,\n embeddings_gcs_location,\n index_display_name,\n deployed_index_id_prefix) -> Pipeline:\n \"\"\"Implements the SCANN training pipeline.\"\"\"\n \n pmi_computer = compute_pmi(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size\n )\n \n bqml_trainer = train_item_matching_model(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n item_cooc=pmi_computer.outputs.item_cooc,\n dimensions=dimensions,\n )\n \n embeddings_extractor = extract_embeddings(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n bq_model=bqml_trainer.outputs.bq_model\n )\n \n embeddings_exporter = export_embeddings(\n project_id=project_id,\n gcs_location=embeddings_gcs_location,\n item_embeddings_bq=embeddings_extractor.outputs.item_embeddings\n )\n \n index_constructor = create_index(\n project_id=project_id,\n project_number=project_number,\n region=region,\n display_name=index_display_name,\n dimensions=dimensions,\n item_embeddings=embeddings_exporter.outputs.item_embeddings_gcs\n )\n \n index_deployer = deploy_index(\n project_id=project_id,\n project_number=project_number,\n region=region,\n vpc_name=vpc_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n ann_index=index_constructor.outputs.ann_index\n )\n\n components = [\n pmi_computer,\n bqml_trainer,\n embeddings_extractor,\n embeddings_exporter,\n index_constructor,\n index_deployer\n ]\n \n return Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n # Only needed for local runs.\n metadata_connection_config=metadata_connection_config,\n components=components)",
"Testing the pipeline locally\nYou will first run the pipeline locally using the Beam runner.\nClean the metadata and artifacts from the previous runs",
"pipeline_root = f'/tmp/{PIPELINE_NAME}'\nlocal_mlmd_folder = '/tmp/mlmd'\n\nif tf.io.gfile.exists(pipeline_root):\n print(\"Removing previous artifacts...\")\n tf.io.gfile.rmtree(pipeline_root)\nif tf.io.gfile.exists(local_mlmd_folder):\n print(\"Removing local mlmd SQLite...\")\n tf.io.gfile.rmtree(local_mlmd_folder)\nprint(\"Creating mlmd directory: \", local_mlmd_folder)\ntf.io.gfile.mkdir(local_mlmd_folder)\nprint(\"Creating pipeline root folder: \", pipeline_root)\ntf.io.gfile.mkdir(pipeline_root)",
"Set pipeline parameters and create the pipeline",
"bq_dataset_name = 'song_embeddings'\nindex_display_name = 'Song embeddings'\ndeployed_index_id_prefix = 'deployed_song_embeddings_'\nmin_item_frequency = 15\nmax_group_size = 100\ndimensions = 50\nembeddings_gcs_location = f'gs://{BUCKET_NAME}/embeddings'\n\nmetadata_connection_config = sqlite_metadata_connection_config(\n os.path.join(local_mlmd_folder, 'metadata.sqlite'))\n\npipeline = ann_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=pipeline_root,\n metadata_connection_config=metadata_connection_config,\n project_id=PROJECT_ID,\n project_number=PROJECT_NUMBER,\n region=REGION,\n vpc_name=VPC_NAME,\n bq_dataset_name=bq_dataset_name,\n index_display_name=index_display_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size,\n dimensions=dimensions,\n embeddings_gcs_location=embeddings_gcs_location\n)",
"Start the run",
"logging.getLogger().setLevel(logging.INFO)\n\nBeamDagRunner().run(pipeline)",
"Inspect produced metadata\nDuring the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.",
"from ml_metadata import metadata_store\nfrom ml_metadata.proto import metadata_store_pb2\n\nconnection_config = metadata_store_pb2.ConnectionConfig()\nconnection_config.sqlite.filename_uri = os.path.join(local_mlmd_folder, 'metadata.sqlite')\nconnection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE\nstore = metadata_store.MetadataStore(connection_config)\nstore.get_artifacts()",
"NOTICE. The following code does not work with ANN Service Experimental. It will be finalized when the service moves to the Preview stage.\nRunning the pipeline on AI Platform Pipelines\nYou will now run the pipeline on AI Platform Pipelines (Unified)\nPackage custom components into a container\nThe modules containing custom components must be first package as a docker container image, which is a derivative of the standard TFX image.\nCreate a Dockerfile",
"%%writefile Dockerfile\nFROM gcr.io/tfx-oss-public/tfx:0.25.0\nWORKDIR /pipeline\nCOPY ./ ./\nENV PYTHONPATH=\"/pipeline:${PYTHONPATH}\"",
"Build and push the docker image to Container Registry",
"!gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom:{USER} .",
"Create AI Platform Pipelines client",
"from aiplatform.pipelines import client\n\naipp_client = client.Client(\n project_id=PROJECT_ID,\n region=REGION,\n api_key=API_KEY\n)",
"Set the the parameters for AIPP execution and create the pipeline",
"metadata_connection_config = None\npipeline_root = PIPELINE_ROOT\n\npipeline = ann_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=pipeline_root,\n metadata_connection_config=metadata_connection_config,\n project_id=PROJECT_ID,\n project_number=PROJECT_NUMBER,\n region=REGION,\n vpc_name=VPC_NAME,\n bq_dataset_name=bq_dataset_name,\n index_display_name=index_display_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size,\n dimensions=dimensions,\n embeddings_gcs_location=embeddings_gcs_location\n)",
"Compile the pipeline",
"config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(\n project_id=PROJECT_ID,\n display_name=PIPELINE_NAME,\n default_image='gcr.io/{}/caip-tfx-custom:{}'.format(PROJECT_ID, USER))\nrunner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(\n config=config,\n output_filename='pipeline.json')\nrunner.compile(\n pipeline,\n write_out=True)",
"Submit the pipeline run",
"aipp_client.create_run_from_job_spec('pipeline.json')",
"License\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \nSee the License for the specific language governing permissions and limitations under the License.\nThis is not an official Google product but sample code provided for an educational purpose"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sameersingh/ml-discussions | week4/multiclass_perceptron.ipynb | apache-2.0 | [
"Multiclass Perceptron\nIn the previous discussion we've gone over a perceptron with only 2 classes. In this notebook we'll show how it can work on multiple classes, following the slides from the lecture.",
"# Import all required libraries\nfrom __future__ import division # For python 2.*\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mltools as ml\n\nnp.random.seed(0)\n%matplotlib inline",
"Data Sampling",
"x_0 = np.random.normal(loc=[-4, 2], scale=0.5, size=(100, 2))\nx_1 = np.random.normal(loc=[-4, -3], scale=0.5, size=(100, 2))\nx_2 = np.random.normal(loc=[4, 1], scale=0.5, size=(100, 2))\nx_3 = np.random.normal(loc=[5, -2], scale=0.5, size=(100, 2))\n\nX = np.vstack([x_0, x_1, x_2, x_3])\nY = np.ones(X.shape[0], dtype=np.intc)\nY[:100] = 0\nY[100:200] = 1\nY[200:300] = 2\nY[300:] = 3\n\nml.plotClassify2D(None, X, Y)\n\nclasses = np.unique(Y)\nprint classes",
"Multiclass Preceptron Training Algorithm\n<img src = 'extras/multiclass.png'>\nOne of the main differences is that now there is a $\\theta_c$ for each class. So in the algorithm above $\\theta$ is basically size $#Classes \\times #Features$. \nTo find the class, instead of using the sign threshold on the response, we are looking for the class that maximizes the response.\nSo let's adapt the code from the previous discussion to do this.\nLet's add the const to the X and create the theta matrix.",
"# Like previous discussion\ndef add_const(X):\n return np.hstack([np.ones([X.shape[0], 1]), X])\n\nXconst = add_const(X)\ntheta = np.random.randn(classes.shape[0], Xconst.shape[1]) # Adding 1 for theta corresponding to bias term\n\nx_j, y_j = Xconst[5], Y[5]\n\n# The response is also the same, only we transpose the theta.\ndef resp(x, theta):\n return np.dot(x, theta.T)",
"For the predict we need to find the class that maximizes the response. We can do this with np.argmax().",
"def predict(x, theta):\n r = resp(x, theta)\n return np.argmax(np.atleast_2d(r), axis=1)\n\n# Error stays the same\ndef pred_err(X, Y, theta):\n \"\"\"Predicts that class for X and returns the error rate. \"\"\"\n Yhat = predict(X, theta)\n return np.mean(Yhat != Y)\n\npred_vals = predict(x_j, theta)\nprint 'Predicted class %d, True class is %d' % (pred_vals, y_j)",
"Learning Update",
"a = 0.1\ny_j_hat = predict(x_j, theta)\n\ntheta[y_j_hat] -= a * x_j\ntheta[y_j] += a * x_j",
"Train method\nUsing everything we coded so far, let's code the training method.",
"def train(X, Y, a=0.01, stop_tol=1e-8, max_iter=50):\n Xconst = add_const(X)\n m, n = Xconst.shape\n c = np.unique(Y).shape[0]\n \n # Initializing theta\n theta = np.random.rand(c, n)\n \n # The update loop\n J_err = [np.inf]\n for i in xrange(1, max_iter + 1):\n for j in range(m):\n x_j, y_j = Xconst[j], Y[j]\n y_j_hat = predict(x_j, theta)\n theta[y_j_hat] -= a * x_j\n theta[y_j] += a * x_j\n \n curr_err = pred_err(Xconst, Y, theta)\n J_err.append(curr_err)\n \n print 'Error %.3f at iteration %d' % (J_err[-1], i)\n \n return theta, J_err",
"Multiclass Pereptron Object\nLet us put this all in a class MultiPerceptron.",
"from mltools.base import classifier\nclass MultiClassPerceptron(classifier):\n def __init__(self, theta=None):\n self.theta = theta\n \n def add_const(self, X):\n return np.hstack([np.ones([X.shape[0], 1]), X])\n\n def resp(self, x):\n return np.dot(x, self.theta.T) \n \n def predict(self, X):\n \"\"\"Retruns class prediction for either single point or multiple points. \"\"\"\n Xconst = np.atleast_2d(X)\n \n # Making sure it has the const, if not adding it.\n if Xconst.shape[1] == self.theta.shape[1] - 1:\n Xconst = self.add_const(Xconst)\n \n r = self.resp(Xconst)\n return np.argmax(np.atleast_2d(r), axis=1)\n \n # Notice that we don't need the sign function (from Perceptron class) any longer\n# def sign(self, vals):\n# \"\"\"A sign version with breaking 0's as +1. \"\"\"\n# return np.sign(vals + 1e-200)\n \n def pred_err(self, X, Y):\n Yhat = self.predict(X)\n return np.mean(Yhat != Y)\n\n def train(self, X, Y, a=0.01, stop_tol=1e-8, max_iter=50):\n # Start by adding a const\n Xconst = self.add_const(X)\n\n m, n = Xconst.shape\n c = np.unique(Y).shape[0]\n self.classes = np.unique(Y)\n \n # Making sure Theta is inititialized.\n if self.theta is None:\n self.theta = np.random.randn(c, n)\n\n # The update loop\n J_err = [np.inf]\n for i in xrange(1, max_iter + 1):\n for j in np.random.permutation(m):\n x_j, y_j = Xconst[j], Y[j]\n y_j_hat = self.predict(x_j)\n\n self.theta[y_j_hat[0]] -= a * x_j\n self.theta[y_j] += a * x_j\n \n curr_err = self.pred_err(Xconst, Y)\n J_err.append(curr_err)\n\n return J_err ",
"Let's train and plot :)",
"model = MultiClassPerceptron()\nj_err = model.train(X, Y, a=.02, max_iter=50)\nml.plotClassify2D(model, X, Y)",
"Bonus question\nIn the plot below we have two classes. Let's assume that I want to have multiclass perceptron with 2 classes, what would theta have to be to separate them correctly?",
"x_0 = np.random.normal(loc=[-2, 2], scale=0.5, size=(100, 2))\nx_1 = np.random.normal(loc=[2, 2], scale=0.5, size=(100, 2))\n\nX = np.vstack([x_0, x_1])\nY = np.ones(X.shape[0], dtype=np.intc)\nY[:100] = 0\nY[100:200] = 1\n\nml.plotClassify2D(None, X, Y)\n\ntheta = ???? # Fill in the code and run\nmodel = MultiClassPerceptron(theta)\nml.plotClassify2D(model, X, Y)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/ko/io/tutorials/prometheus.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow IO Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Prometheus 서버에서 메트릭 로드하기\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/io/tutorials/prometheus\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/prometheus.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/prometheus.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/prometheus.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>\n\n주의: Python 패키지 외에 이 노트북에서는 sudo apt-get install을 사용하여 타자 패키지를 설치합니다.\n개요\n이 튜토리얼은 Prometheus 서버에서 tf.data.Dataset로 CoreDNS 메트릭을 로드한 다음 훈련과 추론에 tf.keras를 사용합니다.\nCoreDNS는 서비스 검색에 중점을 둔 DNS 서버이며 Kubernetes 클러스터의 일부로 널리 배포됩니다. 이 때문에 종종 연산을 통해 면밀하게 모니터링됩니다.\n이 튜토리얼은 머신러닝을 통해 연산을 자동화하려는 DevOps에서 사용할 수 있는 예입니다.\n설정 및 사용법\n필요한 tensorflow-io 패키지를 설치하고 런타임 다시 시작하기",
"import os\n\ntry:\n %tensorflow_version 2.x\nexcept Exception:\n pass\n\n!pip install tensorflow-io\n\nfrom datetime import datetime\n\nimport tensorflow as tf\nimport tensorflow_io as tfio",
"CoreDNS 및 Prometheus 설치 및 설정하기\n데모 목적으로, DNS 쿼리를 수신하기 위해 포트 9053이 열려 있고 스크래핑에 대한 메트릭을 노출하기 위해 포트 9153(기본값)이 열려 있는 CoreDNS 서버가 로컬에 있습니다. 다음은 CoreDNS에 대한 기본 Corefile 구성이며 다운로드할 수 있습니다.\n.:9053 { prometheus whoami }\n설치에 대한 자세한 내용은 CoreDNS 설명서에서 찾을 수 있습니다.",
"!curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz\n!tar -xzf coredns_1.6.7_linux_amd64.tgz\n\n!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile\n\n!cat Corefile\n\n# Run `./coredns` as a background process.\n# IPython doesn't recognize `&` in inline bash cells.\nget_ipython().system_raw('./coredns &')",
"다음 단계로 Prometheus 서버를 설정하고 Prometheus를 사용하여 위의 포트 9153에서 노출된 CoreDNS 메트릭을 스크래핑합니다. 구성을 위한 prometheus.yml 파일도 다운로드할 수 있습니다.",
"!curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz\n!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1\n\n!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml\n\n!cat prometheus.yml\n\n# Run `./prometheus` as a background process.\n# IPython doesn't recognize `&` in inline bash cells.\nget_ipython().system_raw('./prometheus &')",
"일부 활동을 표시하기 위해 dig 명령을 사용하여 설정된 CoreDNS 서버에 대해 몇 가지 DNS 쿼리를 생성할 수 있습니다.",
"!sudo apt-get install -y -qq dnsutils\n\n!dig @127.0.0.1 -p 9053 demo1.example.org\n\n!dig @127.0.0.1 -p 9053 demo2.example.org",
"이제 CoreDNS 서버의 메트릭을 Prometheus 서버에서 스크래핑하고 TensorFlow에서 사용할 준비가 됩니다.\nCoreDNS 메트릭에 대한 Dataset를 만들고 TensorFlow에서 사용하기\nPostgreSQL 서버에서 사용할 수 있고 tfio.experimental.IODataset.from_prometheus를 통해 수행할 수 있는 CoreDNS 메트릭의 Dataset를 만듭니다. 최소한 두 가지 인수가 필요합니다. query는 메트릭을 선택하기 위해 Prometheus 서버로 전달되고 length는 Dataset에 로드하려는 기간입니다.\n\"coredns_dns_request_count_total\" 및 \"5\"(초)로 시작하여 아래 Dataset를 만들 수 있습니다. 튜토리얼 앞부분에서 두 개의 DNS 쿼리가 보내졌기 때문에 \"coredns_dns_request_count_total\"에 대한 메트릭은 시계열 마지막에서 \"2.0\"이 될 것으로 예상됩니다.",
"dataset = tfio.experimental.IODataset.from_prometheus(\n \"coredns_dns_request_count_total\", 5, endpoint=\"http://localhost:9090\")\n\n\nprint(\"Dataset Spec:\\n{}\\n\".format(dataset.element_spec))\n\nprint(\"CoreDNS Time Series:\")\nfor (time, value) in dataset:\n # time is milli second, convert to data time:\n time = datetime.fromtimestamp(time // 1000)\n print(\"{}: {}\".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))",
"Dataset의 사양을 자세히 살펴보겠습니다.\n( TensorSpec(shape=(), dtype=tf.int64, name=None), { 'coredns': { 'localhost:9153': { 'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None) } } } )\n데이터세트는 (time, values) 튜플로 구성되는 것을 분명히 알 수 있으며, values 필드는 다음으로 확장된 Python dict입니다.\n\"job_name\": { \"instance_name\": { \"metric_name\": value, }, }\n위의 예에서 'coredns'는 작업 이름이고, 'localhost:9153'은 인스턴스 이름이며, 'coredns_dns_request_count_total'은 메트릭 이름입니다. 사용된 Prometheus 쿼리에 따라 여러 작업/인스턴스/메트릭이 반환될 수 있습니다. 이것은 또한 Python dict이 Dataset의 구조에 사용된 이유이기도 합니다.\n다른 쿼리 \"go_memstats_gc_sys_bytes\"를 예로 들어 보겠습니다. CoreDNS와 Prometheus가 모두 Golang으로 작성되었으므로 \"go_memstats_gc_sys_bytes\" 메트릭은 \"coredns\" 작업과 \"prometheus\" 작업 모두에 사용할 수 있습니다.\n참고: 이 셀은 처음 실행할 때 오류가 발생할 수 있습니다. 다시 실행하면 통과됩니다.",
"dataset = tfio.experimental.IODataset.from_prometheus(\n \"go_memstats_gc_sys_bytes\", 5, endpoint=\"http://localhost:9090\")\n\nprint(\"Time Series CoreDNS/Prometheus Comparision:\")\nfor (time, value) in dataset:\n # time is milli second, convert to data time:\n time = datetime.fromtimestamp(time // 1000)\n print(\"{}: {}/{}\".format(\n time,\n value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],\n value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))",
"생성된 Dataset는 이제 훈련 또는 추론 목적으로 tf.keras로 직접 전달할 수 있습니다.\n모델 훈련에 Dataset 사용하기\n메트릭 Dataset가 생성되면 모델 훈련 또는 추론을 위해 Dataset를 tf.keras로 바로 전달할 수 있습니다.\n데모 목적으로 이 튜토리얼에서는 1개의 특성과 2개의 스텝을 입력으로 포함하는 매우 간단한 LSTM 모델을 사용합니다.",
"n_steps, n_features = 2, 1\nsimple_lstm_model = tf.keras.models.Sequential([\n tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),\n tf.keras.layers.Dense(1)\n])\n\nsimple_lstm_model.compile(optimizer='adam', loss='mae')\n",
"사용할 데이터세트는 10개의 샘플이 있는 CoreDNS의 'go_memstats_sys_bytes' 값입니다. 그러나 window=n_steps 및 shift=1의 슬라이딩 윈도우가 형성되기 때문에 추가 샘플이 필요합니다(연속된 두 요소에 대해 첫 번째 요소는 x로, 두 번째 요소는 훈련을 위해 y로 입력됨). 합계는 10 + n_steps - 1 + 1 = 12초입니다.\n데이터 값의 스케일도 [0, 1]로 조정됩니다.",
"n_samples = 10\n\ndataset = tfio.experimental.IODataset.from_prometheus(\n \"go_memstats_sys_bytes\", n_samples + n_steps - 1 + 1, endpoint=\"http://localhost:9090\")\n\n# take go_memstats_gc_sys_bytes from coredns job \ndataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])\n\n# find the max value and scale the value to [0, 1]\nv_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)\ndataset = dataset.map(lambda v: (v / v_max))\n\n# expand the dimension by 1 to fit n_features=1\ndataset = dataset.map(lambda v: tf.expand_dims(v, -1))\n\n# take a sliding window\ndataset = dataset.window(n_steps, shift=1, drop_remainder=True)\ndataset = dataset.flat_map(lambda d: d.batch(n_steps))\n\n\n# the first value is x and the next value is y, only take 10 samples\nx = dataset.take(n_samples)\ny = dataset.skip(1).take(n_samples)\n\ndataset = tf.data.Dataset.zip((x, y))\n\n# pass the final dataset to model.fit for training\nsimple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10)",
"이 튜토리얼에서 설정한 CoreDNS 서버에는 어떤 워크로드도 없기 때문에 위의 훈련된 모델은 실제로 그다지 유용하지 않습니다. 그러나 이 모델은 실제 운영 서버에서 메트릭을 로드하는 데 사용할 수 있는 파이프라인입니다. 따라서 모델을 개선하여 DevOps 자동화의 실제 문제를 해결할 수 있습니다."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CheungChanDevCoder/blog | python/ipynb/人人推荐系统调研综述-Copy1.ipynb | mit | [
"人人汽车推荐系统调研综述\n这里是一个推荐引擎,使用经典数据集movielens,可以将movies数据替换为人人的车型数据,rating数据替换为从日志系统中收集的所有用户对车的点击次数,浏览时间(权重)。这样可以实现C端对车型的推荐\n架构:\n①日志系统:搜集用户行为提供离线数据\n②推荐引擎:A:从数据库或者缓存中拿到用户特征向量(浏览记录 收藏记录 购买记录 停留时间)。B:将用户特征向量通过特征-物品矩阵通过各种推荐算法转换为初始推荐物品列表。C:对初始推荐列表过滤、排名(热门程度、新鲜程度、购买过)。\n③UI展示系统 提供标题、缩略图、推荐理由(有了推荐理由用户才愿意点击)评分 根据用户点击情况或者评分来增删推荐引擎或者调整推荐引擎所占的权重。\n下面是一个初步的推荐算法引擎,只使用了科学计算包numpy和基于numpy的数据处理的包pandas。基于协同过滤算法,后面会考虑使用tensorflow surprise python-recsys等框架提供的SVD、余弦相似度算法和深度学习神经网络开发更多推荐引擎。",
"import numpy as np\nimport pandas as pd\nimport os\n# 使用pandas加载csv数据\n\nmovies = pd.read_csv(os.path.expanduser(\"~/ml-latest-small/movies.csv\"))\nratings = pd.read_csv(os.path.expanduser(\"~/ml-latest-small/ratings.csv\"))\n# 去掉无用的维度\nratings.drop(['timestamp'],axis=1,inplace=True)\nmovies.head()\n\nratings.head()\n\n# 将movieid替换为moviename\ndef replace_name(x):\n return movies[movies[\"movieId\"]==x].title.values[0]\n\nratings.movieId = ratings.movieId.map(replace_name)\n\nratings.head()\n\n# 建立一个透视表\nM = ratings.pivot_table(index=['userId'],columns=['movieId'],values='rating')\n\n# 当前维度\nM.shape\n\n# M是一个非常稀疏的透视表\nM",
"在_产品-产品协同过滤_中的产品之间的相似性值是通过观察所有对两个产品之间的打分的用户来度量的。\n<img src=\"img/item-item.png\"/>\n对于_用户-产品协同过滤_,用户之间的相似性值是通过观察所有同时被两个用户打分的产品来度量的。\n<img src=\"img/user-item.png\"/>\n核心算法方面使用皮尔逊的R来计算距离\n两个变量之间的皮尔逊相关系数定义为两个变量之间的协方差和标准差的商\n\n上式定义了总体相关系数,常用希腊小写字母 ρ (rho) 作为代表符号。估算样本的协方差和标准差,可得到样本相关系数(样本皮尔逊系数),常用英文小写字母 r 代表:",
"# 算法实现\ndef pearson(s1, s2):\n s1_c = s1 - s1.mean()\n s2_c = s2 - s2.mean()\n# print(f\"s1_c={s1_c}\")\n# print(f\"s2_c={s2_c}\")\n denominator = np.sqrt(np.sum(s1_c ** 2) * np.sum(s2_c ** 2))\n if denominator == 0:\n return 0\n return np.sum(s1_c * s2_c) / denominator",
"算法引擎2可以考虑比较文本相似度的余弦相似性算法,这也是推荐系统常用的算法。其中,打分被看成n维空间中的向量,而相似性是基于这些向量之间的角度进行计算的。可以使用sklearn的pairwise_distances函数来计算余弦相似性。注意,输出范围从0到1,因为打分都是正的。",
"# 永不妥协 碟中谍2\npearson(M['Erin Brockovich (2000)'],M['Mission: Impossible II (2000)'])\n# 永不妥协 指环王\n# pearson(M['Erin Brockovich (2000)'],M['Fingers (1978)'])\n# 永不妥协 哈利波特与密室\n# pearson(M['Erin Brockovich (2000)'],M['Harry Potter and the Chamber of Secrets (2002)'])\n# 哈利波特与密室 哈利波特与阿兹卡班的囚徒\n# pearson(M['Harry Potter and the Chamber of Secrets (2002)'],M['Harry Potter and the Prisoner of Azkaban (2004)'])\n\n\n\ndef get_recs(movie_name, M, num):\n reviews = []\n for title in M.columns:\n if title == movie_name:\n continue\n cor = pearson(M[movie_name], M[title])\n if np.isnan(cor):\n continue\n else:\n reviews.append((title, cor))\n reviews.sort(key=lambda tup: tup[1], reverse=True)\n return reviews[:num]\n\n# %%time\nrecs = get_recs('Clerks (1994)', M, 10)\nrecs[:10]\n\n# %%time\nanti_recs = get_recs('Clerks (1994)', M, 8551)\nanti_recs[-10:]",
"初步设想由大数据同学准备好数据之后,搭建服务进行机器学习,针对每一个车型跑一编结果,缓存起来用户浏览时提供。\n但是根据阿里技术专家郑重(卢梭)所说:推荐系统的搭建是个复杂工程,涉及到实时计算、离线计算,以及各种数据采集、流转等,对自建推荐系统来说,1人年是跑不掉的。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mikelseverson/Udacity-Deep_Learning-Nanodegree | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | [
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n vocab_set = set(text)\n \n #enumerate the set and put in dictionary\n vocab_to_int = {word: ii for ii, word in enumerate(vocab_set, 1)}\n \n #flip the dictionary\n int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}\n \n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n token_dict = {\n '.' : \"||Period||\",\n ',' : \"||Comma||\",\n '\"' : \"||Quotation_Mark||\",\n ';' : \"||Semicolon||\",\n '!' : \"||Exclamation_Mark||\",\n '?' : \"||Question_Mark||\",\n '(' : \"||Left_Parentheses||\",\n ')' : \"||Right_Parentheses||\",\n '--' : \"||Dash||\",\n '\\n' : \"||Return||\" \n }\n \n \n \n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n input = tf.placeholder(tf.int32, shape=(None, None), name='input')\n targets = tf.placeholder(tf.int32, shape=(None, None), name='targets')\n learningRate = tf.placeholder(tf.float32, shape=None, name='learning_rate')\n\n # TODO: Implement Function\n return input, targets, learningRate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n \n lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell])\n \n initialized = rnn_cell.zero_state(batch_size, tf.float32)\n initialized = tf.identity(initialized, name=\"initial_state\")\n\n \n return rnn_cell, initialized\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n \n output, finalState = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n finalState = tf.identity(finalState, \"final_state\")\n \n return output, finalState\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n \n embeded = get_embed(input_data, vocab_size, rnn_size)\n\n rnn, state = build_rnn(cell, embeded)\n \n logits = tf.contrib.layers.fully_connected(rnn, vocab_size)\n\n return logits, state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n \n n_elements = len(int_text)\n n_batches = (n_elements - 1)//(batch_size*seq_length)\n all_batches = np.zeros(shape=(n_batches, 2, batch_size, seq_length), dtype=np.int32)\n\n # fill Numpy array\n for i in range(n_batches):\n for j in range(batch_size):\n input_start = i * seq_length + j * batch_size * seq_length\n target_start = input_start + 1\n target_stop = target_start + seq_length\n if target_stop < len(int_text):\n for k in range(seq_length):\n all_batches[i][0][j][k] = int_text[input_start + k]\n all_batches[i][1][j][k] = int_text[target_start + k]\n \n return all_batches\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = 40\n# Batch Size\nbatch_size = 200\n# RNN Size\nrnn_size = 128\nembed_dim = None\n# Embedding Dimension Size\n# Sequence Length\nseq_length = 56\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 100\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n \n inputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n \n return inputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n \n return int_to_vocab[np.random.choice(len(int_to_vocab), p=probabilities)]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
stevetjoa/stanford-mir | chroma.ipynb | mit | [
"%matplotlib inline\nimport numpy, scipy, matplotlib.pyplot as plt, IPython.display as ipd\nimport librosa, librosa.display\nimport stanford_mir; stanford_mir.init()",
"← Back to Index\nConstant-Q Transform and Chroma\nConstant-Q Transform\nUnlike the Fourier transform, but similar to the mel scale, the constant-Q transform (Wikipedia) uses a logarithmically spaced frequency axis. For more information, read the original paper:\n\nJudith C. Brown, \"Calculation of a constant Q spectral transform,\" J. Acoust. Soc. Am., 89(1):425–434, 1991.\n\nLet's load a file:",
"x, sr = librosa.load('audio/simple_piano.wav')\nipd.Audio(x, rate=sr)",
"To compute a constant-Q spectrogram, will use librosa.cqt:",
"fmin = librosa.midi_to_hz(36)\nhop_length = 512\nC = librosa.cqt(x, sr=sr, fmin=fmin, n_bins=72, hop_length=hop_length)",
"Display:",
"logC = librosa.amplitude_to_db(numpy.abs(C))\nplt.figure(figsize=(15, 5))\nlibrosa.display.specshow(logC, sr=sr, x_axis='time', y_axis='cqt_note', fmin=fmin, cmap='coolwarm')",
"Note how each frequency bin corresponds to one MIDI pitch number.\nChroma\nA chroma vector (Wikipedia) (FMP, p. 123) is a typically a 12-element feature vector indicating how much energy of each pitch class, {C, C#, D, D#, E, ..., B}, is present in the signal.\nlibrosa.feature.chroma_stft",
"chromagram = librosa.feature.chroma_stft(x, sr=sr, hop_length=hop_length)\nplt.figure(figsize=(15, 5))\nlibrosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=hop_length, cmap='coolwarm')",
"librosa.feature.chroma_cqt",
"chromagram = librosa.feature.chroma_cqt(x, sr=sr, hop_length=hop_length)\nplt.figure(figsize=(15, 5))\nlibrosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=hop_length, cmap='coolwarm')",
"Chroma energy normalized statistics (CENS) (FMP, p. 375). The main idea of CENS features is that taking statistics over large windows smooths local deviations in tempo, articulation, and musical ornaments such as trills and arpeggiated chords. CENS are best used for tasks such as audio matching and similarity.\nlibrosa.feature.chroma_cens",
"chromagram = librosa.feature.chroma_cens(x, sr=sr, hop_length=hop_length)\nplt.figure(figsize=(15, 5))\nlibrosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=hop_length, cmap='coolwarm')",
"← Back to Index"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ML4DS/ML4all | C3.Classification_LogReg/.ipynb_checkpoints/RegresionLogistica-checkpoint.ipynb | mit | [
"Logistic Regression\nNotebook version: 1.0 (Oct 12, 2016)\n\nAuthor: Jesús Cid Sueiro ([email protected])\n Jerónimo Arenas García ([email protected])\n\nChanges: v.1.0 - First version\n v.1.1 - Typo correction. Prepared for slide presentation",
"# To visualize plots in the notebook\n%matplotlib inline\n\n# Imported libraries\nimport csv\nimport random\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pylab\n\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn import linear_model\n",
"Logistic Regression\n1. Introduction\n1.1. Binary classification and decision theory. The MAP criterion\nGoal of a classification problem is to assign a class or category to every instance or observation of a data collection. Here, we will assume that every instance ${\\bf x}$ is an $N$-dimensional vector in $\\mathbb{R}^N$, and that the class $y$ of sample ${\\bf x}$ is an element of a binary set ${\\mathcal Y} = {0, 1}$. The goal of a classifier is to predict the true value of $y$ after observing ${\\bf x}$.\nWe will denote as $\\hat{y}$ the classifier output or decision. If $y=\\hat{y}$, the decision is an hit, otherwise $y\\neq \\hat{y}$ and the decision is an error.\nDecision theory provides a solution to the classification problem in situations where the relation between instance ${\\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\\bf x}, y)$ is an outcome of a random vector $({\\bf X}, Y)$ with joint distribution $p_{{\\bf X},Y}({\\bf x}, y)$. A natural criteria for classification is to select predictor $\\hat{Y}=f({\\bf x})$ in such a way that the probability or error, $P{\\hat{Y} \\neq Y}$ is minimum. Noting that\n$$\nP{\\hat{Y} \\neq Y} = \\int P{\\hat{Y} \\neq Y | {\\bf x}} p_{\\bf X}({\\bf x}) d{\\bf x}\n$$\nthe optimal decision is got if, for every sample ${\\bf x}$, we make decision minimizing the conditional error probability:\n\\begin{align}\n\\hat{y}^* &= \\arg\\min_{\\hat{y}} P{\\hat{y} \\neq Y |{\\bf x}} \\\n &= \\arg\\max_{\\hat{y}} P{\\hat{y} = Y |{\\bf x}} \\\n\\end{align}\nThus, the optimal decision rule can be expressed as\n$$\nP_{Y|{\\bf X}}(1|{\\bf x}) \\quad\\mathop{\\gtrless}^{\\hat{y}=1}{\\hat{y}=0}\\quad P{Y|{\\bf X}}(0|{\\bf x}) \n$$\nor, equivalently\n$$\nP_{Y|{\\bf X}}(1|{\\bf x}) \\quad\\mathop{\\gtrless}^{\\hat{y}=1}_{\\hat{y}=0}\\quad \\frac{1}{2} \n$$\nThe classifier implementing this decision rule is usually named MAP (Maximum A Posteriori). \n1.2. Parametric classification.\nClassical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\\mathcal S = {({\\bf x}^{(k)}, y^{(k)}), \\,k=1,\\ldots,K}$ of instances and their respective class labels.\nA more realistic formulation of the classification problem is the following: given a dataset $\\mathcal S = {({\\bf x}^{(k)}, y^{(k)}) \\in {\\mathbb{R}}^N \\times {\\mathcal Y}, \\, k=1,\\ldots,K}$ of independent and identically distributed (i.i.d.) samples from an unknown distribution $p_{{\\bf X},Y}({\\bf x}, y)$, predict the class $y$ of a new sample ${\\bf x}$ with the minimum probability of error.\nSince the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.\nParametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula:\n$$\nP_{Y|X}(1|{\\bf x},{\\bf w}) = f_{\\bf w}({\\bf x})\n$$\nwhere ${\\bf w}$ is a vector of parameters. 
Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated with a different decision maker.\nIn practice, the dataset ${\mathcal S}$ is used to select a particular parameter vector $\hat{\bf w}$ according to a certain criterion. Accordingly, the decision rule becomes\n$$\nf_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2} \n$$\nIn this lesson, we explore one of the most popular model-based parametric classification methods: logistic regression.\n<img src=\"figs/parametric_decision.png\", width=300>\n2. Logistic regression.\n2.1. The logistic function\nThe logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation $X\in \mathbb{R}^N$ satisfies the expression\n$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$\n$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$\nwhere ${\bf w}$ is a parameter vector and $g(·)$ is the logistic function, which is defined by\n$$g(t) = \frac{1}{1+\exp(-t)}$$\nIt is straightforward to see that the logistic function has the following properties:\n\nP1: Probabilistic output: $\quad 0 \le g(t) \le 1$\nP2: Symmetry: $\quad g(-t) = 1-g(t)$\nP3: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$\n\nIn the following, we define the logistic function in Python and use it to plot a graphical representation.\nExercise 1: Verify properties P2 and P3.\nExercise 2: Implement a function to compute the logistic function, and use it to plot this function in the interval $[-6,6]$.",
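"A possible verification of Exercise 1 (just one way to check the properties, using the definition of $g$ above): for P2, $g(-t) = \frac{1}{1+\exp(t)} = \frac{\exp(-t)}{1+\exp(-t)} = 1 - \frac{1}{1+\exp(-t)} = 1-g(t)$. For P3, differentiating $g(t) = (1+\exp(-t))^{-1}$ gives $g'(t) = \frac{\exp(-t)}{(1+\exp(-t))^2} = g(t)\,[1-g(t)]$, which is non-negative because, by P1, both factors lie in $[0,1]$.",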
"# Define the logistic function\ndef logistic(x): \n p = 1.0 / (1 + np.exp(-x))\n return p\n\n# Plot the logistic function\nt = np.arange(-6, 6, 0.1)\nz = logistic(t)\n\nplt.plot(t, z)\nplt.xlabel('$t$', fontsize=14)\nplt.ylabel('$\\phi(t)$', fontsize=14)\nplt.title('The logistic function')\nplt.grid()",
"2.2. Classifiers based on the logistic model.\nThe MAP classifier under a logistic model will have the form\n$$P_{Y|{\\bf X}}(1|{\\bf x}, {\\bf w}) = g({\\bf w}^\\intercal{\\bf x}) \\quad\\mathop{\\gtrless}^{\\hat{y}=1}_{\\hat{y}=0} \\quad \\frac{1}{2} $$\nTherefore\n$$\n2 \\quad\\mathop{\\gtrless}^{\\hat{y}=1}_{\\hat{y}=0} \\quad\n1 + \\exp(-{\\bf w}^\\intercal{\\bf x}) $$\nwhich is equivalent to\n$${\\bf w}^\\intercal{\\bf x} \n\\quad\\mathop{\\gtrless}^{\\hat{y}=1}_{\\hat{y}=0}\\quad \n0 $$\nTherefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\\bf x} = {\\bf 0}$.",
"# Weight vector:\nw = [1, 4, 8] # Try different weights\n\n# Create a rectangular grid.\nx_min = -1\nx_max = 1\ndx = x_max - x_min\nh = float(dx) / 200\nxgrid = np.arange(x_min, x_max, h)\nxx0, xx1 = np.meshgrid(xgrid, xgrid)\n\n# Compute the logistic map for the given weights\nZ = logistic(w[0] + w[1]*xx0 + w[2]*xx1)\n\n# Plot the logistic map\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nax.set_zlabel('P(1|x,w)')",
"3.3. Nonlinear classifiers.\nThe logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is\n$$P_{Y|{\\bf X}}(1|{\\bf x}, {\\bf w}) = g[{\\bf w}^\\intercal{\\bf z}({\\bf x})] $$\nwhere ${\\bf z}({\\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation\n$$\n{\\bf w}^\\intercal{\\bf z} = 0\n$$\n Exercise 2: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by\n$$\nP_{Y|{\\bf X}}(1|{\\bf x}, {\\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2) \n$$",
"# SOLUTION TO THE EXERCISE\n# Weight vector:\nw = [1, 10, 10, -20, 5, 1] # Try different weights\n\n# Create a regtangular grid.\nx_min = -1\nx_max = 1\ndx = x_max - x_min\nh = float(dx) / 200\nxgrid = np.arange(x_min, x_max, h)\nxx0, xx1 = np.meshgrid(xgrid, xgrid)\n\n# Compute the logistic map for the given weights\nZ = logistic(w[0] + w[1]*xx0 + w[2]*xx1 + w[3]*np.multiply(xx0,xx0) + \n w[4]*np.multiply(xx0,xx1) + w[3]*np.multiply(xx1,xx1))\n\n# Plot the logistic map\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nax.set_zlabel('P(1|x,w)')",
"3. Inference\nRemember that the idea of parametric classification is to use the training data set $\\mathcal S = {({\\bf x}^{(k)}, y^{(k)}) \\in {\\mathbb{R}}^N \\times {0,1}, k=1,\\ldots,K}$ to set the parameter vector ${\\bf w}$ according to certain criterion. Then, the estimate $\\hat{\\bf w}$ can be used to compute the label prediction for any new observation as \n$$\\hat{y} = \\arg\\max_y P_{Y|{\\bf X}}(y|{\\bf x},\\hat{\\bf w}).$$\n<img src=\"figs/parametric_decision.png\", width=300>\nIn the following, we will make the following assumptions:\n\n\nA1. The samples in ${\\mathcal S}$ are i.i.d.\n\n\nA2. Target $Y^{(k)}$ only depends on ${\\bf x}^{(k)}$, but not on ${\\bf x}^{(l)}$ for any $l\\neq k$.\n\n\nA3. (Logistic Regression): We assume a logistic model for the a posteriori probability of ${Y=1}$ given ${\\bf X}$, i.e.,\n\n\n$$P_{Y|{\\bf X}}(1|{\\bf x}, {\\bf w}) = g[{\\bf w}^\\intercal{\\bf z}({\\bf x})].$$\nWe need still to choose a criterion to optimize with the selection of the parameter vector. In the notebook, we will discuss two different approaches to the estimation of ${\\bf w}$:\n\nMaximum Likelihood (ML): $\\hat{\\bf w}{\\text{ML}} = \\arg\\max{\\bf w} P_{{\\mathcal S}|{\\bf W}}({\\mathcal S}|{\\bf w})$\nMaximum A Posteriori (MAP): $\\hat{\\bf w}{\\text{MAP}} = \\arg\\max{\\bf w} p_{{\\bf W}|{\\mathcal S}}({\\bf w}|{\\mathcal S})$\n\nFor the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that\n$$P_{Y|{\\bf X}}(0|{\\bf x}, {\\bf w}) = 1-g[{\\bf w}^\\intercal{\\bf z}({\\bf x})]\n= g[-{\\bf w}^\\intercal{\\bf z}({\\bf x})]$$\nwe can write\n$$P_{Y|{\\bf X}}(y|{\\bf x}, {\\bf w}) = g[\\overline{y}{\\bf w}^\\intercal{\\bf z}({\\bf x})]$$\nwhere $\\overline{y} = 2y-1$ is a symmetrized label ($\\overline{y}\\in{-1, 1}$). \n3.1. ML estimation.\nThe ML estimate is defined as\n$$\\hat{\\bf w}{\\text{ML}} = \\arg\\max{\\bf w} P_{{\\mathcal S}|{\\bf W}}({\\mathcal S}|{\\bf w})\n = \\arg\\min_{\\bf w} L({\\bf w})\n$$\nwhere $L({\\bf w})$ is the negative log-likelihood function, given by\n$$\nL({\\bf w}) = - \\log P_{{\\mathcal S}|{\\bf W}}({\\mathcal S}|{\\bf w})\n = - \\log\\left[P\\left(y^{(1)},\\ldots,y^{(K)}|\n {\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)},{\\bf w}\\right)\\right]\n$$\nUsing assumption A1,\n$$\nL({\\bf w}) = - \\log\\left[\\prod_{k=1}^K P\\left(y^{(k)}|{\\bf x}^{(1)},\\ldots,{\\bf x}^{(K)},{\\bf w}\\right)\\right].\n$$\nUsing A2,\n\\begin{align}\nL({\\bf w}) \n &= - \\log\\left[\\prod_{k=1}^K P_{Y|{\\bf X}}\\left(y^{(k)}|{\\bf x}^{(k)},{\\bf w}\\right)\\right] \\\n &= - \\sum_{k=1}^K\\log\\left[P_{Y|{\\bf X}}\\left(y^{(k)}|{\\bf x}^{(k)},{\\bf w}\\right)\\right]\n\\end{align}\nUsing A3 (the logistic model)\n\\begin{align}\nL({\\bf w}) \n &= - \\sum_{k=1}^K\\log\\left[g\\left(\\overline{y}^{(k)}{\\bf w}^\\intercal {\\bf z}^{(k)}\\right)\\right] \\\n &= \\sum_{k=1}^K\\log\\left[1+\\exp\\left(-\\overline{y}^{(k)}{\\bf w}^\\intercal {\\bf z}^{(k)}\\right)\\right]\n\\end{align}\nwhere ${\\bf z}^{(k)}={\\bf z}({\\bf x}^{(k)})$.\nIt can be shown that $L({\\bf w})$ is a convex and differentiable function of ${\\bf w}$. 
Therefore, its minimum is a point with zero gradient.\n\\begin{align}\n\\nabla_{\\bf w} L(\\hat{\\bf w}{\\text{ML}}) \n &= - \\sum{k=1}^K \n \\frac{\\exp\\left(-\\overline{y}^{(k)}\\hat{\\bf w}{\\text{ML}}^\\intercal {\\bf z}^{(k)}\\right) \\overline{y}^{(k)} {\\bf z}^{(k)}}\n {1+\\exp\\left(-\\overline{y}^{(k)}\\hat{\\bf w}{\\text{ML}}^\\intercal {\\bf z}^{(k)}\n \\right)} = \\\n &= - \\sum_{k=1}^K \\left[y^{(k)}-g(\\hat{\\bf w}_{\\text{ML}}^T {\\bf z}^{(k)})\\right] {\\bf z}^{(k)} = 0\n\\end{align}\nUnfortunately, $\\hat{\\bf w}_{\\text{ML}}$ cannot be taken out from the above equation, and some iterative optimization algorithm must be used to search for the minimum.\n3.2. Gradient descent.\nA simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>. \n\\begin{align}\n{\\bf w}{n+1} = {\\bf w}_n - \\rho_n \\nabla{\\bf w} L({\\bf w}_n)\n\\end{align}\nwhere $\\rho_n >0$ is the learning step.\nApplying the gradient descent rule to logistic regression, we get the following algorithm:\n\\begin{align}\n{\\bf w}{n+1} &= {\\bf w}_n \n + \\rho_n \\sum{k=1}^K \\left[y^{(k)}-g({\\bf w}_n^\\intercal {\\bf z}^{(k)})\\right] {\\bf z}^{(k)}\n\\end{align}\nDefining vectors\n\\begin{align}\n{\\bf y} &= [y^{(1)},\\ldots,y^{(K)}]^\\intercal \\\n\\hat{\\bf p}_n &= [g({\\bf w}_n^\\intercal {\\bf z}^{(1)}), \\ldots, g({\\bf w}_n^\\intercal {\\bf z}^{(K)})]^\\intercal\n\\end{align}\nand matrix\n\\begin{align}\n{\\bf Z} = \\left[{\\bf z}^{(1)},\\ldots,{\\bf z}^{(K)}\\right]^\\intercal\n\\end{align}\nwe can write\n\\begin{align}\n{\\bf w}_{n+1} &= {\\bf w}_n \n + \\rho_n {\\bf Z} \\left({\\bf y}-\\hat{\\bf p}_n\\right)\n\\end{align}\nIn the following, we will explore the behavior of the gradient descend method using the Iris Dataset.\n3.2.1 Example: Iris Dataset.\nAs an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (setosa, versicolor or virginica). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters. \nWe will try to fit the logistic regression model to discriminate between two classes using only two attributes.\nFirst, we load the dataset and split them in training and test subsets.",
"# Adapted from a notebook by Jason Brownlee\ndef loadDataset(filename, split):\n xTrain = []\n cTrain = []\n xTest = []\n cTest = []\n\n with open(filename, 'rb') as csvfile:\n lines = csv.reader(csvfile)\n dataset = list(lines)\n for i in range(len(dataset)-1):\n for y in range(4):\n dataset[i][y] = float(dataset[i][y])\n item = dataset[i]\n if random.random() < split:\n xTrain.append(item[0:4])\n cTrain.append(item[4])\n else:\n xTest.append(item[0:4])\n cTest.append(item[4])\n return xTrain, cTrain, xTest, cTest\n\nwith open('iris.data', 'rb') as csvfile:\n lines = csv.reader(csvfile)\n\nxTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)\nnTrain_all = len(xTrain_all)\nnTest_all = len(xTest_all)\nprint 'Train: ' + str(nTrain_all)\nprint 'Test: ' + str(nTest_all)",
"Now, we select two classes and two attributes.",
"# Select attributes\ni = 0 # Try 0,1,2,3\nj = 1 # Try 0,1,2,3 with j!=i\n\n# Select two classes\nc0 = 'Iris-versicolor' \nc1 = 'Iris-virginica'\n\n# Select two coordinates\nind = [i, j]\n\n# Take training test\nX_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) \n if cTrain_all[n]==c0 or cTrain_all[n]==c1])\nC_tr = [cTrain_all[n] for n in range(nTrain_all) \n if cTrain_all[n]==c0 or cTrain_all[n]==c1]\nY_tr = np.array([int(c==c1) for c in C_tr])\nn_tr = len(X_tr)\n\n# Take test set\nX_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all) \n if cTest_all[n]==c0 or cTest_all[n]==c1])\nC_tst = [cTest_all[n] for n in range(nTest_all) \n if cTest_all[n]==c0 or cTest_all[n]==c1]\nY_tst = np.array([int(c==c1) for c in C_tst])\nn_tst = len(X_tst)",
"3.2.2. Data normalization\nNormalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized.\nWe will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.",
"def normalize(X, mx=None, sx=None):\n \n # Compute means and standard deviations\n if mx is None:\n mx = np.mean(X, axis=0)\n if sx is None:\n sx = np.std(X, axis=0)\n\n # Normalize\n X0 = (X-mx)/sx\n\n return X0, mx, sx",
"Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set.",
"# Normalize data\nXn_tr, mx, sx = normalize(X_tr)\nXn_tst, mx, sx = normalize(X_tst, mx, sx)",
"The following figure generates a plot of the normalized training data.",
"# Separate components of x into different arrays (just for the plots)\nx0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]\nx1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]\nx0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]\nx1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]\n\n# Scatterplot.\nlabels = {'Iris-setosa': 'Setosa', \n 'Iris-versicolor': 'Versicolor',\n 'Iris-virginica': 'Virginica'}\nplt.plot(x0c0, x1c0,'r.', label=labels[c0])\nplt.plot(x0c1, x1c1,'g+', label=labels[c1])\nplt.xlabel('$x_' + str(ind[0]) + '$')\nplt.ylabel('$x_' + str(ind[1]) + '$')\nplt.legend(loc='best')\nplt.axis('equal')",
"In order to apply the gradient descent rule, we need to define two methods: \n - A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.\n - A predict method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions.",
"def logregFit(Z_tr, Y_tr, rho, n_it):\n\n # Data dimension\n n_dim = Z_tr.shape[1]\n\n # Initialize variables\n nll_tr = np.zeros(n_it)\n nll_tr2 = np.zeros(n_it)\n pe_tr = np.zeros(n_it)\n w = np.random.randn(n_dim,1)\n\n # Running the gradient descent algorithm\n for n in range(n_it):\n \n # Compute posterior probabilities for weight w\n p1_tr = logistic(np.dot(Z_tr, w))\n\n # Compute negative log-likelihood\n # (note that this is not required for the weight update, only for nll tracking)\n Y_tr2 = 2*Y_tr - 1\n nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w)))) \n\n # Update weights\n w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)\n \n return w, nll_tr\n\ndef logregPredict(Z, w):\n\n # Compute posterior probability of class 1 for weights w.\n p = logistic(np.dot(Z, w))\n \n # Class\n D = [int(round(pn)) for pn in p]\n \n return p, D",
"We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\\bf z}({\\bf x}) = (1, {\\bf x}^\\intercal)^\\intercal$.",
"# Parameters of the algorithms\nrho = float(1)/50 # Learning step\nn_it = 200 # Number of iterations\n\n# Compute Z's\nZ_tr = np.c_[np.ones(n_tr), Xn_tr] \nZ_tst = np.c_[np.ones(n_tst), Xn_tst]\nn_dim = Z_tr.shape[1]\n\n# Convert target arrays to column vectors\nY_tr2 = Y_tr[np.newaxis].T\nY_tst2 = Y_tst[np.newaxis].T\n\n# Running the gradient descent algorithm\nw, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)\n\n# Classify training and test data\np_tr, D_tr = logregPredict(Z_tr, w)\np_tst, D_tst = logregPredict(Z_tst, w)\n\n# Compute error rates\nE_tr = D_tr!=Y_tr\nE_tst = D_tst!=Y_tst\n\n# Error rates\npe_tr = float(sum(E_tr)) / n_tr\npe_tst = float(sum(E_tst)) / n_tst\n\n# NLL plot.\nplt.plot(range(n_it), nll_tr,'b.:', label='Train')\nplt.xlabel('Iteration')\nplt.ylabel('Negative Log-Likelihood')\nplt.legend()\n\nprint \"The optimal weights are:\"\nprint w\nprint \"The final error rates are:\"\nprint \"- Training: \" + str(pe_tr)\nprint \"- Test: \" + str(pe_tst)\nprint \"The NLL after training is \" + str(nll_tr[len(nll_tr)-1])",
"3.2.3. Free parameters\nUnder certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\\bf w}$ depend on several factors:\n\nNumber of iterations\nInitialization\nLearning step\n\nExercise: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.\nNote that you can do this exercise with a loop over the 100 executions, including the code in the previous code slide inside the loop, with some proper modifications. To plot a histogram of the values in array p with nbins, you can use plt.hist(p, n)\n3.2.3.1. Learning step\nThe learning step, $\\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence.\nExercise 3: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\\rho$. It is easy to check that, for large enough $\\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\\rho$ stating a boundary between convergence and divergence?\nExercise 4: In this exercise we explore the influence of the learning step more sistematically. Use the code in the previouse exercises to compute, for every value of $\\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\\rho$. \nNote that you should explore the values of $\\rho$ in a logarithmic scale. For instance, you can take $\\rho = 1, 1/10, 1/100, 1/1000, \\ldots$\nIn practice, the selection of $\\rho$ may be a matter of trial an error. Also there is some theoretical evidence that the learning step should decrease along time up to cero, and the sequence $\\rho_n$ should satisfy two conditions:\n- C1: $\\sum_{n=0}^{\\infty} \\rho_n^2 < \\infty$ (decrease slowly)\n- C2: $\\sum_{n=0}^{\\infty} \\rho_n = \\infty$ (but not too slowly)\nFor instance, we can take $\\rho_n= 1/n$. Another common choice is $\\rho_n = \\alpha/(1+\\beta n)$ where $\\alpha$ and $\\beta$ are also free parameters that can be selected by trial and error with some heuristic method.\n3.2.4. Visualizing the posterior map.\nWe can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.",
"# Create a regtangular grid.\nx_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max() \ny_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()\ndx = x_max - x_min\ndy = y_max - y_min\nh = dy /400\nxx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),\n np.arange(y_min - 0.1 * dx, y_max + 0.1 * dy, h))\nX_grid = np.array([xx.ravel(), yy.ravel()]).T\n\n# Compute Z's\nZ_grid = np.c_[np.ones(X_grid.shape[0]), X_grid] \n\n# Compute the classifier output for all samples in the grid.\npp, dd = logregPredict(Z_grid, w)\n\n# Put the result into a color plot\nplt.plot(x0c0, x1c0,'r.', label=labels[c0])\nplt.plot(x0c1, x1c1,'g+', label=labels[c1])\nplt.xlabel('$x_' + str(ind[0]) + '$')\nplt.ylabel('$x_' + str(ind[1]) + '$')\nplt.legend(loc='best')\nplt.axis('equal')\npp = pp.reshape(xx.shape)\nplt.contourf(xx, yy, pp, cmap=plt.cm.copper)",
"3.2.5. Polynomial Logistic Regression\nThe error rates of the logistic regression model can be potentially reduced by using polynomial transformations.\nTo compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing.",
"# Parameters of the algorithms\nrho = float(1)/50 # Learning step\nn_it = 500 # Number of iterations\ng = 5 # Degree of polynomial\n\n# Compute Z_tr\npoly = PolynomialFeatures(degree=g)\nZ_tr = poly.fit_transform(Xn_tr)\n# Normalize columns (this is useful to make algorithms more stable).)\nZn, mz, sz = normalize(Z_tr[:,1:])\nZ_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)\n\n# Compute Z_tst\nZ_tst = poly.fit_transform(Xn_tst)\nZn, mz, sz = normalize(Z_tst[:,1:], mz, sz)\nZ_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)\n\n# Convert target arrays to column vectors\nY_tr2 = Y_tr[np.newaxis].T\nY_tst2 = Y_tst[np.newaxis].T\n\n# Running the gradient descent algorithm\nw, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)\n\n# Classify training and test data\np_tr, D_tr = logregPredict(Z_tr, w)\np_tst, D_tst = logregPredict(Z_tst, w)\n \n# Compute error rates\nE_tr = D_tr!=Y_tr\nE_tst = D_tst!=Y_tst\n\n# Error rates\npe_tr = float(sum(E_tr)) / n_tr\npe_tst = float(sum(E_tst)) / n_tst\n\n# NLL plot.\nplt.plot(range(n_it), nll_tr,'b.:', label='Train')\nplt.xlabel('Iteration')\nplt.ylabel('Negative Log-Likelihood')\nplt.legend()\n\nprint \"The optimal weights are:\"\nprint w\nprint \"The final error rates are:\"\nprint \"- Training: \" + str(pe_tr)\nprint \"- Test: \" + str(pe_tst)\nprint \"The NLL after training is \" + str(nll_tr[len(nll_tr)-1])\n",
"Visualizing the posterior map we can se that the polynomial transformation produces nonlinear decision boundaries.",
"# Compute Z_grid\nZ_grid = poly.fit_transform(X_grid)\nn_grid = Z_grid.shape[0]\nZn, mz, sz = normalize(Z_grid[:,1:], mz, sz)\nZ_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)\n\n# Compute the classifier output for all samples in the grid.\npp, dd = logregPredict(Z_grid, w)\npp = pp.reshape(xx.shape)\n\n# Paint output maps\npylab.rcParams['figure.figsize'] = 8, 4 # Set figure size\nfor i in [1, 2]:\n ax = plt.subplot(1,2,i)\n ax.plot(x0c0, x1c0,'r.', label=labels[c0])\n ax.plot(x0c1, x1c1,'g+', label=labels[c1])\n ax.set_xlabel('$x_' + str(ind[0]) + '$')\n ax.set_ylabel('$x_' + str(ind[1]) + '$')\n ax.axis('equal')\n if i==1:\n ax.contourf(xx, yy, pp, cmap=plt.cm.copper)\n else:\n ax.legend(loc='best')\n ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper)",
"4. Regularization and MAP estimation.\nAn alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\\bf W}({\\bf w})$, the MAP estimate is defined as\n$$\n\\hat{\\bf w}{\\text{MAP}} = \\arg\\max{\\bf w} p({\\bf w}|{\\mathcal S})\n$$\nThe posterior density $p({\\bf w}|{\\mathcal S})$ is related to the likelihood function and the prior density of the weights, $p_{\\bf W}({\\bf w})$ through the Bayes rule\n$$\np({\\bf w}|{\\mathcal S}) = \n \\frac{P\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)},{\\bf w}\\right)\n p_{\\bf W}({\\bf w})}\n {p\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)}\\right)}\n$$\n$$\np({\\bf w}|{\\mathcal S}) = \n \\frac{P\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)},{\\bf w}\\right)\n p_{\\bf W}({\\bf w})}\n {p\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)}\\right)}\n$$\nThe numerator of the above expression is the product of two terms:\n\nThe likelihood $P_{{\\mathcal S}|{\\bf W}}({\\mathcal S}|{\\bf w})$, which takes large values for parameter vectors $\\bf w$ that fit well the training data\nThe prior distribution of weights $p_{\\bf W}({\\bf w})$, which expresses our a priori preference for some solutions. Usually, we recur to prior distributions that take large values when $\\|{\\bf w}\\|$ is small (associated to soft classification borders).\n\nIn general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\\bf w}$.\nTherefore, the MAP criterion prefers solutions that simultaneously fit well the data and our a priori belief about which solutions should be preferred.\n$$\\hat{\\bf w}{\\text{MAP}} \n = \\arg\\max{\\bf w} P_{{\\mathcal S}|{\\bf W}}({\\mathcal S}|{\\bf w}) \\cdot p_{\\bf W}({\\bf w})$$\nWe can compute the MAP estimate as\n\\begin{align}\n\\hat{\\bf w}{\\text{MAP}} \n &= \\arg\\max{\\bf w} \n P\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)},{\\bf w}\\right) \n p_{\\bf W}({\\bf w}) \\\n &= \\arg\\max_{\\bf w} \\left{\n \\log\\left[P\\left(y^{(1)},\\ldots,y^{(K)}|{\\bf x}^{(1)},\\ldots, {\\bf x}^{(K)},{\\bf w}\\right) \\right]\n + \\log\\left[ p_{\\bf W}({\\bf w})\\right]\n \\right} \\\n &= \\arg\\min_{\\bf w} \\left{L({\\bf w}) - \\log\\left[ p_{\\bf W}({\\bf w})\\right]\n \\right}\n\\end{align}\nwhere $L(·)$ is the negative log-likelihood function.\nWe can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.\n4.1 MAP estimation with Gaussian prior\nIf we assume that ${\\bf W}$ is a zero-mean Gaussian random variable with variance matrix $v{\\bf I}$, \n$$\np_{\\bf W}({\\bf w}) = \\frac{1}{(2\\pi v)^{N/2}} \\exp\\left(-\\frac{1}{2v}\\|{\\bf w}\\|^2\\right)\n$$\nthe MAP estimate becomes\n\\begin{align}\n\\hat{\\bf w}{\\text{MAP}} \n &= \\arg\\min{\\bf w} \\left{L({\\bf w}) + \\frac{1}{C}\\|{\\bf w}\\|^2\n \\right}\n\\end{align}\nwhere $C = 2v$. 
Noting that\n$$\nabla_{\bf w}\left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\} \n= - {\bf Z} \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},\n$$\nwe obtain the following gradient descent rule for MAP estimation\n\\begin{align}\n{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n \n + \rho_n {\bf Z} \left({\bf y}-\hat{\bf p}_n\right)\n\\end{align}\n4.2 MAP estimation with Laplacian prior\nIf we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by\n$$\np_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)\n$$\n(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate is\n\\begin{align}\n\hat{\bf w}_{\text{MAP}} \n &= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|_1\n \right\}\n\\end{align}\nThe additional term introduced by the prior in the optimization algorithm is usually named the regularization term. It is usually very effective to avoid overfitting when the dimension of the weight vectors is high. Parameter $C$ is named the inverse regularization strength.\nExercise 5: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior.\n5. Other optimization algorithms\n5.1. Stochastic Gradient descent.\nStochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is\n\\begin{align}\n{\bf w}_{n+1} &= {\bf w}_n \n + \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)\n\\end{align}\nOnce all samples in the training set have been applied, the algorithm can continue by applying the training set several times.\nThe computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs more iterations to converge.\nExercise 6: Modify logregFit to implement an algorithm that applies the SGD rule. A sketch of one possible solution is given after the Newton's method example below.\n5.2. Newton's method\nAssume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$\n$$ \nC({\bf w}) \approx C({\bf w}_0) \n+ \nabla_{\bf w}^\intercal C({\bf w}_0)({\bf w}-{\bf w}_0)\n+ \frac{1}{2}({\bf w}-{\bf w}_0)^\intercal{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)\n$$\nwhere ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> Hessian matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as\n$$ \n{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w}^\intercal C({\bf w}_0)\n$$\nSince the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.\n<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order approximation using a Taylor series expansion around the current estimate.
As a result, the learning rule becomes\n$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_{n})^{-1} \nabla_{{\bf w}}C(\hat{\bf w}_{n})\n$$\nFor instance, for the MAP estimate with Gaussian prior, the Hessian matrix becomes\n$$\n{\bf H}({\bf w}) \n = \frac{2}{C}{\bf I} + \sum_{k=1}^K f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right){\bf z}^{(k)} ({\bf z}^{(k)})^\intercal\n$$\nDefining diagonal matrix\n$$\n{\mathbf S}({\bf w}) = \text{diag}\left(f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right)\right)\n$$\nthe Hessian matrix can be written in more compact form as\n$$\n{\bf H}({\bf w}) \n = \frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}({\bf w}) {\bf Z}\n$$\nTherefore, Newton's algorithm for logistic regression becomes\n\\begin{align}\n\hat{\bf w}_{n+1} = \hat{\bf w}_{n} + \n\rho_n \n\left(\frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}(\hat{\bf w}_{n})\n{\bf Z}\n\right)^{-1} \n{\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)\n\\end{align}\nSome variants of the Newton method are implemented in the <a href=\"http://scikit-learn.org/stable/\"> Scikit-learn </a> package.",
"def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):\n\n # Compute Z's\n r = 2.0/C\n n_dim = Z_tr.shape[1]\n\n # Initialize variables\n nll_tr = np.zeros(n_it)\n pe_tr = np.zeros(n_it)\n w = np.random.randn(n_dim,1)\n\n # Running the gradient descent algorithm\n for n in range(n_it):\n p_tr = logistic(np.dot(Z_tr, w))\n \n sk = np.multiply(p_tr, 1-p_tr)\n S = np.diag(np.ravel(sk.T))\n\n # Compute negative log-likelihood\n nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))\n\n # Update weights\n invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))\n\n w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))\n\n return w, nll_tr\n\n# Parameters of the algorithms\nrho = float(1)/50 # Learning step\nn_it = 500 # Number of iterations\nC = 1000\ng = 4\n\n# Compute Z_tr\npoly = PolynomialFeatures(degree=g)\nZ_tr = poly.fit_transform(X_tr)\n# Normalize columns (this is useful to make algorithms more stable).)\nZn, mz, sz = normalize(Z_tr[:,1:])\nZ_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)\n\n# Compute Z_tst\nZ_tst = poly.fit_transform(X_tst)\nZn, mz, sz = normalize(Z_tst[:,1:], mz, sz)\nZ_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)\n\n# Convert target arrays to column vectors\nY_tr2 = Y_tr[np.newaxis].T\nY_tst2 = Y_tst[np.newaxis].T\n\n# Running the gradient descent algorithm\nw, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)\n\n# Classify training and test data\np_tr, D_tr = logregPredict(Z_tr, w)\np_tst, D_tst = logregPredict(Z_tst, w)\n \n# Compute error rates\nE_tr = D_tr!=Y_tr\nE_tst = D_tst!=Y_tst\n\n# Error rates\npe_tr = float(sum(E_tr)) / n_tr\npe_tst = float(sum(E_tst)) / n_tst\n\n# NLL plot.\nplt.plot(range(n_it), nll_tr,'b.:', label='Train')\nplt.xlabel('Iteration')\nplt.ylabel('Negative Log-Likelihood')\nplt.legend()\n\nprint \"The final error rates are:\"\nprint \"- Training: \" + str(pe_tr)\nprint \"- Test: \" + str(pe_tst)\nprint \"The NLL after training is \" + str(nll_tr[len(nll_tr)-1])",
"6. Logistic regression in Scikit Learn.\nThe <a href=\"http://scikit-learn.org/stable/\"> scikit-learn </a> package includes an efficient implementation of <a href=\"http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression\"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.",
"# Create a logistic regression object.\nLogReg = linear_model.LogisticRegression(C=1.0)\n\n# Compute Z_tr\npoly = PolynomialFeatures(degree=g)\nZ_tr = poly.fit_transform(Xn_tr)\n# Normalize columns (this is useful to make algorithms more stable).)\nZn, mz, sz = normalize(Z_tr[:,1:])\nZ_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)\n\n# Compute Z_tst\nZ_tst = poly.fit_transform(Xn_tst)\nZn, mz, sz = normalize(Z_tst[:,1:], mz, sz)\nZ_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)\n\n# Fit model to data.\nLogReg.fit(Z_tr, Y_tr)\n\n# Classify training and test data\nD_tr = LogReg.predict(Z_tr)\nD_tst = LogReg.predict(Z_tst)\n \n# Compute error rates\nE_tr = D_tr!=Y_tr\nE_tst = D_tst!=Y_tst\n\n# Error rates\npe_tr = float(sum(E_tr)) / n_tr\npe_tst = float(sum(E_tst)) / n_tst\n\nprint \"The final error rates are:\"\nprint \"- Training: \" + str(pe_tr)\nprint \"- Test: \" + str(pe_tst)\n\n# Compute Z_grid\nZ_grid = poly.fit_transform(X_grid)\nn_grid = Z_grid.shape[0]\nZn, mz, sz = normalize(Z_grid[:,1:], mz, sz)\nZ_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)\n\n# Compute the classifier output for all samples in the grid.\ndd = LogReg.predict(Z_grid)\npp = LogReg.predict_proba(Z_grid)[:,1]\npp = pp.reshape(xx.shape)\n\n# Paint output maps\npylab.rcParams['figure.figsize'] = 8, 4 # Set figure size\nfor i in [1, 2]:\n ax = plt.subplot(1,2,i)\n ax.plot(x0c0, x1c0,'r.', label=labels[c0])\n ax.plot(x0c1, x1c1,'g+', label=labels[c1])\n ax.set_xlabel('$x_' + str(ind[0]) + '$')\n ax.set_ylabel('$x_' + str(ind[1]) + '$')\n ax.axis('equal')\n if i==1:\n ax.contourf(xx, yy, pp, cmap=plt.cm.copper)\n else:\n ax.legend(loc='best')\n ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
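A minimal sketch of the SGD rule stated in section 5.1 of the notebook above, assuming NumPy and hypothetical names (`logregSGD`, `Z_tr`, `Y_tr`, learning step `rho`); it illustrates the update only and is not the notebook's own `logregFit`:

```python
import numpy as np

def logistic(t):
    # Standard logistic (sigmoid) function.
    return 1.0 / (1.0 + np.exp(-t))

def logregSGD(Z_tr, Y_tr, rho, n_epochs):
    # SGD for ML logistic regression: w <- w + rho * z_n * (y_n - p_n),
    # one training sample per update, several passes (epochs) over the set.
    n_samples, n_dim = Z_tr.shape
    w = np.zeros((n_dim, 1))
    for epoch in range(n_epochs):
        for n in np.random.permutation(n_samples):
            z_n = Z_tr[n, :].reshape(-1, 1)            # column vector z^(n)
            p_n = logistic(np.dot(w.T, z_n).item())    # predicted probability
            w += rho * z_n * (Y_tr[n] - p_n)           # single-sample update
    return w
```

Each update touches one sample, so an iteration is far cheaper than a full-gradient step, at the price of needing more iterations, exactly as the text notes.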
arsenovic/clifford | docs/tutorials/cga/object-oriented.ipynb | bsd-3-clause | [
"This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.\nObject Oriented CGA\nThis is a shelled out demo for a object-oriented approach to CGA with clifford. The CGA object holds the original layout for an arbitrary geometric algebra , and the conformalized version. It provides up/down projections, as well as easy ways to generate objects and operators. \nQuick Use Demo",
"from clifford.cga import CGA, Round, Translation\nfrom clifford import Cl\n\ng3,blades = Cl(3) \n\ncga = CGA(g3) # make cga from existing ga\n# or \ncga = CGA(3) # generate cga from dimension of 'base space'\n\nlocals().update(cga.blades) # put ga's blades in local namespace\n\nC = cga.round(e1,e2,e3,-e2) # generate unit sphere from points \nC \n\n## Objects \ncga.round() # from None \ncga.round(3) # from dim of space\ncga.round(e1,e2,e3,-e2) # from points\ncga.round(e1,e2,e3) # from points\ncga.round(e1,e2) # from points\ncga.round((e1,3)) # from center, radius\ncga.round(cga.round().mv)# from existing multivector\n\ncga.flat() # from None \ncga.flat(2) # from dim of space\ncga.flat(e1,e2) # from points\ncga.flat(cga.flat().mv) # from existing multivector\n\n\n## Operations\ncga.dilation() # from from None \ncga.dilation(.4) # from int\n \ncga.translation() # from None \ncga.translation(e1+e2) # from vector \ncga.translation(cga.down(cga.null_vector()))\n\ncga.rotation() # from None\ncga.rotation(e12+e23) # from bivector \n\ncga.transversion(e1+e2).mv\n\ncga.round().inverted()\n\nD = cga.dilation(5)\ncga.down(D(e1))\n\nC.mv # any CGA object/operator has a multivector\n\nC.center_down,C.radius # some properties of spheres\n\nT = cga.translation(e1+e2) # make a translation \nC_ = T(C) # translate the sphere \ncga.down(C_.center) # compute center again\n\ncga.round() # no args == random sphere \ncga.translation() # random translation \n\nif 1 in map(int, [1,2]):\n print(3)",
"Objects\nVectors",
"a = cga.base_vector() # random vector with components in base space only\na\n\ncga.up(a)\n\ncga.null_vector() # create null vector directly",
"Sphere (point pair, circles)",
"C = cga.round(e1, e2, -e1, e3) # generates sphere from points\nC = cga.round(e1, e2, -e1) # generates circle from points\nC = cga.round(e1, e2) # generates point-pair from points\n#or \nC2 = cga.round(2) # random 2-sphere (sphere)\nC1 = cga.round(1) # random 1-sphere, (circle)\nC0 = cga.round(0) # random 0-sphere, (point pair)\n\n\nC1.mv # access the multivector\n\nC = cga.round(e1, e2, -e1, e3)\nC.center,C.radius # spheres have properties\n\ncga.down(C.center) == C.center_down \n\nC_ = cga.round().from_center_radius(C.center,C.radius)\nC_.center,C_.radius",
"Operators",
"T = cga.translation(e1) # generate translation \nT.mv \n\nC = cga.round(e1, e2, -e1) \nT.mv*C.mv*~T.mv # translate a sphere \n\nT(C) # shorthand call, same as above. returns type of arg\n\nT(C).center "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst | blogs/textclassification/txtcls.ipynb | apache-2.0 | [
"<h1> Text Classification using TensorFlow on Cloud ML Engine </h1>\n\nThis notebook illustrates:\n<ol>\n<li> Creating datasets for Machine Learning using BigQuery\n<li> Creating a text classification model using the high-level Estimator API \n<li> Training on Cloud ML Engine\n<li> Deploying model\n<li> Predicting with model\n</ol>",
"# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%datalab project set -p $PROJECT\n\n!pip install --upgrade tensorflow\n\nimport tensorflow as tf\nprint tf.__version__",
"The idea is to look at the title of a newspaper article and figure out whether the article came from the New York Times or from TechCrunch. There are very sophisticated approaches that we can try, but for now, let's go with something very simple.\n<h2> Data exploration and preprocessing in BigQuery </h2>\n<p>\nWhat does the Hacker News dataset look like?",
"%bq query\nSELECT\n url, title, score\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n LENGTH(title) > 10\n AND score > 10\nLIMIT 10",
"Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>. To ensure that the parsing works for all URLs of interest, I'll group by the source to make sure there are no weird names left. This was an iterative process.",
"query=\"\"\"\nSELECT\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n COUNT(title) AS num_articles\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n AND LENGTH(title) > 10\nGROUP BY\n source\nORDER BY num_articles DESC\nLIMIT 10\n\"\"\"\n\nimport google.datalab.bigquery as bq\ndf = bq.Query(query).execute().result().to_dataframe()\ndf",
"Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.",
"query=\"\"\"\nSELECT source, REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ') AS title FROM\n(SELECT\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n title\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n AND LENGTH(title) > 10\n)\nWHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')\n\"\"\"\ndf = bq.Query(query + \" LIMIT 10\").execute().result().to_dataframe()\ndf.head()",
"For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). A simple way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).\n<p>\nSo, let's do that and save the results as CSV files.",
"traindf = bq.Query(query + \" AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0\").execute().result().to_dataframe()\nevaldf = bq.Query(query + \" AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0\").execute().result().to_dataframe()\ntraindf.head()\n\ntraindf['source'].value_counts()\n\nevaldf['source'].value_counts()\n\ntraindf.to_csv('train.csv', header=False, index=False, encoding='utf-8', sep='\\t')\nevaldf.to_csv('eval.csv', header=False, index=False, encoding='utf-8', sep='\\t')\n\n!head -3 train.csv\n\n!wc -l *.csv\n\n%bash\ngsutil cp *.csv gs://${BUCKET}/txtcls1/",
"<h2> TensorFlow code </h2>\n\nPlease explore the code in this <a href=\"txtcls1/trainer\">directory</a> -- <a href=\"txtcls1/trainer/model.py\">model.py</a> contains the key TensorFlow model and <a href=\"txtcls1/trainer/task.py\">task.py</a> has a main() that launches off the training job.\nHowever, the following cells should give you an idea of what the model code does:",
"import tensorflow as tf\nfrom tensorflow.contrib import lookup\nfrom tensorflow.python.platform import gfile\n\nprint tf.__version__\nMAX_DOCUMENT_LENGTH = 5 \nPADWORD = 'ZYXW'\n\n# vocabulary\nlines = ['Some title', 'A longer title', 'An even longer title', 'This is longer than doc length']\n\n# create vocabulary\nvocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)\nvocab_processor.fit(lines)\nwith gfile.Open('vocab.tsv', 'wb') as f:\n f.write(\"{}\\n\".format(PADWORD))\n for word, index in vocab_processor.vocabulary_._mapping.iteritems():\n f.write(\"{}\\n\".format(word))\nN_WORDS = len(vocab_processor.vocabulary_)\nprint '{} words into vocab.tsv'.format(N_WORDS)\n\n# can use the vocabulary to convert words to numbers\ntable = lookup.index_table_from_file(\n vocabulary_file='vocab.tsv', num_oov_buckets=1, vocab_size=None, default_value=-1)\nnumbers = table.lookup(tf.constant(lines[0].split()))\nwith tf.Session() as sess:\n tf.tables_initializer().run()\n print \"{} --> {}\".format(lines[0], numbers.eval()) \n\n!cat vocab.tsv\n\n# string operations\ntitles = tf.constant(lines)\nwords = tf.string_split(titles)\ndensewords = tf.sparse_tensor_to_dense(words, default_value=PADWORD)\nnumbers = table.lookup(densewords)\n\n# now pad out with zeros and then slice to constant length\npadding = tf.constant([[0,0],[0,MAX_DOCUMENT_LENGTH]])\npadded = tf.pad(numbers, padding)\nsliced = tf.slice(padded, [0,0], [-1, MAX_DOCUMENT_LENGTH])\n\nwith tf.Session() as sess:\n tf.tables_initializer().run()\n print \"titles=\", titles.eval(), titles.shape\n print \"words=\", words.eval()\n print \"dense=\", densewords.eval(), densewords.shape\n print \"numbers=\", numbers.eval(), numbers.shape\n print \"padding=\", padding.eval(), padding.shape\n print \"padded=\", padded.eval(), padded.shape\n print \"sliced=\", sliced.eval(), sliced.shape \n\n\n%bash\ngrep \"^def\" txtcls1/trainer/model.py",
"Let's make sure the code works locally on a small dataset for a few steps.",
"%bash\necho \"bucket=${BUCKET}\"\nrm -rf outputdir\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/txtcls1\npython -m trainer.task \\\n --bucket=${BUCKET} \\\n --output_dir=outputdir \\\n --job-dir=./tmp --train_steps=200",
"When I ran it, I got a 41% accuracy after a few steps. Because batchsize=32, 200 steps is essentially 6400 examples -- the full dataset is 72,000 examples, so this is not even the full dataset. And already, we are doing better than random chance.\n<p>\nOnce the code works in standalone mode, you can run it on Cloud ML Engine. You can monitor the job from the GCP console in the Cloud Machine Learning Engine section. Since we have 72,000 examples and batchsize=32, train_steps=36,000 essentially means 16 epochs.",
"%bash\nOUTDIR=gs://${BUCKET}/txtcls1/trained_model\nJOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngsutil cp txtcls1/trainer/*.py $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=trainer.task \\\n --package-path=$(pwd)/txtcls1/trainer \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=BASIC --runtime-version=1.2 \\\n -- \\\n --bucket=${BUCKET} \\\n --output_dir=${OUTDIR} \\\n --train_steps=36000",
"Training finished with an accuracy of 73%. Obviously, this was trained on a really small dataset and with more data will hopefully come even greater accuracy.\n<h2> Deploy trained model </h2>\n<p>\nDeploying the trained model to act as a REST web service is a simple gcloud call.",
"%bash\ngsutil ls gs://${BUCKET}/txtcls1/trained_model/export/Servo/\n\n%bash\nMODEL_NAME=\"txtcls\"\nMODEL_VERSION=\"v1\"\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls1/trained_model/export/Servo/ | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\ngcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION}",
"<h2> Use model to predict </h2>\n<p>\nSend a JSON request to the endpoint of the service to make it predict which publication the article is more likely to run in. These are actual titles of articles in the New York Times, github, and TechCrunch on June 19. These titles were not part of the training or evaluation datasets.",
"from googleapiclient import discovery\nfrom oauth2client.client import GoogleCredentials\nimport json\n\ncredentials = GoogleCredentials.get_application_default()\napi = discovery.build('ml', 'v1beta1', credentials=credentials,\n discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1beta1_discovery.json')\n\nrequest_data = {'instances':\n [\n {\n 'title': 'Supreme Court to Hear Major Case on Partisan Districts'\n },\n {\n 'title': 'Furan -- build and push Docker images from GitHub to target'\n },\n {\n 'title': 'Time Warner will spend $100M on Snapchat original shows and ads'\n },\n ]\n}\n\nparent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'txtcls', 'v1')\nresponse = api.projects().predict(body=request_data, name=parent).execute()\nprint \"response={0}\".format(response)",
"As you can see, the trained model predicts that the Supreme Court article is 78% likely to come from New York Times and 22% from TechCrunch. The Docker article is 89% likely to be from GitHub according to the service and the Time Warner one is 100% likely to be from TechCrunch.\nCopyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
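A rough local counterpart of the hash-based, repeatable train/eval split used in the BigQuery queries of the notebook above, assuming only that some stable hash of the title is acceptable (md5 here rather than FARM_FINGERPRINT); the helper names are illustrative:

```python
import hashlib

def hash_bucket(text, n_buckets=4):
    # Deterministic across runs and machines, unlike Python's built-in hash().
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

def split_by_hash(rows, n_buckets=4):
    # Bucket 0 -> evaluation, buckets 1..n-1 -> training, mirroring
    # ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0 / > 0 in the queries above.
    train, evaluation = [], []
    for source, title in rows:
        target = evaluation if hash_bucket(title, n_buckets) == 0 else train
        target.append((source, title))
    return train, evaluation

rows = [("nytimes", "Supreme Court to Hear Major Case on Partisan Districts"),
        ("github", "Furan -- build and push Docker images"),
        ("techcrunch", "Time Warner will spend 100M on Snapchat original shows")]
train_rows, eval_rows = split_by_hash(rows)
```

Because the bucket depends only on the title, re-running the split never moves an example between train and eval, which is the point of hashing instead of random sampling.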
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/08_image_keras/labs/flowers_fromscratch.ipynb | apache-2.0 | [
"Flowers Image Classification with TensorFlow on Cloud ML Engine\nThis notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API.",
"import os\nPROJECT = \"cloud-training-demos\" # REPLACE WITH YOUR PROJECT ID\nBUCKET = \"cloud-training-demos-ml\" # REPLACE WITH YOUR BUCKET NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\nMODEL_TYPE = \"cnn\"\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"REGION\"] = REGION\nos.environ[\"MODEL_TYPE\"] = MODEL_TYPE\nos.environ[\"TFVERSION\"] = \"1.13\" # Tensorflow version\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION",
"Input functions to read JPEG images\nThe key difference between this notebook and the MNIST one is in the input function.\nIn the input function here, we are doing the following:\n* Reading JPEG images, rather than 2D integer arrays.\n* Reading in batches of batch_size images rather than slicing our in-memory structure to be batch_size images.\n* Resizing the images to the expected HEIGHT, WIDTH. Because this is a real-world dataset, the images are of different sizes. We need to preprocess the data to, at the very least, resize them to constant size.\nRun as a Python module\nSince we want to run our code on Cloud ML Engine, we've packaged it as a python module.\nThe model.py and task.py containing the model code is in <a href=\"flowersmodel\">flowersmodel</a>\nComplete the TODOs in model.py before proceeding!\nOnce you've completed the TODOs, run it locally for a few steps to test the code.",
"%%bash\nrm -rf flowersmodel.tar.gz flowers_trained\ngcloud ml-engine local train \\\n --module-name=flowersmodel.task \\\n --package-path=${PWD}/flowersmodel \\\n -- \\\n --output_dir=${PWD}/flowers_trained \\\n --train_steps=5 \\\n --learning_rate=0.01 \\\n --batch_size=2 \\\n --model=$MODEL_TYPE \\\n --augment \\\n --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv",
"Now, let's do it on ML Engine. Note the --model parameter",
"%%bash\nOUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}\nJOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=flowersmodel.task \\\n --package-path=${PWD}/flowersmodel \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=BASIC_GPU \\\n --runtime-version=$TFVERSION \\\n -- \\\n --output_dir=$OUTDIR \\\n --train_steps=1000 \\\n --learning_rate=0.01 \\\n --batch_size=40 \\\n --model=$MODEL_TYPE \\\n --augment \\\n --batch_norm \\\n --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv",
"Monitoring training with TensorBoard\nUse this cell to launch tensorboard",
"from google.datalab.ml import TensorBoard\nTensorBoard().start(\"gs://{}/flowers/trained_{}\".format(BUCKET, MODEL_TYPE))\n\nfor pid in TensorBoard.list()[\"pid\"]:\n TensorBoard().stop(pid)\n print(\"Stopped TensorBoard with pid {}\".format(pid))",
"Here are my results:\nModel | Accuracy | Time taken | Run time parameters\n--- | :---: | ---\ncnn with batch-norm | 0.582 | 47 min | 1000 steps, LR=0.01, Batch=40\nas above, plus augment | 0.615 | 3 hr | 5000 steps, LR=0.01, Batch=40\nWhat was your accuracy?\nDeploying and predicting with model\nDeploy the model:",
"%%bash\nMODEL_NAME=\"flowers\"\nMODEL_VERSION=${MODEL_TYPE}\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\ngcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION",
"To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src=\"http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg\" />\nThe online prediction service expects images to be base64 encoded as described here.",
"%%bash\nIMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg\n\n# Copy the image to local disk.\ngsutil cp $IMAGE_URL flower.jpg\n\n# Base64 encode and create request message in json format.\npython -c 'import base64, sys, json; img = base64.b64encode(open(\"flower.jpg\", \"rb\").read()).decode(); print(json.dumps({\"image_bytes\":{\"b64\": img}}))' &> request.json",
"Send it to the prediction service",
"%%bash\ngcloud ml-engine predict \\\n --model=flowers \\\n --version=${MODEL_TYPE} \\\n --json-instances=./request.json",
"<pre>\n# Copyright 2017 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n</pre>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
boland1992/SeisSuite | docs/Preprocess_Example.ipynb | gpl-3.0 | [
"Ambient Noise Waveforms Preprocessing Methods Examples\nThe following notebook contains examples for using the psprocess.py toolbox for preprocessing raw seismic waveforms to a point where they can be used for cross-correlations. Currently the script can only operate with MSEED formats, but additional support for other formats such as: SAC, SEED and SUDS will likely be added in the future. The user specifies the input path to the raw waveform. This waveform should be only one trace for the purposes of this example, e.g. BHZ. A Preprocess object is created with this input, and the output is specified by one of the functions contained. \nThe theory for the current workflow that this example follows for preprocessing is explained in depth in Bensen et al. (2007).",
"from pysismo.pspreprocess import Preprocess\nfrom obspy import read \nfrom obspy.core import Stream\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline ",
"The Preprocess class requires many input parameters to function. Below is a list of examples.",
"# list of example variables for Preprocess class\nFREQMAX = 1./1 # bandpass parameters\nFREQMIN = 1/20.0\nCORNERS = 2\nZEROPHASE = True \nONEBIT_NORM = False # one-bit normalization\nPERIOD_RESAMPLE = 0.02 # resample period to decimate traces, after band-pass\nFREQMIN_EARTHQUAKE = 1/75.0 # earthquakes periods band\nFREQMAX_EARTHQUAKE = 1/25.0 \nWINDOW_TIME = 0.5 * 1./FREQMAX_EARTHQUAKE # time window to calculate time-normalisation weights\nWINDOW_FREQ = 0.0002 # freq window (Hz) to smooth ampl spectrum\n\n# here is a list of all of the functions and variables that the Preprocess class contains\nhelp(Preprocess)\n\n# set the path to the desired waveform, the example HOLS.mseed is provided. \nexample_path = 'tools/examples/HOLS.mseed'\n\n# import a trace from the example waveform\nexample_trace = read(example_path)[0]\n# initialise the Preprocess class\nPREPROCESS = Preprocess(FREQMIN, FREQMAX, FREQMIN_EARTHQUAKE, \n FREQMAX_EARTHQUAKE, CORNERS, ZEROPHASE, \n PERIOD_RESAMPLE, WINDOW_TIME, WINDOW_FREQ, \n ONEBIT_NORM)",
"The following processing examples are in order from Bensen et al. (2007). The final example has all of them combined.\n- First, the trace has its instrument response removed. \n- Second, the trace is trimmed, demeaned and detrended. \n- Third, the trace is passed through a butterworth band-pass filter to remove high amplitude noise and event signals as much as possible.\n- Fourth, the trace is downsampled to allow for swifter processing. The closer to the original sample rate this is left, the longer overall processing will take!\n- Fifth, the trace is normalised. This can be specified as either time-normalised or one-bit normalised.\n- Sixth, the spectrum of the waveform is 'whitened'.\nTake note that for the purposes of this example, the instrument response has been kept. This is because the metadata file for this waveform is having some technical difficulties. The resulting waveforms and techniques posed here are still valid for example purposes.",
"# process the band-pass filtered trace\n# the bands are set from the above freqmax and freqmin parameters entered when the class is initialised.\nexample_trace = PREPROCESS.bandpass_filt(example_trace)\nst = Stream(traces=[example_trace])\nst.plot()",
"Next, downsample the example_trace. The output downsampled trace is dictated by the variable PERIOD_RESAMPLE. The new sample rate is 1/PERIOD_RESAMPLE",
"# Previous trace sample rate:\nprint 'Initial trace sample rate: ', example_trace.stats.sampling_rate\n# Downsample trace\nexample_trace = PREPROCESS.trace_downsample(example_trace)\nprint 'Downsampled trace sample rate: ', example_trace.stats.sampling_rate",
"Normalise the trace, either with respect to time, or the one-bit normalisation procedure.",
"example_trace_copy = example_trace\n# one-bit normalization\nexample_trace.data = np.sign(example_trace.data)\nst = Stream(traces=[example_trace])\n# plot the one-bit normalised trace\nst.plot()\n\n# copy the trace for time normalisation\nexample_trace = example_trace_copy\n# process for time normalisation\nexample_trace = PREPROCESS.time_norm(example_trace, example_trace_copy)\nst = Stream(traces=[example_trace])\n# plot the time normalised trace\nst.plot()\n\n",
"Finally spectrally whiten the trace.",
"# process the whitened spectrum for the trace\nexample_trace = PREPROCESS.spectral_whitening(example_trace)\nst = Stream(traces=[example_trace])\n# plot the time normalised trace\nst.plot()",
"References\nBensen, G., Ritzwoller, M., Barmin, M., Levshin, A., Lin, F., & Moschetti, M. et al. (2007). Processing seismic ambient noise data to obtain reliable broad-band surface wave dispersion measurements. Geophysical Journal International, 169(3), 1239-1260. doi:10.1111/j.1365-246x.2007.03374.x"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
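A rough sketch of the spectral whitening idea applied by `PREPROCESS.spectral_whitening` in the notebook above — divide the spectrum by a smoothed copy of its own amplitude so the amplitude flattens while the phase is kept. This assumes plain NumPy and a simplified moving-average smoother; it is not the pysismo implementation:

```python
import numpy as np

def whiten(data, sample_rate, smooth_window_hz=0.0002):
    # Flatten the amplitude spectrum while preserving phase.
    n = len(data)
    spectrum = np.fft.rfft(data)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    df = freqs[1] - freqs[0]
    half = max(1, int(round(smooth_window_hz / (2.0 * df))))  # bins per half-window
    amplitude = np.abs(spectrum)
    kernel = np.ones(2 * half + 1) / (2.0 * half + 1.0)
    smoothed = np.convolve(amplitude, kernel, mode="same")
    smoothed[smoothed == 0] = 1.0           # avoid division by zero
    whitened = spectrum / smoothed          # amplitude ~1, phase unchanged
    return np.fft.irfft(whitened, n=n)
```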
paix120/DataScienceLearningClubActivities | Activity05/Mushroom Edibility Classification - Naive Bayes Bernoulli.ipynb | gpl-2.0 | [
"Mushroom Classification - Edible or Poisonous?\nby Renee Teate\nUsing Bernoulli Naive Bayes Classification from scikit-learn\nFor Activity 5 of the Data Science Learning Club: http://www.becomingadatascientist.com/learningclub/forum-13.html\nDataset from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Mushroom",
"#import pandas and numpy libraries\nimport pandas as pd\nimport numpy as np\nimport sys #sys needed only for python version\n#import gaussian naive bayes from scikit-learn\nimport sklearn as sk\n#seaborn for pretty plots\nimport seaborn as sns\n\n#display versions of python and packages\nprint('\\npython version ' + sys.version)\nprint('pandas version ' + pd.__version__)\nprint('numpy version ' + np.__version__)\nprint('sk-learn version ' + sk.__version__)\nprint('seaborn version ' + sns.__version__)\n",
"The dataset doesn't include column names, and the values are text characters",
"#read in data. it's comma-separated with no column names.\ndf = pd.read_csv('agaricus-lepiota.data', sep=',', header=None,\n error_bad_lines=False, warn_bad_lines=True, low_memory=False)\n# set pandas to output all of the columns in output\npd.options.display.max_columns = 25\n#show the first 5 rows\nprint(df.sample(n=5))",
"Added column names from the UCI documentation",
"#manually add column names from documentation (1st col is class: e=edible,p=poisonous; rest are attributes)\ndf.columns = ['class','cap-shape','cap-surface','cap-color','bruises','odor','gill-attachment',\n 'gill-spacing','gill-size','gill-color','stalk-shape','stalk-root',\n 'stalk-surf-above-ring','stalk-surf-below-ring','stalk-color-above-ring','stalk-color-below-ring',\n 'veil-type','veil-color','ring-number','ring-type','spore-color','population','habitat']\n\nprint(\"Example values:\\n\")\nprint(df.iloc[3984]) #this one has a ? value - how are those treated by classifier?",
"The dataset is split fairly evenly between the edible and poison classes",
"#show plots in notebook\n%matplotlib inline\n\n#bar chart of classes using pandas plotting\nprint(df['class'].value_counts())\n#df['class'].value_counts().plot(kind='bar')\n",
"Let's see how well our classifier can identify poisonous mushrooms by combinations of features",
"#put the features into X (everything except the 0th column)\nX = pd.DataFrame(df, columns=df.columns[1:len(df.columns)], index=df.index)\n#put the class values (0th column) into Y \nY = df['class']\n\n#encode the class labels as numeric\nfrom sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(Y)\n#print(le.classes_)\n#print(np.array(Y))\n#Y values now boolean values; poison = 1\ny = le.transform(Y)\n#print(y_train)\n\n#have to initialize or get error below\nx = pd.DataFrame(X,columns=[X.columns[0]])\n\n#encode each feature column and add it to x_train (one hot encoder requires numeric input?)\nfor colname in X.columns:\n le.fit(X[colname])\n #print(colname, le.classes_)\n x[colname] = le.transform(X[colname])\n\n#encode the feature labels using one-hot encoding\nfrom sklearn import preprocessing\noh = preprocessing.OneHotEncoder(categorical_features='all')\noh.fit(x)\nxo = oh.transform(x).toarray()\n#print(xo)\n\nprint('\\nEncoder Value Counts Per Column:')\nprint(oh.n_values_) \nprint('\\nExample Feature Values - row 1 in X:')\nprint(X.iloc[1])\nprint('\\nExample Encoded Feature Values - row 1 in xo:')\nprint(xo[1])\nprint('\\nClass Values (Y):')\nprint(np.array(Y))\nprint('\\nEncoded Class Values (y):')\nprint(y)\n\n\n#split the dataset into training and test sets\nfrom sklearn.cross_validation import train_test_split\nx_train, x_test, y_train, y_test = train_test_split(xo, y, test_size=0.33)\n\n#initialize and fit the naive bayes classifier\nfrom sklearn.naive_bayes import BernoulliNB\nskbnb = BernoulliNB()\nskbnb.fit(x_train,y_train)\ntrain_predict = skbnb.predict(x_train)\n#print(train_predict)\n\n#see how accurate the training data was fit\nfrom sklearn import metrics\nprint(\"Training accuracy:\",metrics.accuracy_score(y_train, train_predict))\n\n#use the trained model to predict the test values\ntest_predict = skbnb.predict(x_test)\nprint(\"Testing accuracy:\",metrics.accuracy_score(y_test, test_predict))\n\n\nprint(\"\\nClassification Report:\")\nprint(metrics.classification_report(y_test, test_predict, target_names=['edible','poisonous']))\nprint(\"\\nConfusion Matrix:\")\nskcm = metrics.confusion_matrix(y_test,test_predict)\n#putting it into a dataframe so it prints the labels\nskcm = pd.DataFrame(skcm, columns=['predicted-edible','predicted-poisonous'])\nskcm['actual'] = ['edible','poisonous']\nskcm = skcm.set_index('actual')\n\n#NOTE: NEED TO MAKE SURE I'M INTERPRETING THE ROWS & COLS RIGHT TO ASSIGN THESE LABELS!\nprint(skcm)\n\nprint(\"\\nScore (same thing as test accuracy?): \", skbnb.score(x_test,y_test))\n\n",
"Add interpretation of numbers above (after verifying I entered the parameters correctly and the metrics are labeled right)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
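A toy sketch of what `BernoulliNB` computes from the one-hot features built above: class log-priors plus Laplace-smoothed Bernoulli feature likelihoods. Plain NumPy with illustrative function names, not scikit-learn's internals:

```python
import numpy as np

def bernoulli_nb_fit(X, y, alpha=1.0):
    # X: binary feature matrix (n_samples, n_features); y: class labels.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    log_prior, logp_one, logp_zero = [], [], []
    for c in classes:
        Xc = X[y == c]
        log_prior.append(np.log(Xc.shape[0] / float(X.shape[0])))
        # Laplace-smoothed probability that each feature equals 1 in class c.
        p = (Xc.sum(axis=0) + alpha) / (Xc.shape[0] + 2.0 * alpha)
        logp_one.append(np.log(p))
        logp_zero.append(np.log(1.0 - p))
    return classes, np.array(log_prior), np.array(logp_one), np.array(logp_zero)

def bernoulli_nb_predict(X, classes, log_prior, logp_one, logp_zero):
    X = np.asarray(X, dtype=float)
    # log P(c|x) up to a constant: prior + sums over present/absent features.
    scores = log_prior + X.dot(logp_one.T) + (1.0 - X).dot(logp_zero.T)
    return classes[np.argmax(scores, axis=1)]
```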
maxkleiner/maXbox4 | MNISTSinglePredict2Test.ipynb | gpl-3.0 | [
"<a href=\"https://colab.research.google.com/github/maxkleiner/maXbox4/blob/master/MNISTSinglePredict2Test.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nMNIST Single Multi Prediction\nFor this tutor we’ll explore one of the classic machine learning datasets – hand written digits classification. We have set up a very simple SVC to classify the MNIST digits to make one single predict.\nFirst we load the libraries and the dataset:",
"#sign:max: MAXBOX8: 13/03/2021 07:46:37 \nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn import tree\nfrom sklearn.svm import SVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn import datasets\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\n# [height, weight, 8*8 pixels of digits 0..9]\ndimages = datasets.load_digits()\nprint(type(dimages), len(dimages.data), 'samples')\n",
"The dataset () is available either for download from the UCI ML repository or via a Python library scikit-learn dataset. Then we setup the Support Vector Classifier with the training data X and the target y:",
"sclf = SVC(gamma=0.001, C=100, kernel='linear')\n\nX= dimages.data[:-10]\ny= dimages.target[:-10]\nprint('train set samples:',len(X))\n",
"Gamma is the learning rate and the higher the value of gamma the more precise the decision boundary would be. C (regularization) is the penalty of the fault tolerance. Having a larger C will lead to smaller values for the slack variables. This means that the number of support vectors will decrease. When you run the prediction, it will need to calculate the indicator function for each support vector. Now we train (fit) the samples:",
"sclf.fit(X,y)",
"In the last step we predict a specific digit from the test set (only the last 10 samples are unseen), means we pass an actual image and SVC makes the prediction of which digit belongs to the image:",
"testimage = -5\n\ns_prediction = sclf.predict([dimages.data[testimage]])\nprint ('the image maybe belongs to ',s_prediction)\nplt.imshow(dimages.images[testimage], cmap=plt.cm.gray_r, interpolation=\"nearest\")\nplt.show()",
"The same fit we try with a Random Forest Classifier to finish the first step of this lesson:",
"#RandomForestClassifier\nrfc_clf = RandomForestClassifier()\nrfc_clf.fit(X,y)\nrfc_prediction = rfc_clf.predict([dimages.data[testimage]])\nprint ('predict with RFC ',rfc_prediction)",
"There are many ways to improve this predict, including not using a vector classifier and go further with a neural classifier, but here’s a simple one to start what we do. Let’s just simplify our images by making them true black and white and stack an array.\nMNIST Multi Prediction\n\nNow we split explicit data in train- and test-set. Splitting the given images in 80:20 ratio so that 80% image is available for training and 20 % image is available for testing. We consider the data as pixels and the target as labels.\nConvert and create the dataframe from datasets. We are using support vector machines for classification. Fit method trains the model and score will test it against the given test set and score.\nA Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In two dimensional space this hyperplane is a line dividing a plane in two parts where in each class lay in either side.",
"#df = pd.DataFrame(data=dimages.data, columns=dimages.feature_names)\ndf = pd.DataFrame(data=dimages.data)\nprint(df.head(5))\ndf['target'] = pd.Series(dimages.target)\n#df['pixels'] = dimages.data[:,1:64] #pd.Series(dimages.data[:,1:785])\nprint(df['target'])\nprint(df.shape) #print(df.info)\n \npixels = df\nlabels = df.target\nprint('pixels ',pixels)",
"We are ready for splitting the given images in 80:20 ratio so that 80% image is available for training and 20 % image as unseen or unknown is available for testing.",
"train_images, test_images, train_labels, test_labels = \\\n train_test_split(pixels,labels,train_size=0.8,random_state=2);\n\nprint('train size: ',len(train_images), len(train_labels)) \nprint('test size: ',len(test_images), len(test_labels)) \n \nsclf.fit(train_images, train_labels)\nprint('test score ',sclf.score(test_images,test_labels))",
"This gives us the score of 97 percent ( 0.977777 ) which is at all a good score. We would try to increase the accuracy but this is sort of challenge.\nThe dataset description of our primer says: Each image is 8 pixels in height and 8 pixels in width, for a total of 64 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive.\nWould be nice to get the confusion matrix of MNIST dataset to get an impression of the score.",
"from sklearn.metrics import confusion_matrix\ntest_predictions = sclf.predict(test_images)\n#print(confusion_matrix(test_labels,np.argmax(test_predictions,axis=1)))\nprint(confusion_matrix(test_labels, test_predictions))",
"Splitting the given images in 70:30 ratio shows a slight different confusion matrix so that 70% image is available for training and 30 % image as unseen or unknown is available for testing. Number 8 has probably most problems to get recognized! So disguise as 8 you can be a 6 or 9 and thats logical cause the 8 is in a 7-segment LCD display the base pattern! In german we say that with the word Achtung ;-).",
"train_images, test_images, train_labels, test_labels = \\\n train_test_split(pixels,labels,train_size=0.7,random_state=2);\n\nsclf.fit(train_images, train_labels)\nprint('test score ',sclf.score(test_images,test_labels))\ntest_predictions = sclf.predict(test_images)\nprint(confusion_matrix(test_labels, test_predictions))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
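A small helper for reading the confusion matrices printed above as per-digit recall (diagonal over row sums, assuming rows are the true labels); plain NumPy, not part of the original notebook:

```python
import numpy as np

def per_class_recall(cm):
    # Fraction of each true class that was predicted correctly.
    cm = np.asarray(cm, dtype=float)
    row_totals = cm.sum(axis=1)
    row_totals[row_totals == 0] = 1.0   # guard against empty classes
    return np.diag(cm) / row_totals

# Hypothetical 3-class example (rows = true labels, columns = predictions).
cm = [[50, 2, 1],
      [3, 45, 4],
      [2, 6, 40]]
print(per_class_recall(cm))
```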
kastnerkyle/kastnerkyle.github.io-nikola | blogsite/posts/wavelets.ipynb | bsd-3-clause | [
"Wavelets are a fundamental part of modern signal processing and feature engineering. Utilizing well developed basis functions with certain mathematical properties, rather than the more typical sines and cosines used for the DFT (discrete fourier transform) and DCT (discrete cosine transform), wavelet analysis has many interesting applications.\n<!-- TEASER_END -->\n\nWavelets have been used extensively for denoising and compression, but the DFT and DCT have been used extensively in these areas as well. One unique area where wavelets shine is peak detection. While many peak detection algorithms are based on heuristics, and are often datset specfic, wavelets provide a good framework for generalized peak search. This will be discussed in greater detail after the break.",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Data\nSunspots are a typical dataset for testing different peak detection algorithms. Besides having well defined, Gaussian-ish peaks, the height of the local maxima and minima are also variant over time. These attributes make sunspot datasets good for baselining different peak detection algorithms.",
"from utils import progress_bar_downloader\nimport os\n\nlink = 'http://www.quandl.com/api/v1/datasets/SIDC/SUNSPOTS_A.csv?&trim_start=1700-12-31&trim_end=2013-12-31&sort_order=desc'\ndlname = 'sunspots.csv'\nif not os.path.exists('./%s' % dlname):\n progress_bar_downloader(link, dlname)\nelse:\n print('%s already downloaded!' % dlname)\n \nsunspot = np.genfromtxt(dlname, delimiter=',', skip_header=1, usecols=1)\nplt.plot(sunspot, color='steelblue')\nplt.title('Annual Sunspot Data, 1700-2014, from quandl.com')",
"The piece-regular function is another popular test signal, consisting of wildly non-Gaussian shapes. It is not a \"real-world\" example for most types of data, but is an extreme test of peak detection - many humans even have trouble picking all the peaks in this one!\nThe simplest way to acquire the piece-regular dataset is to use the load_signal.m function. I have pre-run this function, and saved the dataset to my public dropbox account as a .mat file. We can then fetch the file from there, in a similar way as the sunspot data.\nFor the curious, these were the octave commands (octave was run from the directory containing load_signal.m):\n>> x = load_signal('piece-regular');\n>> save -V7 piece-regular.mat x",
"from scipy.io import loadmat\nlink = 'https://dl.dropboxusercontent.com/u/15378192/piece-regular.mat'\ndlname = 'piece-regular.mat'\nif not os.path.exists('./%s' % dlname):\n progress_bar_downloader(link, dlname)\nelse:\n print('%s already downloaded!' % dlname)\n\ndata = loadmat(dlname)\npr = data['x']\n\nplt.plot(pr, color='steelblue')\nplt.title('Piecewise Regular Data, from WaveLab')",
"We Have To Go Deeper\nThe filterbank representation of the wavelet (seen below) is very convenient for wavelet peak finding. Extending code from a previous post, we will create an arbitrary depth anlysis wavelet filterbank (no reconstruction), in order to perform peak detection.\nThe basic algorithm is detailed in this whitepaper. In short, this method involves finding zero crossings in the level X detail coefficients (generated by the bior3.1 g[n]) where there is less noise in the zero crossings. Tracking those peaks back through the lower levels, we can refine the peak location with higher resolution data. I will use the haar wavelet, mainly because I know how to construct it, so the results will be slightly different than the whitepaper. For non-tutorial use, check out the PyWavelets package - it supports a ton of different wavelet functions and is very popular. scipy.signal also has some limited support for wavelet methods.\nThe NI whitepaper also chooses to use the undecimated wavelet transform - I have chosen to use the decimated version, and compensate for the results. This introduces some noise into the estimates, and may account for some of the \"off-by-one\" peaks in the results. Hoever, this fits better into the filterbank model for wavelet decomposition.",
"from IPython.display import Image\nImage(url='http://upload.wikimedia.org/wikipedia/commons/2/22/Wavelets_-_Filter_Bank.png')\n\nfrom numpy.lib.stride_tricks import as_strided\n\ndef polyphase_core(x, m, f):\n #x = input data\n #m = decimation rate\n #f = filter\n #Force it to be 1D\n x = x.ravel()\n #Hack job - append zeros to match decimation rate\n if x.shape[0] % m != 0:\n x = np.append(x, np.zeros((m - x.shape[0] % m,)))\n if f.shape[0] % m != 0:\n f = np.append(f, np.zeros((m - f.shape[0] % m,)))\n polyphase = p = np.zeros((m, (x.shape[0] + f.shape[0]) / m), dtype=x.dtype)\n p[0, :-1] = np.convolve(x[::m], f[::m])\n #Invert the x values when applying filters\n for i in range(1, m):\n p[i, 1:] = np.convolve(x[m - i::m], f[i::m])\n return p\n\ndef wavelet_lp(data, ntaps=4):\n #type == 'haar':\n f = np.array([1.] * ntaps)\n return np.sum(polyphase_core(data, 2, f), axis=0)\n\ndef wavelet_hp(data, ntaps=4):\n #type == 'haar':\n if ntaps % 2 is not 0:\n raise ValueError(\"ntaps should be even\")\n half = ntaps // 2\n f = np.array(([-1.] * half) + ([1.] * half))\n return np.sum(polyphase_core(data, 2, f), axis=0)\n\ndef wavelet_filterbank(n, data):\n #Create and store all coefficients to level n\n x = data\n all_lp = []\n all_hp = []\n for i in range(n):\n c = wavelet_lp(x)\n x = wavelet_hp(x)\n all_lp.append(c)\n all_hp.append(x)\n return all_lp, all_hp\n\ndef zero_crossing(x):\n x = x.ravel()\n #Create an X, 2 array of overlapping points i.e.\n #[1, 2, 3, 4, 5] becomes\n #[[1, 2],\n #[2, 3],\n #[3, 4],\n #[4, 5]]\n o = as_strided(x, shape=(x.shape[0] - 1, 2), strides=(x.itemsize, x.itemsize))\n #Look for sign changes where sign goes from positive to negative - this is local maxima!\n #Negative to positive is local minima\n return np.where((np.sum(np.sign(o), axis=1) == 0) & (np.sign(o)[:, 0] == 1.))[0]\n\ndef peak_search(hp_arr, arr_max):\n #Given all hp coefficients and a limiting value, find and return all peak indices\n zero_crossings = []\n for n, _ in enumerate(hp_arr):\n #2 ** (n + 1) is required to rescale due to decimation by 2 at each level\n #Also remove a bunch of redundant readings due to clip using np.unique\n zero_crossings.append(np.unique(np.clip(2 ** (n + 1) * zero_crossing(hp_arr[n]), 0, arr_max)))\n\n #Find refined estimate for each peak\n peak_idx = []\n for v in zero_crossings[-1]:\n v_itr = v\n for n in range(len(zero_crossings) - 2, 0, -1):\n v_itr = find_nearest(v_itr, zero_crossings[n])\n peak_idx.append(v_itr)\n #Only return unique answers\n return np.unique(np.array(peak_idx, dtype='int32'))\n \ndef find_nearest(v, x):\n return x[np.argmin(np.abs(x - v))]\n \ndef peak_detect(data, depth):\n if depth == 1:\n raise ValueError(\"depth should be > 1\")\n #Return indices where peaks were found\n lp, hp = wavelet_filterbank(depth, data)\n return peak_search(hp, data.shape[0] - 1)",
"Peaking Duck\nOne of the trickiest things about this method is choosing the proper depth for wavelet peak detection. Too deep, and sharp peaks may be eliminated due to decimation. Not deep enough, and there may be false positives o spurious results.\nEmpirically, it appears that the number of taps in the wavelet filter has a similar effect to depth - too many taps seems to blur out sharp peaks, but not enough taps causes strange peaks to be detected. 4 taps seems to be a good number for these two datasets, but this may need tweaking for new applications.\nInterestingly enough, changing one line in the zero_crossing function from:\nreturn np.where((np.sum(np.sign(o), axis=1) == 0) & (np.sign(o)[:, 0] == 1.))[0]\nto\nreturn np.where((np.sum(np.sign(o), axis=1) == 0) & (np.sign(o)[:, 0] == -1.))[0]\nchanges this from a peak detector (local maximum) into a valley detector (local minimum) - very cool! Though the detected peaks are not perfect, they are pretty close and could probably be improved by choosing a better wavelet, or better estimation of the proper depth parameter.",
"indices = peak_detect(sunspot, 2)\nplt.title('Detected peaks for sunspot dataset')\nplt.plot(sunspot, color='steelblue')\nplt.plot(indices, sunspot[indices], 'x', color='darkred')\nplt.figure()\nindices = peak_detect(pr, 3)\nplt.title('Detected peaks for piece-regular dataset')\nplt.plot(pr, color='steelblue')\nplt.plot(indices, pr[indices], 'x', color='darkred')",
"Denoise and Defury\nTo complete our tour of wavelets, it would be beneficial to show how wavelets can be used for denoising as well. For this application, I will use the matrix representation of the wavelet basis, rather than the filterbank interpretation. Though they should be equivalent, the block transform is more straightforward (IMO) if you do not need the intermediate coefficients, as we did in the peak detection application. The MATLAB code is also a useful resource.\nThe general idea is to transform the input signal, then remove noise using a soft threshold in the transform space. This will zero out small coefficients, while larger coefficients will remain unaltered. Following this thresholding operation with the inverse wavelet transform we will get a filtered version of the original signal. Because each wavelet contains many different frequencies, this type of filtering can better preserve edges and trends than a simple highpass or lowpass filter. Let's check it out.",
"def haar_matrix(size):\n level = int(np.ceil(np.log2(size)))\n H = np.array([1.])[:, None]\n NC = 1. / np.sqrt(2.)\n LP = np.array([1., 1.])[:, None] \n HP = np.array([1., -1.])[:, None]\n for i in range(level):\n H = NC * np.hstack((np.kron(H, LP), np.kron(np.eye(len(H)),HP)))\n H = H.T\n return H\n\ndef dwt(x):\n H = haar_matrix(x.shape[0])\n x = x.ravel()\n #Zero pad to next power of 2\n x = np.hstack((x, np.zeros(H.shape[1] - x.shape[0])))\n return np.dot(H, x)\n\ndef idwt(x):\n H = haar_matrix(x.shape[0])\n x = x.ravel()\n #Zero pad to next power of 2\n x = np.hstack((x, np.zeros(H.shape[0] - x.shape[0])))\n return np.dot(H.T, x)\n\ndef wthresh(a, thresh):\n #Soft threshold\n res = np.abs(a) - thresh\n return np.sign(a) * ((res > 0) * res)\n\nrstate = np.random.RandomState(0)\ns = pr + 2 * rstate.randn(*pr.shape)\nthreshold = t = 5\nwt = dwt(s)\nwt = wthresh(wt, t)\nrs = idwt(wt)\n\nplt.plot(s, color='steelblue')\nplt.title('Noisy Signal')\nplt.figure()\nplt.plot(dwt(s), color='darkred')\nplt.title('Wavelet Transform of Noisy Signal')\nplt.figure()\nplt.title('Soft Thresholded Transform Coefficients')\nplt.plot(wt, color='darkred')\nplt.figure()\nplt.title('Reconstructed Signal after Thresholding')\nplt.plot(rs, color='steelblue')",
"The Compression Dimension\nThe use of wavelets for compression is basically identical to its use for filtering. Keep the most powerful coefficients, while zeroing out everything else. On reconstruction, the result will be very close to the original, and much closer than if you attempted the same thing with a DFT. I am not sure how this compares to DCT compression, but the visual result seems pretty good to me.",
"from scipy import misc\nlink = 'http://sipi.usc.edu/~ortega/icip2001/original/lena_512.gif'\ndlname = 'lena.gif'\nif not os.path.exists('./%s' % dlname):\n progress_bar_downloader(link, dlname)\nelse:\n print('%s already downloaded!' % dlname)\n \ndef dwt2d(x):\n H = haar_matrix(x.shape[0])\n return np.dot(np.dot(H, x), H.T) \n\ndef idwt2d(x):\n H = haar_matrix(x.shape[0])\n return np.dot(np.dot(H.T, x), H) \n\nlena = misc.imread(dlname)\nwt = dwt2d(lena)\nthresh = wthresh(wt, 15)\nrs = idwt2d(thresh)\n\nwtnz = len(wt.nonzero()[0]) + len(wt.nonzero()[1])\nrsnz = len(thresh.nonzero()[0]) + len(thresh.nonzero()[1])\n\nreduction = 'using %.2f percent the coefficients of the original' % (100 * float(rsnz) / wtnz)\n\nplt.imshow(lena, cmap='gray')\nf = plt.gca()\nf.axes.get_xaxis().set_visible(False)\nf.axes.get_yaxis().set_visible(False)\nplt.title('Original Lena')\nplt.figure()\nplt.imshow(rs, cmap='gray')\nf = plt.gca()\nf.axes.get_xaxis().set_visible(False)\nf.axes.get_yaxis().set_visible(False)\nplt.title('Compressed Lena, %s' % reduction)",
"Wavelets are a very interesting and powerful tool for signal processing and machine learning, and I hope this post has brought some clarity to the subject.\nkk"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
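The post points to PyWavelets for non-tutorial use; assuming `pywt` is installed and with a purely illustrative threshold value, the same denoise-by-soft-threshold idea looks roughly like this sketch:

```python
import numpy as np
import pywt  # PyWavelets, the package recommended in the post

def pywt_denoise(signal, wavelet="haar", thresh=5.0):
    # Decompose, soft-threshold only the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(signal, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Example on a noisy ramp; real thresholds are usually set from a noise estimate.
noisy = np.linspace(0, 10, 256) + np.random.randn(256)
cleaned = pywt_denoise(noisy)
```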
fastai/fastai | dev_nbs/course/lesson4-collab.ipynb | apache-2.0 | [
"from fastai.collab import *\nfrom fastai.tabular.all import *",
"Collaborative filtering example\ncollab models use data in a DataFrame of user, items, and ratings.",
"user,item,title = 'userId','movieId','title'\n\npath = untar_data(URLs.ML_SAMPLE)\npath\n\nratings = pd.read_csv(path/'ratings.csv')\nratings.head()",
"That's all we need to create and train a model:",
"dls = CollabDataLoaders.from_df(ratings, bs=64, seed=42)\n\ny_range = [0,5.5]\n\nlearn = collab_learner(dls, n_factors=50, y_range=y_range)\n\nlearn.fit_one_cycle(3, 5e-3)",
"Movielens 100k\nLet's try with the full Movielens 100k data dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip",
"path=Config().data/'ml-100k'\n\nratings = pd.read_csv(path/'u.data', delimiter='\\t', header=None,\n names=[user,item,'rating','timestamp'])\nratings.head()\n\nmovies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,\n names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])\nmovies.head()\n\nlen(ratings)\n\nrating_movie = ratings.merge(movies[[item, title]])\nrating_movie.head()\n\ndls = CollabDataLoaders.from_df(rating_movie, seed=42, valid_pct=0.1, bs=64, item_name=title, path=path)\n\ndls.show_batch()\n\ny_range = [0,5.5]\n\nlearn = collab_learner(dls, n_factors=40, y_range=y_range)\n\nlearn.lr_find()\n\nlearn.fit_one_cycle(5, 5e-3, wd=1e-1)\n\nlearn.save('dotprod')",
"Here's some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.\nInterpretation\nSetup",
"learn.load('dotprod');\n\nlearn.model\n\ng = rating_movie.groupby('title')['rating'].count()\ntop_movies = g.sort_values(ascending=False).index.values[:1000]\ntop_movies[:10]",
"Movie bias",
"movie_bias = learn.model.bias(top_movies, is_item=True)\nmovie_bias.shape\n\nmean_ratings = rating_movie.groupby('title')['rating'].mean()\nmovie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]\n\nitem0 = lambda o:o[0]\n\nsorted(movie_ratings, key=item0)[:15]\n\nsorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]",
"Movie weights",
"movie_w = learn.model.weight(top_movies, is_item=True)\nmovie_w.shape\n\nmovie_pca = movie_w.pca(3)\nmovie_pca.shape\n\nfac0,fac1,fac2 = movie_pca.t()\nmovie_comp = [(f, i) for f,i in zip(fac0, top_movies)]\n\nsorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nmovie_comp = [(f, i) for f,i in zip(fac1, top_movies)]\n\nsorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nidxs = np.random.choice(len(top_movies), 50, replace=False)\nidxs = list(range(50))\nX = fac0[idxs]\nY = fac2[idxs]\nplt.figure(figsize=(15,15))\nplt.scatter(X, Y)\nfor i, x, y in zip(top_movies[idxs], X, Y):\n plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bosonbeard/Funny-models-and-scripts | 3.Machine_learning/2.family_registered_habr.ipynb | unlicense | [
"Введение (Introduction)\nДанный блокнот является дополнительным материалом к статье по демонстрации примеров линейной регрессии представленной публикации на портале Habrahabr – \nУчитывая возможные ошибки вызванные техническими и «человеческими» факторами при обработке данных, рекомендуется применение данного набора исключительно в демонстрационных целях. \n\nThis notebook is an additional material to the article on demonstrating examples of linear regression of the presented publication on the portal Habrahabr -\nMaterials may contain errors, not recommended for serious research.\nP.S. English text from google translate :)\nОписание данных (Data description)\nДанные о регистрации актов гражданского состояния в Москве с 2010 года по настоящее время с разбивкой по месяцам. Например, регистрации браков, рождений, смертей, установлений отцовства, смены имени и т.п.\nПодробное описание данных по адресу: https://data.mos.ru/opendata/7704111479-dinamika-registratsii-aktov-grajdanskogo-sostoyaniya/description?versionNumber=2&releaseNumber=33\n\nData of registration of acts of civil status in Moscow from 2010 to the present time by months. For example, registration of marriages, births, deaths, paternity establishments, name changes, etc.\nDetailed description of the data at: https://data.mos.ru/opendata/7704111479-dinamika-registratsii-aktov-grajdanskogo-sostoyaniya/description?versionNumber=2&releaseNumber=33",
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import linear_model\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline",
"Загрузка и предобработка (Download and preprocessing)",
"#download\ndf = pd.read_csv('https://op.mos.ru/EHDWSREST/catalog/export/get?id=230308', compression='zip', header=0, encoding='cp1251', sep=';', quotechar='\"')\n#look at the data\ndf.head(12)",
"Закодируем месяца числовыми значениями и удалим ненужные для анализа столбцы\n\nWe will code the month with numeric values and delete the columns we do not need for analysis",
"#code months\nd={'январь':1, 'февраль':2, 'март':3, 'апрель':4, 'май':5, 'июнь':6, 'июль':7,\n 'август':8, 'сентябрь':9, 'октябрь':10, 'ноябрь':11, 'декабрь':12}\ndf.Month=df.Month.map(d)\n\n#delete some unuseful columns\ndf.drop(['ID','global_id','Unnamed: 12'],axis=1,inplace=True)\n\n#look at the data\ndf.head(12)",
"Построим попарные графики зависимостей, но для наглядности возьмем только часть признаков\n\nWe construct pairwise graphs of dependencies, but for clarity we take only a part of the features",
"columns_to_show = ['StateRegistrationOfBirth', 'StateRegistrationOfMarriage', \n 'StateRegistrationOfPaternityExamination', 'StateRegistrationOfDivorce','StateRegistrationOfDeath']\ndata=df[columns_to_show]\n\ngrid = sns.pairplot(data)",
"Посмотрим, изменит ли что-то масштабирование. \n\nLet's see the result of scaling.",
"# change scale of features\nscaler = MinMaxScaler()\ndf2=pd.DataFrame(scaler.fit_transform(df))\ndf2.columns=df.columns\ndata2=df2[columns_to_show]\n\ngrid2 = sns.pairplot(data2)",
"Почти без разницы\n\nAlmost without difference\nПростейшая регрессия по 1 признаку (Regression 1 features)\nРассмотрим два параметра с наиболее выраженной линейной зависимостью StateRegistrationOfBirth и StateRegistrationOfPaternityExamination\n\nConsider two parameters with the most pronounced linear dependence StateRegistrationOfBirth and StateRegistrationOfPaternityExamination",
"#get data for model\n\nX = data2['StateRegistrationOfBirth'].values\ny = data2['StateRegistrationOfPaternityExamination'].values\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\nX_train=np.reshape(X_train,[X_train.shape[0],1])\ny_train=np.reshape(y_train,[y_train.shape[0],1])\nX_test=np.reshape(X_test,[X_test.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1])\n\n#teach model and get predictions\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_test,y_test))",
"График для зависимости, полученной по обучающим данным\n\nThe graph for the dependence obtained from the training data",
"plt.scatter(X_train, y_train, color='black')\nplt.plot(X_train, lr.predict(X_train), color='blue',\n linewidth=3)\n\nplt.xlabel('StateRegistrationOfBirth')\nplt.ylabel('State Registration OfPaternity Examination')\nplt.title=\"Regression on train data\"",
"График для зависимости, полученной поконтрольным данным\n\nThe graph for the dependence obtained from the test data",
"plt.scatter(X_test, y_test, color='black')\nplt.plot(X_test, lr.predict(X_test), color='green',\n linewidth=3)\n\nplt.xlabel('StateRegistrationOfBirth')\nplt.ylabel('State Registration OfPaternity Examination')\nplt.title=\"Regression on test data\"",
"Регрессия по нескольким признакам и Lasso регуляризация (Regression on several features and Lasso regularization)\nПопробуем предсказать другой параметр - число зарегестрированных браков, на основании той части признаков, для которых ранее строили диаграммы ('StateRegistrationOfBirth', 'StateRegistrationOfMarriage', 'StateRegistrationOfPaternityExamination', 'StateRegistrationOfDivorce','StateRegistrationOfDeath')\n\nLet's try to predict another parameter - the number of registered marriages, based on that part of the characteristics for which the charts were previously built ('StateRegistrationOfBirth', 'StateRegistrationOfMarriage', 'StateRegistrationOfPaternityExamination', 'StateRegistrationOfDivorce', 'StateRegistrationOfDeath')",
"#get main data\ncolumns_to_show2=columns_to_show.copy()\ncolumns_to_show2.remove(\"StateRegistrationOfMarriage\")\n\n#get data for a model\nX = data2[columns_to_show2].values\ny = data2['StateRegistrationOfMarriage'].values\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\ny_train=np.reshape(y_train,[y_train.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1])",
"Обучим простою линейную регрессию на 4-х мерном векторе признаков\n\nWe teach a linear regression on a 4-dimensional vector of features",
"lr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_test,y_test))\n",
"Рассмотрим линейную регрессию с регуляризацией - Лассо\n\nConsider linear regression with Lasso regularization",
"#let's look at the different alpha parameter:\n\n#large\nRid=linear_model.Lasso (alpha = 0.01)\nRid.fit(X_train, y_train)\nprint(' Appha:', Rid.alpha)\nprint(' Coefficients:', Rid.coef_)\nprint(' Score:', Rid.score(X_test,y_test))\n\n#Small\nRid=linear_model.Lasso (alpha = 0.000000001)\nRid.fit(X_train, y_train)\nprint('\\n Appha:', Rid.alpha)\nprint(' Coefficients:', Rid.coef_)\nprint(' Score:', Rid.score(X_test,y_test))\n\n#Optimal (for these test data)\nRid=linear_model.Lasso (alpha = 0.00025)\nRid.fit(X_train, y_train)\nprint('\\n Appha:', Rid.alpha)\nprint(' Coefficients:', Rid.coef_)\nprint(' Score:', Rid.score(X_test,y_test))",
"Добавим откровенно бесполезный признак\n\nAdd a seless feature",
"columns_to_show3=columns_to_show2.copy()\ncolumns_to_show3.append(\"TotalNumber\")\ncolumns_to_show3\n\nX = df2[columns_to_show3].values\n# y hasn't changed\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\ny_train=np.reshape(y_train,[y_train.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1])\n",
"Для начала посмотрим на результаты без регуляризации\n\nFirst, look at the results without regularization",
"lr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_test,y_test))\n",
"А теперь с регуляризацией (Lasso).\nПри малых значениях коэффициента регуляризации получаем незначительное улучшение.\n\nAnd now with regularization (Lasso).\nFor small values of the regularization coefficient we obtain a slight improvement.",
"#Optimal (for these test data)\nRid=linear_model.Lasso (alpha = 0.00015)\nRid.fit(X_train, y_train)\nprint('\\n Appha:', Rid.alpha)\nprint(' Coefficients:', Rid.coef_)\nprint(' Score:', Rid.score(X_test,y_test))",
"При больших значениях альфа можно посмотреть, на отбор признаков в действии\n\nFor large alpha values, you can look at the selection of features in action",
"#large\nRid=linear_model.Lasso (alpha = 0.01)\nRid.fit(X_train, y_train)\nprint('\\n Appha:', Rid.alpha)\nprint(' Coefficients:', Rid.coef_)\nprint(' Score:', Rid.score(X_test,y_test))",
"Резкий рост качества предсказаний можно объяснить, тем, что регистрация браков является составной величиной от общего количества. \nРассмотрим какую часть регистраций браков можно предсказать, только на основании общего количеств регистраций\n\nThe increase in the quality of predictions can be explained by the fact that registration of marriages is a composite of the total.\nConsider what part of the marriage registrations can be predicted, only based on the total number of registrations.",
"X_train=np.reshape(X_train[:,4],[X_train.shape[0],1])\nX_test=np.reshape(X_test[:,4],[X_test.shape[0],1])\n\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_train,y_train))",
"И взглянем на графики\n\nAnd look at the graphics",
"# plot for train data\nplt.figure(figsize=(8,10))\nplt.subplot(211)\n\nplt.scatter(X_train, y_train, color='black')\nplt.plot(X_train, lr.predict(X_train), color='blue',\n linewidth=3)\n\nplt.xlabel('Total Number of Registration')\nplt.ylabel('State Registration Of Marriage')\nplt.title=\"Regression on train data\"\n\n# plot for test data\nplt.subplot(212)\nplt.scatter(X_test, y_test, color='black')\nplt.plot(X_test, lr.predict(X_test), '--', color='green',\n linewidth=3)\n\nplt.xlabel('Total Number of Registration')\nplt.ylabel('State Registration Of Marriage')\nplt.title=\"Regression on test data\"",
"Добавим другой малополезный признак State Registration Of Name Change\n\nAdd another less useful sign. State Registration Of Name Change",
"columns_to_show4=columns_to_show2.copy()\ncolumns_to_show4.append(\"StateRegistrationOfNameChange\")\n\n\nX = df2[columns_to_show4].values\n# y hasn't changed\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\ny_train=np.reshape(y_train,[y_train.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1])\n\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_test,y_test))\n\n",
"Как видно, он нам только мешает.\n\nAs you can see, it's just a hindrance.\nДобавим полезный признак, закодированное значение месяца в который получил количество регистраций.\n\nAdd a useful feature, the encoded value of the month in which the number of registrations was received.",
"#get data\ncolumns_to_show5=columns_to_show2.copy()\ncolumns_to_show5.append(\"Month\")\n\n#get data for model\nX = df2[columns_to_show5].values\n# y hasn't changed\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\ny_train=np.reshape(y_train,[y_train.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1])\n#teach model and get predictions\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_)\nprint('Score:', lr.score(X_test,y_test))",
"Линейная регрессия для предсказания тренда (Linear regression for predicting a trend)\nВернемся к исходным данным, но рассмотрим их теперь с учетом изменения во времени.\nДля начала заменим колонку год на общее количество месяцев с момента начальной даты\nВ этот раз не будем масштабировать данные, большой пользы это не принесет.\n\nLet's go back to the original data, but consider them now with the change in time.\nTo begin with, replace the column year by the total number of months from the start date\nThis time we will not scale the data, it will not be of much use.",
"#get data\ndf3=df.copy()\n\n#get new column\ndf3.Year=df.Year.map(lambda x: (x-2010)*12)+df.Month\ndf3.rename(columns={'Year': 'Months'}, inplace=True)\n\n#get data for model\nX=df3[columns_to_show5].values\ny=df3['StateRegistrationOfMarriage'].values\ntrain=[df3.Months<=72]\ntest=[df3.Months>72]\nX_train=X[train]\ny_train=y[train]\nX_test=X[test]\ny_test=y[test]\ny_train=np.reshape(y_train,[y_train.shape[0],1])\ny_test=np.reshape(y_test,[y_test.shape[0],1]) \n\n#teach model and get predictions\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_[0])\nprint('Score:', lr.score(X_test,y_test))",
"Результат предсказания \"не очень\", но думаю лучше, чем просто наобум\nПосмотрим на данные в графическом виде, в начале по отдельности, а потом вместе.\nНаша модель пусть и не очень хорошо, но улавливает основные особенности тренда, позволяя прогнозировать данные.\n\nThe result of the prediction is \"not very,\" but I think it's better than just haphazardly\n \nLet's look at the data in a graphical form, at the beginning separately, and then together.\nOur model, though not very good, but captures the main features of the trend, allowing you to predict the data.",
"plt.figure(figsize=(9,23))\n\n# plot for train data\nplt.subplot(311)\n\nplt.scatter(df3.Months.values[train], y_train, color='black')\nplt.plot(df3.Months.values[train], lr.predict(X_train), color='blue', linewidth=2)\nplt.xlabel('Months (from 01.2010)')\nplt.ylabel('State Registration Of Marriage')\nplt.title=\"Regression on train data\"\n\n# plot for test data\nplt.subplot(312)\n\nplt.scatter(df3.Months.values[test], y_test, color='black')\nplt.plot(df3.Months.values[test], lr.predict(X_test), color='green', linewidth=2)\nplt.xlabel('Months (from 01.2010)')\nplt.ylabel('State Registration Of Marriage')\nplt.title=\"Regression (prediction) on test data\"\n\n# plot for all data\nplt.subplot(313)\n\nplt.scatter(df3.Months.values[train], y_train, color='black')\nplt.plot(df3.Months.values[train], lr.predict(X_train), color='blue', label='train', linewidth=2)\n\nplt.scatter(df3.Months.values[test], y_test, color='black')\nplt.plot(df3.Months.values[test], lr.predict(X_test), color='green', label='test', linewidth=2)\n\nplt.title=\"Regression (prediction) on all data\"\nplt.xlabel('Months (from 01.2010)')\nplt.ylabel('State Registration Of Marriage')\n\n#plot line for link train to test\nplt.plot([72,73], lr.predict([X_train[-1],X_test[0]]) , color='magenta',linewidth=2, label='train to test')\n\n\n\nplt.legend() \n\n",
"Бонус (Bonus)\nПовышаем точность, за счет другого подхода к месяцам\n(Increase the accuracy, due to a different approach to the months)\nДля начала заново загрузим исходную таблицу\n\nFor a start, reload the original table",
"df_base = pd.read_csv('https://op.mos.ru/EHDWSREST/catalog/export/get?id=230308', compression='zip', header=0, encoding='cp1251', sep=';', quotechar='\"')",
"Попробуем применить one-hot кодирование к графе Месяц\n\nLet's try to apply one-hot encoding to the column Month",
"#get data for model\n\ndf4=df_base.copy()\ndf4.drop(['Year','StateRegistrationOfMarriage','ID','global_id','Unnamed: 12','TotalNumber','StateRegistrationOfNameChange','StateRegistrationOfAdoption'],axis=1,inplace=True)\ndf4=pd.get_dummies(df4,prefix=['Month'])\n\nX=df4.values\nX_train=X[train]\nX_test=X[test]\n\n#teach model and get predictions\n\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_[0])\nprint('Score:', lr.score(X_test,y_test))\n\n\n# plot for all data\nplt.scatter(df3.Months.values[train], y_train, color='black')\nplt.plot(df3.Months.values[train], lr.predict(X_train), color='blue', label='train', linewidth=2)\n\nplt.scatter(df3.Months.values[test], y_test, color='black')\nplt.plot(df3.Months.values[test], lr.predict(X_test), color='green', label='test', linewidth=2)\n\nplt.title=\"Regression (prediction) on all data\"\nplt.xlabel('Months (from 01.2010)')\nplt.ylabel('State Registration Of Marriage')\n\n#plot line for link train to test\nplt.plot([72,73], lr.predict([X_train[-1],X_test[0]]) , color='magenta',linewidth=2, label='train to test')\n",
"Качество предсказания резко улучшилось\n\nThe quality of the prediction has has greatly improved\nТеперь попробуем закодировать вместо значения месяца, среднее значение регистрации браков в данный месяц, взятое на основании обучающих данных.\n\nNow try to encode instead of the month, the average value of registration of marriages in a given month, taken on the basis of training data.",
"#get data for pandas data frame\ndf5=df_base.copy()\n\nd=dict()\n\n#get we obtain the mean value of Registration Of Marriages by months on the training data\nfor mon in df5.Month.unique():\n\n d[mon]=df5.StateRegistrationOfMarriage[df5.Month.values[train]==mon].mean()\n #d+={} \n\ndf5['MeanMarriagePerMonth']=df5.Month.map(d)\ndf5.drop(['Month','Year','StateRegistrationOfMarriage','ID','global_id','Unnamed: 12','TotalNumber',\n 'StateRegistrationOfNameChange','StateRegistrationOfAdoption'],axis=1,inplace=True)\n\n#get data for model\nX=df5.values\nX_train=X[train]\nX_test=X[test]\n\n#teach model and get predictions\nlr = linear_model.LinearRegression()\nlr.fit(X_train, y_train)\nprint('Coefficients:', lr.coef_[0])\nprint('Score:', lr.score(X_test,y_test))\n\n# plot for all data\nplt.scatter(df3.Months.values[train], y_train, color='black')\nplt.plot(df3.Months.values[train], lr.predict(X_train), color='blue', label='train', linewidth=2)\n\nplt.scatter(df3.Months.values[test], y_test, color='black')\nplt.plot(df3.Months.values[test], lr.predict(X_test), color='green', label='test', linewidth=2)\n\nplt.title=\"Regression (prediction) on all data\"\nplt.xlabel('Months (from 01.2010)')\nplt.ylabel('State Registration Of Marriage')\n\n#plot line for link train to test\nplt.plot([72,73], lr.predict([X_train[-1],X_test[0]]) , color='magenta',linewidth=2, label='train to test')",
"Качество предсказания стало еще немного лучше\n\nThe quality of the prediction is even slightly better"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |