repo_name: string (lengths 6-77)
path: string (lengths 8-215)
license: string (15 classes)
cells: sequence
types: sequence
quantopian/pyfolio
pyfolio/examples/round_trip_tear_sheet_example.ipynb
apache-2.0
[ "Round Trip Tear Sheet Example\nWhen evaluating the performance of an investing strategy, it is helpful to quantify the frequency, duration, and profitability of its independent bets, or \"round trip\" trades. A round trip trade is started when a new long or short position is opened and then later completely or partially closed out.\nThe intent of the round trip tearsheet is to help differentiate strategies that profited off a few lucky trades from strategies that profited repeatedly from genuine alpha. Breaking down round trip profitability by traded name and sector can also help inform universe selection and identify exposure risks. For example, even if your equity curve looks robust, if only two securities in your universe of fifteen names contributed to overall profitability, you may have reason to question the logic of your strategy.\nTo identify round trips, pyfolio reconstructs the complete portfolio based on the transactions that you pass in. When you make a trade, pyfolio checks if shares are already present in your portfolio purchased at a certain price. If there are, we compute the PnL, returns and duration of that round trip trade. In calculating round trips, pyfolio will also append position closing transactions at the last timestamp in the positions data. This closing transaction will cause the PnL from any open positions to realized as completed round trips.", "import pyfolio as pf\n%matplotlib inline\nimport gzip\nimport os\nimport pandas as pd\n\n# silence warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\ntransactions = pd.read_csv(gzip.open('../tests/test_data/test_txn.csv.gz'),\n index_col=0, parse_dates=True)\npositions = pd.read_csv(gzip.open('../tests/test_data/test_pos.csv.gz'),\n index_col=0, parse_dates=True)\nreturns = pd.read_csv(gzip.open('../tests/test_data/test_returns.csv.gz'),\n index_col=0, parse_dates=True, header=None)[1]\n\n# Optional: Sector mappings may be passed in as a dict or pd.Series. If a mapping is\n# provided, PnL from symbols with mappings will be summed to display profitability by sector.\nsect_map = {'COST': 'Consumer Goods', 'INTC':'Technology', 'CERN':'Healthcare', 'GPS':'Technology',\n 'MMM': 'Construction', 'DELL': 'Technology', 'AMD':'Technology'}", "The easiest way to run the analysis is to call pyfolio.create_round_trip_tear_sheet(). Passing in a sector map is optional. You can also pass round_trips=True to pyfolio.create_full_tear_sheet() to have this be created along all the other analyses.", "pf.create_round_trip_tear_sheet(returns, positions, transactions, sector_mappings=sect_map)", "Under the hood, several functions are being called. extract_round_trips() does the portfolio reconstruction and creates the round-trip trades.", "rts = pf.round_trips.extract_round_trips(transactions, \n portfolio_value=positions.sum(axis='columns') / (returns + 1))\n\nrts.head()\n\npf.round_trips.print_round_trip_stats(rts)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mjasher/gac
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
gpl-2.0
[ "SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System\nThis example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.\nThe domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (DELR), 50 m (DELC), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. \nThe horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).\nThe groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless density difference ($\\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a TOESLOPE and TIPSLOPE of 0.005, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 ISOURCE parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. ISOURCE in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 ISOURCE parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active ZETA surface in the cell.\nA constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing\nsaltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. 
The saltwater well is intended to prevent the interface from\nupconing into the upper aquifer (model layer).\nImport numpy and matplotlib, set all figures to be inline, import flopy.modflow and flopy.utils.", "%matplotlib inline\nimport os\nimport platform\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport flopy.modflow as mf\nimport flopy.utils as fu", "Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model.", "#Set name of MODFLOW exe\n# assumes executable is in users path statement\nexe_name = 'mf2005'\nif platform.system() == 'Windows':\n exe_name = 'mf2005.exe'\n\nworkspace = os.path.join('data')\n#make sure workspace directory exists\nif not os.path.exists(workspace):\n os.makedirs(workspace)", "Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.", "ncol = 61\nnrow = 61\nnlay = 2\n\nnper = 3\nperlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]\nnstp = [1000, 120, 180]\nsave_head = [200, 60, 60]\nsteady = True", "Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package.", "#--dis data\ndelr, delc = 50.0, 50.0\nbotm = np.array([-10., -30., -50.])", "Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive.", "#--bas data\n#--ibound - active except for the corners\nibound = np.ones((nlay, nrow, ncol), dtype= np.int)\nibound[:, 0, 0] = 0\nibound[:, 0, -1] = 0\nibound[:, -1, 0] = 0\nibound[:, -1, -1] = 0\n#--initial head data\nihead = np.zeros((nlay, nrow, ncol), dtype=np.float)", "Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package.", "#--lpf data\nlaytyp=0\nhk=10.\nvka=0.2", "Define the boundary condition data for the model", "#--boundary condition data\n#--ghb data\ncolcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))\nindex = np.zeros((nrow, ncol), dtype=np.int)\nindex[:, :10] = 1\nindex[:, -10:] = 1\nindex[:10, :] = 1\nindex[-10:, :] = 1\nnghb = np.sum(index)\nlrchc = np.zeros((nghb, 5))\nlrchc[:, 0] = 0\nlrchc[:, 1] = rowcell[index == 1]\nlrchc[:, 2] = colcell[index == 1]\nlrchc[:, 3] = 0.\nlrchc[:, 4] = 50.0 * 50.0 / 40.0\n#--create ghb dictionary\nghb_data = {0:lrchc}\n\n#--recharge data\nrch = np.zeros((nrow, ncol), dtype=np.float)\nrch[index == 0] = 0.0004\n#--create recharge dictionary\nrch_data = {0: rch}\n\n#--well data\nnwells = 2\nlrcq = np.zeros((nwells, 4))\nlrcq[0, :] = np.array((0, 30, 35, 0))\nlrcq[1, :] = np.array([1, 30, 35, 0])\nlrcqw = lrcq.copy()\nlrcqw[0, 3] = -250\nlrcqsw = lrcq.copy()\nlrcqsw[0, 3] = -250.\nlrcqsw[1, 3] = -25.\n#--create well dictionary\nbase_well_data = {0:lrcq, 1:lrcqw}\nswwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}\n\n#--swi2 data\nadaptive = False\nnadptmx = 10\nnadptmn = 1\nnu = [0, 0.025]\nnumult = 5.0\ntoeslope = nu[1] / numult #0.005\ntipslope = nu[1] / numult #0.005\nz1 = -10.0 * np.ones((nrow, ncol))\nz1[index == 0] = -11.0\nz = np.array([[z1, z1]])\niso = np.zeros((nlay, nrow, ncol), dtype=np.int)\niso[0, :, :][index == 0] = 1\niso[0, :, :][index == 1] = -2\niso[1, 30, 35] = 2\nssz=0.2\n#--swi2 observations\nobsnam = 
['layer1_', 'layer2_']\nobslrc=[[1, 31, 36], [2, 31, 36]]\nnobs = len(obsnam)\niswiobs = 1051", "Create output control (OC) data using words", "#--oc data\nspd = {(0,199): ['print budget', 'save head'],\n (0,200): [],\n (0,399): ['print budget', 'save head'],\n (0,400): [],\n (0,599): ['print budget', 'save head'],\n (0,600): [],\n (0,799): ['print budget', 'save head'],\n (0,800): [],\n (0,999): ['print budget', 'save head'],\n (1,0): [],\n (1,59): ['print budget', 'save head'],\n (1,60): [],\n (1,119): ['print budget', 'save head'],\n (1,120): [],\n (2,0): [],\n (2,59): ['print budget', 'save head'],\n (2,60): [],\n (2,119): ['print budget', 'save head'],\n (2,120): [],\n (2,179): ['print budget', 'save head']}", "Create the model with the freshwater well (Simulation 1)", "modelname = 'swiex4_s1'\nml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)\n\ndiscret = mf.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,\n delr=delr, delc=delc, top=botm[0], botm=botm[1:],\n nper=nper, perlen=perlen, nstp=nstp)\nbas = mf.ModflowBas(ml, ibound=ibound, strt=ihead)\nlpf = mf.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)\nwel = mf.ModflowWel(ml, stress_period_data=base_well_data)\nghb = mf.ModflowGhb(ml, stress_period_data=ghb_data)\nrch = mf.ModflowRch(ml, rech=rch_data)\nswi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,\n zeta=z, ssz=ssz, isource=iso, nsolver=1,\n adaptive=adaptive, nadptmx=nadptmx, nadptmn=nadptmn, \n nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc)\noc = mf.ModflowOc(ml, stress_period_data=spd)\npcg = mf.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)", "Write the simulation 1 MODFLOW input files and run the model", "ml.write_input()\nml.run_model(silent=True)", "Create the model with the saltwater well (Simulation 2)", "modelname2 = 'swiex4_s2'\nml2 = mf.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)\n\ndiscret = mf.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,\n delr=delr, delc=delc, top=botm[0], botm=botm[1:],\n nper=nper, perlen=perlen, nstp=nstp)\nbas = mf.ModflowBas(ml2, ibound=ibound, strt=ihead)\nlpf = mf.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)\nwel = mf.ModflowWel(ml2, stress_period_data=swwells_well_data)\nghb = mf.ModflowGhb(ml2, stress_period_data=ghb_data)\nrch = mf.ModflowRch(ml2, rech=rch_data)\nswi = mf.ModflowSwi2(ml2, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,\n zeta=z, ssz=ssz, isource=iso, nsolver=1,\n adaptive=adaptive, nadptmx=nadptmx, nadptmn=nadptmn,\n nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc)\noc = mf.ModflowOc(ml2, stress_period_data=spd)\npcg = mf.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)", "Write the simulation 2 MODFLOW input files and run the model", "ml2.write_input()\nml2.run_model(silent=True)", "Load the simulation 1 ZETA data and ZETA observations.", "#--read base model zeta\nzfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))\nkstpkper = zfile.get_kstpkper()\nzeta = []\nfor kk in kstpkper:\n zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])\nzeta = np.array(zeta)\n#--read swi obs\nzobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs'), names=True)", "Load the simulation 2 ZETA data and ZETA observations.", "#--read saltwater well model zeta\nzfile2 = fu.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))\nkstpkper = zfile2.get_kstpkper()\nzeta2 = []\nfor kk in kstpkper:\n 
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])\nzeta2 = np.array(zeta2)\n#--read swi obs\nzobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs'), names=True)", "Create arrays for the x-coordinates and the output years", "x = np.linspace(-1500, 1500, 61)\nxcell = np.linspace(-1500, 1500, 61) + delr / 2.\nxedge = np.linspace(-1525, 1525, 62)\nyears = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]", "Define figure dimensions and colors used for plotting ZETA surfaces", "#--figure dimensions\nfwid, fhgt = 8.00, 5.50\nflft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925\n\n#--line color definition\nicolor = 5\ncolormap = plt.cm.jet #winter\ncc = []\ncr = np.linspace(0.9, 0.0, icolor)\nfor idx in cr:\n cc.append(colormap(idx))", "Recreate Figure 9 from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).", "plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})\nfig = plt.figure(figsize=(fwid, fhgt), facecolor='w')\nfig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)\n#--first plot\nax = fig.add_subplot(2, 2, 1)\n#--axes limits\nax.set_xlim(-1500, 1500)\nax.set_ylim(-50, -10)\nfor idx in xrange(5):\n #--layer 1\n ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', \n linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))\n #--layer 2\n ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',\n linewidth=0.5, color=cc[idx], label='_None')\nax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)\n#--legend\nplt.legend(loc='lower left')\n#--axes labels and text\nax.set_xlabel('Horizontal distance, in meters')\nax.set_ylabel('Elevation, in meters')\nax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')\n\n#--second plot\nax = fig.add_subplot(2, 2, 2)\n#--axes limits\nax.set_xlim(-1500, 1500)\nax.set_ylim(-50, -10)\nfor idx in xrange(5, len(years)):\n #--layer 1\n ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', \n linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))\n #--layer 2\n ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',\n linewidth=0.5, color=cc[idx-5], label='_None')\nax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)\n#--legend\nplt.legend(loc='lower left')\n#--axes labels and text\nax.set_xlabel('Horizontal distance, in meters')\nax.set_ylabel('Elevation, in meters')\nax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')\n\n#--third plot\nax = fig.add_subplot(2, 2, 3)\n#--axes limits\nax.set_xlim(-1500, 1500)\nax.set_ylim(-50, -10)\nfor idx in xrange(5, len(years)):\n #--layer 1\n ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid', \n linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))\n #--layer 2\n ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',\n linewidth=0.5, color=cc[idx-5], label='_None')\nax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)\n#--legend\nplt.legend(loc='lower left')\n#--axes labels and text\nax.set_xlabel('Horizontal distance, in meters')\nax.set_ylabel('Elevation, in meters')\nax.text(0.025, .55, 'Layer 1', 
transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.975, .1, 'Freshwater and saltwater\\nwell withdrawals', transform=ax.transAxes,\n va='center', ha='right', size='8')\n\n#--fourth plot\nax = fig.add_subplot(2, 2, 4)\n#--axes limits\nax.set_xlim(0, 30)\nax.set_ylim(-50, -10)\nt = zobs['TOTIM'][999:] / 365 - 200.\ntz2 = zobs['layer1_001'][999:]\ntz3 = zobs2['layer1_001'][999:]\nfor i in xrange(len(t)):\n if zobs['layer2_001'][i+999] < -30. - 0.1:\n tz2[i] = zobs['layer2_001'][i+999]\n if zobs2['layer2_001'][i+999] < 20. - 0.1:\n tz3[i] = zobs2['layer2_001'][i+999]\nax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')\nax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')\nax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')\n#--legend\nleg = plt.legend(loc='lower right', numpoints=1)\n#--axes labels and text\nax.set_xlabel('Time, in years')\nax.set_ylabel('Elevation, in meters')\nax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')\nax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ajul/zerosum
python/examples/super_street_fighter_2_turbo.ipynb
bsd-3-clause
[ "Super Street Fighter 2 Turbo example\nThis example applies a logistic handicap to a Super Street Fighter 2 Turbo matchup chart.", "import _initpath\n\nimport numpy\nimport dataset.matchup\nimport dataset.csv\nimport zerosum.balance\nfrom pandas import DataFrame\n\n# Balances a Super Street Fighter 2 Turbo matchup chart using a logistic handicap.\n# Produces a .csv file for the initial game and the resulting game.\n\ninit = dataset.matchup.ssf2t.sorted_by_sum()\ndataset.csv.write_csv('out/ssf2t_init.csv', init.data, init.row_names, numeric_format = '%0.4f')\n\nbalance = zerosum.balance.LogisticSymmetricBalance(init.data)\nopt = balance.optimize()\ndataset.csv.write_csv('out/ssf2t_opt.csv', opt.F, init.row_names, numeric_format = '%0.4f')", "Initial matchup chart", "DataFrame(data = init.data, index = init.row_names, columns = init.col_names)", "Matchup chart after balancing with a logistic handicap", "DataFrame(data = opt.F, index = init.row_names, columns = init.col_names)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ARM-software/lisa
ipynb/deprecated/releases/ReleaseNotes_v16.12.ipynb
apache-2.0
[ "Target Connectivity\nConfigurable logging system\nAll LISA modules have been updated to use a more consistent logging which can be configured using a single configuraton file:", "!head -n12 $LISA_HOME/logging.conf", "Each module has a unique name which can be used to assign a priority level for messages generated by that module.", "!head -n30 $LISA_HOME/logging.conf | tail -n5", "The default logging level for a notebook can also be easily configured using this few lines", "import logging\nfrom conf import LisaLogging\nLisaLogging.setup(level=logging.INFO)", "Removed Juno/Juno2 distinction\nJuno R0 and Juno R2 boards are now accessible by specifying \"juno\" in the target configuration.\nThe previous distinction was required because of a different way for the two boards to report HWMON channels.\nThis distinction is not there anymore and thus Juno boards can now be connected using the same platform data.", "from env import TestEnv\n\nte = TestEnv({\n 'platform' : 'linux',\n 'board' : 'juno',\n 'host' : '10.1.210.45',\n 'username' : 'root'\n })\ntarget = te.target", "Executor Module\nSimplified tests definition using in-code configurations\nAutomated LISA tests previously configured the Executor using JSON files. This is still possible, but the existing tests now use Python dictionaries directly in the code. In the short term, this allows de-duplicating configuration elements that are shared between multiple tests. It will later allow more flexible test configuration.\nSee tests/eas/acceptance.py for an example of how this is currently used.\nSupport to write files from Executor configuration\nhttps://github.com/ARM-software/lisa/pull/209\nA new \"files\" attribute can be added to Executor configurations which allows\nto specify a list files (e.g. sysfs and procfs) and values to be written to that files.\nFor example, the following test configuration:", "tests_conf = {\n \"confs\" : [\n {\n \"tag\" : \"base\",\n \"flags\" : \"ftrace\",\n \"sched_features\" : \"NO_ENERGY_AWARE\",\n \"cpufreq\" : {\n \"governor\" : \"performance\",\n },\n \"files\" : {\n '/proc/sys/kernel/sched_is_big_little' : '0',\n '!/proc/sys/kernel/sched_migration_cost_ns' : '500000'\n },\n }\n ]\n}", "can be used to run a test where the platform is configured to\n- disable the \"sched_is_big_little\" flag (if present)\n- set to 50ms the \"sched_migration_cost_ns\"\nNortice that a value written in a file is verified only if the file path is\nprefixed by a '/'. Otherwise, the write never fails, e.g. if the file does not exists.\nSupport to freeze user-space across a test\nhttps://github.com/ARM-software/lisa/pull/227\nExecutor learned the \"freeze_userspace\" conf flag. 
When this flag is present, LISA uses the devlib freezer to freeze as much of userspace as possible while the experiment workload is executing, in order to reduce system noise.\nThe Executor example notebook:\nhttps://github.com/ARM-software/lisa/blob/master/ipynb/examples/utils/executor_example.ipynb\ngives an example of using this feature.\nTrace module\nTasks name pre-loading\nWhen the Trace module is initialized, by default all the tasks in that trace are identified and exposed via the usual getTasks() method:", "from trace import Trace\nimport json\n\nwith open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh:\n platform = json.load(fh)\n\ntrace = Trace('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat',\n ['sched_switch'], platform)\n\nlogging.info(\"%d tasks loaded from trace\", len(trace.getTasks()))\n\nlogging.info(\"The rt-app task in this trace has these PIDs:\")\nlogging.info(\" %s\", trace.getTasks()['rt-app'])", "Android Support\nAdded support for Pixel Phones\nA new platform definition file has been added which allows you to easily set up\na connection with a Pixel device:", "!cat $LISA_HOME/libs/utils/platforms/pixel.json\n\nfrom env import TestEnv\n\nte = TestEnv({\n 'platform' : 'android',\n 'board' : 'pixel',\n 'ANDROID_HOME' : '/home/patbel01/Code/lisa/tools/android-sdk-linux/'\n }, force_new=True)\ntarget = te.target", "Added UiBench workload\nA new Android benchmark has been added to run UiBench-provided tests.\nHere is a notebook which provides an example of how to run this test on your\nandroid target:\nhttps://github.com/ARM-software/lisa/blob/master/ipynb/examples/android/benchmarks/Android_UiBench.ipynb\nTests\nInitial version of the preliminary tests\nPreliminary tests aim at verifying some basic support required for a\ncomplete functional EAS solution.\nAn initial version of these preliminary tests is now available:\nhttps://github.com/ARM-software/lisa/blob/master/tests/eas/preliminary.py\nand it will be extended in the future to include more and more tests.\nCapacity capping test\nA new test has been added to verify that capacity capping is working\nas expected:\nhttps://github.com/ARM-software/lisa/blob/master/tests/eas/capacity_capping.py\nAcceptance tests reworked\nThe EAS acceptance test collects a set of platform-independent tests to verify\nbasic EAS behaviours.\nThis test has been cleaned up and it is now available with detailed documentation:\nhttps://github.com/ARM-software/lisa/blob/master/tests/eas/acceptance.py\nNotebooks\nAdded scratchpad notebooks\nA new scratchpad folder has been added under the ipynb folder which collects the available notebooks:", "!tree -L 1 ~/Code/lisa/ipynb", "This folder is configured to be ignored by git, thus it's the best place to put your work-in-progress notebooks.\nExample notebook restructuring\nExample notebooks have been consolidated and better organized by topic:", "!tree -L 1 ~/Code/lisa/ipynb/examples", "This is the folder to look into when it comes to understanding how a specific\nLISA API works.\nHere is where we will provide a dedicated folder and set of notebooks for each of the main LISA modules." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CyberCRI/dataanalysis-herocoli-redmetrics
v1.52.2/Tests/2.6 Google form analysis - MCA.ipynb
cc0-1.0
[ "Google form analysis tests - MCA\nPurpose: determine in what extent the current data can accurately describe correlations, underlying factors on the score.\nEspecially concerning the answerTemporalities[0] groups: are there underlying groups explaining the discrepancies in score? Are those groups tied to certain questions?\nTable of Contents\nMCA\n<br>\n<br>\n<br>\n<br>", "%run \"../Functions/2. Google form analysis.ipynb\"", "MCA\n<a id=MCA />\nsource: http://nbviewer.jupyter.org/github/esafak/mca/blob/master/docs/mca-BurgundiesExample.ipynb\nmca - Burgundies Example\nThis example demonstrated capabilities of mca package by reproducing results of Multiple Correspondence Analysis, Hedbi & Valentin, 2007.\nImports and loading data", "import mca\n\nnp.set_printoptions(formatter={'float': '{: 0.4f}'.format})\npd.set_option('display.precision', 5)\npd.set_option('display.max_columns', 25)", "For input format, mca uses \nDataFrame \nfrom \npandas package. \nHere we use pandas to load CSV file with indicator matrix $X$ of categorical data with 6 observations, 10 variables and 22 levels in total. We also set up supplementary variable $j_{sup}$ and supplementary observation $i_{sup}$.", "data = pd.read_table('../../data/burgundies.csv',sep=',', skiprows=1, index_col=0, header=0)\nX = data.drop('oak_type', axis=1)\nj_sup = data.oak_type\ni_sup = np.array([0, 1, 0, 1, 0, .5, .5, 1, 0, 1, 0, 0, 1, 0, .5, .5, 1, 0, .5, .5, 0, 1])\nncols = 10\n\nX.shape, j_sup.shape, i_sup.shape", "Table 1\n\"Data for the barrel-aged red burgundy wines example. “Oak Type\" is an illustrative (supplementary) variable, the wine W? is an unknown wine treated as a supplementary observation.\" (Hedbi & Valentin, 2007)", "src_index = (['Expert 1'] * 7 + ['Expert 2'] * 9 + ['Expert 3'] * 6)\nvar_index = (['fruity'] * 2 + ['woody'] * 3 + ['coffee'] * 2 + ['fruity'] * 2\n + ['roasted'] * 2 + ['vanillin'] * 3 + ['woody'] * 2 + ['fruity'] * 2\n + ['butter'] * 2 + ['woody'] * 2)\nyn = ['y','n']; rg = ['1', '2', '3']; val_index = yn + rg + yn*3 + rg + yn*4\ncol_index = pd.MultiIndex.from_arrays([src_index, var_index, val_index], \n names=['source', 'variable', 'value'])\n\ntable1 = pd.DataFrame(data=X.values, index=X.index, columns=col_index)\ntable1.loc['W?'] = i_sup\ntable1['','Oak Type',''] = j_sup\n\ntable1", "MCA\nLet's create two MCA instances - one with Benzécri correction enabled (default) and one without it. Parameter ncols denotes number of categorical variables.", "mca_ben = mca.MCA(X, ncols=ncols)\nmca_ind = mca.MCA(X, ncols=ncols, benzecri=False)\n\nprint(mca.MCA.__doc__)", "Table 2 (L, expl_var)\n\"Eigenvalues, corrected eigenvalues, proportion of explained inertia and corrected proportion of explained inertia. The eigenvalues of the Burt matrix are equal to the squared eigenvalues of the indicator matrix; The corrected eigenvalues for Benzécri and Greenacre are the same, but the proportion of explained variance differ. Eigenvalues are denoted by\nλ, proportions of explained inertia by τ (note that the average inertia used to compute Greenacre’s correction is equal to\nI = .7358).\" (Hedbi & Valentin, 2007)\nField L contains the eigenvalues, or the principal inertias, of the factors. 
Method expl_var returns proportion of explained inertia for each factor, whereas Greenacre corrections may be enabled with parameter greenacre and N limits number of retained factors.\nNote that Burt matrix values are not included in the following table, as it is not currently implemented in mca package.", "data = {'Iλ': pd.Series(mca_ind.L),\n 'τI': mca_ind.expl_var(greenacre=False, N=4),\n 'Zλ': pd.Series(mca_ben.L),\n 'τZ': mca_ben.expl_var(greenacre=False, N=4),\n 'cλ': pd.Series(mca_ben.L),\n 'τc': mca_ind.expl_var(greenacre=True, N=4)}\n\n# 'Indicator Matrix', 'Benzecri Correction', 'Greenacre Correction'\ncolumns = ['Iλ', 'τI', 'Zλ', 'τZ', 'cλ', 'τc']\ntable2 = pd.DataFrame(data=data, columns=columns).fillna(0)\ntable2.index += 1\ntable2.loc['Σ'] = table2.sum()\ntable2.index.name = 'Factor'\n\ntable2", "The inertia is simply the sum of the principle inertias:", "mca_ind.inertia, mca_ind.L.sum(), mca_ben.inertia, mca_ben.L.sum()", "Table 3 (fs_r, cos_r, cont_r, fs_r_sup)\n\"Factor scores, squared cosines, and contributions for the observations (I-set). The eigenvalues and\nproportions of explained inertia are corrected using Benzécri/Greenacre formula. ~~Contributions corresponding\nto negative scores are in italic.~~ The mystery wine (Wine ?) is a supplementary observation. Only the first two\nfactors are reported.\" (Hedbi & Valentin, 2007)\nFirstly, we once again tabulate eigenvalues and their proportions. This time only for the first two factors and as percentage.", "data = np.array([mca_ben.L[:2], \n mca_ben.expl_var(greenacre=True, N=2) * 100]).T\ndf = pd.DataFrame(data=data, columns=['cλ','%c'], index=range(1,3))\ndf", "Factor scores, squared cosines, and contributions for the observations are computed by fs_r, cos_r and cont_r methods respectively, where r denotes rows (i.e. observations). Again, N limits the number of retained factors.\nFactor scores of supplementary observation $i_{sup}$ is computed by method fs_r_sup.\nNote that squared cosines do not agree with those in the reference. See issue #1.", "fs, cos, cont = 'Factor score','Squared cosines', 'Contributions x 1000'\ntable3 = pd.DataFrame(columns=X.index, index=pd.MultiIndex\n .from_product([[fs, cos, cont], range(1, 3)]))\n\ntable3.loc[fs, :] = mca_ben.fs_r(N=2).T\ntable3.loc[cos, :] = mca_ben.cos_r(N=2).T\ntable3.loc[cont, :] = mca_ben.cont_r(N=2).T * 1000\ntable3.loc[fs, 'W?'] = mca_ben.fs_r_sup(pd.DataFrame([i_sup]), N=2)[0]\n\nnp.round(table3.astype(float), 2)", "Table 4 (fs_c, cos_c, cont_c, fs_c_sup)\n\"Factor scores, squared cosines, and contributions for the variables (J-set). The eigenvalues and\npercentages of inertia have been corrected using Benzécri/Greenacre formula. ~~Contributions corresponding to\nnegative scores are in italic.~~ Oak 1 and 2 are supplementary variables.\" (Hedbi & Valentin, 2007)\nComputations for columns (i.e. variables) are analogous to those of rows. 
Before the supplementary variable factor scores can be computed, $j_{sup}$ must be converted from categorical variable into dummy indicator matrix by method mca.dummy.", "table4 = pd.DataFrame(columns=col_index, index=pd.MultiIndex\n .from_product([[fs, cos, cont], range(1, 3)]))\ntable4.loc[fs, :] = mca_ben.fs_c(N=2).T\ntable4.loc[cos, :] = mca_ben.cos_c(N=2).T\ntable4.loc[cont,:] = mca_ben.cont_c(N=2).T * 1000\n\nfs_c_sup = mca_ben.fs_c_sup(mca.dummy(pd.DataFrame(j_sup)), N=2)\ntable4.loc[fs, ('Oak', '', 1)] = fs_c_sup[0]\ntable4.loc[fs, ('Oak', '', 2)] = fs_c_sup[1]\n\nnp.round(table4.astype(float), 2)", "Figure 1\n\"Multiple Correspondence Analysis. Projections on the first 2 dimensions. The eigenvalues (λ) and\nproportion of explained inertia (τ) have been corrected with Benzécri/Greenacre formula. (a) The I set: rows\n(i.e., wines), wine ? is a supplementary element. (b) The J set: columns (i.e., adjectives). Oak 1 and Oak 2 are\nsupplementary elements. (the projection points have been slightly moved to increase readability). (Projections\nfrom Tables 3 and 4).\" (Hedbi & Valentin, 2007)\nFollowing plots do not introduce anything new in terms of mca package, it just reuses factor scores from Tables 3 and 4. But everybody loves colourful graphs, so...", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\npoints = table3.loc[fs].values\nlabels = table3.columns.values\n\nplt.figure()\nplt.margins(0.1)\nplt.axhline(0, color='gray')\nplt.axvline(0, color='gray')\nplt.xlabel('Factor 1')\nplt.ylabel('Factor 2')\nplt.scatter(*points, s=120, marker='o', c='r', alpha=.5, linewidths=0)\nfor label, x, y in zip(labels, *points):\n plt.annotate(label, xy=(x, y), xytext=(x + .03, y + .03))\nplt.show()\n\nnoise = 0.05 * (np.random.rand(*table4.T[fs].shape) - 0.5)\nfs_by_source = table4.T[fs].add(noise).groupby(level=['source'])\n\nfig, ax = plt.subplots()\nplt.margins(0.1)\nplt.axhline(0, color='gray')\nplt.axvline(0, color='gray')\nplt.xlabel('Factor 1')\nplt.ylabel('Factor 2')\nax.margins(0.1)\nmarkers = '^', 's', 'o', 'o'\ncolors = 'r', 'g', 'b', 'y'\nfor fscore, marker, color in zip(fs_by_source, markers, colors):\n label, points = fscore\n ax.plot(*points.T.values, marker=marker, color=color, label=label, linestyle='', alpha=.5, mew=0, ms=12)\nax.legend(numpoints=1, loc=4)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/sandbox-3/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-3\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ClementPhil/deep-learning
intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
mit
[ "Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.", "import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical", "Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.", "reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)", "Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.", "from collections import Counter\ntotal_counts = Counter()\nfor _, row in reviews.iterrows():\n total_counts.update(row[0].split(' '))\nprint(\"Total words in data set: \", len(total_counts))", "Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.", "vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])", "What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.", "print(vocab[-1], ': ', total_counts[vocab[-1]])", "The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. 
That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.", "word2idx = {word: i for i, word in enumerate(vocab)}", "Text to vector function\nNow we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.", "def text_to_vector(text):\n word_vector = np.zeros(len(vocab), dtype=np.int_)\n for word in text.split(' '):\n idx = word2idx.get(word, None)\n if idx is None:\n continue\n else:\n word_vector[idx] += 1\n return np.array(word_vector)", "If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```", "text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]", "Now, run through our entire review data set and convert each review to a word vector.", "word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]", "Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. 
We actually won't be creating the validation set here, TFLearn will do that for us later.", "Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)\n\ntrainY", "Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. 
You get to choose how many layers to use, how many hidden units, etc.", "# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n # Inputs\n net = tflearn.input_data([None, 10000])\n\n # Hidden layer(s)\n net = tflearn.fully_connected(net, 200, activation='ReLU')\n net = tflearn.fully_connected(net, 25, activation='ReLU')\n\n # Output layer\n net = tflearn.fully_connected(net, 2, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', \n learning_rate=0.1, \n loss='categorical_crossentropy')\n \n model = tflearn.DNN(net)\n return model", "Intializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.", "model = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)", "Testing\nAfter you're satisified with your hyperparameters, you can run the network on the test set to measure it's performance. Remember, only do this after finalizing the hyperparameters.", "predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)", "Try out your own text!", "# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\ntest_sentence(sentence)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marcelomiky/PythonCodes
Intro ML Semcomp/semcomp17_ml/semcomp17_ml_answer.ipynb
mit
[ "Introduction to Machine Learning (ML)\nThis tutorial aims to get you familiar with the basis of ML. You will go through several tasks to build some basic regression and classification models.", "#essential imports\nimport sys\nsys.path.insert(1,'utils')\nimport numpy as np\nimport matplotlib.pyplot as plt\n# display plots in this notebook\n%matplotlib nbagg\nimport pandas as pd\nimport ml_utils\n#print np.__version__\n#print np.__file__", "1. Linear regression\n1. 1. Univariate linear regression\nLet start with the most simple regression example. Firstly, read the data in a file named \"house_price_statcrunch.xls\".", "house_data = pd.ExcelFile('data/house_price_statcrunch.xls').parse(0)", "Let see what is inside by printing out the first few lines.", "print(\" \".join([field.ljust(10) for field in house_data.keys()]))\nfor i in range(10):\n print(\" \".join([str(house_data[field][i]).ljust(10) for field in house_data.keys()]))\nTOTALS = len(house_data['House'])\nprint(\"...\\n\\nTotal number of samples: {}\".format(TOTALS))", "Let preserve some data for test. Here we extract 10% for testing.", "np.random.seed(0)\nidx = np.random.permutation(TOTALS)\nidx_train = idx[:90]\nidx_test = idx[90:]\nhouse_data_train = {}\nhouse_data_test = {}\nfor field in house_data.keys():\n house_data_test[field] = house_data[field][idx_test]\n house_data_train[field] = house_data[field][idx_train]", "For univariate regression, we are interested in the \"size\" parameter only. Let's extract necessary data and visualise it.", "X, Z = ml_utils.extract_data(house_data, ['size'], ['price'])\nZ = Z/1000.0 #price has unit x1000 USD\n\nplt.plot(X[0],Z[0], '.')\nplt.xlabel('size (feet^2)')\nplt.ylabel('price (USD x1000)')\nplt.title('house data scatter plot')\nplt.show()", "Our goal is to build a house price prediction model that will approximate the price of a house given its size. To do it, we need to fit a linear line (y = ax + b) to the data above using linear regression. Remember the procedure:\n1. Define training set\n2. Define hypothesis function. Here $F(x,W) = Wx$\n3. Loss function. Here $L(W) = \\frac{1}{2N}{\\sum_{i=1}^N{(F(x^{(i)},W)-z)^2}}$\n4. Update procedure (gradient descent). $W = W - k\\frac{\\partial L}{\\partial W}$\nTo speed up computation, you should avoid using loop when working with scripting languges e.g. Python, Matlab. Try using array/matrix instead. Here you are provided code for step 1 and 2. Your will be asked to implement step 3 and 4. 
Some skeleton code will be provided for your convenience.", "\"\"\"step 1: define training and test set X, Z.\"\"\"\nX_train, Z_train = ml_utils.extract_data(house_data_train, ['size'], ['price'])\nX_test, Z_test = ml_utils.extract_data(house_data_test, ['size'], ['price'])\n\nZ_train = Z_train/1000.0 #price has unit x1000 USD\nZ_test = Z_test/1000.0\n\n##normalise data, uncomment for now\n#X_train, u, scale = ml_utils.normalise_data(X_train)\n#X_test = ml_utils.normalise_data(X_test, u, scale)\n\nN = Z_train.size #number of training samples\nones_array = np.ones((1,N),dtype=np.float32)\nX_train = np.concatenate((X_train, ones_array), axis=0) #why?\nX_test = np.concatenate((X_test, np.ones((1, Z_test.size), dtype=np.float32)), axis = 0) #same for test data\nprint(\"size of X_train \", X_train.shape)\nprint(\"size of Z_train \", Z_train.shape)\n\n\"\"\"step 2: define hypothesis function\"\"\"\ndef F_Regression(X, W):\n \"\"\"\n Compute the hypothesis function y=F(x,W) in batch.\n input: X input array, must has size DxN (each column is one sample)\n W parameter array, must has size 1xD\n output: linear multiplication of W*X, size 1xN\n \"\"\"\n return np.dot(W,X)", "Task 1.1: define the loss function for linear regression according to the following formula:\n$$L = \\frac{1}{2N}{\\sum_{i=1}^N{(y^{(i)}-z^{(i)})^2}}$$\nPlease fill in the skeleton code below. Hints: (i) in Python numpy the square operator $x^2$ is implemented as x**2; (ii) try to use matrix form and avoid for loop", "\"\"\"step 3: loss function\"\"\"\ndef Loss_Regression(Y, Z):\n \"\"\"\n Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values.\n input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN\n Z groundtruth vector Z, has size 1xN\n output: loss value, is a scalar\n \"\"\"\n #enter the code here\n N = float(Z.size)\n diff = Y-Z\n return 1/(2*N)*np.dot(diff, diff.T).squeeze()", "Task 1.2: compute gradient of the loss function w.r.t parameter W according to the following formula:<br>\n$$\\frac{\\partial L}{\\partial W} = \\frac{1}{N}\\sum_{i=1}^N{(y^{(i)}-z^{(i)})x^{(i)}}$$\nPlease fill in the skeleton code below.", "\"\"\"step 4: gradient descent - compute gradient\"\"\"\ndef dLdW_Regression(X, Y, Z):\n \"\"\"\n Compute gradient of the loss w.r.t parameter W.\n input: X input array, each column is one sample, has size DxN\n Y predicted values, has size 1xN\n Z groundtruth values, has size 1xN\n output: gradient, has same size as W\n \"\"\"\n #enter the code here\n N = float(Z.size)\n return 1/N * (Y-Z).dot(X.T)", "Now we will perform gradient descent update procedure according to the following formula:\n$$W = W - k\\frac{\\partial L}{\\partial W}$$\nHere we use fixed number of iterations and learning rate.", "\"\"\"step 4: gradient descent - update loop\"\"\"\nnp.random.seed(0)\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 1e-8 #learning rate\nniters = 160 #number of training iterations\n\n#visualisation settings\nvis_interval = niters/50\nloss_collections = []\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\naxis_data = fig.add_subplot(132)\nfor i in range(niters):\n Y_train = F_Regression(X_train,W) #compute hypothesis function aka. 
predicted values\n loss = Loss_Regression(Y_train, Z_train) #compute loss\n dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient\n W = W - k*dLdW #update\n loss_collections.append(loss)\n if (i+1)% vis_interval == 0:\n ml_utils.plot_loss(axis_loss, range(i+1),loss_collections, \"loss = \" + str(loss))\n ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, \"iter #\" + str(i))\n fig.canvas.draw()\nprint(\"Learned parameters \", W.squeeze())", "Now evaluate your learned model using the test set. Measure the total error of your prediction", "Y_test = F_Regression(X_test, W)\nerror = Loss_Regression(Y_test, Z_test)\nprint(\"Evaluation error: \", error)", "Quiz: you may notice the learning rate k is set to $10^{-8}$. Why is it too small? Try to play with several bigger values of k, you will soon find out that the training is extremely sensitive to the learning rate (the training easily diverges or even causes \"overflow\" error with large k).<br><br>\nAnswer: It is because both the input (size of house) and output (price) have very large range of values, which result in very large gradient.\nTask 1.3: Test your learned model. Suppose you want to sell a house of size 3000 $feat^2$, how much do you expect your house will cost?<br>\nAnswer: you should get around 260k USD for that house.", "x = 3000\nx = np.array([x,1])[...,None] #make sure feature vector has size 2xN, here N=1\nprint \"Expected price: \", F_Regression(x,W).squeeze()", "Task 1.4: The gradient descent in the code above terminates after 100 iterations. You may want it to terminate when improvement in the loss is below a threshold.\n$$\\Delta L_t = |L_t - L_{t-1}| < \\epsilon$$\nEdit the code to terminate the loop when the loss improvement is below $\\epsilon=10^{-2}$. Re-evaluate your model to see if its performance has improved.", "\"\"\"step 4: gradient descent - update loop\"\"\"\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 1e-8 #learning rate\nepsilon = 1e-2 #terminate condition\n\n#visualisation settings\nvis_interval = 10\nloss_collections = []\nprev_loss = 0\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\naxis_data = fig.add_subplot(132)\nwhile(1):\n Y_train = F_Regression(X_train,W) #compute hypothesis function aka. predicted values\n loss = Loss_Regression(Y_train, Z_train) #compute loss\n dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient\n W = W - k*dLdW #update\n loss_collections.append(loss)\n if abs(loss - prev_loss) < epsilon:\n break\n prev_loss = loss\n \n if (len(loss_collections)+1) % vis_interval==0:\n #print \"Iter #\", len(loss_collections)\n ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, \"loss = \" + str(loss))\n ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, \"iter #\" + str(len(loss_collections)))\n fig.canvas.draw()\n\nprint \"Learned parameters \", W.squeeze()\nprint \"Learning terminates after {} iterations\".format(len(loss_collections))\n\n#run the test\nY_test = F_Regression(X_test, W)\nerror = Loss_Regression(Y_test, Z_test)\nprint \"Evaluation error: \", error", "Confirm that the error measurement on the test set has improved.\n1.2 Multivariate regression\nSo far we assume the house price is affected by the size only. 
Now let consider also other fields \"Bedrooms\", \"Baths\", \"lot\" (location) and \"NW\" (whether or not the houses face Nothern West direction).<br><br>\nImportant: now your feature vector is multi-dimensional, it is crucial to normalise your training set for gradient descent to converge properly. The code below is almost identical to the previous step 1, except it loads more fields and implements data normalisation.", "\"\"\"step 1: define training set X, Z.\"\"\"\nselected_fields = ['size', 'Bedrooms', 'Baths', 'lot', 'NW']\nX_train, Z_train = ml_utils.extract_data(house_data_train, selected_fields, ['price'])\nX_test, Z_test = ml_utils.extract_data(house_data_test, selected_fields, ['price'])\n\nZ_train = Z_train/1000.0 #price has unit x1000 USD\nZ_test = Z_test/1000.0\n\n##normalise \nX_train, u, scale = ml_utils.normalise_data(X_train)\nX_test = ml_utils.normalise_data(X_test, u, scale)\n\nN = Z_train.size #number of training samples\nones_array = np.ones((1,N),dtype=np.float32)\nX_train = np.concatenate((X_train, ones_array), axis=0) #why?\nX_test = np.concatenate((X_test, np.ones((1, Z_test.size), dtype=np.float32)), axis = 0) #same for test data\nprint \"size of X_train \", X_train.shape\nprint \"size of Z_train \", Z_train.shape", "Now run step 2-4 again. Note the followings: \n1. You need not to modify the Loss_Regression and dLdW_Regression functions. They should generalise enough to work with multi-dimensional data\n2. Since your training samples are normalised you can now use much higher learning rate e.g. k = 1e-2\n3. Note that the plot function plot_scatter_and_line will not work in multivariate regression since it is designed for 1-D input only. Consider commenting it out.<br>\nQuestion: how many iterations are required to pass the threshold $\\Delta L < 10^{-2}$ ?<br>\nAnswer: ~4000 iterations (and it will take a while to complete).\nTask 1.5: (a) evaluate your learned model on the test set. (b) Suppose the house you want to sell has a size of 3000 $feet^2$, has 3 bedrooms, 2 baths, lot number 10000 and in NW direction. How much do you think its price would be? Hints: don't forget to normalise the test sample.<br>\nAnswer: You will get ~150k USD only, much lower than the previous prediction based on size only. Your house has an advantage of size, but other parameters matter too.", "\"\"\"step 4: gradient descent - update loop\"\"\"\n\"\"\" same code but change k = 1e-2\"\"\"\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 1e-2 #learning rate\nepsilon = 1e-2 #terminate condition\n\n#visualisation settings\nvis_interval = 10\nloss_collections = []\nprev_loss = 0\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\n#axis_data = fig.add_subplot(132)\nwhile(1):\n Y_train = F_Regression(X_train,W) #compute hypothesis function aka. 
predicted values\n loss = Loss_Regression(Y_train, Z_train) #compute loss\n dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient\n W = W - k*dLdW #update\n loss_collections.append(loss)\n if abs(loss - prev_loss) < epsilon:\n break\n prev_loss = loss\n \n if (len(loss_collections)+1) % vis_interval==0:\n #print \"Iter #\", len(loss_collections)\n ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, \"loss = \" + str(loss))\n #ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, \"iter #\" + str(len(loss_collections)))\n fig.canvas.draw()\n\nprint \"Learned parameters \", W.squeeze()\nprint \"Learning terminates after {} iterations\".format(len(loss_collections))\n\n\n\"\"\"apply on the test set\"\"\"\nY_test = F_Regression(X_test, W)\nerror = Loss_Regression(Y_test, Z_test)\nprint \"Evaluation error: \", error\n\n\"\"\"test a single sample\"\"\"\nx = np.array([3000, 3,2, 10000, 1],dtype=np.float32)[...,None]\nx = ml_utils.normalise_data(x, u, scale)\nx = np.concatenate((x,np.ones((1,1))),axis=0)\nprint \"Price: \", F_Regression(x,W).squeeze()", "1.3 Gradient descent with momentum\nIn the latest experiment, our training takes ~4000 iterations to converge. Now let try gradient descent with momentum to speed up the training. We will employ the following formula:\n$$v_t = m*v_{t-1} + k\\frac{\\partial L}{\\partial W}$$\n$$W = W - v_t$$", "\"\"\"step 4: gradient descent with momentum - update loop\"\"\"\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 1e-2 #learning rate\nepsilon = 1e-2 #terminate condition\nm = 0.9 #momentum\nv = 0 #initial velocity\n\n#visualisation settings\nvis_interval = 10\nloss_collections = []\nprev_loss = 0\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\n#axis_data = fig.add_subplot(132)\nwhile(1):\n Y_train = F_Regression(X_train,W) #compute hypothesis function aka. predicted values\n loss = Loss_Regression(Y_train, Z_train) #compute loss\n dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient\n v = v*m + k*dLdW\n W = W - v #update\n loss_collections.append(loss)\n if abs(loss - prev_loss) < epsilon:\n break\n prev_loss = loss\n if (len(loss_collections)+1) % vis_interval==0:\n #print \"Iter #\", len(loss_collections)\n ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, \"loss = \" + str(loss))\n #ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, \"iter #\" + str(len(loss_collections)))\n fig.canvas.draw()\n\nprint \"Learned parameters \", W.squeeze()\nprint \"Learning terminates after {} iterations\".format(len(loss_collections))", "2. Classification\nIn this part you will walk through different steps to implement several basic classification tasks.\n2.1. Binary classification\nImagine you were an USP professor who teaches Computer Science. This year there is 100 year-one students who want to register your module. You examine their performance based on their scores on two exams. You have gone through the records of 80 students and already made admission decisions for them. Now you want to build a model to automatically make admission decisions for the rest 20 students. 
Your training data will be the exam results and admission decisions for the 80 students that you have assessed.<br><br>\nFirstly, let load the data.", "student_data = pd.read_csv('data/student_data_binary_clas.txt', header = None, names=['exam1', 'exam2', 'decision'])\nstudent_data\n\n#split train/test set\nX = np.array([student_data['exam1'], student_data['exam2']], dtype=np.float32)\nZ = np.array([student_data['decision']], dtype = np.float32)\n\n#assume the first 80 students have been assessed, use them as the training data\nX_train = X[:,:80]\nX_test = X[:,80:]\n\n#you later have to manually assess the rest 20 students according to the university policies.\n# Great, now you have a chance to evaluate your learned model\nZ_train = Z[:,:80]\nZ_test = Z[:,80:]\n\n#normalise data\nX_train, u, scale = ml_utils.normalise_data(X_train)\nX_test = ml_utils.normalise_data(X_test, u, scale)\n\n#concatenate array of \"1s\" to X array\nX_train = np.concatenate((X_train, np.ones_like(Z_train)), axis = 0)\nX_test = np.concatenate((X_test, np.ones_like(Z_test)), axis = 0)\n\n#let visualise the training set\nplt.close()\nplt.ion()\nfig = plt.figure(1)\naxis_data = fig.add_subplot(111)\nml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train,msg=\"student score scatter plot\")", "Task 2.1: your first task is to define the hypothesis function. Do you remember the hypothesis function in a binary classification task? It has form of a sigmoid function:\n$$F(x,W) = \\frac{1}{1+e^{-Wx}}$$", "def F_Classification(X, W):\n \"\"\"\n Compute the hypothesis function given input array X and parameter W\n input: X input array, must has size DxN (each column is one sample)\n W parameter array, must has size 1xD\n output: sigmoid of W*X, size 1xN\n \"\"\"\n return 1/(1+np.exp(-np.dot(W,X)))", "Task 2.2: define the loss function for binary classification. It is called \"negative log loss\":\n$$L(W) = -\\frac{1}{N} \\sum_{i=1}^N{[z^{(i)} log(F(x^{(i)},W)) + (1-z^{(i)})(log(1-F(x^{(i)},W))]}$$\nNext, define the gradient function:\n$$\\frac{\\partial L}{\\partial W} = \\frac{1}{N}(F(X,W) - Z)X^T$$", "\"\"\"step 3: loss function for classification\"\"\"\ndef Loss_Classification(Y, Z):\n \"\"\"\n Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values.\n input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN\n Z groundtruth vector Z, has size 1xN\n output: loss value, is a scalar\n \"\"\"\n #enter the code here\n N = float(Z.size)\n \n return -1/N*(np.dot(np.log(Y), Z.T) + np.dot(np.log(1-Y), (1-Z).T)).squeeze()\n\n\"\"\"step 4: gradient descent for classification - compute gradient\"\"\"\ndef dLdW_Classification(X, Y, Z):\n \"\"\"\n Compute gradient of the loss w.r.t parameter W.\n input: X input array, each column is one sample, has size DxN\n Y probability of label = 1, has size 1xN\n Z groundtruth values, has size 1xN\n output: gradient, has same size as W\n \"\"\"\n #enter the code here\n N = float(Z.size)\n return 1/N * (Y-Z).dot(X.T)\n\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 0.2 #learning rate\nepsilon = 1e-6 #terminate condition\nm = 0.9 #momentum\nv = 0 #initial velocity\n\n#visualisation settings\nvis_interval = 10\nloss_collections = []\nprev_loss = 0\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\naxis_data = fig.add_subplot(132)\nwhile(1):\n Y_train = F_Classification(X_train,W) #compute hypothesis function aka. 
predicted values\n loss = Loss_Classification(Y_train, Z_train) #compute loss\n dLdW = dLdW_Classification(X_train, Y_train, Z_train) #compute gradient\n v = v*m + k*dLdW\n W = W - v #update\n loss_collections.append(loss)\n if abs(loss - prev_loss) < epsilon:\n break\n prev_loss = loss\n if (len(loss_collections)+1) % vis_interval==0:\n ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, \"loss = \" + str(loss))\n ml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train, W, \"student score scatter plot\")\n fig.canvas.draw()\n\nprint \"Learned parameters \", W.squeeze()\nprint \"Learning terminates after {} iterations\".format(len(loss_collections))\n\n\n\n#evaluate\nY_test = F_Classification(X_test, W)\npredictions = Y_test > 0.5\naccuracy = np.sum(predictions == Z_test)/float(Z_test.size)\nprint \"Test accuracy: \", accuracy", "We achieve 90% accuracy (only two students have been misclassified). Not too bad, isn't it?\nTask 2.3: regularisation\nNow we want to add a regularisation term into the loss to prevent overfitting.\nRegularisation loss is simply magnitude of the parameter vector W after removing the last element (i.e. bias doesn't count to regularisation).\n$$L_R = \\frac{1}{2}|W'|^2$$\nwhere W' is W with the last element truncated.<br>\nNow the total loss would be:\n$$L(W) = -\\frac{1}{N} \\sum_{i=1}^N{[z^{(i)} log(F(x^{(i)},W)) + (1-z^{(i)})(log(1-F(x^{(i)},W))]} + \\frac{1}{2}|W'|^2$$\nThe gradient become:\n$$\\frac{\\partial L}{\\partial W} = \\frac{1}{N}(F(X,W) - Z)X^T + W''$$\nwhere W'' is W with the last element change to 0.\nYour task is to implement the loss and gradient function with added regularisation.", "\"\"\"step 3: loss function with regularisation\"\"\"\ndef Loss_Classification_Reg(Y, Z, W):\n \"\"\"\n Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values.\n input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN\n Z groundtruth vector Z, has size 1xN\n W parameter vector, size 1xD\n output: loss value, is a scalar\n \"\"\"\n #enter the code here\n N = float(Z.size)\n W_ = W[:,:-1]\n return -1/N*(np.dot(np.log(Y), Z.T) + np.dot(np.log(1-Y), (1-Z).T)).squeeze() + 0.5*np.dot(W_,W_.T).squeeze()\n\n\"\"\"step 4: gradient descent with regularisation - compute gradient\"\"\"\ndef dLdW_Classification_Reg(X, Y, Z, W):\n \"\"\"\n Compute gradient of the loss w.r.t parameter W.\n input: X input array, each column is one sample, has size DxN\n Y probability of label = 1, has size 1xN\n Z groundtruth values, has size 1xN\n W parameter vector, size 1xD\n output: gradient, has same size as W\n \"\"\"\n #enter the code here\n N = float(Z.size)\n W_ = W\n W_[:,-1] = 0\n return 1/N * (Y-Z).dot(X.T) + W_", "Rerun the update loop again with the new loss and gradient functions. Note you may need to change the learning rate accordingly to have proper convergence. 
Now you have implemented both regularisation and momentum techniques, you can use a standard learning rate value of 0.01 which is widely used in practice.", "\"\"\" gradient descent with regularisation- parameter update loop\"\"\"\nW = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised\nk = 0.01 #learning rate\nepsilon = 1e-6 #terminate condition\nm = 0.9 #momentum\nv = 0 #initial velocity\n\n#visualisation settings\nvis_interval = 10\nloss_collections = []\nprev_loss = 0\nplt.close()\nplt.ion()\nfig = plt.figure(1,figsize=(16, 4))\naxis_loss = fig.add_subplot(131)\naxis_data = fig.add_subplot(132)\nfor i in range(500):\n Y_train = F_Classification(X_train,W) #compute hypothesis function aka. predicted values\n loss = Loss_Classification_Reg(Y_train, Z_train, W) #compute loss\n dLdW = dLdW_Classification_Reg(X_train, Y_train, Z_train, W) #compute gradient\n v = v*m + k*dLdW\n W = W - v #update\n loss_collections.append(loss)\n if abs(loss - prev_loss) < epsilon:\n break\n prev_loss = loss\n if (len(loss_collections)+1) % vis_interval==0:\n ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, \"loss = \" + str(loss))\n ml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train, W, \"student score scatter plot\")\n fig.canvas.draw()\n\nprint \"Learned parameters \", W.squeeze()\nprint \"Learning terminates after {} iterations\".format(len(loss_collections))", "Question: Do you see any improvement in accuracy or convergence speed? Why?\nAnswer: Regularisation does help speed up the training (it adds stricter rules to the update procedure). Accuracy is the same (90%) is probably because (i) number of parameters to be trained is small (2-D) and so is the number of training samples; and (ii) the data are well separated. In a learning task which involves large number of parameters (such as neural network), regularisation proves a very efficient technique.\n2.2 Multi-class classification\nHere we are working with a very famous dataset. The Iris flower dataset has 150 samples of 3 Iris flower species (Setosa, Versicolour, and Virginica), each sample stores the height and length of its sepal and pedal in cm (4-D in total). Your task is to build a classifier to distinguish these flowers.", "#read the Iris dataset\niris = np.load('data/iris.npz')\nX = iris['X']\nZ = iris['Z']\nprint \"size X \", X.shape\nprint \"size Z \", Z.shape\n\n#split train/test with ratio 120:30\nTOTALS = Z.size\nidx = np.random.permutation(TOTALS)\nidx_train = idx[:120]\nidx_test = idx[120:]\n\nX_train = X[:, idx_train]\nX_test = X[:, idx_test]\nZ_train = Z[:, idx_train]\nZ_test = Z[:, idx_test]\n\n#normalise data\nX_train, u, scale = ml_utils.normalise_data(X_train)\nX_test = ml_utils.normalise_data(X_test, u, scale)\n\n#concatenate array of \"1s\" to X array\nX_train = np.concatenate((X_train, np.ones_like(Z_train)), axis = 0)\nX_test = np.concatenate((X_test, np.ones_like(Z_test)), axis = 0)", "Task 2.4: one-vs-all. Train 3 binary one-vs-all classifiers $F_i$ (i=1-3), one for each class. An unknown feture vector x belongs to class i if:\n$$max_i F(x,W_i)$$\nTask 2.5: implement one-vs-one and compare the results with one-vs-all." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
otavio-r-filho/AIND-Deep_Learning_Notebooks
autoencoder/Convolutional_Autoencoder.ipynb
mit
[ "Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n<img src='assets/convolutional_autoencoder.png' width=500px>\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. \nHowever, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. 
Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.", "learning_rate = 0.001\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, [None,28,28,1], name='inputs')\ntargets_ = tf.placeholder(tf.float32, [None,28,28,1], name='labels')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu, name='enc_conv1')\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2),\n strides=(2,2), padding='same', name='enc_maxpool1')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=(3,3),\n padding='same', activation=tf.nn.relu, name='enc_conv2')\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2),\n strides=(2,2), padding='same', name='enc_maxpool2')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=(3,3),\n padding='same', activation=tf.nn.relu, name='enc_conv3')\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2),\n strides=(2,2), padding='same', name='encoded')\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_bilinear(images=encoded, size=(7,7), name='dec_upsample1')\n# Now 7x7x8\nconv4 = tf.layers.conv2d(inputs=upsample1, filters=8, kernel_size=(3,3),\n padding='same', activation=tf.nn.relu, name='dec_conv4')\n# Now 7x7x8\nupsample2 = tf.image.resize_bilinear(images=conv4, size=(14,14), name='dec_upsample2')\n# Now 14x14x8\nconv5 = tf.layers.conv2d(inputs=upsample2, filters=8, kernel_size=(3,3),\n padding='same', activation=tf.nn.relu, name='dec_conv5')\n# Now 14x14x8\nupsample3 = tf.image.resize_bilinear(images=conv5, size=(28,28), name='dec_upsample3')\n# Now 28x28x8\nconv6 = tf.layers.conv2d(inputs=upsample3, filters=16, kernel_size=(3,3),\n padding='same', activation=tf.nn.relu, name='dec_conv6')\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3),\n padding='same', activation=None, name='logits')\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name='decoded')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_, name='loss')\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss, name='cost')\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Training\nAs before, here we'll train the network. 
Instead of flattening the images though, we can pass them in as 28x28x1 arrays.", "sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. 
I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.", "learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2),\n strides=(2,2), padding='same')\n# Now 14x14x32\nconv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2),\n strides=(2,2), padding='same')\n# Now 7x7x32\nconv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2),\n strides=(2,2), padding='same')\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_bilinear(images=encoded, size=(7,7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_bilinear(images=conv4, size=(14,14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_bilinear(images=conv5, size=(28,28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(inputs=upsample3, filters=32, kernel_size=(3,3), \n padding='same', activation=tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3),\n padding='same', activation=None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name='output')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "sess = tf.Session()\n\nepochs = 100\nbatch_size = 200\n# Sets how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. 
It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
janusnic/21v-python
unit_20/parallel_ml/notebooks/04 - Pandas and Heterogeneous Data Modeling.ipynb
mit
[ "Predictive Modeling with heterogeneous data", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport warnings\nwarnings.simplefilter('ignore', DeprecationWarning)", "<img src=\"files/images/predictive_modeling_data_flow.png\">\nLoading tabular data from the Titanic kaggle challenge in a pandas Data Frame\nLet us have a look at the Titanic dataset from the Kaggle Getting Started challenge at:\nhttps://www.kaggle.com/c/titanic-gettingStarted\nWe can load the CSV file as a pandas data frame in one line:", "#!curl -s https://dl.dropboxusercontent.com/u/5743203/data/titanic/titanic_train.csv | head -5\nwith open('titanic_train.csv', 'r') as f:\n for i, line in zip(range(5), f):\n print(line.strip())\n\n#data = pd.read_csv('https://dl.dropboxusercontent.com/u/5743203/data/titanic/titanic_train.csv')\ndata = pd.read_csv('titanic_train.csv')", "pandas data frames have a HTML table representation in the IPython notebook. Let's have a look at the first 5 rows:", "data.head(5)\n\ndata.count()", "The data frame has 891 rows. Some passengers have missing information though: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:\nhttps://www.kaggle.com/c/titanic-gettingStarted/data\nA data frame can be converted into a numpy array by calling the values attribute:", "list(data.columns)\n\ndata.shape\n\ndata.values", "However this cannot be directly fed to a scikit-learn model:\n\n\nthe target variable (survival) is mixed with the input data\n\n\nsome attribute such as unique ids have no predictive values for the task\n\n\nthe values are heterogeneous (string labels for categories, integers and floating point numbers)\n\n\nsome attribute values are missing (nan: \"not a number\")\n\n\nPredicting survival\nThe goal of the challenge is to predict whether a passenger has survived from others known attribute. Let us have a look at the Survived columns:", "survived_column = data['Survived']\nsurvived_column.dtype", "data.Survived is an instance of the pandas Series class with an integer dtype:", "type(survived_column)", "The data object is an instance pandas DataFrame class:", "type(data)", "Series can be seen as homegeneous, 1D columns. DataFrame instances are heterogenous collections of columns with the same length.\nThe original data frame can be aggregated by counting rows for each possible value of the Survived column:", "data.groupby('Survived').count()\n\nnp.mean(survived_column == 0)", "From this the subset of the full passengers list, about 2/3 perished in the event. So if we are to build a predictive model from this data, a baseline model to compare the performance to would be to always predict death. Such a constant model would reach around 62% predictive accuracy (which is higher than predicting at random):\npandas Series instances can be converted to regular 1D numpy arrays by using the values attribute:", "target = survived_column.values\n\ntype(target)\n\ntarget.dtype\n\ntarget[:5]", "Training a predictive model on numerical features\nsklearn estimators all work with homegeneous numerical feature descriptors passed as a numpy array. 
Therefore passing the raw data frame will not work out of the box.\nLet us start simple and build a first model that only uses readily available numerical features as input, namely data['Fare'], data['Pclass'] and data['Age'].", "numerical_features = data[['Fare', 'Pclass', 'Age']]\nnumerical_features.head(5)", "Unfortunately some passengers do not have age information:", "numerical_features.count()", "Let's use pandas fillna method to input the median age for those passengers:", "median_features = numerical_features.dropna().median()\nmedian_features\n\nimputed_features = numerical_features.fillna(median_features)\nimputed_features.count()\n\nimputed_features.head(5)", "Now that the data frame is clean, we can convert it into an homogeneous numpy array of floating point values:", "features_array = imputed_features.values\nfeatures_array\n\nfeatures_array.dtype", "Let's take the 80% of the data for training a first model and keep 20% for computing is generalization score:", "from sklearn.cross_validation import train_test_split\n\nfeatures_train, features_test, target_train, target_test = train_test_split(\n features_array, target, test_size=0.20, random_state=0)\n\nfeatures_train.shape\n\nfeatures_test.shape\n\ntarget_train.shape\n\ntarget_test.shape", "Let's start with a simple model from sklearn, namely LogisticRegression:", "from sklearn.linear_model import LogisticRegression\n\nlogreg = LogisticRegression(C=1)\nlogreg.fit(features_train, target_train)\n\ntarget_predicted = logreg.predict(features_test)\n\nfrom sklearn.metrics import accuracy_score\n\naccuracy_score(target_test, target_predicted)", "This first model has around 73% accuracy: this is better than our baseline that always predicts death.", "logreg.score(features_test, target_test)", "Model evaluation and interpretation\nInterpreting linear model weights\nThe coef_ attribute of a fitted linear model such as LogisticRegression holds the weights of each features:", "feature_names = numerical_features.columns\nfeature_names\n\nlogreg.coef_\n\nx = np.arange(len(feature_names))\nplt.bar(x, logreg.coef_.ravel())\nplt.xticks(x + 0.5, feature_names, rotation=30);", "In this case, survival is slightly positively linked with Fare (the higher the fare, the higher the likelyhood the model will predict survival) while passenger from first class and lower ages are predicted to survive more often than older people from the 3rd class.\nFirst-class cabins were closer to the lifeboats and children and women reportedly had the priority. Our model seems to capture that historical data. 
We will see later if the sex of the passenger can be used as an informative predictor to increase the predictive accuracy of the model.\nAlternative evaluation metrics\nIt is possible to see the details of the false positive and false negative errors by computing the confusion matrix:", "from sklearn.metrics import confusion_matrix\n\ncm = confusion_matrix(target_test, target_predicted)\nprint(cm)", "The true labeling are seen as the rows and the predicted labels are the columns:", "def plot_confusion(cm, target_names = ['survived', 'not survived'],\n title='Confusion matrix'):\n plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)\n plt.title(title)\n plt.colorbar()\n\n tick_marks = np.arange(len(target_names))\n plt.xticks(tick_marks, target_names, rotation=60)\n plt.yticks(tick_marks, target_names)\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n # Convenience function to adjust plot parameters for a clear layout.\n plt.tight_layout()\n \nplot_confusion(cm)\n\nprint(cm)", "We can normalize the number of prediction by dividing by the total number of true \"survived\" and \"not survived\" to compute true and false positive rates for survival in the first row and the false negative and true negative rates in the second row.", "cm.sum(axis=1)\n\ncm_normalized = cm.astype(np.float64) / cm.sum(axis=1)[:, np.newaxis]\nprint(cm_normalized)\n\nplot_confusion(cm_normalized, title=\"Normalized confusion matrix\")", "We can therefore observe that the fact that the target classes are not balanced in the dataset makes the accuracy score not very informative.\nscikit-learn provides alternative classification metrics to evaluate models performance on imbalanced data such as precision, recall and f1 score:", "from sklearn.metrics import classification_report\n\nprint(classification_report(target_test, target_predicted,\n target_names=['not survived', 'survived']))", "Another way to quantify the quality of a binary classifier on imbalanced data is to compute the precision, recall and f1-score of a model (at the default fixed decision threshold of 0.5).\nLogistic Regression is a probabilistic models: instead of just predicting a binary outcome (survived or not) given the input features it can also estimates the posterior probability of the outcome given the input features using the predict_proba method:", "target_predicted_proba = logreg.predict_proba(features_test)\ntarget_predicted_proba[:5]", "By default the decision threshold is 0.5: if we vary the decision threshold from 0 to 1 we could generate a family of binary classifier models that address all the possible trade offs between false positive and false negative prediction errors.\nWe can summarize the performance of a binary classifier for all the possible thresholds by plotting the ROC curve and quantifying the Area under the ROC curve:", "from sklearn.metrics import roc_curve\nfrom sklearn.metrics import auc\n\ndef plot_roc_curve(target_test, target_predicted_proba):\n fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba[:, 1])\n \n roc_auc = auc(fpr, tpr)\n # Plot ROC curve\n plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)\n plt.plot([0, 1], [0, 1], 'k--') # random predictions curve\n plt.xlim([0.0, 1.0])\n plt.ylim([0.0, 1.0])\n plt.xlabel('False Positive Rate or (1 - Specifity)')\n plt.ylabel('True Positive Rate or (Sensitivity)')\n plt.title('Receiver Operating Characteristic')\n plt.legend(loc=\"lower right\")\n\nplot_roc_curve(target_test, target_predicted_proba)", "Here the area under ROC 
curve is 0.756 which is very similar to the accuracy (0.732). However the ROC-AUC score of a random model is expected to 0.5 on average while the accuracy score of a random model depends on the class imbalance of the data. ROC-AUC can be seen as a way to callibrate the predictive accuracy of a model against class imbalance.\nCross-validation\nWe previously decided to randomly split the data to evaluate the model on 20% of held-out data. However the location randomness of the split might have a significant impact in the estimated accuracy:", "features_train, features_test, target_train, target_test = train_test_split(\n features_array, target, test_size=0.20, random_state=0)\n\nlogreg.fit(features_train, target_train).score(features_test, target_test)\n\nfeatures_train, features_test, target_train, target_test = train_test_split(\n features_array, target, test_size=0.20, random_state=1)\n\nlogreg.fit(features_train, target_train).score(features_test, target_test)\n\nfeatures_train, features_test, target_train, target_test = train_test_split(\n features_array, target, test_size=0.20, random_state=2)\n\nlogreg.fit(features_train, target_train).score(features_test, target_test)", "So instead of using a single train / test split, we can use a group of them and compute the min, max and mean scores as an estimation of the real test score while not underestimating the variability:", "from sklearn.cross_validation import cross_val_score\n\nscores = cross_val_score(logreg, features_array, target, cv=5)\nscores\n\nscores.min(), scores.mean(), scores.max()", "cross_val_score reports accuracy by default be it can also be used to report other performance metrics such as ROC-AUC or f1-score:", "scores = cross_val_score(logreg, features_array, target, cv=5,\n scoring='roc_auc')\nscores.min(), scores.mean(), scores.max()", "Exercise:\n\n\nCompute cross-validated scores for other classification metrics ('precision', 'recall', 'f1', 'accuracy'...).\n\n\nChange the number of cross-validation folds between 3 and 10: what is the impact on the mean score? on the processing time?\n\n\nHints:\nThe list of classification metrics is available in the online documentation:\nhttp://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values\nYou can use the %%time cell magic on the first line of an IPython cell to measure the time of the execution of the cell. \nMore feature engineering and richer models\nLet us now try to build richer models by including more features as potential predictors for our model.\nCategorical variables such as data['Embarked'] or data['Sex'] can be converted as boolean indicators features also known as dummy variables or one-hot-encoded features:", "pd.get_dummies(data['Sex'], prefix='Sex').head(5)\n\npd.get_dummies(data.Embarked, prefix='Embarked').head(5)", "We can combine those new numerical features with the previous features using pandas.concat along axis=1:", "rich_features = pd.concat([data[['Fare', 'Pclass', 'Age']],\n pd.get_dummies(data['Sex'], prefix='Sex'),\n pd.get_dummies(data['Embarked'], prefix='Embarked')],\n axis=1)\nrich_features.head(5)", "By construction the new Sex_male feature is redundant with Sex_female. 
Let us drop it:", "rich_features_no_male = rich_features.drop('Sex_male', 1)\nrich_features_no_male.head(5)", "Let us not forget to imput the median age for passengers without age information:", "rich_features_final = rich_features_no_male.fillna(rich_features_no_male.dropna().median())\nrich_features_final.head(5)", "We can finally cross-validate a logistic regression model on this new data an observe that the mean score has significantly increased:", "%%time\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.cross_validation import cross_val_score\n\nlogreg = LogisticRegression(C=1)\nscores = cross_val_score(logreg, rich_features_final, target, cv=5, scoring='accuracy')\nprint(\"Logistic Regression CV scores:\")\nprint(\"min: {:.3f}, mean: {:.3f}, max: {:.3f}\".format(\n scores.min(), scores.mean(), scores.max()))", "Exercise:\n\n\nchange the value of the parameter C. Does it have an impact on the score?\n\n\nfit a new instance of the logistic regression model on the full dataset.\n\n\nplot the weights for the features of this newly fitted logistic regression model.", "%load solutions/04A_plot_logistic_regression_weights.py", "Training Non-linear models: ensembles of randomized trees\nsklearn also implement non linear models that are known to perform very well for data-science projects where datasets have not too many features (e.g. less than 5000).\nIn particular let us have a look at Random Forests and Gradient Boosted Trees:", "%%time\n\nfrom sklearn.ensemble import RandomForestClassifier\n\nrf = RandomForestClassifier(n_estimators=100)\nscores = cross_val_score(rf, rich_features_final, target, cv=5, n_jobs=4,\n scoring='accuracy')\nprint(\"Random Forest CV scores:\")\nprint(\"min: {:.3f}, mean: {:.3f}, max: {:.3f}\".format(\n scores.min(), scores.mean(), scores.max()))\n\n%%time\n\nfrom sklearn.ensemble import GradientBoostingClassifier\n\ngb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,\n subsample=.8, max_features=.5)\nscores = cross_val_score(gb, rich_features_final, target, cv=5, n_jobs=4,\n scoring='accuracy')\nprint(\"Gradient Boosted Trees CV scores:\")\nprint(\"min: {:.3f}, mean: {:.3f}, max: {:.3f}\".format(\n scores.min(), scores.mean(), scores.max()))", "Both models seem to do slightly better than the logistic regression model on this data.\nExercise:\n\n\nChange the value of the learning_rate and other GradientBoostingClassifier parameter, can you get a better mean score?\n\n\nWould treating the PClass variable as categorical improve the models performance?\n\n\nFind out which predictor variables (features) are the most informative for those models.\n\n\nHints:\nFitted ensembles of trees have feature_importances_ attribute that can be used similarly to the coef_ attribute of linear models.", "%load solutions/04B_more_categorical_variables.py\n\n%load solutions/04C_feature_importance.py", "Automated parameter tuning\nInstead of changing the value of the learning rate manually and re-running the cross-validation, we can find the best values for the parameters automatically (assuming we are ready to wait):", "%%time\n\nfrom sklearn.grid_search import GridSearchCV\n\ngb = GradientBoostingClassifier(n_estimators=100, subsample=.8)\n\nparams = {\n 'learning_rate': [0.05, 0.1, 0.5],\n 'max_features': [0.5, 1],\n 'max_depth': [3, 4, 5],\n}\ngs = GridSearchCV(gb, params, cv=5, scoring='roc_auc', n_jobs=4)\ngs.fit(rich_features_final, target)", "Let us sort the models by mean validation score:", "sorted(gs.grid_scores_, key=lambda x: 
x.mean_validation_score, reverse=True)\n\ngs.best_score_\n\ngs.best_params_", "We should note that the mean scores are very close to one another and almost always within one standard deviation of one another. This means that all those parameters are quite reasonable. The only parameter of importance seems to be the learning_rate: 0.5 seems to be a bit too high.\nAvoiding data snooping with pipelines\nWhen doing imputation in pandas, prior to computing the train test split we use data from the test to improve the accuracy of the median value that we impute on the training set. This is actually cheating. To avoid this we should compute the median of the features on the training fold and use that median value to do the imputation both on the training and validation fold for a given CV split.\nTo do this we can prepare the features as previously but without the imputation: we just replace missing values by the -1 marker value:", "features = pd.concat([data[['Fare', 'Age']],\n pd.get_dummies(data['Sex'], prefix='Sex'),\n pd.get_dummies(data['Pclass'], prefix='Pclass'),\n pd.get_dummies(data['Embarked'], prefix='Embarked')],\n axis=1)\nfeatures = features.drop('Sex_male', 1)\n\n# Because of the following bug we cannot use NaN as the missing\n# value marker, use a negative value as marker instead:\n# https://github.com/scikit-learn/scikit-learn/issues/3044\nfeatures = features.fillna(-1)\nfeatures.head(5)", "We can now use the Imputer transformer of scikit-learn to find the median value on the training set and apply it on missing values of both the training set and the test set.", "from sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(features.values, target, random_state=0)\n\nfrom sklearn.preprocessing import Imputer\n\nimputer = Imputer(strategy='median', missing_values=-1)\n\nimputer.fit(X_train)", "The median age computed on the training set is stored in the statistics_ attribute.", "imputer.statistics_\n\nfeatures.columns.values", "Imputation can now happen by calling the transform method:", "X_train_imputed = imputer.transform(X_train)\nX_test_imputed = imputer.transform(X_test)\n\nnp.any(X_train == -1)\n\nnp.any(X_train_imputed == -1)\n\nnp.any(X_test == -1)\n\nnp.any(X_test_imputed == -1)", "We can now use a pipeline that wraps an imputer transformer and the classifier itself:", "from sklearn.pipeline import Pipeline\n\nimputer = Imputer(strategy='median', missing_values=-1)\n\nclassifier = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,\n subsample=.8, max_features=.5,\n random_state=0)\n\npipeline = Pipeline([\n ('imp', imputer),\n ('clf', classifier),\n])\n\nscores = cross_val_score(pipeline, features.values, target, cv=5, n_jobs=4,\n scoring='accuracy', )\nprint(scores.min(), scores.mean(), scores.max())", "The mean cross-validation is slightly lower than we used the imputation on the whole data as we did earlier although not by much. This means that in this case the data-snooping was not really helping the model cheat by much.\nLet us re-run the grid search, this time on the pipeline. 
Note that thanks to the pipeline structure we can optimize the interaction of the imputation method with the parameters of the downstream classifier without cheating:", "%%time\n\nparams = {\n 'imp__strategy': ['mean', 'median'],\n 'clf__max_features': [0.5, 1],\n 'clf__max_depth': [3, 4, 5],\n}\ngs = GridSearchCV(pipeline, params, cv=5, scoring='roc_auc', n_jobs=4)\ngs.fit(X_train, y_train)\n\nsorted(gs.grid_scores_, key=lambda x: x.mean_validation_score, reverse=True)\n\ngs.best_score_\n\nplot_roc_curve(y_test, gs.predict_proba(X_test))\n\ngs.best_params_", "From this search we can conclude that the imputation by the 'mean' strategy is generally a slightly better imputation strategy when training a GBRT model on this data.\nFurther integrating sklearn and pandas\nHelper tool for better sklearn / pandas integration: https://github.com/paulgb/sklearn-pandas by making it possible to embed the feature construction from the raw dataframe directly inside a pipeline.\nCredits\nThanks to:\n\n\nKaggle for setting up the Titanic challenge.\n\n\nThis blog post by Philippe Adjiman for inspiration:\n\n\nhttp://www.philippeadjiman.com/blog/2013/09/12/a-data-science-exploration-from-the-titanic-in-r/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DistrictDataLabs/entity-resolution
Entity Resolution Workshop.ipynb
apache-2.0
[ "Entity Resolution Workshop\nEntity Resolution is the task of disambiguating manifestations of real world entities through linking and grouping and is often an essential part of the data wrangling process. There are three primary tasks involved in entity resolution: deduplication, record linkage, and canonicalization; each of which serve to improve data quality by reducing irrelevant or repeated data, joining information from disparate records, and providing a single source of information to perform analytics upon. However, due to data quality issues (misspellings or incorrect data), schema variations in different sources, or simply different representations, entity resolution is not a straightforward process and most ER techniques utilize machine learning and other stochastic approaches.", "## Imports\n\nimport os\nimport csv\nimport nltk\nimport json\nimport math\nimport random\nimport distance \n\n## Important Paths\nFIXTURES = os.path.join(os.getcwd(), \"fixtures\")\nPRODUCTS = os.path.join(FIXTURES, \"products\")\n\n## Module Constants\nGOOGID = 'http://www.google.com/base/feeds/snippets'\n\ndef load_data(name):\n \"\"\"\n Create a generator to load data from the products data source.\n \"\"\"\n with open(os.path.join(PRODUCTS, name), 'r') as f:\n reader = csv.DictReader(f)\n for row in reader:\n yield row\n\ndef google_key(key):\n return os.path.join(GOOGID, key)\n\n## Load Datasets into Memory\namazon = list(load_data('amazon.csv'))\ngoogle = list(load_data('google.csv'))\nmapping = list(load_data('perfect_mapping.csv'))\n\n## Report on contents of the dataset\nfor name, dataset in (('Amazon', amazon), ('Google Shopping', google)):\n print \"{} dataset contains {} records\".format(name, len(dataset))\n print \"Record keys: {}\\n\".format(\", \".join(dataset[0].keys()))\n\n## Report on the contents of the mapping\nprint \"There are {} matching records to link\".format(len(mapping))\n\n## Convert dataset to records indexed by their ID.\namazon = dict((v['id'], v) for v in amazon)\ngoogle = dict((v['id'], v) for v in google)\n\nX = amazon['b0000c7fpt']\nY = google[google_key('17175991674191849246')]\n\n## Show example Records\nprint json.dumps(X, indent=2)\nprint json.dumps(Y, indent=2)", "Similarity Scores\nLinks to information about distance metrics:\n\nImplementing the Five Most Popular Similarity Measures in Python\nScikit-Learn Distance Metric\nPython Distance Library\n\nNumeric distances are fairly easy, but can be record specific (e.g. phone numbers can compare area codes, city codes, etc. to determine similarity). 
We will compare text similarity in this section:", "# Typographic Distances\n\nprint distance.levenshtein(\"lenvestein\", \"levenshtein\")\nprint distance.hamming(\"hamming\", \"hamning\")\n\n# Compare glyphs, syllables, or phonemes \nt1 = (\"de\", \"ci\", \"si\", \"ve\")\nt2 = (\"de\", \"ri\", \"si\", \"ve\")\nprint distance.levenshtein(t1, t2)\n\n\n# Sentence Comparison\nsent1 = \"The quick brown fox jumped over the lazy dogs.\"\nsent2 = \"The lazy foxes are jumping over the crazy Dog.\"\n\nprint distance.nlevenshtein(sent1.split(), sent2.split(), method=1)\n\n# Normalization\nprint distance.hamming(\"fat\", \"cat\", normalized=True)\nprint distance.nlevenshtein(\"abc\", \"acd\", method=1) # shortest alignment\nprint distance.nlevenshtein(\"abc\", \"acd\", method=2) # longest alignment\n\n# Set measures\nprint distance.sorensen(\"decide\", \"resize\")\nprint distance.jaccard(\"decide\", \"resize\")", "Preprocessed Text Score\nUse text preprocessing with NLTK to split long strings into parts, and normalize them using Wordnet.", "def tokenize(sent):\n \"\"\"\n When passed in a sentence, tokenizes and normalizes the string,\n returning a list of lemmata.\n \"\"\"\n lemmatizer = nltk.WordNetLemmatizer() \n for token in nltk.wordpunct_tokenize(sent):\n token = token.lower()\n yield lemmatizer.lemmatize(token)\n\ndef normalized_jaccard(*args):\n try:\n return distance.jaccard(*[tokenize(arg) for arg in args])\n except UnicodeDecodeError:\n return 0.0\n\nprint normalized_jaccard(sent1, sent2)", "Similarity Vectors", "def similarity(prod1, prod2):\n \"\"\"\n Returns a similarity vector of match scores:\n [name_score, description_score, manufacturer_score, price_score]\n \"\"\"\n pair = (prod1, prod2)\n names = [r.get('name', None) or r.get('title', None) for r in pair]\n descr = [r.get('description') for r in pair]\n manuf = [r.get('manufacturer') for r in pair]\n price = [float(r.get('price')) for r in pair]\n \n return [\n normalized_jaccard(*names),\n normalized_jaccard(*descr),\n normalized_jaccard(*manuf),\n abs(1.0/(1+ (price[0] - price[1]))),\n ]\n\nprint similarity(X, Y)", "Weighted Pairwise Matching", "THRESHOLD = 0.90\nWEIGHTS = (0.6, 0.1, 0.2, 0.1)\n\nmatches = 0\nfor azprod in amazon.values():\n for googprod in google.values():\n vector = similarity(azprod, googprod)\n score = sum(map(lambda v: v[0]*v[1], zip(WEIGHTS, vector)))\n if score > THRESHOLD:\n matches += 1\n print \"{0:0.3f}: {1} {2}\".format(\n score, azprod['id'], googprod['id'].split(\"/\")[-1]\n )\n\nprint \"\\n{} matches discovered\".format(matches)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
recepkabatas/Spark
5_word2vec.ipynb
apache-2.0
[ "Deep Learning with TensorFlow\nCredits: Forked from TensorFlow by Google\nSetup\nRefer to the setup instructions.\nExercise 5\nThe goal of this exercise is to train a skip-gram model over Text8 data.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nimport collections\nimport math\nimport numpy as np\nimport os\nimport random\nimport tensorflow as tf\nimport urllib\nimport zipfile\nfrom matplotlib import pylab\nfrom sklearn.manifold import TSNE", "Download the data from the source website if necessary.", "#url = 'http://mattmahoney.net/dc/'\nimport urllib.request\nurl = urllib.request.urlretrieve(\"http://mattmahoney.net/dc/\")\ndef maybe_download(filename, expected_bytes):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if not os.path.exists(filename):\n filename, _ = urllib.request.urlretrieve(url + filename, filename)\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print ('Found and verified', filename)\n else:\n print (statinfo.st_size)\n raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')\n return filename\n\n#filename = maybe_download(\"text8.zip\",31344016)", "Read the data into a string.", "filename=(\"text8.zip\")\ndef read_data(filename):\n f = zipfile.ZipFile(filename)\n for name in f.namelist():\n return f.read(name).split()\n f.close()\n \nwords = read_data(filename)\nprint ('Data size', len(words))", "Build the dictionary and replace rare words with UNK token.", "vocabulary_size = 50000\n\ndef build_dataset(words):\n count = [['UNK', -1]]\n count.extend(collections.Counter(words).most_common(vocabulary_size - 1))\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n if word in dictionary:\n index = dictionary[word]\n else:\n index = 0 # dictionary['UNK']\n unk_count = unk_count + 1\n data.append(index)\n count[0][1] = unk_count\n reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) \n return data, count, dictionary, reverse_dictionary\n\ndata, count, dictionary, reverse_dictionary = build_dataset(words)\nprint ('Most common words (+UNK)', count[:5])\nprint ('Sample data', data[:10])\ndel words # Hint to reduce memory.", "Function to generate a training batch for the skip-gram model.", "data_index = 0\n\ndef generate_batch(batch_size, num_skips, skip_window):\n global data_index\n assert batch_size % num_skips == 0\n assert num_skips <= 2 * skip_window\n batch = np.ndarray(shape=(batch_size), dtype=np.int32)\n labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)\n span = 2 * skip_window + 1 # [ skip_window target skip_window ]\n buffer = collections.deque(maxlen=span)\n for _ in range(span):\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n for i in range(int(batch_size / num_skips)):\n target = skip_window # target label at the center of the buffer\n targets_to_avoid = [ skip_window ]\n for j in range(num_skips):\n while target in targets_to_avoid:\n target = random.randint(0, span - 1)\n targets_to_avoid.append(target)\n batch[i * num_skips + j] = buffer[skip_window]\n labels[i * num_skips + j, 0] = buffer[target]\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n return batch, labels\n\nbatch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)\nfor i in range(8):\n print (batch[i], '->', labels[i, 0])\n print 
(reverse_dictionary[batch[i]], '->', reverse_dictionary[labels[i, 0]])", "Train a skip-gram model.", "batch_size = 128\nembedding_size = 128 # Dimension of the embedding vector.\nskip_window = 1 # How many words to consider left and right.\nnum_skips = 2 # How many times to reuse an input to generate a label.\n# We pick a random validation set to sample nearest neighbors. here we limit the\n# validation samples to the words that have a low numeric ID, which by\n# construction are also the most frequent. \nvalid_size = 16 # Random set of words to evaluate similarity on.\nvalid_window = 100 # Only pick dev samples in the head of the distribution.\nvalid_examples = np.array(random.sample(range(valid_window), valid_size))\nnum_sampled = 64 # Number of negative examples to sample.\n\ngraph = tf.Graph()\n\nwith graph.as_default():\n\n # Input data.\n train_dataset = tf.placeholder(tf.int32, shape=[batch_size])\n train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # Variables.\n embeddings = tf.Variable(\n tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))\n softmax_weights = tf.Variable(\n tf.truncated_normal([vocabulary_size, embedding_size],\n stddev=1.0 / math.sqrt(embedding_size)))\n softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))\n \n # Model.\n # Look up embeddings for inputs.\n embed = tf.nn.embedding_lookup(embeddings, train_dataset)\n # Compute the softmax loss, using a sample of the negative labels each time.\n loss = tf.reduce_mean(\n tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,\n train_labels, num_sampled, vocabulary_size))\n\n # Optimizer.\n optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)\n \n # Compute the similarity between minibatch examples and all embeddings.\n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))\n normalized_embeddings = embeddings / norm\n valid_embeddings = tf.nn.embedding_lookup(\n normalized_embeddings, valid_dataset)\n similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))\n\nnum_steps = 100001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print (\"Initialized\")\n average_loss = 0\n for step in xrange(num_steps):\n batch_data, batch_labels = generate_batch(\n batch_size, num_skips, skip_window)\n feed_dict = {train_dataset : batch_data, train_labels : batch_labels}\n _, l = session.run([optimizer, loss], feed_dict=feed_dict)\n average_loss += l\n if step % 2000 == 0:\n if step > 0:\n average_loss = average_loss / 2000\n # The average loss is an estimate of the loss over the last 2000 batches.\n print (\"Average loss at step\", step, \":\", average_loss)\n average_loss = 0\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n if step % 10000 == 0:\n sim = similarity.eval()\n for i in xrange(valid_size):\n valid_word = reverse_dictionary[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = \"Nearest to %s:\" % valid_word\n for k in xrange(top_k):\n close_word = reverse_dictionary[nearest[k]]\n log = \"%s %s,\" % (log, close_word)\n print (log)\n final_embeddings = normalized_embeddings.eval()\n\nnum_points = 400\n\ntsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)\ntwo_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])\n\ndef plot(embeddings, labels):\n assert embeddings.shape[0] >= 
len(labels), 'More labels than embeddings'\n pylab.figure(figsize=(15,15)) # in inches\n for i, label in enumerate(labels):\n x, y = embeddings[i,:]\n pylab.scatter(x, y)\n pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',\n ha='right', va='bottom')\n pylab.show()\n\nwords = [reverse_dictionary[i] for i in range(1, num_points+1)]\nplot(two_d_embeddings, words)\n\nfrom pyspark import SparkContext\nfrom pyspark.mllib.feature import Word2Vec\n\n#sc = SparkContext(appName='Word2Vec')\ninp = sc.textFile(\"url.txt\").map(lambda row: row.split(\" \"))\nword2vec = Word2Vec()\nmodel = word2vec.fit(inp) #Results in exception...\nprint(model.getVectors)\nprint(model.getVectors)\nmodel.call\nmodel.findSynonyms\nmodel.load\nmodel.save\nmodel.transform\nmodel.getVectors\n\n\nsc\n\n\nfrom __future__ import print_function\n\nimport sys\n\nfrom pyspark import SparkContext\nfrom pyspark.mllib.feature import Word2Vec\n\nUSAGE = (\"bin/spark-submit --driver-memory 4g \"\n \"examples/src/main/python/mllib/word2vec.py text8_lines\")\n\nif __name__ == \"__main__\":\n if len(sys.argv) < 2:\n print(USAGE)\n sys.exit(\"Argument for file not provided\")\n file_path = sys.argv[1]\n file_path=\"url.txt\"\n # sc = SparkContext(appName='Word2Vec')\n inp = sc.textFile(file_path).map(lambda row: row.split(\" \"))\n\n word2vec = Word2Vec()\n model = word2vec.fit(inp)\n\n synonyms = model.findSynonyms('1', 5)\n \n\n for word, cosine_distance in synonyms:\n print(\"{}: {}\".format(word, cosine_distance))\n sc.stop()\n\nfrom pyspark.mllib.feature import HashingTF, IDF\n\n# Load documents (one per line).\ndocuments = sc.textFile(\"url.txt\").map(lambda line: line.split(\" \"))\n\nhashingTF = HashingTF()\ntf = hashingTF.transform(documents)\n\n# While applying HashingTF only needs a single pass to the data, applying IDF needs two passes:\n# First to compute the IDF vector and second to scale the term frequencies by IDF.\ntf.cache()\nidf = IDF().fit(tf)\ntfidf = idf.transform(tf)\n\n# spark.mllib's IDF implementation provides an option for ignoring terms\n# which occur in less than a minimum number of documents.\n# In such cases, the IDF for these terms is set to 0.\n# This feature can be used by passing the minDocFreq value to the IDF constructor.\nidfIgnore = IDF(minDocFreq=2).fit(tf)\ntfidfIgnore = idfIgnore.transform(tf)\n\nfrom pyspark.mllib.feature import Word2Vec\n\ninp = sc.textFile(\"data/mllib/sample_lda_data.txt\").map(lambda row: row.split(\" \"))\n\nword2vec = Word2Vec()\nmodel = word2vec.fit(inp)\n\nsynonyms = model.findSynonyms('1', 5)\n\nfor word, cosine_distance in synonyms:\n print(\"{}: {}\".format(word, cosine_distance))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
4dsolutions/Python5
Silicon Forest Math Series | RSA.ipynb
mit
[ "Silicon Forest Math Series<br/>Oregon Curriculum Network\nIntroduction to Public Key Cryptography\n\nHere in the Silicon Forest, we do not expect everyone to become a career computer programmer. \nWe do expect a lot of people will wish to program at some time in their career. \nCoding skills give you the power to control machines and you might find appropriate and life-enhancing uses for this type of power.\nTo help you get the flavor of coding, we leverage concepts we expect you're already getting through your math courses. \nIn moving from pre-computer math, to computer math, and back again, we develop important conceptual bridges.\nGenerating the Prime Numbers\nLets look at a first such concept, that of a prime number. \nThe Fundamental Theorem of Arithmetic says every positive integer distills into a unique list of prime number factors. Duplicates are allowed.\nBut what are primes in the first place? Numbers with no factors other than themselves.", "import pprint\n\ndef primes():\n \"\"\"generate successive prime numbers (trial by division)\"\"\"\n candidate = 1\n _primes_so_far = [2] # first prime, only even prime\n yield _primes_so_far[0] # share it!\n while True:\n candidate += 2 # check odds only from now on\n for prev in _primes_so_far:\n if prev**2 > candidate:\n yield candidate # new prime!\n _primes_so_far.append(candidate)\n break\n if not divmod(candidate, prev)[1]: # no remainder!\n break # done looping\n \np = primes() # generator function based iterator\npp = pprint.PrettyPrinter(width=40, compact=True)\npp.pprint([next(p) for _ in range(30)]) # next 30 primes please!", "The above algorithm is known as \"trial by division\". \nKeep track of all primes discovered so far, and test divide them, in increasing order, into a candidate number, until: \n(A) either one of the primes goes evenly, in which case move on to the next odd \nor \n(B) until we know our candidate is a next prime, in which case yield it and append it to the growing list.\nIf we get passed the 2nd root of the candidate, we conclude no larger factor will work, as we would have encountered it already as the smaller of the factor pair. \nPassing this 2nd root milestone triggers plan B. Then we advance to the next candidate, ad infinitum. \nPython pauses at each yield statement however, handing control back to the calling sequence, in this case a \"list comprehension\" containing a next() function for advancing to the next yield.\n\nCoprimes, Totatives, and the Totient of a Number\nFrom here, we jump to the idea of numbers being coprime to one another. A synonym for coprime is \"stranger.\" Given two ordinary positive integers, they're strangers if they have no prime factors in common. For that to be true, they'd have no shared factors at all (not counting 1).\nGuido van Rossum, the inventor of Python, gives us a pretty little implementation of what's known as Euclid's Method, an algorithm that's thousands of years old. It'll find the largest factor any two numbers have in common (gcd = \"greatest common divisor\").\nHere it is:", "def gcd(a, b):\n while b:\n a, b = b, a % b\n return a\n\nprint(gcd(81, 18))\nprint(gcd(12, 44))\nprint(gcd(117, 17)) # strangers", "How does Euclid's Method work? That's a great question and one your teacher should be able to explain. First see if you might figure it out for yourself... \nHere's one explanation:\nIf a smaller number divides a larger one without remainder then we're done, and that will always happen when that smaller number is 1 if not before. 
\nIf there is a remainder, what then? Lets work through an example.\n81 % 18 returns a remainder of 9 in the first cycle. 18 didn't go into 81 evenly but if another smaller number goes into both 9, the remainder, and 18, then we have our answer. \n9 itself does the trick and we're done.", "print(81 % 18) # 18 goes into \nprint(18 % 9) # so the new b becomes the answer", "Suppose we had asked for gcd(18, 81) instead? 18 is the remainder (no 81s go into it) whereas b was 81, so the while loop simply flips the two numbers around to give the example above.\nThe gcd function now gives us the means to compute totients and totatives of a number. The totatives of N are the strangers less than N, whereas the totient is the number of such strangers.", "def totatives(N):\n # list comprehension!\n return [x for x in range(1,N) if gcd(x,N)==1] # strangers only\n \ndef T(N):\n \"\"\"\n Returns the number of numbers between (1, N) that \n have no factors in common with N: called the \n 'totient of N' (sometimes phi is used in the docs)\n \"\"\"\n return len(totatives(N)) # how many strangers did we find?\n\nprint(\"Totient of 100:\", T(100))\nprint(\"Totient of 1000:\", T(1000))", "Where to go next is in the direction of Euler's Theorem, a generalization of Fermat's Little Theorem. The built-in pow(m, n, N) function will raise m to the n modulo N in an efficient manner.", "def powers(N):\n totient = T(N)\n print(\"Totient of {}:\".format(N), totient)\n for t in totatives(N):\n values = [pow(t, n, N) for n in range(totient + 1)]\n cycle = values[:values.index(1, 1)] # first 1 after initial 1\n print(\"{:>2}\".format(len(cycle)), cycle)\n \npowers(17)", "Above we see repeating cycles of numbers, with the length of the cycles all dividing 16, the totient of the prime number 17. \npow(14, 2, 17) is 9, pow(14, 3, 17) is 7, and so on, coming back around the 14 at pow(14, 17, 17) where 17 is 1 modulo 16.\nNumbers raised to any kth power modulo N, where k is 1 modulo the totient of N, end up staying the same number. For example, pow(m, (n * T(N)) + 1, N) == m for any n.", "from random import randint\n\ndef check(N):\n totient = T(N)\n for t in totatives(N):\n n = randint(1, 10)\n print(t, pow(t, (n * totient) + 1, N))\n\ncheck(17)", "In public key cryptography, RSA in particular, a gigantic composite N is formed from two primes p and q. \nN's totient will then be (p - 1) * (q - 1). For example if N = 17 * 23 (both primes) then T(N) = 16 * 22.", "p = 17\nq = 23\nT(p*q) == (p-1)*(q-1)", "From this totient, we'll be able to find pairs (e, d) such that (e * d) modulo T(N) == 1. \nWe may find d, given e and T(N), by means of the Extended Euclidean Algorithm (xgcd below).\nRaising some numeric message m to the eth power modulo N will encrypt the message, giving c. 
\nRaising the encrypted message c to the dth power will cycle it back around to its starting value, thereby decrypting it.\nc = pow(m, e, N)\nm = pow(c, d, N)\nwhere (e * d) % T(N) == 1.\nFor example:", "p = 37975227936943673922808872755445627854565536638199\nq = 40094690950920881030683735292761468389214899724061\nRSA_100 = p * q\ntotient = (p - 1) * (q - 1)\n\n# https://en.wikibooks.org/wiki/\n# Algorithm_Implementation/Mathematics/\n# Extended_Euclidean_algorithm\n\ndef xgcd(b, n):\n x0, x1, y0, y1 = 1, 0, 0, 1\n while n != 0:\n q, b, n = b // n, n, b % n\n x0, x1 = x1, x0 - q * x1\n y0, y1 = y1, y0 - q * y1\n return b, x0, y0\n\n# x = mulinv(b) mod n, (x * b) % n == 1\ndef mulinv(b, n):\n g, x, _ = xgcd(b, n)\n if g == 1:\n return x % n\n\ne = 3\nd = mulinv(e, totient)\nprint((e*d) % totient)\n\nimport binascii\nm = int(binascii.hexlify(b\"I'm a secret\"), 16)\nprint(m) # decimal encoding of byte string\n\nc = pow(m, e, RSA_100) # raise to eth power\nprint(c)\n\nm = pow(c, d, RSA_100) # raise to dth power\nprint(m)\n\nbinascii.unhexlify(hex(m)[2:]) # m is back where we started.", "What makes RSA hard to crack is that although N is public, d is not, and N's factors p and q have been thrown away. \nBecause factoring N back into p, q is a super hard problem, if N is large enough, RSA remains a secure algorithm, and a favorite one, now that the patent has expired.\nThe idea is when you want to encrypt a message for Alice, look up her public key N. Only she has private key d, derived from her N at the time. \nYou may sign your message by raising it to your own secret key power, modulo your own public N, and she'll know for sure the message is from you, given she'll have your public key to decrypt it, once her own secret d has been applied.\n\nFor Further Reading\nLinked Jupyter Notebooks:\n\nGenerators and Coroutines -- introduces Quadrays\nPi Day Fun -- Ramanujan!\nComposition of Functions -- advanced Python\nSTEM Mathematics" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ameliecordier/iutdoua-info_algo2015
2015-12-03 - TD16 - Récursivité et tableaux.ipynb
cc0-1.0
[ "Préambule : nous avons commencé le TD par un rappel sur la récursivité et nous avons fait plusieurs exercices sur Pythontutor de sorte à ce que vous visualisiez la pile d'exécution et que vous compreniez bien comment sont effectués les appels récursifs et les instructions situées après un appel récursif dans le corps d'une méthode. \nAttention, important : vous devez travailler par vous-mêmes et essayer de comprendre ce que font les algorithmes que vous lisez et que vous écrivez. N'hésitez pas à prendre un crayon et une feuille de brouillon pour dérouler, étape par étape, ce qui se passe dans l'ordinateur. N'hésitez pas non plus à refaire des simulations avec Python Tutor de sorte à mieux comprendre ce qui se passe lorsque vous déclenchez un appel récursif. \nExercice 1. Écrire un algorithme récursif qui permet de vérifier si une chaîne contient un caractère. Rappel : nous avons déjà écrit la solution de cet exercice en itératif.", "%load_ext doctestmagic\n\ndef rechercheRecursive (chaine,carac,i):\n '''\n :entree: chaine (string)\n :entree: caractère (string)\n :entree: i (int) \n :sortie: present (booleen)\n :pre-conditions: i doit être fixé à 0 lors de l'appel, carac doit être un caractère seul, la chaîne peut être vide.\n :post-condions: le booléen est fixé à vrai si la chaîne contient le caractère et à faux sinon (y compris dans le cas de la chaîne vide)\n >>> rechercheRecursive(\"Bonjour\", \"j\", 0)\n True\n >>> rechercheRecursive(\"Bonjour\", \"r\", 0)\n True\n >>> rechercheRecursive(\"\", \"j\", 0)\n False\n >>> rechercheRecursive(\"Bonjour\", \"a\", 0)\n False\n '''\n if i==len(chaine):\n present=False\n elif chaine[i]==carac:\n present=True\n else:\n present=rechercheRecursive(chaine,carac,i+1)\n return present\nprint(rechercheRecursive(\"bonjour\",'a',0))\nprint(rechercheRecursive(\"bonjour\",'j',0))\nprint(rechercheRecursive(\"\",'j',0))\nprint(rechercheRecursive(\"bonjour\",'r',0))\n\n\n%doctest rechercheRecursive", "La solution précédente est valide mais elle a l'inconvénient d'utiliser un indice en passage de paramètre, indice qui doit systématiquement être fixé à 0 lors de l'appel de la méthode. Une solution consisterait à \"encapsuler\" l'appel de cette méthode dans une autre méthode qui aurait une spécification plus simple, mais c'est un peu \"trop facile\". La solution ci-dessous est nettement plus élégante, même si elle a l'inconvénient de construire une nouvelle chaîne de caractères à chaque appel récursif.", "def rechercheRecursiveBis(chaine,carac):\n '''\n :entree: chaine (string)\n :entree: caractère (string)\n :sortie: present (booleen)\n :pre-conditions: carac doit être un caractère seul, la chaîne peut être vide.\n :post-condions: le booléen est fixé à vrai si la chaîne contient le caractère et à faux sinon (y compris dans le cas de la chaîne vide)\n >>> rechercheRecursiveBis(\"Bonjour\", \"j\")\n True\n >>> rechercheRecursiveBis(\"Bonjour\", \"r\")\n True\n >>> rechercheRecursiveBis(\"\", \"j\")\n False\n >>> rechercheRecursiveBis(\"Bonjour\", \"a\")\n False\n '''\n if len(chaine)==0:\n a=False\n elif chaine[0]==carac:\n a=True\n else: \n a=rechercheRecursiveBis(chaine[1:],carac)\n return a\nprint(rechercheRecursiveBis(\"bonjour\",'a'))\nprint(rechercheRecursiveBis(\"bonjour\",'j'))\nprint(rechercheRecursiveBis(\"bonjour\",'r'))\nprint(rechercheRecursiveBis(\"\",\"r\"))\n\n\n", "Exercice 2. Écrire une méthode récursive pour calculer la somme des éléments d'une liste. 
Vous écrirez également le contrat.", "def sommeRec(l):\n '''\n :entree l: une liste de nombres (entiers ou flottants)\n :sortie somme: la somme des éléments de la liste\n :pre-conditions: la liste peut être vide\n :post-condition: somme contient la somme des éléments de la liste, et est donc du même type que les éléments. \n >>> sommeRec([1, 2, 3])\n 6\n >>> sommeRec([])\n 0\n >>> sommeRec([6, 42.2, 34])\n 82.2\n '''\n somme=0\n if len(l)>1:\n somme=l[0]+sommeRec(l[1:])\n elif len(l)==1:\n somme+=l[0]\n return somme\nprint(sommeRec([1, 2, 3]))\nprint(sommeRec([]))\nprint(sommeRec([6, 42.2, 34]))\n\n%doctest sommeRec", "Encore une fois, la solution ci-dessus est correcte mais elle est loin d'être \"simple\" et facile à lire pour quelqu'un d'autre que celui ou celle qui a écrit l'algorithme. On va donc (ci-dessous) proposer une ré-écriture plus simple.", "def sommeRecBis(tab):\n '''\n :entree l: une liste de nombres (entiers ou flottants)\n :sortie somme: la somme des éléments de la liste\n :pre-conditions: la liste peut être vide\n :post-condition: somme contient la somme des éléments de la liste, et est donc du même type que les éléments. \n >>> sommeRecBis([1, 2, 3])\n 6\n >>> sommeRecBis([])\n 0\n >>> sommeRecBis([6, 42.2, 34])\n 82.2\n '''\n \n if len(tab) == 0:\n somme = 0\n else:\n somme = tab[0]+sommeRecBis(tab[1:])\n return somme\n\nprint(sommeRecBis([1, 2, 3]))\nprint(sommeRecBis([]))\nprint(sommeRecBis([6, 42.2, 34]))\n\n%doctest sommeRecBis", "Exerice 3. Écrire un algorithme qui permet de rechercher un nombre dans un tableau trié. Proposez une solution récursive et une solution non récursive.", "def rechercheTab(tab,a):\n '''\n :entree tab: un tableau de nombres (entiers ou flottants) triés\n :entree a: le nombre recherché\n :sortie i: l'indice de la case du tableau dans laquelle se trouve le nombre. \n :pré-conditions: le tableau est trié par ordre croissant de valeur.\n :post-condition: l'indice de la première occurrence trouvée du nombre est renvoyé. \n Si le nombre n'est pas présent dans le tableau, on retourne -1. \n :Remarque : on appelle ce type de recherche \"recherche par dichotomie\". \n >>> rechercheTab([0,1,2,3,4],1)\n 1\n '''\n i=-1\n b=len(tab)//2\n if tab[b]==a:\n i=b\n elif b == 0:\n i = -1\n elif tab[b]>a: # Si la valeur du milieu du tableau est plus grande que la valeur recherchée, on recherche dans la partie gauche du tableau\n i=rechercheTab(tab[:b],a)\n else: # Sinon, on recherche dans la partie droite et on gère le décalage des indices. \n i=rechercheTab(tab[b:],a)\n if i != -1:\n i = i+b\n return i\n\nprint(rechercheTab([0,1,2,3,4],1))\nprint(rechercheTab([0,1,2,3,4],5))\nprint(rechercheTab([0,1,2,3,4],4))\nprint(rechercheTab([0,1.3,2.7,3.4],0))\n\n", "Devoirs à faire à la maison :\n- Pratiquer la récursivité en déroulant les algos à la main et / ou en utilisant Python Tutor ;\n- Relire la page d'exercices sur les tris (dans le manuel d'exercices) ;\n- Préparer la liste des exercices / notions que vous voulez revoir dans les prochains TD." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pushpajnc/models
Titanic_Survival_Exploration/Titanic_Survival_Exploration-V1.ipynb
mit
[ "Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive.\nGetting Started", "import numpy as np\nimport pandas as pd\nimport pylab as pl\n\n# RMS Titanic data visualization code \nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct.", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "Making Predictions\nIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.\nThe predictions_0 function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. 
Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)\n\nprint accuracy_score(outcomes, predictions)", "Using the RMS Titanic data, a prediction would be 61.62% accurate that none of the passengers survived.\n\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.", "survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if(passenger['Sex'] == 'female'):\n predictions.append(1)\n else:\n predictions.append(0)\n \n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)\n\nprint accuracy_score(outcomes, predictions)", "Therefore, the prediction that all female passengers survived and the remaining passengers did not survive, would be 78.68% accurate.\n\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. 
\"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex'] == 'female':\n predictions.append(1)\n elif passenger['Age'] < 10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)", "Prediction: all female passengers and all male passengers younger than 10 survived", "print accuracy_score(outcomes, predictions)", "Thus, the accuracy increases with above prediction to 79.35%\n\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone.", "survival_stats(data, outcomes, 'Sex')\nsurvival_stats(data, outcomes, 'Pclass')\nsurvival_stats(data, outcomes, 'Pclass',[\"Sex == 'female'\"])\nsurvival_stats(data, outcomes, 'SibSp', [\"Sex == 'female'\", \"Pclass == 3\"])\n\nsurvival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Age < 18\"])\nsurvival_stats(data, outcomes, 'Pclass', [\"Sex == 'male'\", \"Age < 15\"])\n\nsurvival_stats(data, outcomes, 'Age',[\"Sex == 'female'\"])\n\nsurvival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Pclass == 1\"] )\n\nsurvival_stats(data, outcomes, 'Sex', [\"Age < 10\", \"Pclass == 1\"] )\nsurvival_stats(data, outcomes, 'SibSp', [\"Sex == 'male'\"])\n\n\ndef predictions_3(data): \n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n predictions = []\n for _, passenger in data.iterrows():\n if ( 'Master' in passenger['Name'] and np.isnan(passenger['Age'])) :\n predictions.append(1)\n continue\n if ( passenger['Sex'] == 'male' and passenger['Age'] > 20\n and passenger['Age'] < 41 and passenger['Pclass'] == 1) :\n predictions.append(1)\n continue\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex'] == 'female':\n if(passenger['Pclass'] < 3):\n predictions.append(1)\n elif passenger['SibSp'] < 1:\n predictions.append(1)\n else:\n predictions.append(0)\n elif (passenger['Age'] < 10):\n predictions.append(1)\n else:\n predictions.append(0)\n # Return our predictions\n print len(predictions)\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)\n\nprint accuracy_score(outcomes, predictions)", "With above features, I obtain a prediction accuracy of 81.03%\nDescription:\nWe have used features in the following order:\n1) Sex: Fig1 shows that the survival rate for females is higher than that of males.\n2) Pclass: Fig2 shows that the survival rate is higher than 50% in Class1 and less than 50% in Class2. We then looked at the survival rate of women in different classes. Fig3 shows that almost all the women survived in Class1 and Class2. Therefore, we used the filter that:\nif passenger['Sex'] == 'female':\n if(passenger['Pclass'] < 3):\n predictions.append(1)\nIn Class3, there is a 50% chance of survival for women. We want to see which women in that 50% survived. For that we looked at the SibSp feature. Fig5 shows that single women have a better chance of survival. Therefore, we used the following filter in conjunction with the above one:\nelif passenger['SibSp'] < 1:\n predictions.append(1)\n3) Age: Next feature we looked is Age of males. Next figure shows that males of age below 10 have a higher chance of survival. 
This figure shows that all males of age below 10 survived in Class1 and Class2. Therefore, we added the following filter in conjunction with the filter above: \n    elif (passenger['Age'] < 10 and passenger['Pclass'] < 3):\n        predictions.append(1)\n    else:\n        predictions.append(0)\nFig8 shows that almost all the males between the ages of 20 and 41 in Class1 survived, which motivates the filter: \n    if (passenger['Sex'] == 'male'\n        and passenger['Age'] > 20 and passenger['Age'] < 41 and passenger['Pclass'] == 1):\n        predictions.append(1)\n        continue\nAt each step, we are trying to decrease the entropy of each partition.\nSince we were asked to improve the accuracy to at least 80%, we stopped there. \nThere are ways to improve the accuracy further, for example by:\n1) Looking at the ages of the single women in filter 2 above, together with their Parch information, and modifying that filter to:\nelif (passenger['SibSp'] < 1 and passenger['Age'] < 15):\n    predictions.append(1)\nConclusion\nAfter several iterations of exploring and conditioning on the data, I have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the decision tree. A decision tree splits a set of data into smaller and smaller groups (called nodes) by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups is more homogeneous (contains similar labels) than before.\nA decision tree is just one of many models that come from supervised learning. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like 'Survived', or a numerical, continuous value like the price of a house.\nAnother use of supervised learning is building spam-filtering models. The outcome we are predicting is a binary spam/not-spam label. Misspellings and the sender's email address can be used as two important features for making the prediction." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MonicaGutierrez/PracticalMachineLearningClass
exercises/02-Churn model-solution.ipynb
mit
[ "Exercise 02\nEstimate a classifier to predict churn\nProblem Formulation\nCustomer Churn: losing/attrition of the customers from the company. Especially, the industries that the user acquisition is costly, it is crucially important for one company to reduce and ideally make the customer churn to 0 to sustain their recurring revenue. If you consider customer retention is always cheaper than customer acquisition and generally depends on the data of the user(usage of the service or product), it poses a great/exciting/hard problem for machine learning.\nData\nDataset is from a telecom service provider where they have the service usage(international plan, voicemail plan, usage in daytime, usage in evenings and nights and so on) and basic demographic information(state and area code) of the user. For labels, I have a single data point whether the customer is churned out or not.", "# Download the dataset\nfrom urllib import request\nresponse = request.urlopen('https://raw.githubusercontent.com/EricChiang/churn/master/data/churn.csv')\nraw_data = response.read().decode('utf-8')\n\n# Convert to numpy\nimport numpy as np\ndata = []\nfor line in raw_data.splitlines()[1:]:\n words = line.split(',')\n data.append(words)\ndata = np.array(data)\ncolumn_names = raw_data.splitlines()[0].split(',')\nn_obs = data.shape[0]\n\nprint(column_names)\nprint(data.shape)\n\ndata[:2]\n\n# Select only the numeric features\nX = data[:, [1,2,6,7,8,9,10]].astype(np.float)\n# Convert bools to floats\nX_ = (data[:, [4,5]] == 'no').astype(np.float)\nX = np.hstack((X, X_))\nY = (data[:, -1] == 'True.').astype(np.int)\n\nX[:2]\n\nprint('Number of churn cases ', Y.sum())", "Exercise 02.1\nSplit the training set in two sets with 70% and 30% of the data, respectively. \n\nPartir la base de datos es dos partes de 70%", "\n# Insert code here\nrandom_sample = np.random.rand(n_obs)\nX_train, X_test = X[random_sample<0.6], X[random_sample>=0.6]\nY_train, Y_test = Y[random_sample<0.6], Y[random_sample>=0.6]\n\nprint(Y_train.shape, Y_test.shape)", "Exercise 02.2\nTrain a logistic regression using the 70% set\n\nEntrenar una regresion logistica usando la particion del 70%", "\n# Insert code here\nfrom sklearn.linear_model import LogisticRegression\nclf = LogisticRegression()\nclf.fit(X_train, Y_train)", "Exercise 02.3\na) Create a confusion matrix using the prediction on the 30% set.\nb) Estimate the accuracy of the model in the 30% set\n\na) Estimar la matriz de confusion en la base del 30%.\nb) Calcular el poder de prediccion usando la base del 30%.", "# Insert code here\ny_pred = clf.predict(X_test)\n\nfrom sklearn.metrics import confusion_matrix\nconfusion_matrix(Y_test, y_pred)\n\n(Y_test == y_pred).mean()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mjuric/LSSTC-DSFP-Sessions
Session4/Day1/LSSTC-DSFP4-Juric-FrequentistAndBayes-01-Probability.ipynb
mit
[ "Frequentism and Bayesianism I: a Practical Introduction\nMario Juric & Jake VanderPlas, University of Washington\ne-mail: &#109;&#106;&#117;&#114;&#105;&#99;&#64;&#97;&#115;&#116;&#114;&#111;&#46;&#119;&#97;&#115;&#104;&#105;&#110;&#103;&#116;&#111;&#110;&#46;&#101;&#100;&#117;, twitter: @mjuric\n\nThis lecture is based on a post on the blog Pythonic Perambulations, by Jake VanderPlas. The content is BSD licensed. See also VanderPlas (2014) \"Frequentism and Bayesianism: A Python-driven Primer\".\nSlides built using the excellent RISE Jupyter extension by Damian Avila.\n<!-- PELICAN_BEGIN_SUMMARY -->\n\nOne of the first things a scientist hears about statistics is that there is are two different approaches: Frequentism and Bayesianism. Despite their importance, many scientific researchers never have opportunity to learn the distinctions between them and the different practical approaches that result.\nThe purpose of this lecture is to synthesize the philosophical and pragmatic aspects of the frequentist and Bayesian approaches, so that scientists like you might be better prepared to understand the types of data analysis people do.\nWe'll start by addressing the philosophical distinctions between the views, and from there move to discussion of how these ideas are applied in practice, with some Python code snippets demonstrating the difference between the approaches.\n<!-- PELICAN_END_SUMMARY -->\n\nPrerequisites\n\nPython Version 2.7\n\nThe \"PyData\" data science software stack (comes with Anaconda).\n\n\nemcee -- a pure-Python implementation of Goodman & Weare’s Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler.\n\nAstroML -- a library of statistical and machine learning routines for analyzing, loading, and visualizing astronomical data in Python.\n\nFrequentism vs. Bayesianism: a Philosophical Debate\n<br>\n<center>Fundamentally, the disagreement between frequentists and Bayesians concerns the definition (interpretation) of probability.</center>\n<br>\nFrequentist Probability\nFor frequentists, probability only has meaning in terms of a limiting case of repeated measurements.\nThat is, if I measure the photon flux $F$ from a given star (we'll assume for now that the star's flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value.\nFor frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense.\nBayesian Probability\nFor Bayesians, the concept of probability is extended to cover degrees of certainty about statements. You can think of it as an extension of logic to statements where there's uncertainty.\nSay a Bayesian claims to measure the flux $F$ of a star with some probability $P(F)$: that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement reasult will be.\nFor Bayesians, probabilities are fundamentally related to our own knowledge about an event. 
This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range.\nThat probability codifies our knowledge of the value based on prior information and/or available data.\nThe surprising thing is that this arguably subtle difference in philosophy leads, in practice, to vastly different approaches to the statistical analysis of data. Below I will give a few practical examples of the differences in approach, along with associated Python code to demonstrate the practical aspects of the resulting methods.\nFrequentist and Bayesian Approaches in Practice: Counting Photons\nHere we'll take a look at an extremely simple problem, and compare the frequentist and Bayesian approaches to solving it. There's necessarily a bit of mathematical formalism involved, but we won't go into too much depth or discuss too many of the subtleties.\nIf you want to go deeper, you might consider taking a look at chapters 4-5 of the Statistics, Data Mining, and Machine Learning in Astronomy textbook.\nThe Problem: Simple Photon Counts\nImagine that we point our telescope to the sky, and observe the light coming from a single star. For the time being, we'll assume that the star's true flux is constant with time, i.e. that is it has a fixed value $F_{\\rm true}$ (we'll also ignore effects like sky noise and other sources of systematic error). We'll assume that we perform a series of $N$ measurements with our telescope, where the $i^{\\rm th}$ measurement reports the observed photon flux $F_i$ and error $e_i$.\nThe question is, given this set of measurements $D = {F_i,e_i}$, what is our best estimate of the true flux $F_{\\rm true}$?\nAside on measurement errors\nWe'll make the (reasonable) assumption that errors are Gaussian:\n* In a Frequentist perspective, $e_i$ is the standard deviation of the results of a single measurement event in the limit of repetitions of that event.\n* In the Bayesian perspective, $e_i$ is the standard deviation of the (Gaussian) probability distribution describing our knowledge of that particular measurement given its observed value)\nHere we'll use Python to generate some toy data to demonstrate the two approaches to the problem.\nBecause the measurements are number counts, a Poisson distribution is a good approximation to the measurement process:", "# Generating some simple photon count data\nimport numpy as np\nfrom scipy import stats\nnp.random.seed(1) # for repeatability\n\nF_true = 1000 # true flux, say number of photons measured in 1 second\nN = 50 # number of measurements\nF = stats.poisson(F_true).rvs(N) # N measurements of the flux\ne = np.sqrt(F) # errors on Poisson counts estimated via square root", "Now let's make a simple visualization of the \"measured\" data:", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5)\nax.vlines([F_true], 0, N, linewidth=5, alpha=0.2)\nax.set_xlabel(\"Flux\");ax.set_ylabel(\"measurement number\");", "These measurements each have a different error $e_i$ which is estimated from Poisson statistics using the standard square-root rule.\nIn this toy example we already know the true flux $F_{\\rm true}$, but the question is this: given our measurements and errors, what is our best estimate of the true flux?\nLet's take a look at the frequentist and Bayesian approaches to solving this.\nFrequentist Approach to Simple Photon Counts\nWe'll start with the classical frequentist 
maximum likelihood approach. Given a single observation $D_i = (F_i, e_i)$, we can compute the probability distribution of the measurement given the true flux $F_{\\rm true}$ given our assumption of Gaussian errors:\n$$ P(D_i~|~F_{\\rm true}) = \\frac{1}{\\sqrt{2\\pi e_i^2}} \\exp{\\left[\\frac{-(F_i - F_{\\rm true})^2}{2 e_i^2}\\right]} $$\nThis should be read \"the probability of $D_i$ given $F_{\\rm true}$ equals ...\". You should recognize this as a normal distribution with mean $F_{\\rm true}$ and standard deviation $e_i$.\nWe construct the likelihood function by computing the product of the probabilities for each data point:\n$$\\mathcal{L}(D~|~F_{\\rm true}) = \\prod_{i=1}^N P(D_i~|~F_{\\rm true})$$\nHere $D = {D_i}$ represents the entire set of measurements. Because the value of the likelihood can become very small, it is often more convenient to instead compute the log-likelihood. Combining the previous two equations and computing the log, we have\n$$\\log\\mathcal{L} = -\\frac{1}{2} \\sum_{i=1}^N \\left[ \\log(2\\pi e_i^2) + \\frac{(F_i - F_{\\rm true})^2}{e_i^2} \\right]$$\nWhat we'd like to do is determine $F_{\\rm true}$ such that the likelihood is maximized. For this simple problem, the maximization can be computed analytically (i.e. by setting $d\\log\\mathcal{L}/dF_{\\rm true} = 0$). This results in the following observed estimate of $F_{\\rm true}$:\n$$ F_{\\rm est} = \\frac{\\sum w_i F_i}{\\sum w_i};~~w_i = 1/e_i^2 $$\nNotice that in the special case of all errors $e_i$ being equal, this reduces to\n$$ F_{\\rm est} = \\frac{1}{N}\\sum_{i=1}^N F_i $$\nThat is, in agreement with intuition, $F_{\\rm est}$ is simply the mean of the observed data when errors are equal.\nWe can go further and ask what the error of our estimate is? In the frequentist approach, this can be accomplished by fitting a Gaussian approximation to the likelihood curve at maximum; in this simple case this can also be solved analytically.\nIt can be shown that the standard deviation of this Gaussian approximation is:\n$$ \\sigma_{\\rm est} = \\left(\\sum_{i=1}^N w_i \\right)^{-1/2} $$\nThese results are fairly simple calculations; let's evaluate them for our toy dataset:", "w = 1. / e ** 2\nprint(\"\"\"\n F_true = {0}\n F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)\n \"\"\".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))", "We find that for 50 measurements of the flux, our estimate has an error of about 0.4% and is consistent with the input value.\nBayesian Approach to Simple Photon Counts\nThe Bayesian approach, as you might expect, begins and ends with probabilities. It recognizes that what we fundamentally want to compute is our knowledge of the parameters in question, i.e. in this case,\n$$ P(F_{\\rm true}~|~D) $$\nN.b.: that this formulation of the problem is fundamentally contrary to the frequentist philosophy, which says that probabilities have no meaning for model parameters like $F_{\\rm true}$. Within the Bayesian interpretation of probability, this is perfectly acceptable. 
\nTo compute this result, Bayesians next apply Bayes' Theorem, a fundamental law of probability:\n$$ P(F_{\\rm true}~|~D) = \\frac{P(D~|~F_{\\rm true})~P(F_{\\rm true})}{P(D)} $$\nThough Bayes' theorem is where Bayesians get their name, it is not this law itself that is controversial, but the Bayesian interpretation of probability implied by the term $P(F_{\\rm true}~|~D)$.\nLet's take a look at each of the terms in this expression:\n\n$P(F_{\\rm true}~|~D)$: The posterior, or the probability of the model parameters given the data: this is the result we want to compute.\n$P(D~|~F_{\\rm true})$: The likelihood, which is proportional to the $\\mathcal{L}(D~|~F_{\\rm true})$ in the frequentist approach, above.\n$P(F_{\\rm true})$: The model prior, which encodes what we knew about the model prior to the application of the data $D$.\n$P(D)$: The data probability, which in practice amounts to simply a normalization term.\n\nIf we set the prior $P(F_{\\rm true}) \\propto 1$ (a flat prior), we find\n$$P(F_{\\rm true}|D) \\propto \\mathcal{L}(D|F_{\\rm true})$$\nand the Bayesian probability is maximized at precisely the same value as the frequentist result! So despite the philosophical differences, we see that (for this simple problem at least) the Bayesian and frequentist point estimates are equivalent.\nBut What About the Prior?\nWe glossed over something here: the prior, $P(F_{\\rm true})$.\nThe prior allows inclusion of other information into the computation, which becomes very useful in cases where multiple measurement strategies are being combined to constrain a single model (as is the case in, e.g. cosmological parameter estimation).\nThe necessity to specify a prior, however, is one of the more controversial pieces of Bayesian analysis.\nBut What About the Prior?\nA frequentist will point out that the prior is problematic when no true prior information is available. Though it might seem straightforward to use a noninformative prior like the flat prior mentioned above, there are some surprisingly subtleties involved. It turns out that in many situations, a truly noninformative prior does not exist!\nFrequentists point out that the subjective choice of a prior which necessarily biases your result has no place in statistical data analysis. A Bayesian would counter that frequentism doesn't solve this problem, but simply skirts the question.\nFrequentism can often be viewed as simply a special case of the Bayesian approach for some (implicit) choice of the prior: a Bayesian would say that it's better to make this implicit choice explicit, even if the choice might include some subjectivity.\nPhoton Counts: the Bayesian approach\nLeaving these philosophical debates aside for the time being, let's address how Bayesian results are generally computed in practice.\nFor a one parameter problem like the one considered here, it's as simple as computing the posterior probability $P(F_{\\rm true}~|~D)$ as a function of $F_{\\rm true}$: this is the distribution reflecting our knowledge of the parameter $F_{\\rm true}$.\nBut as the dimension of the model grows, this direct approach becomes increasingly intractable. For this reason, Bayesian calculations often depend on sampling methods such as Markov Chain Monte Carlo (MCMC). Here, we'll be using Dan Foreman-Mackey's excellent emcee package. 
Keep in mind here that the goal is to generate a set of points drawn from the posterior probability distribution, and to use those points to determine the answer we seek.\nTo perform this MCMC, we start by defining Python functions for the prior $P(F_{\\rm true})$, the likelihood $P(D~|~F_{\\rm true})$, and the posterior $P(F_{\\rm true}~|~D)$, noting that none of these need be properly normalized. \nOur model here is one-dimensional, but to handle multi-dimensional models we'll define the model in terms of an array of parameters $\\theta$, which in this case is $\\theta = [F_{\\rm true}]$:", "def log_prior(theta):\n return 1 # flat prior\n\ndef log_likelihood(theta, F, e):\n return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)\n + (F - theta[0]) ** 2 / e ** 2)\n\ndef log_posterior(theta, F, e):\n return log_prior(theta) + log_likelihood(theta, F, e)", "Now we set up the problem, including generating some random starting guesses for the multiple chains of points.", "ndim = 1 # number of parameters in the model\nnwalkers = 50 # number of MCMC walkers\nnburn = 1000 # \"burn-in\" period to let chains stabilize\nnsteps = 2000 # number of MCMC steps to take\n\n# we'll start at random locations between 0 and 2000\nstarting_guesses = 2000 * np.random.rand(nwalkers, ndim)\n\nimport emcee\nsampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])\nsampler.run_mcmc(starting_guesses, nsteps)\n\nsample = sampler.chain # shape = (nwalkers, nsteps, ndim)\nsample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points", "If this all worked correctly, the array sample should contain a series of 50000 points drawn from the posterior. Let's plot them and check:", "# plot a histogram of the sample\nplt.hist(sample, bins=50, histtype=\"stepfilled\", alpha=0.3, normed=True)\n\n# plot a best-fit Gaussian\nF_fit = np.linspace(975, 1025)\npdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)\n\nplt.plot(F_fit, pdf, '-k')\nplt.xlabel(\"F\"); plt.ylabel(\"P(F)\")", "We end up with a sample of points drawn from the (normal) posterior distribution. The mean and standard deviation of this posterior are the corollary of the frequentist maximum likelihood estimate above:", "print(\"\"\"\n F_true = {0}\n F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)\n \"\"\".format(F_true, np.mean(sample), np.std(sample), N))", "We see that as expected for this simple problem, the Bayesian approach yields the same result as the frequentist approach!\nDiscussion\nNow, you might come away with the impression that the Bayesian method is unnecessarily complicated, and in this case it certainly is. Using an Affine Invariant Markov Chain Monte Carlo Ensemble sampler to characterize a one-dimensional normal distribution is a bit like using the Death Star to destroy a beach ball.\nBut we did it here because it demonstrates an approach that can scale to complicated posteriors in many, many dimensions, and can provide nice results in more complicated situations where an analytic likelihood approach is not possible.\nAs a side note, you might also have noticed one little sleight of hand: at the end, we use a frequentist approach to characterize our posterior samples! When we computed the sample mean and standard deviation above, we were employing a distinctly frequentist technique to characterize the posterior distribution. The pure Bayesian result for a problem like this would be to report the posterior distribution itself (i.e. its representative sample), and leave it at that. 
That is, in pure Bayesianism the answer to a question is not a single number with error bars; the answer is the posterior distribution over the model parameters!\nAdding a Dimension: Exploring a more sophisticated model\nLet's briefly take a look at a more complicated situation, and compare the frequentist and Bayesian results yet again. Above we assumed that the star was static: now let's assume that we're looking at an object which we suspect has some stochastic variation &mdash; that is, it varies with time, but in an unpredictable way (a Quasar is a good example of such an object).\nWe'll propose a simple 2-parameter Gaussian model for this object: $\\theta = [\\mu, \\sigma]$ where $\\mu$ is the mean value, and $\\sigma$ is the standard deviation of the variability intrinsic to the object. Thus our model for the probability of the true flux at the time of each observation looks like this:\n$$ F_{\\rm true} \\sim \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[\\frac{-(F - \\mu)^2}{2\\sigma^2}\\right]$$\nNow, we'll again consider $N$ observations each with their own error. We can generate them this way:", "np.random.seed(42) # for reproducibility\nN = 100 # we'll use more samples for the more complicated model\nmu_true, sigma_true = 1000, 15 # stochastic flux model\n\nF_true = stats.norm(mu_true, sigma_true).rvs(N) # (unknown) true flux\nF = stats.poisson(F_true).rvs() # observed flux: true flux plus Poisson errors.\ne = np.sqrt(F) # root-N error, as above", "Varying Photon Counts: The Frequentist Approach\nThe resulting likelihood is the convolution of the intrinsic distribution with the error distribution, so we have\n$$\\mathcal{L}(D~|~\\theta) = \\prod_{i=1}^N \\frac{1}{\\sqrt{2\\pi(\\sigma^2 + e_i^2)}}\\exp\\left[\\frac{-(F_i - \\mu)^2}{2(\\sigma^2 + e_i^2)}\\right]$$\nAnalogously to above, we can analytically maximize this likelihood to find the best estimate for $\\mu$:\n$$\\mu_{est} = \\frac{\\sum w_i F_i}{\\sum w_i};~~w_i = \\frac{1}{\\sigma^2 + e_i^2} $$\nAnd here we have a problem: the optimal value of $\\mu$ depends on the optimal value of $\\sigma$. The results are correlated, so we can no longer use straightforward analytic methods to arrive at the frequentist result.\nNevertheless, we can use numerical optimization techniques to determine the maximum likelihood value. Here we'll use the optimization routines available within Scipy's optimize submodule:", "def log_likelihood(theta, F, e):\n return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))\n + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))\n\n# maximize likelihood <--> minimize negative likelihood\ndef neg_log_likelihood(theta, F, e):\n return -log_likelihood(theta, F, e)\n\nfrom scipy import optimize\ntheta_guess = [900, 5]\ntheta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e))\nprint(\"\"\"\n Maximum likelihood estimate for {0} data points:\n mu={theta[0]:.0f}, sigma={theta[1]:.0f}\n \"\"\".format(N, theta=theta_est))", "Error Estimates\nThis maximum likelihood value gives our best estimate of the parameters $\\mu$ and $\\sigma$ governing our model of the source. But this is only half the answer: we need to determine how confident we are in this answer, that is, we need to compute the error bars on $\\mu$ and $\\sigma$.\nTo see how this is done in the frequentist paradigm, see the sub-slides.\nThere are several approaches to determining errors in a frequentist paradigm. 
We could:\n* as above, fit a normal approximation to the maximum likelihood and report the covariance matrix (here we'd have to do this numerically rather than analytically).\n* Alternatively, we can compute statistics like $\\chi^2$ and $\\chi^2_{\\rm dof}$ to and use standard tests to determine confidence limits, which also depends on strong assumptions about the Gaussianity of the likelihood. \n* We might alternatively use randomized sampling approaches such as \"Jackknife\"...\n* or \"Bootstrap\", which maximize the likelihood for randomized samples of the input data in order to explore the degree of certainty in the result.\nAll of these would be valid techniques to use, but each comes with its own assumptions and subtleties. Here, for simplicity, we'll use the basic bootstrap resampler found in the astroML package:", "from astroML.resample import bootstrap\n\ndef fit_samples(sample):\n # sample is an array of size [n_bootstraps, n_samples]\n # compute the maximum likelihood for each bootstrap.\n return np.array([optimize.fmin(neg_log_likelihood, theta_guess,\n args=(F, np.sqrt(F)), disp=0)\n for F in sample])\n\nsamples = bootstrap(F, 1000, fit_samples) # 1000 bootstrap resamplings", "Now in a similar manner to what we did above for the MCMC Bayesian posterior, we'll compute the sample mean and standard deviation to determine the errors on the parameters.", "mu_samp = samples[:, 0]\nsig_samp = abs(samples[:, 1])\n\nprint \" mu = {0:.0f} +/- {1:.0f}\".format(mu_samp.mean(), mu_samp.std())\nprint \" sigma = {0:.0f} +/- {1:.0f}\".format(sig_samp.mean(), sig_samp.std())", "I should note that there is a huge literature on the details of bootstrap resampling, and there are definitely some subtleties of the approach that I am glossing over here. One obvious piece is that there is potential for errors to be correlated or non-Gaussian, neither of which is reflected by simply finding the mean and standard deviation of each model parameter. Nevertheless, I trust that this gives the basic idea of the frequentist approach to this problem.\nVarying Photon Counts: The Bayesian Approach\nThe Bayesian approach to this problem is almost exactly the same as it was in the previous problem, and we can set it up by slightly modifying the above code.", "def log_prior(theta):\n # sigma needs to be positive.\n if theta[1] <= 0:\n return -np.inf\n else:\n return 0\n\ndef log_posterior(theta, F, e):\n return log_prior(theta) + log_likelihood(theta, F, e)\n\n# same setup as above:\nndim, nwalkers = 2, 50\nnsteps, nburn = 2000, 1000\n\nstarting_guesses = np.random.rand(nwalkers, ndim)\nstarting_guesses[:, 0] *= 2000 # start mu between 0 and 2000\nstarting_guesses[:, 1] *= 20 # start sigma between 0 and 20\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])\nsampler.run_mcmc(starting_guesses, nsteps)\n\nsample = sampler.chain # shape = (nwalkers, nsteps, ndim)\nsample = sampler.chain[:, nburn:, :].reshape(-1, 2)", "Now that we have the samples, we'll use a convenience routine from astroML to plot the traces and the contours representing one and two standard deviations:", "from astroML.plotting import plot_mcmc\nfig = plt.figure()\nax = plot_mcmc(sample.T, fig=fig, labels=[r'$\\mu$', r'$\\sigma$'], colors='k')\nax[0].plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1)\nax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10);", "The red dot indicates ground truth (from our problem setup), and the contours indicate one and two standard deviations (68% and 95% confidence levels). 
In other words, based on this analysis we are 68% confident that the model lies within the inner contour, and 95% confident that the model lies within the outer contour.\nNote here that $\\sigma = 0$ is consistent with our data within two standard deviations: that is, depending on the certainty threshold you're interested in, our data are not enough to confidently rule out the possibility of a non-varying source!\nThe other thing to notice is that this posterior is definitely not* Gaussian*: this can be seen by the lack of symmetry in the vertical direction.\nThat means that the Gaussian approximation used within the frequentist approach may not reflect the true uncertainties in the result. This isn't an issue with frequentism itself (i.e. there are certainly ways to account for non-Gaussianity within the frequentist paradigm), but the vast majority of commonly applied frequentist techniques make the explicit or implicit assumption of Gaussianity of the distribution.\nBayesian approaches generally don't require such assumptions.\n(Side note on priors: there are good arguments that a flat prior on $\\sigma$ subtley biases the calculation in this case: i.e. a flat prior is not necessarily non-informative in the case of scale factors like $\\sigma$. There are interesting arguments to be made that the Jeffreys Prior would be more applicable. Here I believe the Jeffreys prior is not suitable, because $\\sigma$ is not a true scale factor (i.e. the Gaussian has contributions from $e_i$ as well). On this question, I'll have to defer to others who have more expertise. Note that subtle &mdash; some would say subjective &mdash; questions like this are among the features of Bayesian analysis that frequentists take issue with).\nConclusion\nPhilosophical differences underlying frequentism and Bayesianism lead to fundamentally different approaches to simple problems, which nonetheless can often yield similar or even identical results.\nThe root of all differences is in a different interpretation of probability:\n\nFrequentism considers probabilities to be relative frequencies of a large number of (real or hypothetical) events.\nBayesianism considers probabilities to measure degrees of knowledge (belief) about something.\n\n... and differences in methodology follow:\n\nFrequentist analyses generally proceed through use of point estimates and maximum likelihood approaches.\nBayesian analyses generally compute the posterior either directly or through some version of MCMC sampling.\n\nAs we've seen, in simple problems the two approaches can yield similar results.\nBut as data and models grow in complexity, the two approaches can diverge greatly. We turn to that next..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JJINDAHOUSE/deep-learning
transfer-learning/Transfer_Learning.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. 
That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. 
Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 30\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn.preprocessing import LabelBinarizer\n# Your one-hot encoded labels array here\nlb = LabelBinarizer()\nlb.fit(labels)\n\nlabels_vecs = lb.transform(labels)", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. 
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n\nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None) # output layer logits\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy) # cross entropy loss\n\noptimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). 
Or write your own!", "epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {\n inputs_: x,\n labels_: y\n }\n loss, _ = sess.run([cost, optimizer], feed_dict = feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration:{}\".format(iteration), \n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x, \n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs), \n \"Iteration: {}\".format(iteration), \n \"Validation Acc: {:.4f}\".format(val_acc))\n \n # TODO: Your training code here\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jeanbaptistepriez/predicsis-ai-faq-tuto
23.how_to_build_a_first_model_SDK/Build your first model.ipynb
gpl-3.0
[ "Goal\nBuild your first model on the central frame.\nPrerequisites\n\nPredicSis.ai Python SDK (pip install predicsis; documentation)\n\nA created project, with uploaded well formatted datasets, and predefined settings\n\n\nJupyter (see http://jupyter.org/)", "# Load PredicSis.ai SDK\nfrom predicsis import PredicSis", "Getting insights\nRetrieve your project thanks to its name.\nBuild your first model, on the central table, from the default schema.", "prj = PredicSis.project('Outbound Mail Campaign')", "Build a model from the default schema", "mdl = prj.default_schema().fit('My first model')\n\nmdl.auc()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
xpharry/Udacity-DLFoudation
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
mit
[ "Introduction\nHi, my name is Andrew Trask. I am currently a PhD student at the University of Oxford studying Deep Learning for Natural Language Processing. Natural Language Processing is the field that studies human language and today we’re going to be talking about Sentiment Classification, or the classification of whether or not a section of human-generated text is positive or negative (i.e., happy or sad). Deep Learning, as I’m sure you’re coming to understand, is a set of tools (neural networks) used to take what we “know”, and predict what we what we “want to know”. In this case, we “know” a paragraph of text generated from a human, and we “want to know” whether or not it has positive or negative sentiment. Our goal is to build a neural network that can make this prediction.\nWhat you will learn along the way - \"Framing a Problem\"\nWhat this tutorial is really about is \"framing a problem\" so that a neural network can be successful in solving it. Sentiment is a great example because neural networks don't take raw text as input, they take numbers! We have to consider how to efficiently transform our text into numbers so that our network can learn a valuable underlying pattern. I can't stress enough how important this skillset will be to your career. Frameworks (like TensorFlow) will handle backpropagation, gradients, and error measures for you, but \"framing the problem\" is up to you, the scientist, and if it's not done correctly, your networks will spend forever searching for correlation between your two datasets (and they might never find it). \nWhat You Should Already Know\nI am assuming you already know about neural networks, forward and back-propagation, stochastic gradient descent, mean squared error, and train/test splits from previous lessons. \nIt Starts with a Dataset\nNeural networks, by themselves, cannot do anything. All a neural network really does is search for direct or indirect correlation between two datasets. So, in order for a neural network to learn anything, we have to present it with two, meaningful datasets. The first dataset must represent what we “know” and the second dataset must represent what we “want to know”, or what we want the neural network to be able to tell us. As the network trains, it’s going to search for correlation between these two datasets, so that eventually it can take one and predict the other. Let me show you what I mean with our example sentiment dataset.", "def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r')\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r')\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()", "In the cell above, I have loaded two datasets. The first dataset \"reviews\" is a list of 25,000 movie reviews that people wrote about various movies. The second dataset is a list of whether or not each review is a “positive” review or “negative” review.", "reviews[0]\n\nlabels[0]", "I want you to pretend that you’re a neural network for a moment. Consider a few examples from the two datasets below. Do you see any correlation between these two datasets?", "print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)", "Well, let’s consider several different granularities. 
At the paragraph level, no two paragraphs are the same, so there can be no “correlation” per-say. You have to see two things occur at the same time more than once in order for there to be considered “correlation”. What about at the character level? I’m guessing the letter “b” is used just as much in positive reviews as it is in negative reviews. How about word level? Ah, I think there's some correlation between the words in these reviews and whether or not the review is positive or negative.", "from collections import Counter\n\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()\n\nfor i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n \npos_neg_ratios = Counter()\n\nfor term,cnt in list(total_counts.most_common()):\n if(cnt > 10):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\nfor word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]", "Wow, there’s really something to this theory! As we can see, there are clearly terms in movie reviews that have correlation with our output labels. So, if we think there might be strong correlation between the words present in a particular review and the sentiment of that review, what should our network take as input and then predict? Let me put it a different way: If we think that there is correlation between the “vocabulary” of a particular review and the sentiment of that review, what should be the input and output to our neural network? The input should be the “vocabulary of the review” and the output should be whether or not the review is positive or negative!\nNow that we have some idea that this task is possible (and where we want the network to find correlation), let’s try to train a neural network to predict sentiment based on the vocabulary of a movie review.\nTransforming Text to Numbers\nThe next challenge is to transform our datasets into something that the neural network can read.\nAs I’m sure you’ve learned, neural networks are made up of layers of interconnected “neurons”. The first layer is where our input data “goes in” to the network. Any particular “input neuron” can take exactly two kinds of inputs, binary inputs and “real valued” inputs. Previously, you’ve been training networks on raw, continuous data, real valued inputs. However, now we’re modeling whether different input terms “exist” or “do not exist” in a movie review. When we model something that either “exists” or “does not exiest” or when something is either “true” or “false”, we want to use “binary” inputs to our neural network. This use of binary values is called \"one-hot encoding\". Let me show you what I mean.\nExample Predictions", "from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')", "The Input\nLet’s say our entire movie review corpus has 10,000 words. 
Given a single movie review (\"This was a horrible, terrible movie\"), we’re going to put a “1” in the input of our neural network for every word that exists in the review, and a 0 everywhere else. So, given our 10,000 words, a movie review with 6 words would have 6 neurons with a “1” and 9,994 neurons with a “0”. The picture above is a miniturized version of this, displaying how we input a \"1\" for the words \"horrible\" and \"terrible\" while inputting a \"0\" for the word \"excellent\" because it was not present in the review.\nThe Output\nIn the same way, we want our network to either predict that the input is “positive” or “negative”. Now, our networks can’t write “positive” or “negative”, so we’re going to instead have another single neuron that represents “positive” when it is a “1” and “negative” when it is a “0”. In this way, our network can give us a number that we will interpret as “positive” or “negative”.\nBig Picture\nWhat we’re actually doing here is creating a “derivative dataset” from our movie reviews. Neural networks, after all, can’t read text. So, what we’re doing is identifying the “source of correlation” in our two datasets and creating a derivative dataset made up of numbers that preserve the patterns that we care about. In our input dataset, that pattern is the existence or non-existence of a particular word. In our output dataset, that pattern is whether a statement is positive or negative. Now we’ve converted our patterns into something our network can understand! Our network is going to look for correlation between the 1s and 0s in our input and the 1s and 0s in our output, and if it can do so it has learned to predict the sentiment of movie reviews. Now that our data is ready for the network, let’s start building the network.\nCreating the Input Data\nAs we just learned above, in order for our neural network to predict on a movie review, we have to be able to create an input layer of 1s and 0s that correlates with the words present in a review. Let's start by creating a function that can take a review and generate this layer of 1s and 0s.\nIn order to create this function, we first must decide how many input neurons we need. The answer is quite simple. Since we want our network's input to be able to represent the presence or absence of any word in the vocabulary, we need one node per vocabulary term. So, our input layer size is the size of our vocabulary. Let's calculate that.", "vocab = set(total_counts.keys())\nvocab_size = len(vocab)\nprint(vocab_size)", "And now we can initialize our (empty) input layer as vector of 0s. We'll modify it later by putting \"1\"s in various positions.", "import numpy as np\n\nlayer_0 = np.zeros((1,vocab_size))\nlayer_0", "And now we want to create a function that will set our layer_0 list to the correct sequence of 1s and 0s based on a single review. Now if you remember our picture before, you might have noticed something. Each word had a specific place in the input of our network.", "from IPython.display import Image\nImage(filename='sentiment_network.png')", "In order to create a function that can update our layer_0 variable based on a review, we have to decide which spots in our layer_0 vector (list of numbers) correlate with each word. Truth be told, it doesn't matter which ones we choose, only that we pick spots for each word and stick with them. 
Let's decide those positions now and store them in a python dictionary called \"word2index\".", "word2index = {}\n\nfor i,word in enumerate(vocab):\n word2index[word] = i\nword2index", "...and now we can use this new \"word2index\" dictionary to populate our input layer with the right 1s in the right places.", "def update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] = 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0", "Creating the Target Data\nAnd now we want to do the same thing for our target predictions", "def get_target_for_label(label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n\nget_target_for_label(labels[0])\n\nget_target_for_label(labels[1])", "Putting it all together in a Neural Network", "from IPython.display import Image\nImage(filename='sentiment_network_2.png')\n\nimport time\nimport sys\nimport numpy as np\n\n# Let's tweak our network from before to model these phenomena\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \n np.random.seed(1)\n \n self.pre_process_data()\n \n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n \n \n def pre_process_data(self):\n \n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n self.label_vocab = list(label_vocab)\n \n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n self.learning_rate = learning_rate\n \n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] = 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n assert(len(training_reviews) == len(training_labels))\n \n correct_so_far = 0\n \n start = time.time()\n \n for i in range(len(training_reviews)):\n \n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output 
error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # TODO: Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # TODO: Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n if(np.abs(layer_2_error) < 0.5):\n correct_so_far += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n \n \n def test(self, testing_reviews, testing_labels):\n \n correct = 0\n \n start = time.time()\n \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \"% #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n if(layer_2[0] > 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n \n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000])\n\n# evaluate our model before training (just to show how horrible it is)\nmlp.test(reviews[-1000:],labels[-1000:])\n\n# train the network\nmlp.train(reviews[:-1000],labels[:-1000])\n\n# evaluate the model after training\nmlp.test(reviews[-1000:],labels[-1000:])\n\nmlp.run(\"That movie was great\")", "Making our Network Train and Run Faster\nEven though this network is very trainable on a laptop, we can really get a lot more performance out of it, and doing so is all about understanding how the neural network is interacting with our data (again, \"modeling the problem\"). Let's take a moment to consider how layer_1 is generated. First, we're going to create a smaller layer_0 so that we can easily picture all the values in our notebook.", "layer_0 = np.zeros(10)\n\nlayer_0", "Now, let's set a few of the inputs to 1s, and create a sample weight matrix", "layer_0[4] = 1\nlayer_0[9] = 1\nlayer_0\n\nweights_0_1 = np.random.randn(10,5) ", "So, given these pieces, layer_1 is created in the following way....", "layer_1 = layer_0.dot(weights_0_1)\n\nlayer_1", "layer_1 is generated by performing vector->matrix multiplication, however, most of our input neurons are turned off! Thus, there's actually a lot of computation being wasted. 
Consider the network below.", "Image(filename='sentiment_network_sparse.png')", "First Inefficiency: \"0\" neurons waste computation\nIf you recall from previous lessons, each edge from one neuron to another represents a single value in our weights_0_1 matrix. When we forward propagate, we take our input neuron's value, multiply it by each weight attached to that neuron, and then sum all the resulting values in the next layer. So, in this case, if only \"excellent\" was turned on, then all of the multiplications coming out of \"horrible\" and \"terrible\" are wasted computation! All of the weights coming out of \"horrible\" and \"terrible\" are being multiplied by 0, thus having no effect on our values in layer_1.", "Image(filename='sentiment_network_sparse_2.png')", "Second Inefficiency: \"1\" neurons don't need to multiply!\nWhen we're forward propagating, we multiply our input neuron's value by the weights attached to it. However, in this case, when the neuron is turned on, it's always turned on to exactly 1. So, there's no need for multiplication; what if we skipped this step?\nThe Solution: Create layer_1 by adding the vectors for each word.\nInstead of generating a huge layer_0 vector and then performing a full vector->matrix multiplication across our huge weights_0_1 matrix, we can simply sum the rows of weights_0_1 that correspond to the words in our review. The resulting value of layer_1 will be exactly the same as if we had performed a full matrix multiplication at a fraction of the computational cost. This is called a \"lookup table\" or an \"embedding layer\".", "#inefficient thing we did before\n\nlayer_1 = layer_0.dot(weights_0_1)\nlayer_1\n\n# new, less expensive lookup table version\n\nlayer_1 = weights_0_1[4] + weights_0_1[9]\nlayer_1", "See how they generate exactly the same value? 
Let's update our new neural network to do this.", "import time\nimport sys\n\n# Let's tweak our network from before to model these phenomena\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \n np.random.seed(1)\n \n self.pre_process_data(reviews)\n \n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n \n \n def pre_process_data(self,reviews):\n \n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n self.label_vocab = list(label_vocab)\n \n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n self.learning_rate = learning_rate\n \n self.layer_0 = np.zeros((1,input_nodes))\n self.layer_1 = np.zeros((1,hidden_nodes))\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n self.layer_0[0][self.word2index[word]] = 1\n\n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def train(self, training_reviews_raw, training_labels):\n \n training_reviews = list()\n for review in training_reviews_raw:\n indices = set()\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n indices.add(self.word2index[word])\n training_reviews.append(list(indices))\n \n assert(len(training_reviews) == len(training_labels))\n \n correct_so_far = 0\n \n start = time.time()\n \n for i in range(len(training_reviews)):\n \n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n\n # Hidden layer\n# layer_1 = self.layer_0.dot(self.weights_0_1)\n self.layer_1 *= 0\n for index in review:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient 
descent step\n \n for index in review:\n self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n if(np.abs(layer_2_error) < 0.5):\n correct_so_far += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n \n \n def test(self, testing_reviews, testing_labels):\n \n correct = 0\n \n start = time.time()\n \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \"% #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \n # Input Layer\n\n\n # Hidden layer\n self.layer_1 *= 0\n unique_indices = set()\n for word in review.lower().split(\" \"):\n if word in self.word2index.keys():\n unique_indices.add(self.word2index[word])\n for index in unique_indices:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n \n if(layer_2[0] > 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n \n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000],learning_rate=0.01)\n\n# train the network\nmlp.train(reviews[:-1000],labels[:-1000])", "And wallah! Our network learns 10x faster than before while making exactly the same predictions!", "# evaluate our model before training (just to show how horrible it is)\nmlp.test(reviews[-1000:],labels[-1000:])", "Our network even tests over twice as fast as well!\nMaking Learning Faster & Easier by Reducing Noise\nSo at first this might seem like the same thing we did in the previous section. However, while the previous section was about looking for computational waste and triming it out, this section is about looking for noise in our data and trimming it out. When we reduce the \"noise\" in our data, the neural network can identify correlation must faster and with greater accuracy. Whereas our technique will be simple, many recently developed state-of-the-art techniques (most notably attention and batch normalization) are all about reducing the amount of noise that your network has to filter through. The more obvious you can make the correaltion to your neural network, the better.\nOur network is looking for correlation between movie review vocabularies and output positive/negative labels. In order to do this, our network has to come to understand over 70,000 different words in our vocabulary! That's a ton of knowledge that the network has to learn! \nThis begs the questions, are all the words in the vocabulary actually relevant to sentiment? A few pages ago, we counted how often words occured in positive reviews relative to negative reviews and created a ratio. We could then sort words by this ratio and see the words with the most positive and negative affinity. 
If you remember, the output looked like this:", "# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\nfrom bokeh.models import ColumnDataSource, LabelSet\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.io import output_notebook\noutput_notebook()\n\n\n\nhist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"Word Positive/Negative Affinity Distribution\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)", "In this graph \"0\" means that a word has no affinity for either positive or negative. As you can see, the vast majority of our words don't have that much direct affinity! So, our network is having to learn about lots of terms that are likely irrelevant to the final prediction. If we remove some of the most irrelevant words, our network will have fewer words that it has to learn about, allowing it to focus more on the words that matter.\nFurthermore, check out this graph of simple word frequency", "frequency_frequency = Counter()\n\nfor word, cnt in total_counts.most_common():\n frequency_frequency[cnt] += 1\n\nhist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"The frequency distribution of the words in our corpus\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)", "As you can see, the vast majority of words in our corpus only happen once or twice. Unfortunately, this isn't enough for any of those words to be correlated with anything. Correlation requires seeing two things occur at the same time on multiple occasions so that you can identify a pattern. 
We should eliminate these very low frequency terms as well.\nIn the next network, we eliminate both low frequency words (via a min_count parameters) and words with low positive/negative affiliation", "import time\nimport sys\nimport numpy as np\n\n# Let's tweak our network from before to model these phenomena\nclass SentimentNetwork:\n def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):\n \n np.random.seed(1)\n \n self.pre_process_data(reviews, polarity_cutoff, min_count)\n \n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n \n \n def pre_process_data(self,reviews, polarity_cutoff,min_count):\n \n positive_counts = Counter()\n negative_counts = Counter()\n total_counts = Counter()\n\n for i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\n pos_neg_ratios = Counter()\n\n for term,cnt in list(total_counts.most_common()):\n if(cnt >= 50):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\n for word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))\n \n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n if(total_counts[word] > min_count):\n if(word in pos_neg_ratios.keys()):\n if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):\n review_vocab.add(word)\n else:\n review_vocab.add(word)\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n self.label_vocab = list(label_vocab)\n \n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n self.learning_rate = learning_rate\n \n self.layer_0 = np.zeros((1,input_nodes))\n self.layer_1 = np.zeros((1,hidden_nodes))\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n for word in review.split(\" \"):\n self.layer_0[0][self.word2index[word]] = 1\n\n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def train(self, training_reviews_raw, training_labels):\n \n training_reviews = list()\n for review in training_reviews_raw:\n indices = set()\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n indices.add(self.word2index[word])\n training_reviews.append(list(indices))\n \n 
assert(len(training_reviews) == len(training_labels))\n \n correct_so_far = 0\n \n start = time.time()\n \n for i in range(len(training_reviews)):\n \n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n\n # Hidden layer\n# layer_1 = self.layer_0.dot(self.weights_0_1)\n self.layer_1 *= 0\n for index in review:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n \n for index in review:\n self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n if(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n \n \n def test(self, testing_reviews, testing_labels):\n \n correct = 0\n \n start = time.time()\n \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n reviews_per_second = i / float(time.time() - start)\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \"% #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \n # Input Layer\n\n\n # Hidden layer\n self.layer_1 *= 0\n unique_indices = set()\n for word in review.lower().split(\" \"):\n if word in self.word2index.keys():\n unique_indices.add(self.word2index[word])\n for index in unique_indices:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n \n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n \n\nmlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)\n\nmlp.train(reviews[:-1000],labels[:-1000])\n\nmlp.test(reviews[-1000:],labels[-1000:])", "So, using these techniques, we are able to achieve a slightly higher testing score while training 2x faster than before. 
Furthermore, if we really crank up these metrics, we can get some pretty extreme speed with minimal loss in quality (if, for example, your business use case requires running very fast)", "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)\n\nmlp.train(reviews[:-1000],labels[:-1000])\n\nmlp.test(reviews[-1000:],labels[-1000:])", "What's Going On in the Weights?", "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)\n\nmlp.train(reviews[:-1000],labels[:-1000])\n\nimport matplotlib.colors as colors\n\nwords_to_visualize = list()\nfor word, ratio in pos_neg_ratios.most_common(500):\n if(word in mlp.word2index.keys()):\n words_to_visualize.append(word)\n \nfor word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:\n if(word in mlp.word2index.keys()):\n words_to_visualize.append(word)\n\ncolors_list = list()\nvectors_list = list()\nfor word in words_to_visualize:\n if word in pos_neg_ratios.keys():\n vectors_list.append(mlp.weights_0_1[mlp.word2index[word]])\n if(pos_neg_ratios[word] > 0):\n\n colors_list.append(\"#\"+colors.rgb2hex([0,min(255,pos_neg_ratios[word] * 1),0])[3:])\n else:\n colors_list.append(\"#000000\")\n# colors_list.append(\"#\"+colors.rgb2hex([0,0,min(255,pos_neg_ratios[word] * 1)])[3:])\n \n\nfrom sklearn.manifold import TSNE\ntsne = TSNE(n_components=2, random_state=0)\nwords_top_ted_tsne = tsne.fit_transform(vectors_list)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"vector T-SNE for most polarized words\")\n\nsource = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],\n x2=words_top_ted_tsne[:,1],\n names=words_to_visualize))\n\np.scatter(x=\"x1\", y=\"x2\", size=8, source=source,color=colors_list)\n\nword_labels = LabelSet(x=\"x1\", y=\"x2\", text=\"names\", y_offset=6,\n text_font_size=\"8pt\", text_color=\"#555555\",\n source=source, text_align='center')\n# p.add_layout(word_labels)\n\nshow(p)\n\n# green indicates positive words, black indicates negative words" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DS-100/sp17-materials
sp17/labs/lab03/lab03_solution.ipynb
gpl-3.0
[ "Lab 3: Intro to Visualizations\nAuthors: Sam Lau, Deb Nolan\nDue 11:59pm 02/03/2017 (Completion-based)\nToday, we'll learn the basics of plotting using the Python libraries\nmatplotlib and seaborn! You should walk out of lab today understanding:\n\nThe functionality that matplotlib provides\nWhy we use seaborn for plotting\nHow to make and customize basic plots, including bar charts, box plots,\n histograms, and scatterplots.\n\nAs usual, to submit this lab you must scroll down the bottom and set the\ni_definitely_finished variable to True before running the submit cell.\nPlease work in pairs to work on this lab assignment. You will discuss the results with your partner instead of having to write them up in the notebook.", "import pandas as pd\nimport numpy as np\nimport seaborn as sns\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# These lines load the tests.\n!pip install -U okpy\nfrom client.api.notebook import Notebook\nok = Notebook('lab03.ok')", "matplotlib\nmatplotlib is the most widely used plotting library available for Python.\nIt comes with a good amount of out-of-the-box functionality and is highly\ncustomizable. Most other plotting libraries in Python provide simpler ways to generate\ncomplicated matplotlib plots, including seaborn, so it's worth learning a bit about\nmatplotlib now.\nNotice how all of our notebooks have lines that look like:\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nThe %matplotlib inline magic command tells matplotlib to render the plots\ndirectly onto the notebook (by default it will open a new window with the plot).\nThen, the import line lets us call matplotlib functions using plt.&lt;func&gt;\nHere's a graph of cos(x) from 0 to 2 * pi (you've made this in homework 1\nalready).", "# Set up (x, y) pairs from 0 to 2*pi\nxs = np.linspace(0, 2 * np.pi, 300)\nys = np.cos(xs)\n\n# plt.plot takes in x-values and y-values and plots them as a line\nplt.plot(xs, ys)", "matplotlib also conveniently has the ability to plot multiple things on the\nsame plot. Just call plt.plot multiple times in the same cell:", "plt.plot(xs, ys)\nplt.plot(xs, np.sin(xs))", "Question 0:\nThat plot looks pretty nice but isn't publication-ready. Luckily, matplotlib\nhas a wide array of plot customizations.\nSkim through the first part of the tutorial at\nhttps://www.labri.fr/perso/nrougier/teaching/matplotlib\nto create the plot below. There is a lot of extra information there which we suggest\nyou read on your own time. For now, just look for what you need to make the plot.\nSpecifically, you'll have to change the x and y limits, add a title, and add\na legend.", "plt.plot(xs, ys, label='cosine')\nplt.plot(xs, np.sin(xs), label='sine')\nplt.xlim(0, 2 * np.pi)\nplt.ylim(-1.1, 1.1)\n\nplt.title('Graphs of sin(x) and cos(x)')\nplt.legend(loc='lower left', frameon=False)\n\nplt.savefig('q1.png')", "Dataset: Bikeshare trips\nToday, we'll be performing some basic EDA (exploratory data analysis) on\nbikeshare data in Washington D.C. 
\nThe variables in this data frame are defined as:\n\ninstant: record index\ndteday : date\nseason : season (1:spring, 2:summer, 3:fall, 4:winter)\nyr : year (0: 2011, 1:2012)\nmnth : month ( 1 to 12)\nhr : hour (0 to 23)\nholiday : whether day is holiday or not\nweekday : day of the week\nworkingday : if day is neither weekend nor holiday\nweathersit :\n1: Clear or partly cloudy\n2: Mist + clouds\n3: Light Snow or Rain\n4: Heavy Rain or Snow\n\n\ntemp : Normalized temperature in Celsius (divided by 41)\natemp: Normalized feeling temperature in Celsius (divided by 50)\nhum: Normalized percent humidity (divided by 100)\nwindspeed: Normalized wind speed (divided by 67)\ncasual: count of casual users\nregistered: count of registered users\ncnt: count of total rental bikes including casual and registered", "bike_trips = pd.read_csv('bikeshare.csv')\n\n# Here we'll do some pandas datetime parsing so that the dteday column\n# contains datetime objects.\nbike_trips['dteday'] += ':' + bike_trips['hr'].astype(str)\nbike_trips['dteday'] = pd.to_datetime(bike_trips['dteday'], format=\"%Y-%m-%d:%H\")\nbike_trips = bike_trips.drop(['yr', 'mnth', 'hr'], axis=1)\n\nbike_trips.head()", "Question 1: Discuss the data with your partner. What is its granularity?\nWhat time range is represented here? Perform your exploration in the cell below.\nUsing pandas to plot\npandas provides useful methods on dataframes. For simple plots, we prefer to\njust use those methods instead of the matplotlib methods since we're often\nworking with dataframes anyway. The syntax is:\ndataframe.plot.&lt;plotfunc&gt;\n\nWhere the plotfunc is one of the functions listed here: http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#other-plots", "# This plot shows the temperature at each data point\n\nbike_trips.plot.line(x='dteday', y='temp')\n\n# Stop here! Discuss why this plot is shaped like this with your partner.", "seaborn\nNow, we'll learn how to use the seaborn Python library. seaborn\nis built on top of matplotlib and provides many helpful functions\nfor statistical plotting that matplotlib and pandas don't have.\nGenerally speaking, we'll use seaborn for more complex statistical plots,\npandas for simple plots (eg. line / scatter plots), and\nmatplotlib for plot customization.\nNearly all seaborn functions are designed to operate on pandas\ndataframes. 
Most of these functions assume that the dataframe is in\na specific format called long-form, where each column of the dataframe\nis a particular feature and each row of the dataframe a single datapoint.\nFor example, this dataframe is long-form:\ncountry year avgtemp\n 1 Sweden 1994 6\n 2 Denmark 1994 6\n 3 Norway 1994 3\n 4 Sweden 1995 5\n 5 Denmark 1995 8\n 6 Norway 1995 11\n 7 Sweden 1996 7\n 8 Denmark 1996 8\n 9 Norway 1996 7\nBut this dataframe of the same data is not:\ncountry avgtemp.1994 avgtemp.1995 avgtemp.1996\n 1 Sweden 6 5 7\n 2 Denmark 6 8 8\n 3 Norway 3 11 7\nNote that the bike_trips dataframe is long-form.\nFor more about long-form data, see https://stanford.edu/~ejdemyr/r-tutorials/wide-and-long.\nFor now, just remember that we typically prefer long-form data and it makes plotting using\nseaborn easy as well.\nQuestion 2:\nUse seaborn's barplot function to make a bar chart showing the average\nnumber of registered riders on each day of the week over the \nentire bike_trips dataset.\nHere's a link to the seaborn API: http://seaborn.pydata.org/api.html\nSee if you can figure it out by reading the docs and talking with your partner.\nOnce you have the plot, discuss it with your partner. What trends do you\nnotice? What do you suppose causes these trends?\nNotice that barplot draws error bars for each category. It uses bootstrapping\nto make those.", "sns.barplot(x='weekday', y='registered', data=bike_trips)", "Question 3: Now for a fancier plot that seaborn makes really easy to produce.\nUse the distplot function to plot a histogram of all the total rider counts in the\nbike_trips dataset.", "sns.distplot(bike_trips['cnt'])", "Notice that seaborn will fit a curve to the histogram of the data. Fancy!\nQuestion 4: Discuss this plot with your partner. What shape does the distribution\nhave? What does that imply about the rider counts?\nQuestion 5:\nUse seaborn to make side-by-side boxplots of the number of casual riders (just\nchecked out a bike for that day) and registered riders (have a bikeshare membership).\nThe boxplot function will plot all the columns of the dataframe you pass in.\nOnce you make the plot, you'll notice that there are many outliers that make\nthe plot hard to see. To mitigate this, change the y-scale to be logarithmic.\nThat's a plot customization so you'll use matplotlib. The boxplot function returns\na matplotlib Axes object which represents a single plot and\nhas a set_yscale function.\nThe result should look like:", "ax = sns.boxplot(data=bike_trips[['casual', 'registered']])\nax.set_yscale('log')\nplt.savefig('q5.png')", "Question 6: Discuss with your partner what the plot tells you about the\ndistribution of casual vs. the distribution of registered riders.\nQuestion 7: Let's take a closer look at the number of registered vs. casual riders.\nUse the lmplot function to make a scatterplot. Put the number of casual\nriders on the x-axis and the number of registered riders on the y-axis.\nEach point should correspond to a single row in your bike_trips dataframe.", "sns.lmplot('casual', 'registered', bike_trips)", "Question 8: What do you notice about that plot? Discuss with\nyour partner. Notice that seaborn automatically fits a line of best\nfit to the plot. Does that line seem to be relevant?\nYou should note that lm_plot allows you to pass in fit_line=False to\navoid plotting lines of best fit when you feel they are unnecessary \nor misleading.\nQuestion 9: There seem to be two main groups in the scatterplot. 
Let's\nsee if we can separate them out.\nUse lmplot to make the scatterplot again. This time, use the hue parameter\nto color points for weekday trips differently from weekend trips. You should\nget something that looks like:", "sns.lmplot('casual', 'registered', bike_trips, hue='workingday',\n scatter_kws={'s': 6})\nplt.savefig('q9.png')\n\n# Note that the legend for workingday isn't super helpful. 0 in this case\n# means \"not a working day\" and 1 means \"working day\". Try fixing the legend\n# to be more descriptive.", "Question 10: Discuss the plot with your partner. Was splitting the data\nby working day informative? One of the best-fit lines looks valid but the other\ndoesn't. Why do you suppose that is?\nQuestion 11 (bonus): Eventually, you'll want to be able to pose a\nquestion yourself and answer it using a visualization. Here's a question\nyou can think about:\nHow do the number of casual and registered riders change throughout the day,\non average?\nSee if you can make a plot to answer this.", "riders_by_hour = (bike_trips.groupby('hr')\n .agg({'casual': 'mean', 'registered': 'mean'}))\nriders_by_hour.plot.line()", "Want to learn more?\nWe recommend checking out the seaborn tutorials on your own time. http://seaborn.pydata.org/tutorial.html\nThe matplotlib tutorial we linked in Question 1 is also a great refresher on common matplotlib functions: https://www.labri.fr/perso/nrougier/teaching/matplotlib/\nHere's a great blog post about the differences between Python's visualization libraries:\nhttps://dansaber.wordpress.com/2016/10/02/a-dramatic-tour-through-pythons-data-visualization-landscape-including-ggplot-and-altair/\nSubmission\nChange i_definitely_finished to True and run the cells below to submit the lab. You may resubmit as many times you want. We will be grading you on effort/completion.", "i_definitely_finished = True\n\n_ = ok.grade('qcompleted')\n_ = ok.backup()\n\n_ = ok.submit()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/tf-estimator-tutorials
00_Miscellaneous/tfx/01_tf_estimator_deepdive.ipynb
apache-2.0
[ "TensorFlow Estimators Deep Dive\nThe purporse of this tutorial is to explain the details of how to create a premade TensorFlow estimator, how trainining and evaluation work with different configurations, and how the model is exported for serving. The tutorial covers the following points:\n\nImplementing Input function with tf.data APIs.\nCreating Feature columns.\nCreating a Wide and Deep model with a premade estimator.\nConfiguring Train and evaluate parameters.\nExporting trained models for serving.\nImplementing Early stopping.\nDistribution Strategy for multi-GPUs.\nExtending premade estimators.\nAdaptive learning rate.\n\n<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/sme_academy/01_tf_estimator_deepdive.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n<img valign=\"middle\" src=\"images/tf-layers.jpeg\" width=\"400\">", "try:\n COLAB = True\n from google.colab import auth\n auth.authenticate_user()\nexcept:\n pass\n\nRANDOM_SEED = 19831006\n\nimport os\nimport math\nimport multiprocessing\nimport pandas as pd\nfrom datetime import datetime\n\nimport tensorflow as tf\nprint \"TensorFlow : {}\".format(tf.__version__)\n\ntf.enable_eager_execution()\nprint \"Eager Execution Enabled: {}\".format(tf.executing_eagerly())", "Download Data\nUCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult\nPredict whether income exceeds $50K/yr based on census data. Also known as \"Census Income\" dataset.", "DATA_DIR='data'\n!mkdir $DATA_DIR\n!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR\n!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR\n\nTRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')\nEVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')\n\n!wc -l $TRAIN_DATA_FILE\n!wc -l $EVAL_DATA_FILE", "The training data includes 32,561 records, while the evaluation data includes 16,278 records.", "HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',\n 'marital_status', 'occupation', 'relationship', 'race', 'gender',\n 'capital_gain', 'capital_loss', 'hours_per_week',\n 'native_country', 'income_bracket']\n \npd.read_csv(TRAIN_DATA_FILE, names=HEADER).head()", "Dataset Metadata", "HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',\n 'marital_status', 'occupation', 'relationship', 'race', 'gender',\n 'capital_gain', 'capital_loss', 'hours_per_week',\n 'native_country', 'income_bracket']\n\nHEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],\n [0], [0], [0], [''], ['']]\n\nNUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']\nCATEGORICAL_FEATURE_WITH_VOCABULARY = {\n 'workclass': ['State-gov', 'Self-emp-not-inc', 'Private', 'Federal-gov', 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'], \n 'relationship': ['Not-in-family', 'Husband', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'], \n 'gender': [' Male', 'Female'], 'marital_status': [' Never-married', 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent', 'Separated', 'Married-AF-spouse', 'Widowed'], \n 'race': [' White', 'Black', 'Asian-Pac-Islander', 'Amer-Indian-Eskimo', 'Other'], \n 'education': ['Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', '5th-6th', '10th', '1st-4th', 'Preschool', '12th'], 
\n}\n\nCATEGORICAL_FEATURE_WITH_HASH_BUCKETS = {\n 'native_country': 60,\n 'occupation': 20\n}\n\nFEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_WITH_VOCABULARY.keys() + CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys()\nTARGET_NAME = 'income_bracket'\nTARGET_LABELS = [' <=50K', ' >50K']\nWEIGHT_COLUMN_NAME = 'fnlwgt'", "1. Data Input Function\n\nUse tf.data.Dataset APIs: list_files(), skip(), map(), filter(), batch(), shuffle(), repeat(), prefetch(), cache(), etc.\nUse tf.data.experimental.make_csv_dataset to read and parse CSV data files.\nUse tf.data.experimental.make_batched_features_dataset to read and parse TFRecords data files.", "def process_features(features, target):\n for feature_name in CATEGORICAL_FEATURE_WITH_VOCABULARY.keys() + CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys():\n features[feature_name] = tf.strings.strip(features[feature_name])\n \n features['capital_total'] = features['capital_gain'] - features['capital_loss']\n return features, target\n\ndef make_input_fn(file_pattern, batch_size, num_epochs=1, shuffle=False):\n\n def _input_fn():\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=file_pattern,\n batch_size=batch_size,\n column_names=HEADER,\n column_defaults=HEADER_DEFAULTS,\n label_name=TARGET_NAME,\n field_delim=',',\n use_quote_delim=True,\n header=False,\n num_epochs=num_epochs,\n shuffle=shuffle,\n shuffle_buffer_size=(5 * batch_size),\n shuffle_seed=RANDOM_SEED,\n num_parallel_reads=multiprocessing.cpu_count(),\n sloppy=True,\n )\n return dataset.map(process_features).cache()\n \n return _input_fn\n\n# You need to run tf.enable_eager_execution() at the top.\n\ndataset = make_input_fn(TRAIN_DATA_FILE, batch_size=1)()\nfor features, target in dataset.take(1):\n print \"Input Features:\"\n for key in features:\n print \"{}:{}\".format(key, features[key])\n\n print \"\"\n print \"Target:\"\n print target", "2. Create feature columns\n<br/>\n<img valign=\"middle\" src=\"images/tf-feature-columns.jpeg\" width=\"800\">\nBase feature columns\n 1. numeric_column\n 2. categorical_column_with_vocabulary_list\n 3. categorical_column_with_vocabulary_file\n 4. categorical_column_with_identity\n 5. categorical_column_with_hash_buckets\nExtended feature columns\n 1. bucketized_column\n 2. indicator_column\n 3. crossing_column\n 4. 
embedding_column", "def create_feature_columns():\n \n wide_columns = []\n deep_columns = []\n \n for column in NUMERIC_FEATURE_NAMES:\n # Create numeric columns.\n numeric_column = tf.feature_column.numeric_column(column)\n deep_columns.append(numeric_column)\n \n for column in CATEGORICAL_FEATURE_WITH_VOCABULARY:\n # Create categorical columns with vocab.\n vocabolary = CATEGORICAL_FEATURE_WITH_VOCABULARY[column]\n categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(\n column, vocabolary)\n wide_columns.append(categorical_column)\n \n # Create embeddings of the categorical columns.\n embed_size = int(math.sqrt(len(vocabolary)))\n embedding_column = tf.feature_column.embedding_column(\n categorical_column, embed_size)\n deep_columns.append(embedding_column)\n\n for column in CATEGORICAL_FEATURE_WITH_HASH_BUCKETS:\n # Create categorical columns with hashing.\n hash_columns = tf.feature_column.categorical_column_with_hash_bucket(\n column, \n hash_bucket_size=CATEGORICAL_FEATURE_WITH_HASH_BUCKETS[column])\n wide_columns.append(hash_columns)\n\n # Create indicators for hashing columns.\n indicator_column = tf.feature_column.indicator_column(hash_columns) \n deep_columns.append(indicator_column)\n\n # Create bucktized column.\n age_bucketized = tf.feature_column.bucketized_column(\n deep_columns[0], boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60]\n )\n wide_columns.append(age_bucketized)\n\n # Create crossing column.\n education_X_occupation = tf.feature_column.crossed_column(\n ['education', 'workclass'], hash_bucket_size=int(1e4))\n wide_columns.append(education_X_occupation)\n \n # Create embeddings for crossing column.\n education_X_occupation_embedded = tf.feature_column.embedding_column(\n education_X_occupation, dimension=10)\n deep_columns.append(education_X_occupation_embedded)\n \n return wide_columns, deep_columns\n\nwide_columns, deep_columns = create_feature_columns()\n\nprint \"\"\nprint \"Wide columns:\"\nfor column in wide_columns:\n print column\n\nprint \"\"\nprint \"Deep columns:\"\nfor column in deep_columns:\n print column", "3. Instantiate a Wide and Deep Estimator\n<br/>\n<img valign=\"middle\" src=\"images/dnn-wide-deep.jpeg\">", "def create_estimator(params, run_config):\n \n wide_columns, deep_columns = create_feature_columns()\n \n estimator = tf.estimator.DNNLinearCombinedClassifier(\n\n n_classes=len(TARGET_LABELS),\n label_vocabulary=TARGET_LABELS,\n weight_column=WEIGHT_COLUMN_NAME,\n\n dnn_feature_columns=deep_columns,\n dnn_optimizer=tf.train.AdamOptimizer(\n learning_rate=params.learning_rate),\n dnn_hidden_units=params.hidden_units,\n dnn_dropout=params.dropout,\n dnn_activation_fn=tf.nn.relu,\n batch_norm=True,\n\n linear_feature_columns=wide_columns,\n linear_optimizer='Ftrl',\n\n config=run_config\n )\n \n return estimator", "4. Implement Train and Evaluate Experiment\n<img valign=\"middle\" src=\"images/tf-estimators.jpeg\" width=\"900\">\nDelete the model_dir file if you don't want a Warm Start\n* If not deleted, and you change the model, it will error.\nTrainSpec\n* Set shuffle in the input_fn to True\n* Set num_epochs in the input_fn to None\n* Set max_steps. One batch (feed-forward pass & backpropagation) \ncorresponds to 1 training step. \nEvalSpec\n* Set shuffle in the input_fn to False\n* Set Set num_epochs in the input_fn to 1\n* Set steps to None if you want to use all the evaluation data. 
\n* Otherwise, set steps to the number of batches you want to use for evaluation, and set shuffle to True.\n* Set start_delay_secs to 0 to start evaluation as soon as a checkpoint is produced.\n* Set throttle_secs to 0 to re-evaluate as soon as a new checkpoint is produced.", "def run_experiment(estimator, params, run_config, \n resume=False, train_hooks=None, exporters=None):\n\n print \"Resume training {}: \".format(resume)\n print \"Epochs: {}\".format(epochs)\n print \"Batch size: {}\".format(params.batch_size)\n print \"Steps per epoch: {}\".format(steps_per_epoch)\n print \"Training steps: {}\".format(params.max_steps)\n print \"Learning rate: {}\".format(params.learning_rate)\n print \"Hidden Units: {}\".format(params.hidden_units)\n print \"Dropout probability: {}\".format(params.dropout)\n print \"Save a checkpoint and evaluate afer {} step(s)\".format(run_config.save_checkpoints_steps)\n print \"Keep the last {} checkpoint(s)\".format(run_config.keep_checkpoint_max)\n print \"\"\n \n tf.logging.set_verbosity(tf.logging.INFO)\n\n if not resume: \n if tf.gfile.Exists(run_config.model_dir):\n print \"Removing previous artefacts...\"\n tf.gfile.DeleteRecursively(run_config.model_dir)\n else:\n print \"Resuming training...\"\n\n # Create train specs.\n train_spec = tf.estimator.TrainSpec(\n input_fn = make_input_fn(\n TRAIN_DATA_FILE,\n batch_size=params.batch_size,\n num_epochs=None, # Run until the max_steps is reached.\n shuffle=True\n ),\n max_steps=params.max_steps,\n hooks=train_hooks\n )\n\n # Create eval specs.\n eval_spec = tf.estimator.EvalSpec(\n input_fn = make_input_fn(\n EVAL_DATA_FILE,\n batch_size=params.batch_size, \n ),\n exporters=exporters,\n start_delay_secs=0,\n throttle_secs=0,\n steps=None # Set to limit number of steps for evaluation.\n )\n \n time_start = datetime.utcnow() \n print \"Experiment started at {}\".format(time_start.strftime(\"%H:%M:%S\"))\n print \".......................................\"\n \n # Run train and evaluate.\n tf.estimator.train_and_evaluate(\n estimator=estimator,\n train_spec=train_spec, \n eval_spec=eval_spec)\n\n time_end = datetime.utcnow() \n print \".......................................\"\n print \"Experiment finished at {}\".format(time_end.strftime(\"%H:%M:%S\"))\n print \"\"\n \n time_elapsed = time_end - time_start\n print \"Experiment elapsed time: {} seconds\".format(time_elapsed.total_seconds())", "Set Parameters and Run Configurations.\n\nSet model_dir in the run_config\nIf the data size is known, training steps, with respect to epochs would be: (training_size / batch_size) * epochs \nBy default, a checkpoint is saved every 600 secs. That is, the model is evaluated only every 10mins. 
\nTo change this behaviour, set one of the following parameters in the run_config\nsave_checkpoints_secs: Save checkpoints every this many seconds.\n\nsave_checkpoints_steps: Save checkpoints every this many steps.\n\n\nSet the number of the checkpoints to keep using keep_checkpoint_max", "class Parameters():\n pass\n\nMODELS_LOCATION = 'gs://ksalama-gcs-cloudml/others/models/census'\nMODEL_NAME = 'dnn_classifier'\nmodel_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)\nos.environ['MODEL_DIR'] = model_dir\n\nTRAIN_DATA_SIZE = 32561\n\nparams = Parameters()\nparams.learning_rate = 0.001\nparams.hidden_units = [128, 128, 128]\nparams.dropout = 0.15\nparams.batch_size = 128\n\n# Set number of steps with respect to epochs.\nepochs = 5\nsteps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / params.batch_size))\nparams.max_steps = steps_per_epoch * epochs\n\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=RANDOM_SEED,\n save_checkpoints_steps=steps_per_epoch, # Save a checkpoint after each epoch, evaluate the model after each epoch.\n keep_checkpoint_max=3, # Keep the 3 most recently produced checkpoints.\n model_dir=model_dir,\n save_summary_steps=100, # Summary steps for Tensorboard.\n log_step_count_steps=50\n)", "Run Experiment", "if COLAB:\n from tensorboardcolab import *\n TensorBoardColab(graph_path=model_dir)\n\nestimator = create_estimator(params, run_config)\nrun_experiment(estimator, params, run_config)\n\nprint model_dir\n!gsutil ls {model_dir}", "5. Export your trained model\nImplement serving input receiver function", "def make_serving_input_receiver_fn():\n inputs = {}\n for feature_name in FEATURE_NAMES:\n dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string\n inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)\n \n # What is wrong here? 
\n return tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)", "Export to saved_model", "export_dir = os.path.join(model_dir, 'export')\n\n# Delete export directory if exists.\nif tf.gfile.Exists(export_dir):\n tf.gfile.DeleteRecursively(export_dir)\n\n# Export the estimator as a saved_model.\nestimator.export_savedmodel(\n export_dir_base=export_dir,\n serving_input_receiver_fn=make_serving_input_receiver_fn()\n)\n\n!gsutil ls gs://ksalama-gcs-cloudml/others/models/census/dnn_classifier/export/1552582374\n\n%%bash\n\nsaved_models_base=${MODEL_DIR}/export/\nsaved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)\nsaved_model_cli show --dir=${saved_model_dir} --all", "Test saved_model", "export_dir = os.path.join(model_dir, 'export')\ntf.gfile.ListDirectory(export_dir)[-1]\nsaved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])\nprint(saved_model_dir)\nprint \"\"\n\npredictor_fn = tf.contrib.predictor.from_saved_model(\n export_dir = saved_model_dir,\n signature_def_key=\"predict\"\n)\n\noutput = predictor_fn(\n {\n 'age': [34.0],\n 'workclass': ['Private'],\n 'education': ['Doctorate'],\n 'education_num': [10.0],\n 'marital_status': ['Married-civ-spouse'],\n 'occupation': ['Prof-specialty'],\n 'relationship': ['Husband'],\n 'race': ['White'],\n 'gender': ['Male'],\n 'capital_gain': [0.0], \n 'capital_loss': [0.0], \n 'hours_per_week': [40.0],\n 'native_country':['Egyptian']\n }\n)\nprint(output)", "Export the Model during Training and Evaluation\nSaved models are exported under <model_dir>/export/<folder_name>.\n* Latest Exporter: exports a model after each evaluation.\n * specify the maximum number of exported models to keep using exports_to_keep param.\n* Final Exporter: exports only the very last evaluated checkpoint. of the model.\n* Best exporter: runs everytime when the newly evaluted checkpoint is better than any exsiting model.\n * specify the maximum number of exported models to keep using exports_to_keep param.\n * It uses the evaluation events stored under the eval folder.", "def _accuracy_bigger(best_eval_result, current_eval_result):\n \n metric = 'accuracy'\n return best_eval_result[metric] < current_eval_result[metric]\n\n\nparams.max_steps = 1000\nparams.hidden_units = [128, 128]\nparams.dropout = 0\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=RANDOM_SEED,\n save_checkpoints_steps=200,\n keep_checkpoint_max=1,\n model_dir=model_dir,\n log_step_count_steps=50\n)\n\nexporter = tf.estimator.BestExporter(\n compare_fn=_accuracy_bigger,\n event_file_pattern='eval_{}/*.tfevents.*'.format(datetime.utcnow().strftime(\"%H%M%S\")),\n name=\"estimate\", # Saved models are exported under /export/estimate/\n serving_input_receiver_fn=make_serving_input_receiver_fn(),\n exports_to_keep=1\n)\n\nestimator = create_estimator(params, run_config)\nrun_experiment(estimator, params, run_config, exporters = [exporter])\n\n!gsutil ls {model_dir}/export/estimate", "6. 
Early Stopping\n\nstop_if_higher_hook \nstop_if_lower_hook \nstop_if_no_increase_hook\nstop_if_no_decrease_hook", "early_stopping_hook = tf.contrib.estimator.stop_if_no_increase_hook(\n estimator,\n 'accuracy',\n max_steps_without_increase=100,\n run_every_secs=None,\n run_every_steps=500\n)\n\nparams.max_steps = 1000000\nparams.hidden_units = [128, 128]\nparams.dropout = 0\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=RANDOM_SEED,\n save_checkpoints_steps=500,\n keep_checkpoint_max=1,\n model_dir=model_dir,\n log_step_count_steps=100\n)\n\nrun_experiment(estimator, params, run_config, exporters = [exporter], train_hooks=[early_stopping_hook])", "7. Using Distribution Strategy for Utilising Multiple GPUs", "strategy = None\nnum_gpus = len([device_name for device_name in tf.contrib.eager.list_devices()\n if '/device:GPU' in device_name])\n\nprint \"GPUs available: {}\".format(num_gpus)\n\nif num_gpus > 1:\n strategy = tf.distribute.MirroredStrategy()\n params.batch_size = int(math.ceil(params.batch_size / num_gpus))\n\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=RANDOM_SEED,\n save_checkpoints_steps=200,\n model_dir=model_dir,\n train_distribute=strategy\n)\n\nestimator = create_estimator(params, run_config)\nrun_experiment(estimator, params, run_config)", "8. Extending a Premade Estimator\nAdd an evaluation metric\n\ntf.metrics\ntf.estimator.add_metric", "def metric_fn(labels, predictions):\n\n metrics = {}\n \n label_index = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS)).lookup(labels)\n one_hot_labels = tf.one_hot(label_index, len(TARGET_LABELS))\n\n metrics['mirco_accuracy'] = tf.metrics.mean_per_class_accuracy(\n labels=label_index,\n predictions=predictions['class_ids'],\n num_classes=2)\n \n metrics['f1_score'] = tf.contrib.metrics.f1_score(\n labels=one_hot_labels,\n predictions=predictions['probabilities'])\n\n return metrics\n\nparams.max_steps = 1\nestimator = create_estimator(params, run_config)\nestimator = tf.contrib.estimator.add_metrics(estimator, metric_fn)\nrun_experiment(estimator, params, run_config)", "Add Forward Features\n\ntf.estimator.forward_features\nThis is very useful for batch prediction, in order to make instances to their predictions", "estimator = tf.contrib.estimator.forward_features(estimator, keys=\"row_identifier\")\n\ndef make_serving_input_receiver_fn():\n inputs = {}\n for feature_name in FEATURE_NAMES:\n dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string\n inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)\n \n processed_inputs,_ = process_features(inputs, None)\n processed_inputs['row_identifier'] = tf.placeholder(shape=[None], dtype=tf.string)\n return tf.estimator.export.build_raw_serving_input_receiver_fn(processed_inputs)\n\nexport_dir = os.path.join(model_dir, 'export')\n\nif tf.gfile.Exists(export_dir):\n tf.gfile.DeleteRecursively(export_dir)\n \nestimator.export_savedmodel(\n export_dir_base=export_dir,\n serving_input_receiver_fn=make_serving_input_receiver_fn()\n)\n\n%%bash\n\nsaved_models_base=${MODEL_DIR}/export/\nsaved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)\nsaved_model_cli show --dir=${saved_model_dir} --all\n\nexport_dir = os.path.join(model_dir, 'export')\ntf.gfile.ListDirectory(export_dir)[-1]\nsaved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])\nprint(saved_model_dir)\nprint \"\"\n\npredictor_fn = tf.contrib.predictor.from_saved_model(\n export_dir = saved_model_dir,\n 
signature_def_key=\"predict\"\n)\n\noutput = predictor_fn(\n { 'row_identifier': ['key0123'],\n 'age': [34.0],\n 'workclass': ['Private'],\n 'education': ['Doctorate'],\n 'education_num': [10.0],\n 'marital_status': ['Married-civ-spouse'],\n 'occupation': ['Prof-specialty'],\n 'relationship': ['Husband'],\n 'race': ['White'],\n 'gender': ['Male'],\n 'capital_gain': [0.0], \n 'capital_loss': [0.0], \n 'hours_per_week': [40.0],\n 'native_country':['Egyptian']\n }\n)\nprint(output)", "9. Adaptive learning rate\n\nexponential_decay\nconsine_decay\nlinear_cosine_decay\nconsine_decay_restarts\npolynomial decay\npiecewise_constant_decay", "def create_estimator(params, run_config):\n \n wide_columns, deep_columns = create_feature_columns()\n \n def _update_optimizer(initial_learning_rate, decay_steps):\n \n # learning_rate = tf.train.exponential_decay(\n # initial_learning_rate,\n # global_step=tf.train.get_global_step(),\n # decay_steps=decay_steps,\n # decay_rate=0.9\n # )\n\n learning_rate = tf.train.cosine_decay_restarts(\n initial_learning_rate,\n tf.train.get_global_step(),\n first_decay_steps=50,\n t_mul=2.0,\n m_mul=1.0,\n alpha=0.0,\n )\n\n tf.summary.scalar('learning_rate', learning_rate)\n\n return tf.train.AdamOptimizer(learning_rate=learning_rate)\n \n estimator = tf.estimator.DNNLinearCombinedClassifier(\n\n n_classes=len(TARGET_LABELS),\n label_vocabulary=TARGET_LABELS,\n weight_column=WEIGHT_COLUMN_NAME,\n\n dnn_feature_columns=deep_columns,\n dnn_optimizer=lambda: _update_optimizer(params.learning_rate, params.max_steps),\n dnn_hidden_units=params.hidden_units,\n dnn_dropout=params.dropout,\n batch_norm=True,\n \n linear_feature_columns=wide_columns,\n linear_optimizer='Ftrl',\n \n config=run_config\n )\n \n return estimator\n\nparams.learning_rate = 0.1\nparams.max_steps = 1000\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=RANDOM_SEED,\n save_checkpoints_steps=200,\n model_dir=model_dir,\n)\n\nif COLAB:\n from tensorboardcolab import *\n TensorBoardColab(graph_path=model_dir)\n\nestimator = create_estimator(params, run_config)\nrun_experiment(estimator, params, run_config)", "License\nAuthor: Khalid Salama\n\nDisclaimer: This is not an official Google product. The sample code provided for an educational purpose.\n\nCopyright 2019 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
timothydmorton/usrp-sciprog
day3/Astropy-Demo.ipynb
mit
[ "Using variables with units\nWe'll use astropy to unburden us from common calculations (and equally common mistakes when doing them by hand).\nastropy has a fair amount of packages, so check of the docs: http://docs.astropy.org/en/latest/", "kpc_to_km = 3.086E16\ndistance = 1. # kpc\ndistance * kpc_to_km\n\ntype(distance)\n\nimport astropy.units as u\ndistance_q = 1 * u.kpc\ntype(distance_q)\n\ndistance_q.to(u.km)\n\ndistance_q.to(u.jupiterRad)\n\ndistance_M = 1 * u.Mpc\ndistance_q + distance_M", "Coordinate transformations\nCoordinates in astronomy often come in equatorial coordinates, specified by right ascension (RA) and declination (DEC).", "import astropy.coordinates as coord\n\nc1 = coord.SkyCoord(ra=150*u.degree, dec=-17*u.degree)\nc2 = coord.SkyCoord(ra='21:15:32.141', dec=-17*u.degree, unit=(u.hourangle,u.degree))", "If we wanted this coordinate on the celestial sphere to another system (of the celestial sphere), which is tied to our Galaxy, we can do this:", "c1.transform_to(coord.Galactic)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/image_classification/solutions/3_tf_hub_transfer_learning.ipynb
apache-2.0
[ "TensorFlow Transfer Learning\nThis notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.\nLearning Objectives\n\nKnow how to apply image augmentation\nKnow how to download and use a TensorFlow Hub module as a layer in Keras.", "import os\nimport pathlib\nfrom PIL import Image\n\nimport IPython.display as display\nimport matplotlib.pylab as plt\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import (\n Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)\nimport tensorflow_hub as hub", "Exploring the data\nAs usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.\nThe below tf.keras.utils.get_file command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.", "data_dir = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)\n\n# Print data path\nprint(\"cd\", data_dir)", "We can use python's built in pathlib tool to get a sense of this unstructured data.", "data_dir = pathlib.Path(data_dir)\n\nimage_count = len(list(data_dir.glob('*/*.jpg')))\nprint(\"There are\", image_count, \"images.\")\n\nCLASS_NAMES = np.array(\n [item.name for item in data_dir.glob('*') if item.name != \"LICENSE.txt\"])\nprint(\"These are the available classes:\", CLASS_NAMES)", "Let's display the images so we can see what our model will be trying to learn.", "roses = list(data_dir.glob('roses/*'))\n\nfor image_path in roses[:3]:\n display.display(Image.open(str(image_path)))", "Building the dataset\nKeras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.\nWe have already prepared these images to be stored on the cloud in gs://cloud-ml-data/img/flower_photos/. The images are randomly split into a training set with 90% data and an iterable with 10% data listed in CSV files:\nTraining set: train_set.csv\nEvaluation set: eval_set.csv \nExplore the format and contents of the train.csv by running:", "!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n | head -5 > /tmp/input.csv\n!cat /tmp/input.csv\n\n!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | \\\n sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt\n!cat /tmp/labels.txt", "Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.\nThankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. 
Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.\nWe'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.", "IMG_HEIGHT = 224\nIMG_WIDTH = 224\nIMG_CHANNELS = 3\n\nBATCH_SIZE = 32\n# 10 is a magic number tuned for local training of this dataset.\nSHUFFLE_BUFFER = 10 * BATCH_SIZE\nAUTOTUNE = tf.data.experimental.AUTOTUNE\n\nVALIDATION_IMAGES = 370\nVALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE\n\ndef decode_img(img, reshape_dims):\n # Convert the compressed string to a 3D uint8 tensor.\n img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)\n # Use `convert_image_dtype` to convert to floats in the [0,1] range.\n img = tf.image.convert_image_dtype(img, tf.float32)\n # Resize the image to the desired size.\n return tf.image.resize(img, reshape_dims)", "Is it working? Let's see!\nTODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.", "img = tf.io.read_file(\n \"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg\")\n\n# Uncomment to see the image string.\n#print(img)\nimg = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])\nplt.imshow((img.numpy()));", "One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.", "def decode_csv(csv_row):\n record_defaults = [\"path\", \"flower\"]\n filename, label_string = tf.io.decode_csv(csv_row, record_defaults)\n image_bytes = tf.io.read_file(filename=filename)\n label = tf.math.equal(CLASS_NAMES, label_string)\n return image_bytes, label", "Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few:\n\ntf.image.random_crop - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.\ntf.image.random_flip_left_right - Randomly flips the image horizontally\ntf.image.random_brightness - Randomly adjusts how dark or light the image is.\ntf.image.random_contrast - Randomly adjusts image contrast.\n\nTODO 1.b: Add the missing parameters from the random augment functions.", "MAX_DELTA = 63.0 / 255.0 # Change brightness by at most 17.7%\nCONTRAST_LOWER = 0.2\nCONTRAST_UPPER = 1.8\n\n\ndef read_and_preprocess(image_bytes, label, random_augment=False):\n if random_augment:\n img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])\n img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])\n img = tf.image.random_flip_left_right(img)\n img = tf.image.random_brightness(img, MAX_DELTA)\n img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)\n else:\n img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])\n return img, label\n\n\ndef read_and_preprocess_with_augment(image_bytes, label):\n return read_and_preprocess(image_bytes, label, random_augment=True)", "Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.\n.cache is key here. 
It will store the dataset in memory", "def load_dataset(csv_of_filenames, batch_size, training=True):\n dataset = tf.data.TextLineDataset(filenames=csv_of_filenames) \\\n .map(decode_csv).cache()\n\n if training:\n dataset = dataset \\\n .map(read_and_preprocess_with_augment) \\\n .shuffle(SHUFFLE_BUFFER) \\\n .repeat(count=None) # Indefinately.\n else:\n dataset = dataset \\\n .map(read_and_preprocess) \\\n .repeat(count=1) # Each photo used once.\n\n # Prefetch prepares the next set of batches while current batch is in use.\n return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)", "We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.", "train_path = \"gs://cloud-ml-data/img/flower_photos/train_set.csv\"\ntrain_data = load_dataset(train_path, 1)\nitr = iter(train_data)", "TODO 1.c: Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?", "image_batch, label_batch = next(itr)\nimg = image_batch[0]\nplt.imshow(img)\nprint(label_batch[0])", "Note: It may take a 4-5 minutes to see result of different batches. \nMobileNetV2\nThese flower photos are much larger than handwritting recognition images in MNIST. They are about 10 times as many pixels per axis and there are three color channels, making the information here over 200 times larger!\nHow do our current techniques stand up? Copy your best model architecture over from the <a href=\"2_mnist_models.ipynb\">MNIST models lab</a> and see how well it does after training for 5 epochs of 50 steps.\nTODO 2.a Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.", "eval_path = \"gs://cloud-ml-data/img/flower_photos/eval_set.csv\"\nnclasses = len(CLASS_NAMES)\nhidden_layer_1_neurons = 400\nhidden_layer_2_neurons = 100\ndropout_rate = 0.25\nnum_filters_1 = 64\nkernel_size_1 = 3\npooling_size_1 = 2\nnum_filters_2 = 32\nkernel_size_2 = 3\npooling_size_2 = 2\n\nlayers = [\n Conv2D(num_filters_1, kernel_size=kernel_size_1,\n activation='relu',\n input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),\n MaxPooling2D(pooling_size_1),\n Conv2D(num_filters_2, kernel_size=kernel_size_2,\n activation='relu'),\n MaxPooling2D(pooling_size_2),\n Flatten(),\n Dense(hidden_layer_1_neurons, activation='relu'),\n Dense(hidden_layer_2_neurons, activation='relu'),\n Dropout(dropout_rate),\n Dense(nclasses),\n Softmax()\n]\n\nold_model = Sequential(layers)\nold_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)\n\nold_model.fit_generator(\n train_ds,\n epochs=5,\n steps_per_epoch=5,\n validation_data=eval_ds,\n validation_steps=VALIDATION_STEPS\n)", "If your model is like mine, it learns a little bit, slightly better then random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem then MNIST, so wouldn't we need a larger model? But how big do we need to make it?\nEnter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.\nTensorflow Hub is a database of models, many of which can be used for Transfer Learning. 
We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's compare how a model trained on ImageNet data compares to one built from scratch.\nThe tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand paramaters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.\nTODO 2.b: Add a Hub Keras Layer at the top of the model using the handle provided.", "module_selection = \"mobilenet_v2_100_224\"\nmodule_handle = \"https://tfhub.dev/google/imagenet/{}/feature_vector/4\" \\\n .format(module_selection)\n\ntransfer_model = tf.keras.Sequential([\n hub.KerasLayer(module_handle, trainable=False),\n tf.keras.layers.Dropout(rate=0.2),\n tf.keras.layers.Dense(\n nclasses,\n activation='softmax',\n kernel_regularizer=tf.keras.regularizers.l2(0.0001))\n])\ntransfer_model.build((None,)+(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))\ntransfer_model.summary()", "Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!\nMoment of truth. Let's compile this new model and see how it compares to our MNIST architecture.", "transfer_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)\n\ntransfer_model.fit(\n train_ds,\n epochs=5,\n steps_per_epoch=5,\n validation_data=eval_ds,\n validation_steps=VALIDATION_STEPS\n)", "Alright, looking better!\nStill, there's clear room to improve. Data bottlenecks are especially prevalent with image data due to the size of the image files. There's much to consider such as the computation of augmenting images and the bandwidth to transfer images between machines.\nThink life is too short, and there has to be a better way? In the next lab, we'll blast away these problems by developing a cloud strategy to train with TPUs!\nBonus Exercise\nKeras has a local way to do distributed training, but we'll be using a different technique in the next lab. Want to give the local way a try? Check out this excellent blog post to get started. Or want to go full-blown Keras? It also has a number of pre-trained models ready to use.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mabevillar/rmtk
rmtk/vulnerability/mdof_to_sdof/first_mode/first_mode.ipynb
agpl-3.0
[ "MDOF to equivalent SDOF using the first mode only\nThis IPython notebook converts a pushover curve for an MDOF system into an equivalent SDOF capacity curve, considering the first mode of vibration only. The supplied pushover curve, which is in terms of base shear and roof displacement, is transformed into an equivalent SDOF capacity curve, which is in terms of spectral acceleration and spectral displacement.\nNote that this method assumes that the first mode shape Φ has been normalised to unit amplitude at the roof, i.e. Φn = 1, where n denotes the roof level.\nThe user has the option to derive the yielding Sa and Sd, if needed, using an idealisation of the sdof capacity curve, either bilinear or quadrilinear. To do so set the variable idealised_type to 'quadrilinear' or 'bilinear', if the idealisation is not required then set it to 'none'.", "%matplotlib inline\nfrom rmtk.vulnerability.common import utils\nfrom rmtk.vulnerability.mdof_to_sdof.first_mode import first_mode\n\npushover_file = \"../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv\"\nidealised_type = 'quadrilinear'; # 'bilinear', 'quadrilinear' or 'none'\n\ncapacity_curves = utils.read_capacity_curves(pushover_file)\n[sdof_capacity_curves, sdof_idealised_capacity] = first_mode.mdof_to_sdof(capacity_curves, idealised_type)", "Save capacity curves\nPlease define what capacity curve should be saved assigning the variable capacity_to_save one of the following:\n1. capacity_to_save = sdof_idealised_capacity: idealised capacity curve is saved. If idealised_type was previously set to none, an error will be raised because the variable sdof_idealised_capacity is empty.\n2. capacity_to_save = sdof_capacity_curves: full capacity curve is saved.", "capacity_to_save = sdof_idealised_capacity\n\nutils.save_SdSa_capacity_curves(capacity_to_save,'../../../../../rmtk_data/capacity_curves_sdof_first_mode.csv')\n\nif idealised_type is not 'none':\n idealised_capacity = utils.idealisation(idealised_type, sdof_capacity_curves)\n utils.plot_idealised_capacity(idealised_capacity, sdof_capacity_curves, idealised_type)\nelse:\n utils.plot_capacity_curves(capacity_curves)\n utils.plot_capacity_curves(sdof_capacity_curves)", "Defined deformed shape for converting ISD damage model\nThis function allows to define the relationship between the maximum Inter-Storey Drift (ISD) along the building height and spectral displacement. This relationship serves the purpuse of converting interstorey drift damage thresholds to spectral displacement damage threshold, if damage_model['type']=interstorey drift of the MDOF system wants to be used for the equivalent SDOF system. \nIf capacity_curves['type'] = 'Vb-dfloor' the relationship is extracted from the displacement at each storey, otherwise a linear relationship is assumed.", "deformed_shape_file = \"../../../../../rmtk_data/ISD_Sd.csv\"\n\n[ISD_vectors, Sd_vectors] = first_mode.define_deformed_shape(capacity_curves, deformed_shape_file)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
psas/composite-propellant-tank
Analysis/Calculations/Shrink Fit and Liner as Gasket Analysis.ipynb
gpl-3.0
[ "Analysis of Sealing Potential of Liner Captured Between Aluminum Rings Via Shrink Fit.\nFirst Objective:\nDetermine how much pressure will be placed on the liner at both room and cryogenic temperatures, based on the aluminum outer ring's inner diameter, an aluminum end cap's outer diameter, and the thickness of the PTFE liner. Compare this to the strength of the PTFE material at each of these temperatures.\nSecond Objective:\nIf the liner compression is at or exceeds the load necessary to seal the tank, determine the shrink fit pressure at the aluminum interface that can be achieved while maintaining this liner pressure. \nAssumptions:\n\n\nSince little theory has been found governing the sealing potential of radially loaded gaskets in the shape of an annulus, we take the approach that we can break down the elements of the ASME pressure vessel code equations and apply them to our geometry.\n\n\nThe PTFE's strength is not sufficient to prevent the shrunken aluminum from returning to its original room temperature size, or deform either of the aluminum parts. In other words, the aluminum parts are rigid bodies moving relative to the PTFE as they expand or contract.\n\n\nThe interface of the aluminum parts is at the inner diameter of the outer aluminum ring, regardless of intereference thickness. ($R = D_{i,ring}/2$)\n\n\nStress concentrations in the liner that occur at the ends of the aluminum parts can be ignored.\n\n\nThe internal pressure of the vessel will have a negligible effect on the portion of the liner that is compressed between shrink-fitted aluminum parts.", "# Import packages here:\n\nimport math as m\nimport numpy as np\nfrom IPython.display import Image\nimport matplotlib.pyplot as plt\n\n# Properties of Materials (engineeringtoolbox.com, Cengel, Tian, DuPont, http://www.dtic.mil/dtic/tr/fulltext/u2/438718.pdf)\n\n# Coefficient of Thermal Expansion\nalphaAluminum = 0.0000131 # in/in/*F\nalphaPTFE = 0.0000478 # in/in/*F (over the range in question)\n\n# Elastic Moduli\n\nEAluminum = 10000000 # psi\nEAluminumCryo = 11000000 # psi\nEPTFE = 40000 # psi\nEPTFECryo = 500000 # psi\n\n# Yield Strength\nsigmaY_PTFE = 1300 # psi\nsigmaY_PTFECryo = 19000 # psi\n\n# Poisson's Ratio\n\nnuAluminum = 0.33 # in/in\nnuPTFE = 0.46 # in/in\n\n# Temperature Change Between Ambient and LN2 \n\nDeltaT = 389 # *F\n\n# Geometry of Parts\n\n# Main Ring Outer Radius\nroMain = 2.0000 # in\n\n# End Cap Inner Radius\nriCap = 1.3750 # in\n\n# Interfacial Radius\nr = 1.5000 # in\n\n# Liner Thickness\nt = 0.125 # in", "ASME Pressure Vessel Code Equations\nThis code was developed for flat gaskets compressed between flanges. There are two equations, both give a value for the total bolt force needed to compress a gasket such that a seal is achieved at working pressure. Whichever equation gives a greater load for a specific design application is the one to be used. The following are the equations as they appear in Gaskets. Design, Selection, and Testing by Daniel E. 
Czernik:\n$$W_{m_2} = \\pi bGy$$ \n$$W_{m_1} = \\frac{\\pi}{4}G^2P + 2\\pi bGmP$$\nWhere $W_{m_1}$ and $W_{m_2}$ are total bolt loads in pounds, $b$ is the effective gasket contact seating width in inches, $G$ is the mean diameter of the gasket contact face in inches, $y$ is the gasket contact surface unit seating load in pounds per square inch, $m$ is a dimensionless gasket factor, and $P$ is the maximum working internal pressure of the vessel.\nFor a typical round gasket, the effective gasket contact seating width is the difference between the outer and inner radii, or $r_o - r_i$. The mean diameter of such a gasket is simply the sum of the inner and outer radii, or $r_o + r_i$. When multiplied together, the product is the difference of the squares of the radii: $(r_o - r_i)(r_o + r_i) = (r_o^2 - r_i^2)$\nThis leads to the conslusion that $\\pi bG = \\pi (r_o^2 - r_i^2)$ is simply the area of one side of the gasket. This means that the $y$ is the only part of the first equation that we need for our analysis. We need at least enough pressure from the shrink fit to reach the seating load.\nFor the second equation, the first term is simply the prodcut of the internal working pressure and the cross sectional area normal to the central axis of the tank. In other words, this is the bolt load needed to prevent the ends from losing contact with the rest of the tank. Since we are loading the liner in the radial direction, we can also remove this term from our analysis of the liner as a gasket. The second term is similar to the first equation, but has a factor of two to account for both surfaces of the gasket that provide paths for escaping fluid, and $y$ is replaced with $mP$. Values for $m$ and $y$ appear to be imperically determined for different materials of varied thicknesses, and are found in tables provided by the ASME or material manufacturer. Dupont's PTFE Handbook lists PTFE as having a gasket factor, $m$, of 2.00 and a seating load, $y$, of 1200 psi for a 1/8\" gasket. Because there will only be a fluid path on one side of the liner, we assume the factor of two can be removed. It should be noted that $y$ is for PTFE at ambient temperature, and is said to have the benefit of an operating temperature range from cryogenic to 450 &deg;C, but an estimate for a seating load at cryogenic temperature may need to be developed. \nSince the pressure vessel code equations are for bolt load (force), and not stress in the gasket directly, both sides of each can be divided by area. This simplifies the equations to suit our needs. This is similar to the simplified procedure found on page 111 of Gaskets by Czernik. The modified equations we will then apply to our gasket geometry are as follows:\n$$\\sigma_{PTFE, amb1} = y$$\n$$\\sigma_{PTFE, amb2} = mP$$\nSince our operating pressure is only 45 psi, the first equation gives by far the greatest value.\nEstimation of Minimum Seating Stress for PTFE at Cryogenic Temperature\nI don't know what to do with this thing, or even if it is necessary. I may return to this another time\nCzernik gives ranges of values for minimum seating stress for some common metals and flat gaskets (i.e. not corrugated) in Table 3.4. These values are:\n\nAluminum: 10000 - 200000 psi\nCopper: 15000 - 45000 psi\nCarbon Steel: 30000 - 70000 psi\nStainless Steel: 35000 - 95000 psi\n\nThese ranges account for variations in hardness or yield strength. 
Given some values found at Engineering Toolbox, we can find a value for what percent of yield strength these values are on average, and use that to determine a seating stress for PTFE at cryogenic temperature. The following is a list of nominal yield stresses from Engineering Toolbox:\n\nAluminum: 13778 psi\nCopper: 10152 psi\nCarbon Steel: 36258 psi\nStainless Steel: 72806 psi", "m = 2.00\nP = 45 # psi\nyAmbient = 1200 # psi\nsigmaPTFEAmbient1 = yAmbient\nsigmaPTFEAmbient2 = m*P\nsigmaPTFEAmbient = sigmaPTFEAmbient1", "Change in Liner Thickness Necessary to Achieve Seating Stress\nThe radial stress due to the compression of the liner follows Hooke's Law:\n$$\sigma_{PTFE, amb} = \frac{\delta_{Liner, amb}}{t_{amb}}E_{PTFE, amb}$$\nWhere $t_{amb}$ is the liner thickness at ambient temperature before compression.\nSolving this equation for the change in liner thickness yields:\n$$\delta_{Liner, amb} = \frac{\sigma_{PTFE, amb}}{E_{PTFE, amb}}t_{amb}$$", "deltaLinerAmbient = (sigmaPTFEAmbient/EPTFE)*t\nprint('The change in liner thickness due to compression must be', \"%.4f\" % deltaLinerAmbient, 'in, in order to achieve a proper seal.')", "To know if this can be achieved, we must examine how much we can actually shrink the end cap, and whether or not that will allow enough clearance to fit the end cap into place before expansion. \nMaximum Thermal Contraction of End Cap\nThermal expansion/contraction can be thought of as a scaling of the position vectors of all the points in a body of uniform composition relative to its centroid. The thermal change in radius of the end cap is thus given by the following linear thermal expansion relationship:\n$$r_{cryo} = r_{amb} - r_{amb}\alpha_{Al}\Delta T$$\nThe maximum change in radius is then simply the absolute value of the thermal change in radius from ambient to cryogenic temperature:\n$$\Delta r = r_{amb}\alpha_{Al}\Delta T$$", "rCryo = r - r*alphaAluminum*DeltaT\nDeltaR = r - rCryo\n\nprint('The maximum change in end cap radius equals: ', \"%.4f\" % DeltaR, 'in')\nprint('This means that the maximum theoretical interference for the shrink fit is ', \"%.4f\" % DeltaR, 'in')", "Clearance for the End Cap\nThe above number does not account for some clearance to allow the end cap to slide into the liner. According to Engineering Toolbox, a 3\" diameter hole needs a minimum of 0.0025\" of clearance to allow a shaft through with a free fit. This means that 0.00125\" must be subtracted from the interference shrink fit to arrive at an achievable change in liner thickness due to shrink fitting. Thus:\n$$\delta_{Liner, amb, max} = \Delta r - 0.00125\"$$", "deltaLinerAmbientMax = DeltaR - 0.00125\n\nprint('The achievable ambient temperature change in liner thickness due to shrink fitting is', \"%.4f\" % deltaLinerAmbientMax, 'in')", "The necessary change in liner thickness is less than the achievable change in liner thickness. \nAccording to the ASME pressure vessel code, the seal we need is achievable. However, the thermal contraction in the liner and the gap between aluminum rings that captures the liner, as well as the enormous change in yield strength and elastic modulus for PTFE when going to cryogenic temperatures may pose a problem. 
To be thorough, let's take a look at the liner stress under cryogenic conditions.\nPressure Exerted on Liner at Cryogenic Temperature\nThe liner will contract at cryogenic temperature, which will serve to reduce its stress due to the shrink fit, while the increase in elastic modulus at that temperature will increase its stress. This means that the liner will have a different stress state one once the tank is filled with LN2. The liner thickness at cryogenic temperature is:\n$$t_{cryo} = t_{amb} - t_{amb}\\alpha_{PTFE}\\Delta T$$\nThe thermal contraction of the liner thickness is given by:\n$$\\delta_t = t_{amb}\\alpha_{PTFE}\\Delta T$$\nThe gap between aluminum rings will contract as well, leaving slightly less room for the liner at cryogenic temperature. The gap size at ambient temperature is specified to be the difference between the liner thickness and the change in liner thickness:\n$$t_{gap} = t_{amb} - \\delta_{Liner, amb}$$\nThe change in gap width is:\n$$\\delta_{gap} = t_{gap}\\alpha_{Al}\\Delta T$$\nThe change in liner thickness at cryogenic temperature is then given by:\n$$\\delta_{Liner, cryo} = \\delta_{Liner, amb} + \\delta_{gap} - \\delta_t$$\nThe Liner's radial stress at cryogenic temperature is given by:\n$$\\sigma_{PTFE, cryo} = \\frac{\\delta_{Liner, cryo}}{t_{cryo}}E_{PTFE, cryo}$$", "tCryo = t - t*alphaPTFE*DeltaT\nprint ('The liner thickness at cryogenic temperature is', \"%.4f\" % tCryo,'in')\ndeltat = t*alphaPTFE*DeltaT\nprint ('The change in liner thickness due to thermal contraction is', \"%.4f\" % deltat, 'in')\ntGap = t - deltaLinerAmbient\nprint ('The ambient temperature liner gap width is', \"%.4f\" % tGap, 'in')\ndeltaGap = tGap*alphaAluminum*DeltaT\nprint ('The change in gap width is', \"%.4f\" % deltaGap, 'in')\ndeltaLinerCryo = deltaLinerAmbient + deltaGap - deltat\nprint ('The total change in liner thickness at cryogenic temperature is', \"%.4f\" % deltaLinerCryo, 'in')\nsigmaPTFECryo = (deltaLinerCryo/tCryo)*EPTFECryo\nprint('Thus, the maximum achievable pressure exerted on the PTFE at cryogenic temperature is', \"%.2f\" % sigmaPTFECryo, 'psi')", "Although the load on the PTFE at cryogenic temperature is greater, the yield strength of the PTFE is much greater at 19000 psi. The ratio of load to yield strength at ambient temperature is then much higher than the ratio at cryogenic temperature. We do have a value to use for seating stress of cryogenic PTFE, so we must either trust the ASME code for our extreme temperature conditions, or imperically test for the seating stress, which we do not have time to do before this project is terminated. \nCan We Use the Excess Space Allowed By the Thermal Contraction of the End Cap to Hold It In Place, and Dispense With the Bolts?\nRight now the contact surface between the two aluminum parts is a cylidrical surface with a nominal diameter of 3 inches, and a height of 0.125 inches. Engineering Toolbox gives an aluminum-aluminum coefficient of friction of approximately 1.2 for clean, dry surfaces. 
This allows us to find the necessary pressure to secure the end cap in place using the shrink fit only and a factor of safety of 2. The contact area of the cylindrical interface is:\n$$A_{contact} = 2\pi rh$$\nThe normal force caused by the shrink fit is the product of the shrink fit contact pressure and contact area:\n$$F_N = P_{shrink}A_{contact} = 2P_{shrink}\pi rh$$\nThe friction force is then the product of the normal force and the coefficient of friction:\n$$F_{friction} = 2P_{shrink}\mu \pi rh$$\nThe force that the friction must overcome (with a factor of safety of 2) is \n$$F_{cap} = 2PA_{cap} = 2P\pi r^2$$\nEquating these forces and solving for the shrink fit pressure gives:\n$$P_{shrink} = \frac{Pr}{\mu h}$$\nShigley's Mechanical Engineering Design gives the shrink fit pressure as\n$$P_{shrink} = \frac{E_{Aluminum}\delta_{interference}}{2r^3}[\frac{(r_o^2 - r^2)(r^2 - r_i^2)}{r_o^2 - r_i^2}]$$\nEquating these and solving for the interference thickness, $\delta_{interference}$, yields the following equation:\n$$\delta_{interference} = \frac{2Pr^4}{\mu hE_{Aluminum}}[\frac{r_o^2 - r_i^2}{(r_o^2 - r^2)(r^2 - r_i^2)}]$$", "h = 0.125\nmu = 1.2\ndeltaInterference = ((2*P*r**4)/(mu*h*EAluminum))*((roMain**2 - riCap**2)/((roMain**2 - r**2)*(r**2 - riCap**2)))\nprint('The interference thickness needed to overcome the pressure force on the end caps is', \"%.4f\" % deltaInterference, 'in')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
criffy/aflengine
analysis/machine_learning/model.ipynb
gpl-3.0
[ "Modelling", "# match data with aggregated individual data\nimport pandas as pd\nmatch_path = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/player_data/data/matches_with_player_agg.csv'\nplayers_path = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/player_data/data/players_with_player_stat_totals.csv'\nmatches = pd.read_csv(match_path)\nplayers = pd.read_csv(players_path)", "Data Preparation\nFor the first iteration, we'll only use data after 2009. This is when most modern statistics began to be kept (though not all of them did).", "model_data = matches[matches['season'] >= 2010]", "To keep model simple, exclude draws. Mark them as victories for the away team instead.", "for idx, row in model_data.iterrows():\n if row['winner'] == 'draw':\n model_data.at[idx,'winner'] = 'away'", "We want to split the data into test and train in a stratified manner, i.e. we don't want to favour a certain season, or a part of the season. So we'll take a portion (25%) of games from each round.", "# How many games do we get per round?\nround_counts = {}\ncurr_round = 1\nmatches_in_round = 0\nfor idx,row in model_data.iterrows():\n \n if curr_round != row['round']:\n \n if matches_in_round not in round_counts:\n round_counts[matches_in_round] = 1\n else:\n round_counts[matches_in_round] += 1\n \n curr_round = row['round']\n matches_in_round = 1\n continue\n \n else:\n matches_in_round += 1\n \nround_counts\n\n# Taking a minimum 25% of each round\nfrom math import ceil\ntest_sample_size = {}\nfor num_games in round_counts:\n test_sample_size[num_games] = ceil(num_games/4)\n\nrounds_in_season = get_season_rounds(model_data)\nteams_in_season = get_season_teams(model_data)", "Create test and training data", "# test set\nfrom copy import deepcopy\n\ntest_data = pd.DataFrame()\nfor season, max_round in rounds_in_season.items():\n for rnd in range(1, max_round):\n round_matches = model_data[(model_data['season']==season) & (model_data['round']==rnd)]\n num_test = test_sample_size[len(round_matches)]\n round_test_set = round_matches.sample(num_test)\n test_data = test_data.append(round_test_set)\n \n# training set\ntraining_data = model_data.drop(test_data.index)", "Capture all of the 'diff' columns in the model, too", "diff_cols = [col for col in model_data.columns if col[0:4] == 'diff']", "Define features", "features = [col \n for col \n in ['h_career_' + col for col in player_cols_to_agg] + \\\n ['h_season_' + col for col in player_cols_to_agg] + \\\n ['a_career_' + col for col in player_cols_to_agg] + \\\n ['a_season_' + col for col in player_cols_to_agg] + \\\n ['h_' + col for col in ladder_cols] + \\\n ['h_' + col + '_form' for col in ladder_cols] + \\\n ['a_' + col for col in ladder_cols] + \\\n ['a_' + col + '_form' for col in ladder_cols] + \\\n ['h_career_' + col for col in misc_columns] + \\\n ['h_season_' + col for col in misc_columns] + \\\n ['a_career_' + col for col in misc_columns] + \\\n ['a_season_' + col for col in misc_columns] + \\\n diff_cols\n ]\n\n# REMOVE PERCENTAGE FOR NOW\nfeatures.remove('h_percentage')\nfeatures.remove('a_percentage')\nfeatures.remove('diff_percentage')\n\ntarget = 'winner'", "Set up test and train datasets", "X_train = training_data[features]\ny_train = training_data[target]\nX_test = test_data[features]\ny_test = test_data[target]", "Fill the NaN values", "X_train.fillna(0,inplace=True)\ny_train.fillna(0,inplace=True)\nX_test.fillna(0,inplace=True)\ny_test.fillna(0,inplace=True)", "Modelling\nModel 1: Logistic 
regression", "from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import GridSearchCV\nimport numpy as np\n\nlog_reg = LogisticRegression()\n\nparam_grid = {\n 'tol': [.0001, .001, .01],\n 'C': [.1, 1, 10],\n 'max_iter': [50,100,200]\n }\n\ngrid_log_reg = GridSearchCV(log_reg, param_grid, cv=5)\ngrid_log_reg.fit(X_train, y_train)\n\ngrid_log_reg.score(X_train,y_train)\n\ngrid_log_reg.score(X_test,y_test)\n\n# Confirm that it's not just picking the home team\nprint(sum(grid_log_reg.predict(X_test)=='away'))\nprint(sum(grid_log_reg.predict(X_test)=='home'))", "Model 2: using less features", "diff_cols = [col for col in model_data.columns if col[0:4] == 'diff']\n\nfeatures = diff_cols\n\n# REMOVE PERCENTAGE FOR NOW\ndiff_cols.remove('diff_percentage')\n\ntarget = 'winner'\n\nX_train_2 = training_data[diff_cols]\ny_train_2 = training_data[target]\nX_test_2 = test_data[diff_cols]\ny_test_2 = test_data[target]\n\n#X_train_2 = X_train_2[features]\n#y_train_2 = y_train_2[features]\n#X_test_2 = X_test_2[features]\n#y_test_2 = y_test_2[features]\n\nX_train_2.fillna(0,inplace=True)\ny_train_2.fillna(0,inplace=True)\nX_test_2.fillna(0,inplace=True)\ny_test_2.fillna(0,inplace=True)\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import GridSearchCV\nimport numpy as np\n\nlog_reg_2 = LogisticRegression()\n\nparam_grid = {\n 'tol': [.0001, .001, .01],\n 'C': [.1, 1, 10],\n 'max_iter': [50,100,200]\n }\n\ngrid_log_reg_2 = GridSearchCV(log_reg_2, param_grid, cv=5)\ngrid_log_reg_2.fit(X_train_2, y_train_2)\n\ngrid_log_reg_2.score(X_train_2,y_train_2)\n\ngrid_log_reg_2.score(X_test_2,y_test_2)\n\ntraining_data[(training_data['round']==1) & (training_data['season']==2018)]", "Training model on all of the data\nGenerating predictions\nNow that we have a model, we need to ingest data for that model to make a prediction on.\nStart by reading in the fixture.", "fixture_path = '/Users/t_raver9/Desktop/projects/aflengine/tipengine/fixture2020.csv'\nfixture = pd.read_csv(fixture_path)\n\nfixture[fixture['round']==2]", "We'll then prepare the data for the round we're interested in. 
We'll do this by:\n- getting the team-level data, such as ladder position and form\n- getting the player-level data and aggregating it up to the team level\nTo get the player-level data, we also need to choose who is playing for each team.", "next_round_matches = get_upcoming_matches(matches,fixture,round_num=2)\n\nnext_round_matches", "Get the IDs for the players we'll be using", "import cv2 \nimport pytesseract\ncustom_config = r'--oem 3 --psm 6'\n\nimport pathlib\nnames_dir = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/OCR/images'\n\n# Initialise the dictionary\nplayer_names_dict = {}\nfor team in matches['hteam'].unique():\n player_names_dict[team] = []\n \n# Fill out the dictionary\nfor path in pathlib.Path(names_dir).iterdir():\n print(path)\n if path.name.split('.')[0] in player_names_dict:\n path_str = str(path)\n image_obj = cv2.imread(path_str)\n image_string = pytesseract.image_to_string(image_obj, config=custom_config)\n names = get_player_names(image_string)\n player_names_dict[path.name.split('.')[0]].extend(names)", "Try including Bachar Houli\nNow we can collect the data for each player and aggregate it to the team level, as we would with the training data", "from copy import deepcopy\n\nplayers_in_rnd = []\nfor _, v in player_names_dict.items():\n players_in_rnd.extend(v)\n\nplayer_data = get_player_data(players_in_rnd)\n\nplayers_in_rnd\n\naggregate = player_data[player_cols].groupby('team').apply(lambda x: x.mean(skipna=False))\n\n# Factor in any missing players\nnum_players_per_team = player_data[player_cols].groupby('team').count()['Supercoach']\nfor team in num_players_per_team.index:\n aggregate.loc[team] = aggregate.loc[team] * (22/num_players_per_team[team])\n \naggs_h = deepcopy(aggregate)\naggs_a = deepcopy(aggregate)\naggs_h.columns = aggregate.columns.map(lambda x: 'h_' + str(x))\naggs_a.columns = aggregate.columns.map(lambda x: 'a_' + str(x))\ncombined = next_round_matches.merge(aggs_h, left_on='hteam', right_on='team')\ncombined = combined.merge(aggs_a, left_on='ateam', right_on='team')\ncombined = get_diff_cols(combined)\n\npd.set_option('max_columns',500)", "Can now use this to make predictions", "X = combined[features]\n\nX['diff_wins_form']\n\ngrid_log_reg.decision_function(X)\n\ngrid_log_reg.predict_proba(X)\n\ngrid_log_reg.predict(X)\n\nZ = combined[diff_cols]\n\ngrid_log_reg_2.predict_proba(Z)\n\ngrid_log_reg_2.predict(Z)\n\ncombined[['ateam','hteam']]\n\ncombined[['h_percentage_form','a_percentage_form']]\n\ncombined[['h_career_games_played','a_career_games_played']]\n\ncombined[['h_wins_form','a_wins_form']]\n\nmodel_coef = grid_log_reg.best_estimator_.coef_\n\nX['diff_season_Supercoach']", "Glue these together and sort", "coef = []\nfor i in model_coef:\n for j in i:\n coef.append(abs(j))\n\nzipped = list(zip(features,coef))\n\nzipped.sort(key = lambda x: x[1],reverse=True)\n\nzipped", "Training model on all data", "features = [col \n for col \n in ['h_career_' + col for col in player_cols_to_agg] + \\\n ['h_season_' + col for col in player_cols_to_agg] + \\\n ['a_career_' + col for col in player_cols_to_agg] + \\\n ['a_season_' + col for col in player_cols_to_agg] + \\\n ['h_' + col for col in ladder_cols] + \\\n ['h_' + col + '_form' for col in ladder_cols] + \\\n ['a_' + col for col in ladder_cols] + \\\n ['a_' + col + '_form' for col in ladder_cols] + \\\n ['h_career_' + col for col in misc_columns] + \\\n ['h_season_' + col for col in misc_columns] + \\\n ['a_career_' + col for col in misc_columns] + \\\n 
['a_season_' + col for col in misc_columns] + \\\n diff_cols\n ]\n\n# REMOVE PERCENTAGE FOR NOW\nfeatures.remove('h_percentage')\nfeatures.remove('a_percentage')\nfeatures.remove('diff_percentage')\n\ntarget = 'winner'\n\nX = model_data[features]\ny = model_data[target]\n\nX.fillna(0,inplace=True)\ny.fillna(0,inplace=True)\n\ngrid_log_reg_2.predict_proba(Z)\n\ncombined[['ateam','hteam']]", "Visualisation", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\ncategory_names = ['Home','Away']\nresults = {\n 'Collingwood v Richmond': [50.7,49.3],\n 'Geelong v Hawthorn': [80.4,19.5],\n 'Brisbane Lions v Fremantle': [57.3,42.7],\n 'Carlton v Melbourne': [62.4,37.6],\n 'Gold Coast v West Coast': [9.9,90.1],\n 'Port Adelaide v Adelaide': [58.0,42.0],\n 'GWS v North Melbourne': [62.6,37.4],\n 'Sydney v Essendon': [75.2,24.8],\n 'St Kilda v Footscray': [61.2,38.8]\n}\n\n\ndef survey(results, category_names):\n \"\"\"\n Parameters\n ----------\n results : dict\n A mapping from question labels to a list of answers per category.\n It is assumed all lists contain the same number of entries and that\n it matches the length of *category_names*.\n category_names : list of str\n The category labels.\n \"\"\"\n labels = list(results.keys())\n data = np.array(list(results.values()))\n data_cum = data.cumsum(axis=1)\n category_colors = plt.get_cmap('RdYlGn')(\n np.linspace(0.15, 0.85, data.shape[1]))\n\n fig, ax = plt.subplots(figsize=(18, 10))\n fig.suptitle('Win Probabilities', fontsize=20)\n ax.invert_yaxis()\n ax.xaxis.set_visible(False)\n ax.set_xlim(0, np.sum(data, axis=1).max())\n\n for i, (colname, color) in enumerate(zip(category_names, category_colors)):\n widths = data[:, i]\n starts = data_cum[:, i] - widths\n ax.barh(labels, widths, left=starts, height=0.5,\n label=colname, color=color)\n xcenters = starts + widths / 2\n\n r, g, b, _ = color\n text_color = 'white' if r * g * b < 0.5 else 'darkgrey'\n for y, (x, c) in enumerate(zip(xcenters, widths)):\n ax.text(x, y, str(int(c)), ha='center', va='center',\n color=text_color,fontsize=15)\n ax.legend(ncol=len(category_names), bbox_to_anchor=(0, 1),\n loc='lower left', fontsize=15)\n\n return fig, ax\n\n\nsurvey(results, category_names)\nplt.show()\n\nplt.show()", "Metadata and functions", "from typing import Dict\nimport numpy as np\n\ndef get_season_rounds(matches: pd.DataFrame) -> Dict:\n \"\"\"\n Return a dictionary with seasons as keys and number of games\n in season as values\n \"\"\"\n seasons = matches['season'].unique()\n rounds_in_season = dict.fromkeys(seasons,0)\n \n for season in seasons:\n rounds_in_season[season] = max(matches[matches['season']==season]['round'])\n \n return rounds_in_season\n\n# What teams participated in each season?\ndef get_season_teams(matches: pd.DataFrame) -> Dict:\n \"\"\"\n Return a dictionary with seasons as keys and a list of teams who played\n in that season as values\n \"\"\"\n seasons = matches['season'].unique()\n teams_in_season = {}\n\n for season in seasons:\n teams = list(matches[matches['season']==season]['hteam'].unique())\n teams.extend(list(matches[matches['season']==season]['ateam'].unique()))\n teams = np.unique(teams)\n teams_in_season[season] = list(teams)\n \n return teams_in_season\n\nplayer_cols_to_agg = [\n 'AFLfantasy',\n 'Supercoach',\n 'behinds',\n 'bounces',\n 'brownlow',\n 'clangers',\n 'clearances',\n 'contested_marks',\n 'contested_poss',\n 'disposals',\n 'frees_against',\n 'frees_for',\n 'goal_assists',\n 'goals',\n 'handballs',\n 'hitouts',\n 
'inside50',\n 'kicks',\n 'marks',\n 'marks_in_50',\n 'one_percenters',\n 'rebound50',\n 'tackles',\n 'tog',\n 'uncontested_poss',\n 'centre_clearances',\n 'disposal_efficiency',\n 'effective_disposals',\n 'intercepts',\n 'metres_gained',\n 'stoppage_clearances',\n 'score_involvements',\n 'tackles_in_50',\n 'turnovers'\n]\n\nmatch_cols = [\n 'odds',\n 'line'\n]\n\nladder_columns = [\n 'wins',\n 'losses',\n 'draws',\n 'prem_points',\n 'played',\n 'points_for',\n 'points_against',\n 'percentage',\n 'position'\n]\n\nmisc_columns = [\n 'games_played'\n]\n\ndiff_cols = [\n \n]\n\ndef get_upcoming_matches(matches, fixture, round_num=None):\n \n if round_num == None: # Get the latest populated round\n round_num = matches['round'].iloc[-1] + 1\n \n fixture['round'] = fixture['round'].astype(str)\n next_round = fixture[fixture['round']==str(round_num)]\n \n # Get list of home and away\n matches.sort_values(by=['season','round'],ascending=False,inplace=True)\n teams = list(next_round['hometeam'])\n teams = list(zip(teams,list(next_round['awayteam']))) # (home, away)\n \n # Initialise upcoming round\n df = pd.DataFrame()\n output = pd.DataFrame(columns = h_ladder_cols + h_ladder_form_cols + a_ladder_cols + a_ladder_form_cols)\n \n # For each team, find the data that is relevant to them\n for team in teams:\n h_last_match = matches[(matches['hteam'] == team[0]) | (matches['ateam'] == team[0])].iloc[0]\n a_last_match = matches[(matches['hteam'] == team[1]) | (matches['ateam'] == team[1])].iloc[0]\n \n # Home team conditions, and use the 'game_cols' to update the ladder and ladder form for that team\n if team[0] == h_last_match['hteam']: # Home team was home team last game\n h_last_match_rel_cols = h_last_match[h_ladder_cols + h_ladder_form_cols + game_cols]\n h_last_match_rel_cols = update_ladder(h_last_match_rel_cols,'home')\n elif team[0] == h_last_match['ateam']: # Home team was away team last game\n h_last_match_rel_cols = h_last_match[a_ladder_cols + a_ladder_form_cols + game_cols]\n h_last_match_rel_cols = update_ladder(h_last_match_rel_cols,'away')\n \n # Away team conditions\n if team[1] == a_last_match['hteam']: # Away team was home team last game\n a_last_match_rel_cols = a_last_match[h_ladder_cols + h_ladder_form_cols + game_cols]\n a_last_match_rel_cols = update_ladder(a_last_match_rel_cols,'home')\n elif team[1] == a_last_match['ateam']: # Away team was away team last game\n a_last_match_rel_cols = a_last_match[a_ladder_cols + a_ladder_form_cols + game_cols]\n a_last_match_rel_cols = update_ladder(a_last_match_rel_cols,'away')\n \n h_last_match_rel_cols['hteam'] = team[0]\n a_last_match_rel_cols['ateam'] = team[1]\n \n # Make sure the columns are the right format\n h_col_final = []\n for col in h_last_match_rel_cols.index:\n if col[0] == 'h':\n h_col_final.append(col)\n else:\n col = 'h' + col[1:]\n h_col_final.append(col)\n \n a_col_final = []\n for col in a_last_match_rel_cols.index:\n if col[0] == 'a':\n a_col_final.append(col)\n else:\n col = 'a' + col[1:]\n a_col_final.append(col) \n \n h_last_match_rel_cols.index = h_col_final\n a_last_match_rel_cols.index = a_col_final\n \n # Add all of these to the output.\n joined = pd.concat([h_last_match_rel_cols,a_last_match_rel_cols]).to_frame().T\n joined.drop('hscore',axis=1,inplace=True)\n joined.drop('ascore',axis=1,inplace=True)\n output = output.append(joined)\n \n matches.sort_values(by=['season','round'],ascending=True,inplace=True)\n return output\n\ndef update_ladder(last_match_rel_cols, last_game_h_a):\n if last_game_h_a == 'home':\n 
\n # Update wins, losses, draws and prem points\n if last_match_rel_cols['hscore'] > last_match_rel_cols['ascore']:\n last_match_rel_cols['h_wins'] = last_match_rel_cols['h_wins'] + 1\n last_match_rel_cols['h_wins_form'] = last_match_rel_cols['h_wins_form'] + 1\n last_match_rel_cols['h_prem_points'] = last_match_rel_cols['h_prem_points'] + 4\n last_match_rel_cols['h_prem_points_form'] = last_match_rel_cols['h_prem_points_form'] + 4\n elif last_match_rel_cols['hscore'] < last_match_rel_cols['ascore']:\n last_match_rel_cols['h_losses'] = last_match_rel_cols['h_losses'] + 1\n last_match_rel_cols['h_losses_form'] = last_match_rel_cols['h_losses_form'] + 1\n else:\n last_match_rel_cols['h_draws'] = last_match_rel_cols['h_draws'] + 1\n last_match_rel_cols['h_prem_points'] = last_match_rel_cols['h_prem_points'] + 2\n last_match_rel_cols['h_prem_points_form'] = last_match_rel_cols['h_prem_points_form'] + 2\n \n # Update points for and against\n last_match_rel_cols['h_points_for'] = last_match_rel_cols['h_points_for'] + last_match_rel_cols['hscore']\n last_match_rel_cols['h_points_against'] = last_match_rel_cols['h_points_against'] + last_match_rel_cols['ascore']\n last_match_rel_cols['h_points_for_form'] = last_match_rel_cols['h_points_for_form'] + last_match_rel_cols['hscore']\n last_match_rel_cols['h_points_against_form'] = last_match_rel_cols['h_points_against_form'] + last_match_rel_cols['ascore']\n \n # Update percentage\n last_match_rel_cols['h_percentage'] = (last_match_rel_cols['h_points_for'] / last_match_rel_cols['h_points_against']) * 100\n last_match_rel_cols['h_percentage_form'] = (last_match_rel_cols['h_points_for_form'] / last_match_rel_cols['h_points_against_form']) * 100\n \n \n if last_game_h_a == 'away':\n # Update wins, losses, draws and prem points\n if last_match_rel_cols['hscore'] > last_match_rel_cols['ascore']:\n last_match_rel_cols['a_losses'] = last_match_rel_cols['a_losses'] + 1\n last_match_rel_cols['a_losses_form'] = last_match_rel_cols['a_losses_form'] + 1\n elif last_match_rel_cols['hscore'] < last_match_rel_cols['ascore']:\n last_match_rel_cols['a_wins'] = last_match_rel_cols['a_wins'] + 1\n last_match_rel_cols['a_wins_form'] = last_match_rel_cols['a_wins_form'] + 1\n last_match_rel_cols['a_prem_points'] = last_match_rel_cols['a_prem_points'] + 4\n last_match_rel_cols['a_prem_points_form'] = last_match_rel_cols['a_prem_points_form'] + 4\n else:\n last_match_rel_cols['a_draws'] = last_match_rel_cols['a_draws'] + 1\n last_match_rel_cols['a_prem_points'] = last_match_rel_cols['a_prem_points'] + 2\n last_match_rel_cols['a_prem_points_form'] = last_match_rel_cols['a_prem_points_form'] + 2\n \n # Update points for and against\n last_match_rel_cols['a_points_for'] = last_match_rel_cols['a_points_for'] + last_match_rel_cols['ascore']\n last_match_rel_cols['a_points_against'] = last_match_rel_cols['a_points_against'] + last_match_rel_cols['hscore']\n last_match_rel_cols['a_points_for_form'] = last_match_rel_cols['a_points_for_form'] + last_match_rel_cols['ascore']\n last_match_rel_cols['a_points_against_form'] = last_match_rel_cols['a_points_against_form'] + last_match_rel_cols['hscore']\n \n # Update percentage\n last_match_rel_cols['a_percentage'] = (last_match_rel_cols['a_points_for'] / last_match_rel_cols['a_points_against']) * 100\n last_match_rel_cols['a_percentage_form'] = (last_match_rel_cols['a_points_for_form'] / last_match_rel_cols['a_points_against_form']) * 100\n \n return last_match_rel_cols\n\nladder_columns = {\n ('wins',0),\n ('losses',0),\n ('draws',0),\n 
('prem_points',0),\n ('played',0),\n ('points_for',0),\n ('points_against',0),\n ('percentage',100),\n ('position',1)\n}\n\nladder_cols = [i for i,j in ladder_columns]\nh_ladder_cols = ['h_' + i for i,j in ladder_columns]\na_ladder_cols = ['a_' + i for i,j in ladder_columns]\nh_ladder_form_cols = ['h_' + i + '_form' for i,j in ladder_columns]\na_ladder_form_cols = ['a_' + i + '_form' for i,j in ladder_columns]\nh_ladder_form_cols_mapping = dict(zip(ladder_cols,h_ladder_form_cols))\na_ladder_form_cols_mapping = dict(zip(ladder_cols,a_ladder_form_cols))\n\ngame_cols = [\n 'hscore',\n 'ascore'\n]\n\ndef update_last_game(df):\n for idx,row in df.iterrows():\n \n for col in cols_to_update:\n single_game_col = col[7:] # This is the non-aggregate column, e.g. 'Supercoach' instead of 'career_Supercoach'\n if col[0:7] == 'career_':\n df.at[idx,col] = (df.at[idx,single_game_col] + (df.at[idx,col] * (df.at[idx,'career_games_played']))) / df.at[idx,'career_games_played']\n elif col[0:7] == 'season_':\n df.at[idx,col] = (df.at[idx,single_game_col] + (df.at[idx,col] * (df.at[idx,'season_games_played']))) / df.at[idx,'season_games_played']\n else:\n raise Exception('Column not found, check what columns you\\'re passing')\n \n return df\n\ndef get_player_names(image_string):\n \"\"\"\n Returns the names of players who are named in a team\n \"\"\"\n names = []\n name = ''\n i = 0\n while i <= len(image_string):\n if image_string[i] == ']':\n name = ''\n i += 2 # Skip the first space\n else:\n i += 1\n continue\n name = ''\n while (image_string[i] != ',') & (image_string[i] != '\\n'):\n name += image_string[i]\n i += 1\n if i == len(image_string):\n break\n name = name.replace(' ','_')\n names.append(name)\n i += 1\n return names\n\ndef get_player_data(player_ids):\n last_games = pd.DataFrame(columns = players.columns)\n for player in player_ids:\n last_game_row = players[(players['playerid']==player) & (players['next_matchid'].isna())]\n last_games = last_games.append(last_game_row)\n return last_games\n\nplayer_cols = ['AFLfantasy',\n 'Supercoach',\n 'behinds',\n 'bounces',\n 'brownlow',\n 'clangers',\n 'clearances',\n 'contested_marks',\n 'contested_poss',\n 'disposals',\n 'frees_against',\n 'frees_for',\n 'goal_assists',\n 'goals',\n 'handballs',\n 'hitouts',\n 'inside50',\n 'kicks',\n 'marks',\n 'marks_in_50',\n 'one_percenters',\n 'rebound50',\n 'tackles',\n 'tog',\n 'uncontested_poss',\n 'centre_clearances',\n 'disposal_efficiency',\n 'effective_disposals',\n 'intercepts',\n 'metres_gained',\n 'stoppage_clearances',\n 'score_involvements',\n 'tackles_in_50',\n 'turnovers',\n 'matchid',\n 'next_matchid',\n 'team',\n 'career_AFLfantasy',\n 'career_Supercoach',\n 'career_behinds',\n 'career_bounces',\n 'career_brownlow',\n 'career_clangers',\n 'career_clearances',\n 'career_contested_marks',\n 'career_contested_poss',\n 'career_disposals',\n 'career_frees_against',\n 'career_frees_for',\n 'career_goal_assists',\n 'career_goals',\n 'career_handballs',\n 'career_hitouts',\n 'career_inside50',\n 'career_kicks',\n 'career_marks',\n 'career_marks_in_50',\n 'career_one_percenters',\n 'career_rebound50',\n 'career_tackles',\n 'career_tog',\n 'career_uncontested_poss',\n 'career_centre_clearances',\n 'career_disposal_efficiency',\n 'career_effective_disposals',\n 'career_intercepts',\n 'career_metres_gained',\n 'career_stoppage_clearances',\n 'career_score_involvements',\n 'career_tackles_in_50',\n 'career_turnovers',\n 'season_AFLfantasy',\n 'season_Supercoach',\n 'season_behinds',\n 'season_bounces',\n 
'season_brownlow',\n 'season_clangers',\n 'season_clearances',\n 'season_contested_marks',\n 'season_contested_poss',\n 'season_disposals',\n 'season_frees_against',\n 'season_frees_for',\n 'season_goal_assists',\n 'season_goals',\n 'season_handballs',\n 'season_hitouts',\n 'season_inside50',\n 'season_kicks',\n 'season_marks',\n 'season_marks_in_50',\n 'season_one_percenters',\n 'season_rebound50',\n 'season_tackles',\n 'season_tog',\n 'season_uncontested_poss',\n 'season_centre_clearances',\n 'season_disposal_efficiency',\n 'season_effective_disposals',\n 'season_intercepts',\n 'season_metres_gained',\n 'season_stoppage_clearances',\n 'season_score_involvements',\n 'season_tackles_in_50',\n 'season_turnovers',\n 'career_games_played',\n 'season_games_played']\n\n\ndef get_diff_cols(matches: pd.DataFrame) -> pd.DataFrame:\n \"\"\"\n Function to take the columns and separate between home and away teams. Each\n metric will have a \"diff\" column which tells the difference between home\n and away for this metric. i.e. there's a diff_percentage column which tells\n the difference between home and away for the percentage\n \"\"\"\n print('Creating differential columns')\n for col in matches.columns:\n if col[0:2] == 'h_':\n try:\n h_col = col\n a_col = 'a_' + col[2:]\n diff_col = 'diff_' + col[2:]\n matches[diff_col] = matches[h_col] - matches[a_col]\n except TypeError:\n pass\n return matches\n\nfrom typing import Type\nimport pandas as pd\n\nclass TeamLadder:\n def __init__(self, team: str):\n self.team = team\n for column, init_val in ladder_columns:\n setattr(self, column, init_val)\n\n def add_prev_round_team_ladder(self, prev_round_team_ladder):\n for col,val in prev_round_team_ladder.items():\n self.__dict__[col] = val\n\n def update_home_team(self, match):\n self.played += 1\n if match.hscore > match.ascore:\n self.wins += 1\n self.prem_points += 4\n elif match.hscore == match.ascore:\n self.draws += 1\n self.prem_points += 2\n else:\n self.losses += 1 \n self.points_for += match.hscore\n self.points_against += match.ascore\n self.percentage = 100 * (self.points_for / self.points_against)\n\n def update_away_team(self, match):\n self.played += 1\n if match.hscore < match.ascore:\n self.wins += 1\n self.prem_points += 4\n elif match.hscore == match.ascore:\n self.draws += 1\n self.prem_points += 2\n else:\n self.losses += 1 \n self.points_for += match.ascore\n self.points_against += match.hscore\n self.percentage = 100 * (self.points_for / self.points_against)\n\n def update_ladder(self, match):\n \"\"\"\n Update the ladder for the team based on the outcome of the game. 
There\n will be two possibilites - the team can be the home or the away team\n in the provided match.\n \"\"\"\n if self.team == match.teams['home']:\n self.update_home_team(match)\n else:\n self.update_away_team(match)\n\nclass Ladder:\n \"\"\"\n Each round object holds the ladder details for that round for each team\n \"\"\"\n def __init__(self, teams_in_season):\n self.teams_in_season = teams_in_season\n self.team_ladders = {}\n\n def add_team_ladder(self, team_ladder):\n self.team_ladders[team_ladder.team.team] = team_ladder\n\nclass Team:\n \"\"\"\n Holds team-level data for a particular match\n \"\"\"\n def __init__(self, generic_team_columns, home_or_away: str):\n self.home_or_away = home_or_away\n for column in generic_team_columns:\n setattr(self, column, None)\n\n def add_data(self, data: pd.DataFrame):\n if self.home_or_away == 'home':\n for home_col, generic_col in home_cols_mapped.items():\n self.__dict__[generic_col] = data[home_col]\n if self.home_or_away == 'away':\n for away_col, generic_col in away_cols_mapped.items():\n self.__dict__[generic_col] = data[away_col]\n\nclass Match:\n \"\"\"\n Holds data about a match, as well as an object for each team\n \"\"\"\n def __init__(self, match_columns):\n self.teams = {\n 'home': None,\n 'away': None\n }\n for column in match_columns:\n setattr(self, column, None)\n\n def add_data(self, data: pd.DataFrame):\n for column in self.__dict__.keys():\n try:\n self.__dict__[column] = data[column]\n except KeyError:\n continue\n \n def add_home_team(self, team):\n self.teams['home'] = team\n \n def add_away_team(self, team):\n self.teams['away'] = team\n\nclass Round:\n \"\"\"\n Contains match and ladder data for each round\n \"\"\"\n def __init__(self, round_num: int):\n self.round_num = round_num\n self.matches = []\n self.bye_teams = []\n self.ladder = None\n\n def add_match(self, match):\n self.matches.append(match)\n\n def add_ladder(self, ladder):\n self.ladder = ladder\n\nclass Season:\n \"\"\"\n Contains the rounds for a season, and which teams competed\n \"\"\"\n def __init__(self, year: int, teams):\n self.year = year\n self.teams = teams\n self.rounds = {}\n \n def add_round(self, round_obj: Type[Round]):\n self.rounds[round_obj.round_num] = round_obj\n\nclass History:\n \"\"\"\n Holds all season objects\n \"\"\"\n def __init__(self):\n self.seasons = {}\n \n def add_season(self, season):\n self.seasons[season.year] = season\n\nfrom typing import Dict\ndef get_season_num_games(matches: pd.DataFrame) -> Dict:\n \"\"\"\n Return a dictionary with seasons as keys and number of games\n in season as values\n \"\"\"\n seasons = matches['season'].unique()\n rounds_in_season = dict.fromkeys(seasons,0)\n \n for season in seasons:\n rounds_in_season[season] = max(matches[matches['season']==season]['h_played']) + 1\n \n return rounds_in_season" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arviz-devs/arviz
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
apache-2.0
[ "Refitting PyMC3 models with ArviZ (and xarray)\nArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses {class}~arviz.SamplingWrappers to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.\nBelow there is an example of SamplingWrapper usage for PyMC3.\nBefore starting, it is important to note that PyMC3 cannot modify the shapes of the input data using the same compiled model. Thus, each refitting will require a recompilation of the model.", "import arviz as az\nimport pymc3 as pm\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\nimport xarray as xr", "For the example, we will use a linear regression model.", "np.random.seed(26)\n\nxdata = np.linspace(0, 50, 100)\nb0, b1, sigma = -2, 1, 3\nydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)\n\nplt.plot(xdata, ydata);", "Now we will write the PyMC3 model, keeping in mind the following two points:\n1. Data must be modifiable (both x and y).\n2. The model must be recompiled in order to be refitted with the modified data.\nWe, therefore, have to create a function that recompiles the model when it's called. Luckily for us, compilation in PyMC3 is generally quite fast.", "def compile_linreg_model(xdata, ydata):\n with pm.Model() as model:\n x = pm.Data(\"x\", xdata)\n b0 = pm.Normal(\"b0\", 0, 10)\n b1 = pm.Normal(\"b1\", 0, 10)\n sigma_e = pm.HalfNormal(\"sigma_e\", 10)\n\n y = pm.Normal(\"y\", b0 + b1 * x, sigma_e, observed=ydata)\n return model\n\nsample_kwargs = {\"draws\": 500, \"tune\": 500, \"chains\": 4}\nwith compile_linreg_model(xdata, ydata) as linreg_model:\n trace = pm.sample(**sample_kwargs)", "We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. \nWe follow the same pattern with {func}az.from_pymc3 &lt;arviz.from_pymc3&gt;. \nNote, however, how coords are not set. This is done to prevent errors due to coordinates and values shapes being incompatible during refits. Otherwise we'd have to handle subsetting of the coordinate values even though the refits are never used outside the refitting functions such as {func}~arviz.reloo.\nWe also exclude the model because the model, like the trace, is different for every refit. This may seem counterintuitive or even plain wrong, but we have to remember that the pm.Model object contains information like the observed data.", "dims = {\"y\": [\"time\"], \"x\": [\"time\"]}\nidata_kwargs = {\n \"dims\": dims,\n \"log_likelihood\": False,\n}\nidata = az.from_pymc3(trace, model=linreg_model, **idata_kwargs)\n\nidata", "We are now missing the log_likelihood group due to setting log_likelihood=False in idata_kwargs. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get PyMC3 to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.\nEven though it is not ideal to lose part of the straight out of the box capabilities of PyMC3, this should generally not be a problem. In fact, other PPLs such as Stan always require writing the pointwise log likelihood values manually (either within the Stan code or in Python). 
Moreover, computing the pointwise log likelihood in Python using xarray will be more efficient in computational terms than the automatic extraction from PyMC3. \nIt could even be written to be compatible with Dask. Thus it will work even in cases where the large number of observations makes it impossible to store pointwise log likelihood values (with shape n_samples * n_observations) in memory.", "def calculate_log_lik(x, y, b0, b1, sigma_e):\n mu = b0 + b1 * x\n return stats.norm(mu, sigma_e).logpdf(y)", "This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.\nTherefore, we can use {func}xr.apply_ufunc &lt;xarray.apply_ufunc&gt; to handle the broadasting and preserve the dimension names:", "log_lik = xr.apply_ufunc(\n calculate_log_lik,\n idata.constant_data[\"x\"],\n idata.observed_data[\"y\"],\n idata.posterior[\"b0\"],\n idata.posterior[\"b1\"],\n idata.posterior[\"sigma_e\"],\n)\nidata.add_groups(log_likelihood=log_lik)", "The first argument is the function, followed by as many positional arguments as needed by the function, 5 in our case. As this case does not have many different dimensions nor combinations of these, we do not need to use any extra kwargs passed to xr.apply_ufunc.\nWe are now passing the arguments to calculate_log_lik initially as {class}xarray:xarray.DataArrays. What is happening here behind the scenes is that xr.apply_ufunc is broadcasting and aligning the dimensions of all the DataArrays involved and afterwards passing Numpy arrays to calculate_log_lik. Everything works automagically. \nNow let's see what happens if we were to pass the arrays directly to calculate_log_lik instead:", "calculate_log_lik(\n idata.constant_data[\"x\"].values,\n idata.observed_data[\"y\"].values,\n idata.posterior[\"b0\"].values,\n idata.posterior[\"b1\"].values,\n idata.posterior[\"sigma_e\"].values\n)", "If you are still curious about the magic of xarray and xr.apply_ufunc, you can also try to modify the dims used to generate the InferenceData a couple cells before:\ndims = {\"y\": [\"time\"], \"x\": [\"time\"]}\n\nWhat happens to the result if you use a different name for the dimension of x?", "idata", "We will create a subclass of az.SamplingWrapper.", "class PyMC3LinRegWrapper(az.SamplingWrapper): \n def sample(self, modified_observed_data):\n with self.model(*modified_observed_data) as linreg_model:\n idata = pm.sample(\n **self.sample_kwargs, \n return_inferencedata=True, \n idata_kwargs=self.idata_kwargs\n )\n return idata\n \n def get_inference_data(self, idata):\n return idata\n \n def sel_observations(self, idx):\n xdata = self.idata_orig.constant_data[\"x\"]\n ydata = self.idata_orig.observed_data[\"y\"]\n mask = np.isin(np.arange(len(xdata)), idx)\n data__i = [ary[~mask] for ary in (xdata, ydata)]\n data_ex = [ary[mask] for ary in (xdata, ydata)]\n return data__i, data_ex\n\nloo_orig = az.loo(idata, pointwise=True)\nloo_orig", "In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will use modify loo_orig in order to make az.reloo believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.", "loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])", "We initialize our sampling wrapper. 
Let's stop and analyze each of the arguments. \nWe'd generally use model to pass a model object of some kind, already compiled and reexecutable, however, as we saw before, we need to recompile the model every time we use it to pass the model generating function instead. Close enough.\nWe then use the log_lik_fun and posterior_vars argument to tell the wrapper how to call xr.apply_ufunc. log_lik_fun is the function to be called, which is then called with the following positional arguments:\nlog_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars]\n\nwhere data_ex is the second element returned by sel_observations and idata__i is the InferenceData object result of get_inference_data which contains the fit on the subsetted data. We have generated data_ex to be a tuple of DataArrays so it plays nicely with this call signature.\nWe use idata_orig as a starting point, and mostly as a source of observed and constant data which is then subsetted in sel_observations.\nFinally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.", "pymc3_wrapper = PyMC3LinRegWrapper(\n model=compile_linreg_model, \n log_lik_fun=calculate_log_lik, \n posterior_vars=(\"b0\", \"b1\", \"sigma_e\"),\n idata_orig=idata,\n sample_kwargs=sample_kwargs, \n idata_kwargs=idata_kwargs,\n)", "And eventually, we can use this wrapper to call az.reloo, and compare the results with the PSIS LOO-CV results.", "loo_relooed = az.reloo(pymc3_wrapper, loo_orig=loo_orig)\n\nloo_relooed\n\nloo_orig" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
adrianstaniec/deep-learning
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
mit
[ "Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.", "%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')", "Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), 'input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), 'input_z')\n return inputs_real, inputs_z", "Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. 
So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.", "def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: generated outputs after and before activation\n \n '''\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n #h1 = tf.contrib.layers.fully_connected(z, n_units, activation_fn=None)\n # shorter\n h1 = tf.layers.dense(z, n_units)\n \n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n #logits = tf.contrib.layers.fully_connected(h1, out_dim, activation_fn=None)\n # shorter\n logits = tf.layers.dense(h1, out_dim)\n out = tf.tanh(logits)\n \n return out, logits", "Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. 
Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.", "def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: generated output after and before activation\n '''\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n #h1 = tf.contrib.layers.fully_connected(x, n_units, activation_fn=None)\n # shorter\n h1 = tf.layers.dense(x, n_units)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n #logits = tf.contrib.layers.fully_connected(h1, 1, activation_fn=None)\n # shape\n logits = tf.layers.dense(h1, 1)\n out = tf.sigmoid(logits)\n \n return out, logits", "Hyperparameters", "# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1", "Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.", "tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Generator network here\ng_model, g_logits = generator(input_z, input_size, g_hidden_size, False, alpha)\n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)", "Discriminator and Generator Losses\nFor the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. 
The fake logits are used with labels of all zeros. We want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using the labels that are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.", "# Calculate losses\nd_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(\n labels=tf.ones_like(d_logits_real) * (1-smooth),\n logits=d_logits_real))\n\nd_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(\n labels=tf.zeros_like(d_logits_fake),\n logits=d_logits_fake))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(\n labels=tf.ones_like(d_logits_fake),\n logits=d_logits_fake))", "Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. 
Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.", "# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [x for x in tf.trainable_variables() if 'generator' in x.name]\nd_vars = [x for x in tf.trainable_variables() if x.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)", "Training", "batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)", "Training loss\nHere we'll check out the training losses for the generator and discriminator.", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.", "def view_samples(epoch, samples):\n print(len(samples[0][0]),len(samples[0][1]))\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch][0]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)", "These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.", "_ = view_samples(-1, samples)", "Below I'm showing the generated images as the network was training, every 10 epochs. 
With bonus optical illusion!", "rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\nprint(np.array(samples).shape)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[0][::int(len(sample[0])/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)", "It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!", "saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
sql/drop_rows.ipynb
mit
[ "Title: Drop Rows \nSlug: drop_rows \nSummary: Drop rows in SQL. \nDate: 2017-01-16 12:00 \nCategory: SQL\nTags: Basics\nAuthors: Chris Albon \nNote: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you have not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL, your version might have some differences in syntax.\nFor more, check out Learning SQL by Alan Beaulieu.", "# Ignore\n%load_ext sql\n%sql sqlite://\n%config SqlMagic.feedback = False", "Create Data", "%%sql\n\n-- Create a table of criminals\nCREATE TABLE criminals (pid, name, age, sex, city, minor);\nINSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);\nINSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);\nINSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);\nINSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);\nINSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);\nINSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);\nINSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);\nINSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);", "View Table", "%%sql\n\n-- Select all\nSELECT *\n\n-- From the criminals table\nFROM criminals", "Drop Row Based On A Conditional", "%%sql\n\n-- Delete all rows\nDELETE FROM criminals\n\n-- if the age is less than 18\nWHERE age < 18", "View Table Again", "%%sql\n\n-- Select all\nSELECT *\n\n-- From the criminals table\nFROM criminals" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
roebius/deeplearning_keras2
nbs/dogscats-ensemble.ipynb
apache-2.0
[ "from __future__ import division, print_function\n%matplotlib inline\nfrom importlib import reload # Python 3\nimport utils; reload(utils)\nfrom utils import *", "Setup", "path = \"data/dogscats/\"\n# path = \"data/dogscats/sample/\"\nmodel_path = path + 'models/'\nif not os.path.exists(model_path): os.mkdir(model_path)\n\nbatch_size=32\n# batch_size=1\n\nbatches = get_batches(path+'train', shuffle=False, batch_size=batch_size)\nval_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)", "In this notebook we're going to create an ensemble of models and use their average as our predictions. For each ensemble, we're going to follow our usual fine-tuning steps:\n1) Create a model that retrains just the last layer\n2) Add this to a model containing all VGG layers except the last layer\n3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)\n4) Add data augmentation, fine-tuning the dense layers without pre-computation.\nSo first, we need to create our VGG model and pre-compute the output of the conv layers:", "model = Vgg16().model\nconv_layers,fc_layers = split_at(model, Convolution2D)\n\nconv_model = Sequential(conv_layers)\n\nval_features = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/batch_size)))\ntrn_features = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size)))\n\nsave_array(model_path + 'train_convlayer_features.bc', trn_features)\nsave_array(model_path + 'valid_convlayer_features.bc', val_features)", "In the future we can just load these precomputed features:", "trn_features = load_array(model_path+'train_convlayer_features.bc')\nval_features = load_array(model_path+'valid_convlayer_features.bc')", "We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done:", "trn = get_data(path+'train')\nval = get_data(path+'valid')\n\nsave_array(model_path+'train_data.bc', trn)\nsave_array(model_path+'valid_data.bc', val)", "In the future we can just load these resized images:", "trn = load_array(model_path+'train_data.bc')\nval = load_array(model_path+'valid_data.bc')", "Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model:", "model.pop()\nmodel.pop()\n\nll_val_feat = model.predict_generator(val_batches, int(np.ceil(val_batches.samples/batch_size)))\nll_feat = model.predict_generator(batches, int(np.ceil(batches.samples/batch_size)))\n\nsave_array(model_path + 'train_ll_feat.bc', ll_feat)\nsave_array(model_path + 'valid_ll_feat.bc', ll_val_feat)\n\nll_feat = load_array(model_path+ 'train_ll_feat.bc')\nll_val_feat = load_array(model_path + 'valid_ll_feat.bc')", "...and let's also grab the test data, for when we need to submit:", "test = get_data(path+'test')\nsave_array(model_path+'test_data.bc', test)\n\ntest = load_array(model_path+'test_data.bc')", "Last layer\nThe functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model.", "def get_ll_layers():\n return [ \n BatchNormalization(input_shape=(4096,)),\n Dropout(0.5),\n Dense(2, activation='softmax') \n ]\n\ndef train_last_layer(i):\n ll_layers = get_ll_layers()\n ll_model = Sequential(ll_layers)\n ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n 
ll_model.optimizer.lr=1e-5\n ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=12)\n ll_model.optimizer.lr=1e-7\n ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=1)\n ll_model.save_weights(model_path+'ll_bn' + i + '.h5')\n\n vgg = Vgg16BN()\n model = vgg.model\n model.pop(); model.pop(); model.pop()\n for layer in model.layers: layer.trainable=False\n model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n\n ll_layers = get_ll_layers()\n for layer in ll_layers: model.add(layer)\n for l1,l2 in zip(ll_model.layers, model.layers[-3:]):\n l2.set_weights(l1.get_weights())\n model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n model.save_weights(model_path+'bn' + i + '.h5')\n return model", "Dense model", "def get_conv_model(model):\n layers = model.layers\n last_conv_idx = [index for index,layer in enumerate(layers) \n if type(layer) is Convolution2D][-1]\n\n conv_layers = layers[:last_conv_idx+1]\n conv_model = Sequential(conv_layers)\n fc_layers = layers[last_conv_idx+1:]\n return conv_model, fc_layers, last_conv_idx\n\ndef get_fc_layers(p, in_shape):\n return [\n MaxPooling2D(input_shape=in_shape),\n Flatten(),\n Dense(4096, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(4096, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(2, activation='softmax')\n ]\n\ndef train_dense_layers(i, model):\n conv_model, fc_layers, last_conv_idx = get_conv_model(model)\n conv_shape = conv_model.output_shape[1:]\n fc_model = Sequential(get_fc_layers(0.5, conv_shape))\n for l1,l2 in zip(fc_model.layers, fc_layers): \n weights = l2.get_weights()\n l1.set_weights(weights)\n fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', \n metrics=['accuracy'])\n fc_model.fit(trn_features, trn_labels, epochs=2, \n batch_size=batch_size, validation_data=(val_features, val_labels))\n\n # width_zoom_range removed from the following because not available in Keras2\n gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, zoom_range=0.05,\n channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True)\n batches = gen.flow(trn, trn_labels, batch_size=batch_size)\n val_batches = image.ImageDataGenerator().flow(val, val_labels, \n shuffle=False, batch_size=batch_size)\n\n for layer in conv_model.layers: layer.trainable = False\n for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer)\n for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): \n l1.set_weights(l2.get_weights())\n\n steps_per_epoch = int(np.ceil(batches.n/batch_size))\n validation_steps = int(np.ceil(val_batches.n/batch_size))\n\n conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', \n metrics=['accuracy'])\n conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5')\n conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, \n validation_data=val_batches, validation_steps=validation_steps)\n for layer in conv_model.layers[16:]: layer.trainable = True\n \n #- added again the compile instruction in order to avoid a Keras 2.1 warning message\n conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', \n metrics=['accuracy'])\n \n conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, \n validation_data=val_batches, validation_steps=validation_steps)\n\n conv_model.optimizer.lr = 1e-7\n conv_model.fit_generator(batches, 
steps_per_epoch=steps_per_epoch, epochs=10, \n validation_data=val_batches, validation_steps=validation_steps)\n conv_model.save_weights(model_path + 'aug' + i + '.h5')", "Build ensemble", "for i in range(5):\n i = str(i)\n model = train_last_layer(i)\n train_dense_layers(i, model)", "Combine ensemble and test", "ens_model = vgg_ft_bn(2)\nfor layer in ens_model.layers: layer.trainable=True\n\ndef get_ens_pred(arr, fname):\n ens_pred = []\n for i in range(5):\n i = str(i)\n ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i))\n preds = ens_model.predict(arr, batch_size=batch_size)\n ens_pred.append(preds)\n return ens_pred\n\nval_pred2 = get_ens_pred(val, 'aug')\n\nval_avg_preds2 = np.stack(val_pred2).mean(axis=0)\n\ncategorical_accuracy(val_labels, val_avg_preds2).eval().mean()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/probability
tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Learnable Distributions Zoo\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Learnable_Distributions_Zoo\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this colab we show various examples of building learnable (\"trainable\") distributions. (We make no effort to explain the distributions, only to show how to build them.)", "import numpy as np\nimport tensorflow.compat.v2 as tf\nimport tensorflow_probability as tfp\nfrom tensorflow_probability.python.internal import prefer_static\ntfb = tfp.bijectors\ntfd = tfp.distributions\ntf.enable_v2_behavior()\n\nevent_size = 4\nnum_components = 3", "Learnable Multivariate Normal with Scaled Identity for chol(Cov)", "learnable_mvn_scaled_identity = tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(tf.zeros(event_size), name='loc'),\n scale=tfp.util.TransformedVariable(\n tf.ones([1]),\n bijector=tfb.Exp(),\n name='scale')),\n reinterpreted_batch_ndims=1,\n name='learnable_mvn_scaled_identity')\n\nprint(learnable_mvn_scaled_identity)\nprint(learnable_mvn_scaled_identity.trainable_variables)", "Learnable Multivariate Normal with Diagonal for chol(Cov)", "learnable_mvndiag = tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(tf.zeros(event_size), name='loc'),\n scale=tfp.util.TransformedVariable(\n tf.ones(event_size),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1,\n name='learnable_mvn_diag')\n\nprint(learnable_mvndiag)\nprint(learnable_mvndiag.trainable_variables)", "Mixture of Multivarite Normal (spherical)", "learnable_mix_mvn_scaled_identity = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tf.Variable(\n # Changing the `1.` intializes with a geometric decay.\n -tf.math.log(1.) 
* tf.range(num_components, dtype=tf.float32),\n name='logits')),\n components_distribution=tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(\n tf.random.normal([num_components, event_size]),\n name='loc'),\n scale=tfp.util.TransformedVariable(\n 10. * tf.ones([num_components, 1]),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1),\n name='learnable_mix_mvn_scaled_identity')\n\nprint(learnable_mix_mvn_scaled_identity)\nprint(learnable_mix_mvn_scaled_identity.trainable_variables)", "Mixture of Multivariate Normal (spherical) with first mix weight unlearnable", "learnable_mix_mvndiag_first_fixed = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tfp.util.TransformedVariable(\n # Initialize logits as geometric decay.\n -tf.math.log(1.5) * tf.range(num_components, dtype=tf.float32),\n tfb.Pad(paddings=[[1, 0]], constant_values=0)),\n name='logits'),\n components_distribution=tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(\n # Use Rademacher...cuz why not?\n tfp.random.rademacher([num_components, event_size]),\n name='loc'),\n scale=tfp.util.TransformedVariable(\n 10. * tf.ones([num_components, 1]),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1),\n name='learnable_mix_mvndiag_first_fixed')\n\nprint(learnable_mix_mvndiag_first_fixed)\nprint(learnable_mix_mvndiag_first_fixed.trainable_variables)", "Mixture of Multivariate Normal (full Cov)", "learnable_mix_mvntril = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tf.Variable(\n # Changing the `1.` intializes with a geometric decay.\n -tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),\n name='logits')),\n components_distribution=tfd.MultivariateNormalTriL(\n loc=tf.Variable(tf.zeros([num_components, event_size]), name='loc'),\n scale_tril=tfp.util.TransformedVariable(\n 10. * tf.eye(event_size, batch_shape=[num_components]),\n bijector=tfb.FillScaleTriL(),\n name='scale_tril')),\n name='learnable_mix_mvntril')\n\nprint(learnable_mix_mvntril)\nprint(learnable_mix_mvntril.trainable_variables)", "Mixture of Multivariate Normal (full Cov) with unlearnable first mix & first component", "# Make a bijector which pads an eye to what otherwise fills a tril.\nnum_tril_nonzero = lambda num_rows: num_rows * (num_rows + 1) // 2\n\nnum_tril_rows = lambda nnz: prefer_static.cast(\n prefer_static.sqrt(0.25 + 2. 
* prefer_static.cast(nnz, tf.float32)) - 0.5,\n tf.int32)\n\n# TFP doesn't have a concat bijector, so we roll out our own.\nclass PadEye(tfb.Bijector):\n\n def __init__(self, tril_fn=None):\n if tril_fn is None:\n tril_fn = tfb.FillScaleTriL()\n self._tril_fn = getattr(tril_fn, 'inverse', tril_fn)\n super(PadEye, self).__init__(\n forward_min_event_ndims=2,\n inverse_min_event_ndims=2,\n is_constant_jacobian=True,\n name='PadEye')\n\n def _forward(self, x):\n num_rows = int(num_tril_rows(tf.compat.dimension_value(x.shape[-1])))\n eye = tf.eye(num_rows, batch_shape=prefer_static.shape(x)[:-2])\n return tf.concat([self._tril_fn(eye)[..., tf.newaxis, :], x],\n axis=prefer_static.rank(x) - 2)\n\n def _inverse(self, y):\n return y[..., 1:, :]\n\n def _forward_log_det_jacobian(self, x):\n return tf.zeros([], dtype=x.dtype)\n\n def _inverse_log_det_jacobian(self, y):\n return tf.zeros([], dtype=y.dtype)\n\n def _forward_event_shape(self, in_shape):\n n = prefer_static.size(in_shape)\n return in_shape + prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)\n\n def _inverse_event_shape(self, out_shape):\n n = prefer_static.size(out_shape)\n return out_shape - prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)\n\n\ntril_bijector = tfb.FillScaleTriL(diag_bijector=tfb.Softplus())\nlearnable_mix_mvntril_fixed_first = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tfp.util.TransformedVariable(\n # Changing the `1.` intializes with a geometric decay.\n -tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),\n bijector=tfb.Pad(paddings=[(1, 0)]),\n name='logits')),\n components_distribution=tfd.MultivariateNormalTriL(\n loc=tfp.util.TransformedVariable(\n tf.zeros([num_components, event_size]),\n bijector=tfb.Pad(paddings=[(1, 0)], axis=-2),\n name='loc'),\n scale_tril=tfp.util.TransformedVariable(\n 10. * tf.eye(event_size, batch_shape=[num_components]),\n bijector=tfb.Chain([tril_bijector, PadEye(tril_bijector)]),\n name='scale_tril')),\n name='learnable_mix_mvntril_fixed_first')\n\n\nprint(learnable_mix_mvntril_fixed_first)\nprint(learnable_mix_mvntril_fixed_first.trainable_variables)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/inm/cmip6/models/sandbox-3/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
david-hagar/NLP-Analytics
rnn-lstm-text-classification/LSTM Text Classification.ipynb
mit
[ "Example adapted from: https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/\nRequired instals:\n1. jupyter notebooks\n2. keras \n3. tensorflow\nMy install process was:\n1. follow instructions for python virtual environment (Virtualenv) install at https://www.tensorflow.org/install/\n2. install keras python env using https://keras.io/#installation\n3. install jupyter notebooks (http://jupyter.org/install) and set up a tensorflow kernel that uses the virtualenv set up above. \n4. start jupyter notebooks in a parent directory of this notebook and open this notebook. Make sure the Tensorflow Virtualenv jupyter kernel is active when running the notebook.\nThe logs of my install are at: https://www.evernote.com/l/ACtXalW9qSpOVZOUU04V2ATOmJOvw4Ffido", "import numpy as np\nfrom keras.datasets import imdb\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, GRU, Dropout\nfrom keras.layers.embeddings import Embedding\nfrom keras.preprocessing import sequence\nfrom keras.callbacks import TensorBoard\nfrom keras import backend \n# fix random seed for reproducibility\nnp.random.seed(7)\n\nimport shutil\nimport os\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:97% !important; }</style>\")) #Set width of iPython cells", "Load IMDB Dataset", "# load the dataset but only keep the top n words, zero the rest\n# docs at: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb/load_data \ntop_words = 5000\nstart_char=1\noov_char=2\nindex_from=3\n(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words, \n start_char=start_char, oov_char = oov_char, index_from = index_from )\n\nprint(X_train.shape)\nprint(y_train.shape)\n\nprint(len(X_train[0]))\nprint(len(X_train[1]))\n\nprint(X_test.shape)\nprint(y_test.shape)\n\nX_train[0]", "Pad sequences so they are all the same length (required by keras/tensorflow).", "# truncate and pad input sequences\nmax_review_length = 500\nX_train = sequence.pad_sequences(X_train, maxlen=max_review_length)\nX_test = sequence.pad_sequences(X_test, maxlen=max_review_length)\n\nprint(X_train.shape)\nprint(y_train.shape)\n\nprint(len(X_train[0]))\nprint(len(X_train[1]))\n\nprint(X_test.shape)\nprint(y_test.shape)\n\nX_train[0]\n\ny_train[0:20] # first 20 sentiment labels", "Setup Vocabulary Dictionary\nThe index value loaded differes from the dictionary value by \"index_from\" so that special characters for padding, start of sentence, and out of vocabulary can be prepended to the start of the vocabulary.", "word_index = imdb.get_word_index()\ninv_word_index = np.empty(len(word_index)+index_from+3, dtype=np.object)\nfor k, v in word_index.items():\n inv_word_index[v+index_from]=k\n\ninv_word_index[0]='<pad>' \ninv_word_index[1]='<start>'\ninv_word_index[2]='<oov>' \n\nword_index['ai']\n\ninv_word_index[16942+index_from]\n\ninv_word_index[:50]", "Convert Encoded Sentences to Readable Text", "def toText(wordIDs):\n s = ''\n for i in range(len(wordIDs)):\n if wordIDs[i] != 0:\n w = str(inv_word_index[wordIDs[i]])\n s+= w + ' '\n return s\n\nfor i in range(5):\n print()\n print(str(i) + ') sentiment = ' + ('negative' if y_train[i]==0 else 'positive'))\n print(toText(X_train[i]))", "Build the model\nSequential guide, compile() and fit() \nEmbedding The embeddings layer works like an effiecient one hot encoding for the word index followed by a dense layer of size embedding_vector_length.\nLSTM 
(middle of page)\nDense\nDropout (1/3 down the page)\n\"model.compile(...) sets up the \"adam\" optimizer, similar to SGD but with some gradient averaging that works like a larger batch size to reduce the variability in the gradient from one small batch to the next. Each SGD step is of batch_size training records. Adam is also a variant of momentum optimizers.\n'binary_crossentropy' is the loss functiom used most often with logistic regression and is equivalent to softmax for only two classes.\nIn the \"Output Shape\", None is a unknown for a variable number of training records to be supplied later.", "backend.clear_session()\n\nembedding_vector_length = 5\nrnn_vector_length = 150\n#activation = 'relu'\nactivation = 'sigmoid'\n\n \nmodel = Sequential()\nmodel.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))\nmodel.add(Dropout(0.2))\n#model.add(LSTM(rnn_vector_length, activation=activation))\nmodel.add(GRU(rnn_vector_length, activation=activation))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1, activation=activation))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\nprint(model.summary())\n\n", "Setup Tensorboard\n\nmake sure /data/kaggle-tensorboard path exists or can be created\nStart tensorboard from command line:\ntensorboard --logdir=/data/kaggle-tensorboard\nopen http://localhost:6006/", "log_dir = '/data/kaggle-tensorboard'\nshutil.rmtree(log_dir, ignore_errors=True)\nos.makedirs(log_dir)\n\ntbCallBack = TensorBoard(log_dir=log_dir, histogram_freq=0, write_graph=True, write_images=True)\nfull_history=[]", "Train the Model\nEach epoch takes about 3 min. You can reduce the epochs to 3 for a faster build and still get good accuracy. Overfitting starts to happen at epoch 7 to 9.\nNote: You can run this cell multiple times to add more epochs to the model training without starting over.", "history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=8, batch_size=64, callbacks=[tbCallBack])\nfull_history += history.history['loss']", "Accuracy on the Test Set", "scores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\nprint( 'embedding_vector_length = ' + str( embedding_vector_length ))\nprint( 'rnn_vector_length = ' + str( rnn_vector_length ))", "# Hyper Parameter Tuning Notes\n| Accuracy % | Type | max val acc epoch | embedding_vector_length | RNN state size | Dropout\n| :------------ | :-------- | :---- | :---- | :---- | :----\n| 88.46 * | GRU | 6 | 5 | 150 | 0.2 (after Embedding and LSTM)\n| 88.4 | GRU | 4 | 5 | 100 | no dropout\n| 88.32 | GRU | 7 | 32 | 100 |\n| 88.29 | GRU | 8 | 5 | 200 | no dropout \n| 88.03 | GRU | >6 | 20 | 40 | 0.3 (after Embedding and LSTM)\n| 87.93 | GRU | 4 | 32 | 50 | 0.2 (after LSTM)\n| 87.60 | GRU | 5 | 5 | 50 | no dropout\n| 87.5 | GRU | 8 | 10 | 20 | no dropout\n| 87.5 | GRU | 5 | 32 | 50 |\n| 87.46 | GRU | 8 | 16 | 100 |\n| < 87 | LSTM | 9-11 | 32 | 100 |\n| 87.66 | GRU | 5 | 32 | 50 | 0.3 (after Embedding and LSTM)\n| 86.5 | GRU | >10 | 5 | 10 | no dropout\nGraphs", "history.history # todo: add graph of all 4 values with history\n\nplt.plot(history.history['loss'])\nplt.yscale('log')\nplt.show()\n\nplt.plot(full_history)\nplt.yscale('log')\nplt.show()", "Evaluate on Custom Text", "import re\nwords_only = r'[^\\s!,.?\\-\":;0-9]+'\nre.findall(words_only, \"Some text to, tokenize. 
something's.Something-else?\".lower())\n\ndef encode(reviewText):\n\n words = re.findall(words_only, reviewText.lower())\n reviewIDs = [start_char]\n for word in words:\n index = word_index.get(word, oov_char -index_from) + index_from # defaults to oov_char for missing\n if index > top_words:\n index = oov_char\n reviewIDs.append(index) \n return reviewIDs\n\ntoText(encode('To code and back again. ikkyikyptangzooboing ni !!'))\n\n\n# reviews from: \n# https://www.pluggedin.com/movie-reviews/solo-a-star-wars-story\n# http://badmovie-badreview.com/category/bad-reviews/\n\nuser_reviews = [\"This movie is horrible\",\n \"This wasn't a horrible movie and I liked it actually\",\n \"This movie was great.\",\n \"What a waste of time. It was too long and didn't make any sense.\",\n \"This was boring and drab.\",\n \"I liked the movie.\",\n \"I didn't like the movie.\",\n \"I like the lead actor but the movie as a whole fell flat\",\n \"I don't know. It was ok, some good and some bad. Some will like it, some will not like it.\",\n \"There are definitely heroic seeds at our favorite space scoundrel's core, though, seeds that simply need a little life experience to nurture them to growth. And that's exactly what this swooping heist tale is all about. You get a yarn filled with romance, high-stakes gambits, flashy sidekicks, a spunky robot and a whole lot of who's-going-to-outfox-who intrigue. Ultimately, it's the kind of colorful adventure that one could imagine Harrison Ford's version of Han recalling with a great deal of flourish … and a twinkle in his eye.\",\n \"There are times to be politically correct and there are times to write things about midget movies, and I’m afraid that sharing Ankle Biters with the wider world is an impossible task without taking the low road, so to speak. There are horrible reasons for this, all of them the direct result of the midgets that this film contains, which makes it sound like I am blaming midgets for my inability to regulate my own moral temperament but I like to think I am a…big…enough person (geddit?) to admit that the problem rests with me, and not the disabled.\",\n \"While Beowulf didn’t really remind me much of Beowulf, it did reminded me of something else. At first I thought it was Van Helsing, but that just wasn’t it. It only hit me when Beowulf finally told his backstory and suddenly even the dumbest of the dumb will realise that this is a simple ripoff of Blade. The badass hero, who is actually born from evil, now wants to destroy it, while he apparently has to fight his urges to become evil himself (not that it is mentioned beyond a single reference at the end of Beowulf) and even the music fits into the same range. Sadly Beowulf is not even nearly as interesting or entertaining as its role model. The only good aspects I can see in Beowulf would be the stupid beginning and Christopher Lamberts hair. But after those first 10 minutes, the movie becomes just boring and you don’t care much anymore.\",\n \"You don't frighten us, English pig-dogs! Go and boil your bottoms, son of a silly person! I blow my nose at you, so-called Arthur King! 
You and all your silly English Knnnnnnnn-ighuts!!!\"\n ]\n\nX_user = np.array([encode(review) for review in user_reviews ])\nX_user\n\n\nX_user_pad = sequence.pad_sequences(X_user, maxlen=max_review_length)\nX_user_pad", "Features View", "for row in X_user_pad:\n print()\n print(toText(row))", "Results", "user_scores = model.predict(X_user_pad)\nis_positive = user_scores >= 0.5 # I'm an optimist\n\nfor i in range(len(user_reviews)):\n print( '\\n%.2f %s:' % (user_scores[i][0], 'positive' if is_positive[i] else 'negative' ) + ' ' + user_reviews[i] )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
apache-2.0
[ "Basic Feature Engineering in BQML\nLearning Objectives\n\nCreate SQL statements to evaluate the model\nExtract temporal features\nPerform a feature cross on temporal features\n\nIntroduction\nIn this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model.\nIn this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process. \nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.\nSet up environment variables and load necessary libraries", "# Installing the latest version of the package\n!pip install --user google-cloud-bigquery==1.25.0", "Note: Restart your kernel to use updated packages.\nKindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.", "# Installing the latest version of the package\nimport tensorflow as tf\nprint(\"TensorFlow version: \",tf.version.VERSION)\n\n\n%%bash\n# Exporting the project\n\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT", "The source dataset\nOur dataset is hosted in BigQuery. The taxi fare data is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.\nThe Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.\nCreate a BigQuery Dataset\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.", "%%bash\n\n# Create a BigQuery dataset for feat_eng if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w feat_eng)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\n\nelse\n echo \"Creating BigQuery dataset titled: feat_eng\"\n \n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:feat_eng\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi ", "Create the training data table\nSince there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post. \nNote: The dataset in the create table code below is the one created previously, e.g. \"feat_eng\". The table name is \"feateng_training_data\". 
Run the query to create the table.", "%%bigquery \n\nCREATE OR REPLACE TABLE\n feat_eng.feateng_training_data AS\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n passenger_count*1.0 AS passengers,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1\n AND fare_amount >= 2.5\n AND passenger_count > 0\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45", "Verify table creation\nVerify that you created the dataset.", "%%bigquery\n\n# LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT\n*\nFROM\n feat_eng.feateng_training_data\nLIMIT\n 0", "Baseline Model: Create the baseline model\nNext, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques. \nWhen creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.\nNow we create the SQL statement to create the baseline model.", "%%bigquery\n\nCREATE OR REPLACE MODEL\n feat_eng.baseline_model OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n pickup_datetime,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n feat_eng.feateng_training_data", "Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.\nYou can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.\nOnce the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nEvaluate the baseline model\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. 
The ML.EVALUATE function evaluates the predicted values against the actual data.\nNOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.\nReview the learning and eval statistics for the baseline_model.", "%%bigquery\n\n# Eval statistics on the held out data.\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n *,\n SQRT(loss) AS rmse\nFROM\n ML.TRAINING_INFO(MODEL feat_eng.baseline_model)\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.baseline_model)", "NOTE: Because you performed a linear regression, the results include the following columns:\n\nmean_absolute_error\nmean_squared_error\nmean_squared_log_error\nmedian_absolute_error\nr2_score\nexplained_variance\n\nResource for an explanation of the Regression Metrics.\nMean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values. \nRoot mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.\nR2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.\nNext, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.", "%%bigquery\n#TODO 1\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.baseline_model)", "Model 1: EXTRACT dayofweek from the pickup_datetime feature.\n\n\nAs you recall, dayofweek is an enum representing the 7 days of the week. This factory allows the enum to be obtained from the int value. The int value follows the ISO-8601 standard, from 1 (Monday) to 7 (Sunday).\n\n\nIf you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned would be integer.\n\n\nNext, we create a model titled \"model_1\" from the benchmark model and extract out the DayofWeek.", "%%bigquery\n#TODO 2\n\nCREATE OR REPLACE MODEL\n feat_eng.model_1 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n pickup_datetime,\n EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS dayofweek,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n feat_eng.feateng_training_data", "Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. 
Then, come back to this notebook.\nNext, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.", "%%bigquery\n\n# Here, ML.TRAINING_INFO function is used to see information about the training iterations of a model.\nSELECT\n *,\n SQRT(loss) AS rmse\nFROM\n ML.TRAINING_INFO(MODEL feat_eng.model_1)\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_1)", "Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.", "%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_1)", "Model 2: EXTRACT hourofday from the pickup_datetime feature\nAs you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.\nHourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.\nNext, we create a model titled \"model_2\" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.", "%%bigquery\n#TODO 3a\n\nCREATE OR REPLACE MODEL\n feat_eng.model_2 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n #pickup_datetime,\n EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS dayofweek,\n EXTRACT(HOUR\n FROM\n pickup_datetime) AS hourofday,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n `feat_eng.feateng_training_data`\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_2)\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_2)", "Model 3: Feature cross dayofweek and hourofday using CONCAT\nFirst, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross. \nNote: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model (Neural Network) will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the \"feature cross\" of hourofday and dayofweek \"numerically\". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.\nCreate the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. 
Name the model \"model_3\"", "%%bigquery\n#TODO 3b\n\nCREATE OR REPLACE MODEL\n feat_eng.model_3 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n #pickup_datetime,\n #EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,\n #EXTRACT(HOUR FROM pickup_datetime) AS hourofday,\n CONCAT(CAST(EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS STRING), CAST(EXTRACT(HOUR\n FROM\n pickup_datetime) AS STRING)) AS hourofday,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n `feat_eng.feateng_training_data`\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_3)\n\n%%bigquery\n\n# Here, ML.EVALUATE function is used to evaluate model metrics\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_3)", "Optional: Create a RMSE summary table to evaluate model performance.\n| Model | Taxi Fare | Description |\n|----------------|-----------|----------------------------------------------|\n| baseline_model | 8.62 | Baseline model - no feature engineering |\n| model_1 | 9.43 | EXTRACT dayofweek from the pickup_datetime |\n| model_2 | 8.40 | EXTRACT hourofday from the pickup_datetime |\n| model_3 | 8.32 | FEATURE CROSS hourofday and dayofweek |\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cochoa0x1/integer-programming-with-python
03.5-network-flows/factory_routing_problem.ipynb
mit
[ "Example\nLets assume we are the manager of operations for a small company. We operate a few factories that make plumbus-es and ship them to many retailers. Our shipping lanes don't have uniform costs, and due to outside factors some lanes have fixed capacity. For example the route connecting the Garfield plumbus factory to the Woolmart warehouse can only handle at most 36 plumbuses per month and has an average shipping cost per plumbus of $1.69. \nThe big boss asks:\nWhat is the best way to supply our retailers?\nThis question, as stated, cannot be confidently answered. Our coworkers in the conference room suggest some solutions:\n\n\"Supply each retailer from the factory with the lowest shipping cost\"\n\"Supply each retailer first from the factory that has the most supply\"\n\"Supply Woolmart first because Ted has a friend who works there\"\netc etc\n\nThese suggestions certainly sound reasonable, but the problem is that \"best\" is loosely defined. Should we optimize for transit time, for cost, for distance? For a first model, lets solve the following problem instead:\nWhat quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost?\nand the immediate follow up question, \nHow much will our shipping cost be?\nlets solve it!\n1. Load the data\nLets load the data with pandas. It was given to us in excel with three tabs.", "import pandas as pd\nimport numpy as np\nimport os\nfile = os.path.join('data','factory_to_warehouse.xlsx')\n\nfactories = pd.read_excel(file, sheetname='Factories')\nwarehouses = pd.read_excel(file, sheetname='Warehouses')\nlanes = pd.read_excel(file, sheetname='Lanes')\n\nfactories.head(3)\n\nwarehouses.head(3)\n\nlanes.head(3)", "2. Check that your problem is solvable\nWhile we don't have to check everything. Rather than spend hours trying to debug our program, it helps to spend a few moments and make sure the data you have to work with makes sense. Certainly this doesn't have to be exhaustive, but it saves headaches later.\nSome common things to check, specifically in this context:\n\nMissing variables\nDo we have lanes defined for factories or warehouses that don't exist?\n\n\nImpossible conditions\nIs the total demand more than the total supply?\nIs the inbound capacity obviously too small to feed each retailer?\netc", "#Do we have lanes defined for factories or warehouses that don't exist?\nall_locations = set(lanes.origin) | set(lanes.destination)\n\nfor f in factories.Factory:\n if f not in all_locations:\n print('missing ', f)\n \nfor w in warehouses.Warehouse:\n if w not in all_locations:\n print('missing ', w)\n\n#Is the total demand more than the total supply?\nassert factories.Supply.sum() >= warehouses.Demand.sum()\n\n#Is the inbound capacity obviously too small to feed each retailer?\ncapacity_in = lanes.groupby('destination').capacity.sum()\ncheck = warehouses.set_index('Warehouse').join(capacity_in)\nassert np.all(check.capacity >= check.Demand)", "3. Model the data with a graph\nOur data has a very obvious graph structure to it. We have factories and warehouses (nodes), and we have lanes that connect them (edges). In many cases the extra effort of explicitly making a graph allows us to have very natural looking constraint and objective formulations. This is absolutely not required but makes reasoning very straightforward. 
To make a graph, we will use networkx", "import networkx as nx\n\nG = nx.DiGraph()\n\n#add all the nodes\nfor i, row in factories.iterrows():\n G.add_node(row.Factory, supply=row.Supply, node_type='factory')\n \nfor i, row in warehouses.iterrows():\n G.add_node(row.Warehouse, demand=row.Demand, node_type='warehouse')\n\n#add the lanes (edges)\nfor i, row in lanes.iterrows():\n G.add_edge(row.origin, row.destination, cost=row.cost, capacity=row.capacity)\n\n#lets make a quick rendering to spot check the connections\n%matplotlib inline\nlayout = nx.layout.circular_layout(G)\nnx.draw(G,layout)\nnx.draw_networkx_labels(G,layout);", "4. Define the actual Linear Program\nSo far everything we have done hasn't concerned itself with solving a linear program. We have one primary question to answer here:\nWhat quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost?\nTaking this apart, we are looking for quantities from each factory to each warehouse - these are our shipping lanes. We will need as many variables as we have lanes.", "from pulp import *", "The variables are the amounts to put on each edge. LpVariable.dicts allows us to access the variables using dictionary access syntax, i.e., the quantity from Garfield to BurgerQueen is\npython\nqty[('Garfield','BurgerQueen')]\nthe actual variable name created under the hood is \nqty_('Garfield',_'BurgerQueen')", "qty = LpVariable.dicts(\"qty\", G.edges(), lowBound=0)", "okay cool, so what about our objective? Revisiting the question:\nWhat quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost?\nWe are seeking to minimize the shipping cost. So we need to calculate our shipping cost as a function of our variables (the lanes), and it needs to be linear. This is just the lane quantity multiplied by the lane cost.\n$$f(Lanes) = \\sum_{o,d \\in Lanes} qty_{o,d}*cost_{o,d} $$\nWhen dealing with sums in pulp, it is most efficient to use its supplied lpSum function.", "#the total cost of this routing is the cost per unit * the qty sent on each lane\ndef objective():\n shipping_cost = lpSum([ qty[(org,dest)]*data['cost'] for (org,dest,data) in G.edges(data=True)])\n return shipping_cost", "We have a few constraints to define:\n\n\nThe demand at each retailer must be satisfied. In graph syntax this means the sum of all inbound edges must match the demand we have on file: $$\\sum_{o,d \\in in_edges(d)} qty_{o,d} = Demand(d)$$\n\n\nWe must not use more supply than each factory has. 
i.e., the sum of the outbound edges from a factory must be less than or equal to the supply: $$\\sum_{o,d \\in out_edges(o)} qty_{o,d} \\leq Supply(o)$$\n\n\nEach qty must be less than or equal to the lane capacity: $$qty_{o,d} \\leq Capacity_{o,d}$$\n\n\nnetworkx makes this very easy to program because we can simply ask for all the inbound edges to a given node using nx.Digraph.in_edges", "def constraints():\n \n constraints=[]\n \n for x, data in G.nodes(data=True):\n #demand must be met\n if data['node_type'] =='warehouse':\n inbound_qty = lpSum([ qty[(org,x)] for org, _ in G.in_edges(x)])\n c = inbound_qty == data['demand']\n constraints.append(c)\n #must not use more than the available supply\n elif data['node_type'] =='factory':\n out_qty = lpSum([ qty[(x,dest)] for _,dest in G.out_edges(x)])\n c = out_qty <= data['supply']\n constraints.append(c)\n \n #now the edge constraints\n #we qty <= capacity on each lane\n for org,dest, data in G.edges(data=True):\n c = qty[(org,dest)] <= data['capacity']\n constraints.append(c)\n \n return constraints", "Finally ready to create the problem, add the objective, and add the constraints", "#setup the problem\nprob = LpProblem('warehouse_routing',LpMinimize)\n\n#add the objective\nprob += objective()\n\n#add all the constraints\nfor c in constraints():\n prob+=c", "Solve it!", "%time prob.solve()\nprint(LpStatus[prob.status])", "Now we can finally answer:\nWhat quantity of plumbus-es should we ship from each factory to each warehouse?", "#you can also use the value() function instead of .varValue\nfor org,dest in G.edges():\n v= value(qty[(org,dest)])\n if v >0:\n print(org,dest, v)", "and,\nHow much will our shipping cost be?", "value(prob.objective)", "It is a good idea to verify explicitly that all the constraints were met. Sometimes it is easy to forget a necessary constraint.", "#lets verify all the conditions\n#first lets stuff our result into a dataframe for export\nresult=[]\nfor org,dest in G.edges():\n v= value(qty[(org,dest)])\n result.append({'origin':org,'destination':dest,'qty':v})\nresult_df = pd.DataFrame(result)\n\nlanes['key']=lanes.origin+lanes.destination\nresult_df['key'] = result_df.origin+result_df.destination\n\nlanes = lanes.set_index('key').merge(result_df.set_index('key'))\n\n#any lane over capacity?\nassert np.all(lanes.qty <= lanes.capacity)\n\n#check that we met the demand\nout_qty =lanes.groupby('destination').qty.sum()\ncheck = warehouses.set_index('Warehouse').join(out_qty)\nassert np.all(check.qty == check.Demand)\n\n#check that we met the supply\nin_qty =lanes.groupby('origin').qty.sum()\ncheck = factories.set_index('Factory').join(in_qty)\nassert np.all(check.qty <= check.Supply)\n\n#the result!\nlanes[lanes.qty !=0]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cccr-iitm/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. 
Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. 
Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. 
Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. 
Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. 
Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mtimmerm/IPythonNotebooks
NaturalCubicSplines.ipynb
apache-2.0
[ "Convolutional Cubic Splines\n$C^2$-continuous cubic splines through evenly spaced data points can be created by convolving the data points with a $C^2$-continuous piecewise cubic kernel, characterized as follows:\n\\begin{split}\n y(0) &= 1\\\n y(x) &= 0,\\text{ for all integer }x \\neq 0\\\n y'(x) &= sgn(x) * 3(\\sqrt3-2)^{|x|},\\text{ for all integer }x\\\n\\end{split}\nwhich implies\n$$y''(x) = -6\\sqrt 3(\\sqrt3-2)^{|x|},\\text{ for all integer }x \\neq 0$$\nThe double-sided exponential allows the convolution to be performed extremely efficiently.\nAny of the standard boundary conditions can then be applied by adjusting the start and end derivatives appropritately, and propagating the change to the rest of the curve.\nThe following function calculates \"natural\" cubic splines (with $y'' = 0$ at start and end) using this technique, which is much easiser than the way everyone is taught!\nUse it however you like.\nCheers,\nMatt Timmermans", "import math\n#given an array of Y values at consecutive integral x abscissas,\n#return array of corresponding derivatives to make a natural cubic spline\ndef naturalSpline(ys):\n vs = [0.0] * len(ys)\n if (len(ys) < 2):\n return vs\n \n DECAY = math.sqrt(3)-2;\n endi = len(ys)-1\n\n # make convolutional spline\n S = 0.0;E = 0.0\n for i in range(len(Y)):\n vs[i]+=S;vs[endi-i]+=E;\n S=(S+3.0*ys[i])*DECAY;\n E=(E-3.0*ys[endi-i])*DECAY;\n\n #Natural Boundaries\n S2 = 6.0*(ys[1]-ys[0]) - 4.0*vs[0] - 2.0*vs[1]\n E2 = 6.0*(ys[endi-1]-ys[endi]) + 4.0*vs[endi] + 2.0*vs[endi-1]\n # A = dE2/dE = -dS2/dS, B = dE2/dS = -dS2/dS\n A = 4.0+2.0*DECAY\n B = (4.0*DECAY+2.0)*(DECAY**(len(ys)-2))\n DEN = A*A - B*B\n S = (A*S2 + B*E2) / DEN\n E = (-A*E2 - B*S2) / DEN\n for i in range(len(ys)):\n vs[i]+=S;vs[endi-i]+=E\n S*=DECAY;E*=DECAY\n return vs\n\n#\n#Plot a different natural spline, along with its 1st and 2nd derivatives, each time you run this\n#\n%run plothelp.py\n%matplotlib inline\nimport random\nimport numpy\nY = [random.random()*10.0+2 for _ in range(5)]\nV = naturalSpline(Y)\nxs = numpy.linspace(0,len(Y)-1, 1000)\nplt.figure(0, figsize=(12.0,4.0))\nplt.plot(xs,[hermite_interp(Y,V,x) for x in xs])\nplt.plot(range(0,len(Y)),[Y[x] for x in range(0,len(Y))], \"bo\")\nplt.figure(1, figsize=(12.0,4.0));plt.grid(True)\nplt.plot(xs,[hermite_interp1(Y,V,x) for x in xs])\nplt.plot(xs,[hermite_interp2(Y,V,x) for x in xs])", "The Kernel\nThe kernel decays quickly around $x=0$, which is why cubic splines suffer from very little \"ringing\" -- moving one point doesn't significantly affect the curve at points far away.", "#\n# Plot the kernel\n#\nDECAY = math.sqrt(3)-2;\nvs = [3*(DECAY**x) for x in range(1,7)]\nys = [0]*len(vs) + [1] + [0]*len(vs)\nvs = [-v for v in vs[::-1]] + [0.0] + vs\nxs = numpy.linspace(0,len(ys)-1, 1000)\nplt.figure(0, figsize=(12.0,4.0));plt.grid(True);plt.ylim([-0.2,1.1]);plt.xticks(range(-5,6))\nplt.plot([x-6.0 for x in xs],[hermite_interp(ys,vs,x) for x in xs])", "Derivation\nEach segment of the curve is a cubic polynomial, which has 4 unknowns: $Y = Ax^3 + Bx^2 + C +D$.\nThe kernel consists of two main lobe segments (for $\\text{x in }[-1,0]$ and $\\text{x in }[0,1]$), and many side lobe segments.\nFor each side lobe, the end points are set:\n$$\n\\begin{split}\n y_0 &= 0\\\n y_1 &= 0\n\\end{split}\n$$\nThere are only 2 unknowns left, so specifying the first and second derivatives at one end will fix the first and second derivatives at the other end as well. 
There is a linear relationship:\n$$\n\\begin{bmatrix}\n y'_1 \\\n y''_1\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n -2 & -\\frac{1}{2} \\\n -6 & -2\n\\end{bmatrix}\n\\begin{bmatrix}\n y'_0 \\\n y''_0\n\\end{bmatrix}\n$$\nIf $(y'_0,y''_0)$ is an eigenvector of this matrix, then $(y'_1,y''_1)$ will be as well, implying the same for $(y'_2,y''_2)$, etc. All of the adjacent sidelobes will have the same shape with different amplitudes.\nThe two eigenvectors correspond to exponentially increasing or decreasing sidelobe amplitudes, respectively:\n$$\n\\begin{split}\n\\begin{bmatrix}\n -2 & -\\frac{1}{2} \\\n -6 & -2\n\\end{bmatrix}\n\\begin{bmatrix}\n 1 \\\n 2\\sqrt{3}\n\\end{bmatrix}\n&=\n\\frac{1}{\\sqrt{3}-2}\n\\begin{bmatrix}\n 1 \\\n 2\\sqrt{3}\n\\end{bmatrix}\n\\\n\\begin{bmatrix}\n -2 & -\\frac{1}{2} \\\n -6 & -2\n\\end{bmatrix}\n\\begin{bmatrix}\n 1 \\\n -2\\sqrt{3}\n\\end{bmatrix}\n&=\n\\left(\\sqrt{3}-2\\right)\n\\begin{bmatrix}\n 1 \\\n -2\\sqrt{3}\n\\end{bmatrix}\n\\end{split}\n$$\nTo create the kernel, then, we just calculate the main lobes to meet the side lobes with first and second derivaties along these eigenvectors. The $C^2$-continuity requirement then forces exponentially decaying sidelobes on both sides:\n$$\n\\begin{gather}\ny(0)=1\\\ny'(0) = 0\n\\\ny(-1) = y(1) = 0\n\\\n\\frac{y''(-1)}{y'(-1)} = -\\frac{y''(1)}{y'(1)} = 2\\sqrt{3}\n\\end{gather}\n$$" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
jdnz/qml-rg
Tutorials/Python_Introduction.ipynb
gpl-3.0
[ "1. Introduction\nPerhaps instead of telling you how to write a loop or a conditional in Python, it might be a better option to put Python in context, tell a bit about how programming languages are designed, and why certain trade-offs are chosen. A programming language is something you can learn on your own once you understand why it works the way it does. \n2. Compilers, interpreters, JIT compilation\nA compiler takes a piece of code written in a high-level language and translates it to binary machine code that the CPU can run. Compilation is a complex process that looks at the entire code, checks syntax, does optimizations, links to other binaries, and spits out an executable or some other form of binary code such as a dynamic library. \nInterpreted languages parse the code line by line, and thus they only translate something to a machine-executable format one command at a time. This means that you can have an interactive shell and you can type in commands one by one, and see the result immediately. If you make an error, your previous variables and computations are not lost: the interpreter keeps track of them and you can still access them. In contrast, unless you have some mechanism in your compiled code to save interim calculations, an error will terminate the program, and its full memory space is liberated and control is returned to the operating system. \nFor interactive work, an interpreter is much more suitable. This explains why scientific languages like R, Mathematica, and MATLAB work in this fashion. On the other hand, since there are no optimizations whatsoever, they tend to be sluggish. So for numerical calculations, compiled languages are better: Fortran, C, or newer ones that were designed with safety and concurrency in mind, such as Go and Rust.\nA newer paradigm does just-in-time (JIT) compilation: you get an interactive shell, but everything you enter is actually compiled quickly, and then run. That is, a JIT system combines the best of two worlds. Most modern languages are either written with JIT in mind, such as Scala and Julia, or were adapted to be used in this fashion. \nApart from these paradigms, there are abominations like Java: it is both compiled and slow, running on a perfectly horrific level of abstraction called the Java Virtual Machine. MATLAB is a multiparadigm language that is designed to maximize user frustration, although it is primarily interpreted. The following table gives a few examples of each paradigm in approximate temporal order.\n| Compiled | Interpreted | JIT | Horror |\n| ------------- |---------------------------------| ----------- |--------------|\n| Fortran (1957)| Lisp (1958) | | |\n| | BASIC (1964) | | |\n| C (1972) | S (1976) | | |\n| C++ (1983) | Perl (1987), Mathematica (1988) | | MATLAB (1984)|\n| Haskell (1990)| R (1993) | | Java (1995) |\n| Go (2009) | | Scala (2004)| |\n| Rust (2010) | | Julia (2012)| |\n3. So what is Python?\nPython is a language specification born in 1991. As it is the case with many languages (Fortran, C, C++, Haskell), the language specification and its actual implementations are independently developed, although the development is correlated. What you normally call Python, and this is the Python that ships with your operating system or with Anaconda, is actually the reference implementation of the language, which is formally called CPython. This reference implementation is a Python interpreter written in the C language. 
\nThe Python language was the first language that was designed with humans in mind: code was meant to be easy to read by humans. This was a response to write-only languages that introduce tricky syntax that is difficult to decipher. Both Mathematica and MATLAB are guilty of being write-only languages, and so are the latest standards of C++. Here is a priceless Mathematica one-liner:\nArrayPlot@Log[BinCounts[Through@{Re,Im}@#&amp;/@NestList[(5#c+Re@#^6-2.7)#+c^5&amp;,.1+.2|,9^7],a={-1,1,0.001},a]+1]\nYou clearly don't syntax highlighting for this. No matter how hard you try, it would be difficult to write something as convoluted as this in Python.\nPython also wants to have exactly one obvious way to do something, which was anything but true for a similar scripting language called Perl that many of us refuse to admit that we ever used it.\nBy some clever design decisions, it is extremely easy to call low-level code from Python, and this makes it the best glue language: you can call C, Fortran, Julia, Lisp, and whatever functions from Python with ease. Try that from Mathematica.\nThe default CPython implementation is an interpreter, and therefore it comes with a shell (the funny screen where you type stuff in). This shell, however, is not any good by today's standards. IPython was conceived to have a good shell for Python. In principle, IPython can use any Python interpreter on the back (CPython, Pypy, and others). Jupyter provides a notebook interface based on IPython that allows you to practice literate programming, that is, mixing code, text, mathematical formula, and images in the same environment, making it attractive for scientists. Mathematica's notebook interface is far more advanced than that of Jupyter, but development is rapid, and the functionality keeps expanding. Both IPython and Jupyter were conceived for Python, but now they work with many other languages.\nDue to its ease of use and glue language nature, Python became massively popular among programmers. They developed thousands of packages for Python, everything from controlling robots to running websites. It was never designed to be a language for scientific computing. Yet, it became a de facto next-best alternative to MATLAB after the unification of the various numerical libraries under the umbrella of numpy and SciPy. With the development of SymPy, it acquired properties similar in functionality to Mathematica. With Pandas, it takes on R as the choice for statistical modelling. With TensorFlow, it is overtaking distributed frameworks like Hadoop and Spark in large-scale machine learning. We can keep listing packages, but you get the idea. The package ecosystem gives Python users superpowers.\nThe reference implementation, CPython, by virtue of being an interpreter, is slow, but it is not the only implementation of the language. Pypy is a JIT implementation of the language, started in 2007. It is up to 20-40x faster on pure Python code. The problem is that its foreign language interface is incompatible with CPython, so the glue language nature is gone, and many important Python packages do not work with it. Cython is an extension of Python that generates C code that in turn can be compiled for speed. As a user of Python, you probably don't want to deal with this directly, but it is nevertheless an option if you want speed. 
To put this together, we can extend the table above:\n| Compiled | Interpreted | JIT | Horror |\n| ------------- |------------------ | -----------|-------------|\n| | CPython (0.9, 1991)| | |\n| | CPython (1.0, 1994)| | |\n| | CPython (2.0, 2000)| | Jython (2001)|\n|Cython (2007) | CPython (3.0, 2008)|Pypy (2.7, 2007)| |\n| | CPython (3.6, 2016)|Pypy (3.2, 2014), Pyston (2014)| |\n3.1 Global interpreter lock\nPython was originally conceived in 1991: until the second half of the 2000s, consumer-grade CPUs were single core. Thus Python was not designed to be easy to parallelize. To understand what goes on here, we have to understand what \"running parallel\" means. \nConceptually the simplest case is when you have several computers: each one accesses its own memory space and communicates via the network. This is called distributed memory model.\nFor the next level, we have to understand what a process is. The operating system that you run, let that be Android, macOS, Linux, and even Windows on a good day, ensures that when you run a program, it has its own, protected memory space. It cannot access the memory space allocated to a different program, and other programs cannot access its own allocated memory space. In fact, the operating system itself cannot access the memory space of any of the running programmes: it can terminate them and free the memory, but it cannot access the content of the memory (in principle). A thing that runs with its allocated, protected memory space is called a process. \nMultiprocessing means running several processes at the same time. If the processes run on several cores on a multicore processor working on the same calculation, you end up with a scheme similar to the distributed memory model: the processes must communicate with one another if they want to exchange data. It does not happen through the network, but the operating system's help must be invoked. This is a shared memory model with isolated memory spaces. Going between multiprocessing and distributed memory processing is straightforward, at least from the users' perspective.\nMultithreading means that one single process uses several CPU cores. It means that each thread can access an arbitrary piece of data belonging to the process. Now imagine you have some variable a and two processes want to increase its value by 1. First, process 1 reads it, learns that the value is 5, and wants to write back 6. The second process reads out 5 as well, and writes back 6. So the final value is 6, instead of 7. This is called a race condition. To get around it, the thread can declare a lock: no other thread can access that part of the code until the lock is released. If the thread that declared the lock waits for another lock to be released, a deadlock can occur: this is an infinite cycle from which there is no exit.\nPython allows you to have multiprocessing, but multithreading is implicitly forbidden. To avoid race conditions and deadlocks, the interpreter maintains a global lock on every variable: this is called the global interpreter lock (GIL). \nMultiprocessing is inherently less efficient, so there is an increasing pressure to remove the 26-year-old GIL. Pypy introduced an experimental software transaction memory that replaces the GIL. It is an inefficient implementation and it is more of a proof of concept, but it works. Cython allows you to release the GIL and write multithreaded code in C, if that is your thing. 
There are also plans that upcoming releases of CPython would slowly outphase the GIL in favour of a software transaction memory, but it will take decades.\n3.2 Python 2 versus 3\nPython 3 is the present and future of the Python language. It is actively developed, whereas Python 2 only receives security updates, and its end-of-life was declared several times (although it refuses to die). Python 3 is a more elegant and consistent language, which is also faster than older versions, at least starting from version 3.5. Yet, there are still some libraries out there that do not work with Python 3. With the release of Python 3.5 in 2015, now most people recommend Python 3. Anaconda changed to recommending Python 3 in January 2017.\nThe transition between Python 2 and 3 is a tale of how to do it wrong. Most people never asked for Python 3, and for the first seven years of Python 3, the changes were mainly below the hood. Perhaps the most important change was the proper handling of UTF characters, which sounds abstract for a scientist, until you learn that you can type in Greek characters in mathematical formulas if you use Python 3.\nIn any case, the two differences every Python-using scientist should be aware of are related to printing and integer division. If you start your code with this line, you ensure that your code will work in both versions identically:", "from __future__ import print_function, division", "Printing had a weird implementation in Python 2 that was rectified, and now printing is a function like every other. This means that you must use brackets when you print something. Then in Python 2, there were two ways of doing integer division: 2 / 3 and 2 // 3 both gave zero. In Python 3, the former triggers a type upgrade to floats. If you import division from future, you get the same behaviour in Python 2.\n3.4 Don't know how to code? Completely new to Python?\nA good start for any programming language is a Jupyter kernel if the language has one. Jupyter was originally designed for Python, so naturally it has a matching kernel. Why Jupyter? It is a uniform interface for many languages (Python, Julia, R, Scala, Haskell, even bloody MATLAB has a Jupyter kernel), so you can play with a new language in a familiar, interpreter-oriented environment. If you never coded in your life, it is also a good start, as you get instant feedback on your initial steps in what essentially is a tab in your browser.\nIf you are coming from MATLAB, or you advanced beyond the skills of writing a few dozens lines of code in Python, I recommend using Spyder. It is an awesome integrated environment for doing scientific work in Python: it includes instant access to documentation, variable inspection, code navigation, an IPython console, plus cool tools for writing beautiful and efficient code.\nFor tutorials, check out the Learning tab in Anaconda Navigator. Both videos and other tutorials are available in great multitude.\n4. Where to find code and how (don't reinvent the wheel, round 1)\nThe fundamental difference between a computer scientist and an arbitrary other scientist is that the former will first try to find other people's code to achieve a task, whereas the latter type is suspicious of alien influence and will try to code up everything from scratch. Find a balance.\nHere we are not talking about packages: we are talking about snippets of code. The chances are slim that you want to do something in Python that N+1 humans did not do before. 
Two and a half places to look for code:\n\n\nThe obvious internet search will point you to the exact solution on Stackoverflow.\n\n\nCode search engines are junk, so for even half-trivial queries that include idiomatic use of a programming language, they will not show up much. This is when you can turn to GitHub's Advanced Search. It will not let you search directly for code, but you can restrict your search by language, and look at relevant commits and issues. You have a good chance of finding what you want.\n\n\nGitHub has a thing called gist. These are short snippets (1-150 lines) of code under git control. The gist search engine is awesome for finding good code.\n\n\nExercise 1. Find three different ways of iterating over a dictionary and printing out each key-value pairs. Explain the design principle of one obvious way of doing something through this example. If you do not know what a dictionary is, that is even better.\n5. Why am I committing a crime against humanity by using MATLAB?\nHate speech follows:\n\n\nLicence fee: MathWorks is second biggest enemy of science after academic publishers. You need a pricey licence on every computer where you want to use it. Considering that the language did not see much development since 1984, it does not seem like a great deal. They, however, ensure that subsequent releases break something, so open source replacement efforts like Octave will never be able to catch up. \n\n\nPackage management does not exist.\n\n\nMaintenance: maintaining a toolbox is a major pain since the language forces you to have a very large number of files. \n\n\nSlow: raw MATLAB code is on par with Python in terms of inefficiency. It can be fast, but only when the operations you use actually translate to low-level linear algebra operations.\n\n\nMEX: this system was designed to interact with C code. In reality, it only ensures that you tear your hair out if you try to use it. \n\n\nInterface is not decoupled correctly. You cannot use the editor while running a code in the interpreter. Seriously? In 2017?\n\n\nName space mangling: imported functions override older ones. There is no other option. You either overwrite, or you do not use a toolbox.\n\n\nWrite-only language: this one can be argued. With an excessive use of parentheses, MATLAB code can be pretty hard to parse, but allegedly some humans mastered it.\n\n\n6. Package management (don't reinvent the wheel, round 2)\nOnce you go beyond the basic hurdles of Python, you definitely want to use packages. Many of them are extremely well written, efficient, and elegant. Although most of the others are complete junk.\nPackage management in Python used to be terrible, but nowadays it is simply bad (this is already a step up from MATLAB or Mathematica). So where does the difficulty stem from? From compilation. Since Python interacts so well with compiled languages, it is the most natural thing to do to bypass the GIL with C or Cython code for some quick calculations, and then get everything back to Python. The problem is that we have to deal with three major operating systems and at least three compiler chain families.\nPython allows the distribution of pre-compiled packages through a system called wheels, which works okay if the developers have access to all the platforms. Anaconda itself is essentially a package management system for Python, shipping precompiled binaries that supposed to work together well. 
So, assuming you have Anaconda, and you know which package you want to install, try this first:\nconda install whatever_package\nIf the package is not in the Anaconda ecosytem, you can use the standard Python Package Index (PyPI) through the ultra-universal pip command:\npip install whatever_package\nIf you do not have Anaconda or you use some shared computer, change this to pip install whatever_package --user. This will install the package locally to your home folder.\nDepending on your operating system, several things can happen.\n\n\nWindows: if there are no binaries in Anaconda or on PyPI, good luck. Compilation is notoriously difficult to get right on Windows both for package developers and for users.\n\n\nmacOS: if there are no binaries in Anaconda or on PyPI, start scratching your head. There are two paths to follow: (i) the code will compile with Apple's purposefully maimed Clang variant. In this case, if you XCode, things will work with a high chance of success. The downside: Apple hates you. They keep removing support for compiling multithreaded from Clang. (ii) Install the uncontaminated GNU Compiler Chain (gcc) with brew. You still have a high chance of making it work. The problems begin if the compilation requires many dependent libraries to be present, which may or may not be supported by brew.\n\n\nLinux: there are no binaries by design. The compiler chain is probably already there. The pain comes from getting the development headers of all necessary libraries, not to mention, the right version of the libraries. Ubuntu tends to have outdated libraries.\n\n\nExercise 2. Install the conic optimization library Picos. In Anaconda, proceed in two steps: install cvxopt with conda, and then Picos from PyPI. If you are not using Anaconda, a pip install will be just fine.\n7. Idiomatic Python\n7.1 Tricks with lists\nPython has few syntactic candies, precisely because it wants to keep code readable. One thing you can do, though, is defining lists in a functional programming way, that is, it will be familiar to Mathematica users. This is the crappy way of filling a list with values:", "l = []\nfor i in range(10):\n l.append(i)\nprint(l)", "This is more Pythonesque:", "l = [i**2 for i in range(10)]\nprint(l)", "What you have inside the square bracket is a generator expression. Sometimes you do not need the list, only its values. In such cases, it suffices to use the generator expression. The following two lines of code achieve the same thing:", "print(sum([i for i in range(10)]))\nprint(sum(i for i in range(10)))", "Which one is more efficient? Why? \nYou can also use conditionals in the generator expressions. For instance, this is a cheap way to get even numbers:", "[i for i in range(10) if i % 2 == 0]", "Exercise 3. List all odd square numbers below 1000.\n7.2 PEP8\nAnd on the seventh day, God created PEP8. Python Enhancement Proposal (PEP) is a series of ideas and good practices for writing nice Python code and evolving the language. PEP8 is the set of policies that tells you what makes Python syntax pretty (meaning it is easy to read for any other Python programmer). In an ideal world, everybody should follow it. Start programming in Python by keeping good practices in mind. \nAs a starter, Python uses indentation and indentation alone to tell the hierarchy of code. Use EXACTLY four space characters as indentation, always. 
If somebody tells you to use one tab, butcher the devil on the spot.\nBad:", "for _ in range(10):\n print(\"Vomit\")", "Good:", "for _ in range(10):\n print(\"OMG, the code generating this is so prettily idented\")", "The code is more readable if it is a bit leafy. For this reason, leave a space after every comma just as you would do in natural languages:", "print([1,2,3,4]) # Ugly crap\nprint([1, 2, 3, 4]) # My god, this is so much easier to read!", "Spyder has tools for helping you keeping to PEP8, but it is not so straightforward in Jupyter unfortunately.\nExercise 4. Clean up this horrific mess:", "for i in range(2,5):\n print(i)\nfor j in range( -10,0, 1):\n print(j )", "7.3 Tuples, swap\nTuples are like lists, but with a fixed number of entries. Technically, this is a tuple:", "t = (2, 3, 4)\nprint(t)\nprint(type(t))", "You would, however, seldom use it in this form, because you would just use a list. They come handy in certain scenarios, like enumerating a list:", "very_interesting_list = [i**2-1 for i in range(10) if i % 2 != 0]\nfor i, e in enumerate(very_interesting_list):\n print(i, e)", "Here enumerate returns you a tuple with the running index and the matching entry of the list. You can also zip several lists and create a stream of tuples:", "another_interesting_list = [i**2+1 for i in range(10) if i % 2 == 0]\n\nfor i, j in zip(very_interesting_list, another_interesting_list):\n print(i, j)", "You can use tuple-like assignment to initialize multiple variables:", "a, b, c = 1, 2, 3\nprint(a, b, c)", "This syntax in turn enables you the most elegant way of swapping the value of two variables:", "a, b = b, a\nprint(a, b)", "7.4 Indexing\nYou saw that you can use in, zip, and enumerate to iterate over lists. You can also use slicing on one-dimensional lists:", "l = [i for i in range(10)]\nprint(l)\nprint(l[2:5])\nprint(l[2:])\nprint(l[:-1])\n\nl[-2]", "Note that the upper index is not inclusive (the same as in range). The index -1 refers to the last item, -2 to the second last, and so on. Python lists are zero-indexed.\nUnfortunately, you cannot do convenient double indexing on multidimensional lists. For this, you need numpy.", "import numpy as np\na = np.array([[(i+1)*(j+1)for j in range(5)] \n for i in range(3)])\nprint(a)\nprint(a[:, 0])\nprint(a[0, :])", "Exercise 5. Get the bottom-right 2x2 submatrix of a.\n8. Types\nPython will hide the pain of working with types: you don't have to declare the type of any variable. But this does not mean they don't have a type. The type gets assigned automatically via an internal type inference mechanism. To demonstrate this, we import the main numerical and symbolic packages, along with an option to pretty-print symbolic operations.", "import sympy as sp\nimport numpy as np\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex='mathjax')\n\nprint(np.sqrt(2))\nsp.sqrt(2)", "The types tell you why these two look different:", "print(type(np.sqrt(2)))\nprint(type(sp.sqrt(2)))", "The symbolic representation is, in principle, infinite precision, whereas the numerical representation uses 64 bits.\nAs we said above, you can do some things with numpy arrays that you cannot do with lists. Their types can be checked:", "a = [0. for _ in range(5)]\nb = np.zeros(5)\nprint(a)\nprint(b)\nprint(type(a))\nprint(type(b))", "There are many differences between numpy arrays and lists. 
The most important ones are that lists can expand, but arrays cannot, and lists can contain any object, whereas numpy arrays can only contain things of the same type.\nType conversion is (usually) easy:", "print(type(list(b)))\nprint(type(np.array(a)))", "This is where the trouble begins:", "from sympy import sqrt\nfrom numpy import sqrt\nsqrt(2)", "Because of this, never import everything from a package: from numpy import * is forbidden.\nExercise 6. What would you do to keep everything at infinite precision to ensure the correctness of a computational proof? This does not seem to be working:", "b = np.zeros(3)\nb[0] = sp.pi\nb[1] = sqrt(2)\nb[2] = 1/3\nprint(b)", "9. Read the fine documentation (and write it)\nPython packages and individual functions typically come with documentation. Documentation is often hosted on ReadTheDocs. For individual functions, you can get the matching documentation as you type. Just press Shift+Tab on a function:", "sp.sqrt", "In Spyder, Ctrl+I will bring up the documentation of the function.\nThis documentation is called docstring, and it is extremely easy to write, and you should do it yourself if you write a function. It is epsilon effort and it will take you a second to write it. Here is an example:", "def multiply(a, b):\n \"\"\"Multiply two numbers together.\n \n :param a: The first number to be multiplied.\n :type a: float.\n :param b: The second number to be multiplied.\n :type b: float.\n \n :returns: the multiplication of the two numbers.\n \"\"\"\n \n return a*b", "Now you can press Shift+Tab to see the above documentation:", "multiply", "Exercise 7. In the documentation above, it was specified that the types of the arguments are floats, but the actual implementation multiplies anything. Add a type check. Then extend the function and the documentation to handle three inputs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
benvanwerkhoven/kernel_tuner
tutorial/diffusion_use_optparam.ipynb
apache-2.0
[ "Tutorial: From physics to tuned GPU kernels\nThis tutorial is designed to show you the whole process starting from modeling a physical process to a Python implementation to creating optimized and auto-tuned GPU application using Kernel Tuner.\nIn this tutorial, we will use diffusion as an example application.\nWe start with modeling the physical process of diffusion, for which we create a simple numerical implementation in Python. Then we create a CUDA kernel that performs the same computation, but on the GPU. Once we have a CUDA kernel, we start using the Kernel Tuner for auto-tuning our GPU application. And finally, we'll introduce a few code optimizations to our CUDA kernel that will improve performance, but also add more parameters to tune on using the Kernel Tuner.\n<div class=\"alert alert-info\">\n\n**Note:** If you are reading this tutorial on the Kernel Tuner's documentation pages, note that you can actually run this tutorial as a Jupyter Notebook. Just clone the Kernel Tuner's [GitHub repository](http://github.com/benvanwerkhoven/kernel_tuner). Install the Kernel Tuner and Jupyter Notebooks and you're ready to go! You can start the tutorial by typing \"jupyter notebook\" in the \"kernel_tuner/tutorial\" directory.\n\n</div>\n\nDiffusion\nPut simply, diffusion is the redistribution of something from a region of high concentration to a region of low concentration without bulk motion. The concept of diffusion is widely used in many fields, including physics, chemistry, biology, and many more.\nSuppose that we take a metal sheet, in which the temperature is exactly equal to one degree everywhere in the sheet.\nNow if we were to heat a number of points on the sheet to a very high temperature, say a thousand degrees, in an instant by some method. We could see the heat diffuse from these hotspots to the cooler areas. We are assuming that the metal does not melt. In addition, we will ignore any heat loss from radiation or other causes in this example.\nWe can use the diffusion equation to model how the heat diffuses through our metal sheet:\n\\begin{equation}\n\\frac{\\partial u}{\\partial t}= D \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right)\n\\end{equation}\nWhere $x$ and $y$ represent the spatial descretization of our 2D domain, $u$ is the quantity that is being diffused, $t$ is the descretization in time, and the constant $D$ determines how fast the diffusion takes place.\nIn this example, we will assume a very simple descretization of our problem. We assume that our 2D domain has $nx$ equi-distant grid points in the x-direction and $ny$ equi-distant grid points in the y-direction. Be sure to execute every cell as you read through this document, by selecting it and pressing shift+enter.", "nx = 1024\nny = 1024", "This results in a constant distance of $\\delta x$ between all grid points in the $x$ dimension. Using central differences, we can numerically approximate the derivative for a given point $x_i$:\n\\begin{equation}\n\\left. \\frac{\\partial^2 u}{\\partial x^2} \\right|{x{i}} \\approx \\frac{u_{x_{i+1}}-2u_{{x_i}}+u_{x_{i-1}}}{(\\delta x)^2}\n\\end{equation}\nWe do the same for the partial derivative in $y$:\n\\begin{equation}\n\\left. 
\\frac{\\partial^2 u}{\\partial y^2} \\right|{y{i}} \\approx \\frac{u_{y_{i+1}}-2u_{y_{i}}+u_{y_{i-1}}}{(\\delta y)^2}\n\\end{equation}\nIf we combine the above equations, we can obtain a numerical estimation for the temperature field of our metal sheet in the next time step, using $\\delta t$ as the time between time steps. But before we do, we also simplify the expression a little bit, because we'll assume that $\\delta x$ and $\\delta y$ are always equal to 1.\n\\begin{equation}\nu'{x,y} = u{x,y} + \\delta t \\times \\left( \\left( u_{x_{i+1},y}-2u_{{x_i},y}+u_{x_{i-1},y} \\right) + \\left( u_{x,y_{i+1}}-2u_{x,y_{i}}+u_{x,y_{i-1}} \\right) \\right)\n\\end{equation}\nIn this formula $u'_{x,y}$ refers to the temperature field at the time $t + \\delta t$. As a final step, we further simplify this equation to:\n\\begin{equation}\nu'{x,y} = u{x,y} + \\delta t \\times \\left( u_{x,y_{i+1}}+u_{x_{i+1},y}-4u_{{x_i},y}+u_{x_{i-1},y}+u_{x,y_{i-1}} \\right)\n\\end{equation}\nPython implementation\nWe can create a Python function that implements the numerical approximation defined in the above equation. For simplicity we'll use the assumption of a free boundary condition.", "def diffuse(field, dt=0.225):\n field[1:nx-1,1:ny-1] = field[1:nx-1,1:ny-1] + dt * (\n field[1:nx-1,2:ny]+field[2:nx,1:ny-1]-4*field[1:nx-1,1:ny-1]+\n field[0:nx-2,1:ny-1]+field[1:nx-1,0:ny-2] ) \n return field", "To give our Python function a test run, we will now do some imports and generate the input data for the initial conditions of our metal sheet with a few very hot points. We'll also make two plots, one after a thousand time steps, and a second plot after another two thousand time steps. Do note that the plots are using different ranges for the colors. Also, executing the following cell may take a little while.", "import numpy\n\n#setup initial conditions\ndef get_initial_conditions(nx, ny):\n field = numpy.ones((ny, nx)).astype(numpy.float32)\n field[numpy.random.randint(0,nx,size=10), numpy.random.randint(0,ny,size=10)] = 1e3\n return field\nfield = get_initial_conditions(nx, ny)", "We can now use this initial condition to solve the diffusion problem and plot the results.", "from matplotlib import pyplot\n%matplotlib inline\n\n#run the diffuse function a 1000 times and another 2000 times and make plots\nfig, (ax1, ax2) = pyplot.subplots(1,2)\ncpu=numpy.copy(field)\nfor i in range(1000):\n cpu = diffuse(cpu)\nax1.imshow(cpu)\nfor i in range(2000):\n cpu = diffuse(cpu)\nax2.imshow(cpu)", "Now let's take a quick look at the execution time of our diffuse function. Before we do, we also copy the current state of the metal sheet to be able to restart the computation from this state.", "#run another 1000 steps of the diffuse function and measure the time\nfrom time import time\nstart = time()\ncpu=numpy.copy(field)\nfor i in range(1000):\n cpu = diffuse(cpu)\nend = time()\nprint(\"1000 steps of diffuse on a %d x %d grid took\" %(nx,ny), (end-start)*1000.0, \"ms\")\npyplot.imshow(cpu)", "Computing on the GPU\nThe next step in this tutorial is to implement a GPU kernel that will allow us to run our problem on the GPU. We store the kernel code in a Python string, because we can directly compile and run the kernel from Python. In this tutorial, we'll use the CUDA programming model to implement our kernels.\n\nIf you prefer OpenCL over CUDA, don't worry. Everything in this tutorial \n applies as much to OpenCL as it does to CUDA. 
But we will use CUDA for our \n examples, and CUDA terminology in the text.", "def get_kernel_string(nx, ny):\n return \"\"\"\n #define nx %d\n #define ny %d\n #define dt 0.225f\n __global__ void diffuse_kernel(float *u_new, float *u) {\n int x = blockIdx.x * block_size_x + threadIdx.x;\n int y = blockIdx.y * block_size_y + threadIdx.y;\n\n if (x>0 && x<nx-1 && y>0 && y<ny-1) {\n u_new[y*nx+x] = u[y*nx+x] + dt * ( \n u[(y+1)*nx+x]+u[y*nx+x+1]-4.0f*u[y*nx+x]+u[y*nx+x-1]+u[(y-1)*nx+x]);\n }\n }\n \"\"\" % (nx, ny)\nkernel_string = get_kernel_string(nx, ny)", "The above CUDA kernel parallelizes the work such that every grid point will be processed by a different CUDA thread. Therefore, the kernel is executed by a 2D grid of threads, which are grouped together into 2D thread blocks. The specific thread block dimensions we choose are not important for the result of the computation in this kernel. But as we will see will later, they will have an impact on performance.\nIn this kernel we are using two, currently undefined, compile-time constants for block_size_x and block_size_y, because we will auto tune these parameters later. It is often needed for performance to fix the thread block dimensions at compile time, because the compiler can unroll loops that iterate using the block size, or because you need to allocate shared memory using the thread block dimensions.\nThe next bit of Python code initializes PyCuda, and makes preparations so that we can call the CUDA kernel to do the computation on the GPU as we did earlier in Python.", "from pycuda import driver, compiler, gpuarray, tools\nimport pycuda.autoinit\nfrom time import time\n\n#allocate GPU memory\nu_old = gpuarray.to_gpu(field)\nu_new = gpuarray.to_gpu(field)\n\n#setup thread block dimensions and compile the kernel\nthreads = (16,16,1)\ngrid = (int(nx/16), int(ny/16), 1)\nblock_size_string = \"#define block_size_x 16\\n#define block_size_y 16\\n\"\nmod = compiler.SourceModule(block_size_string+kernel_string)\ndiffuse_kernel = mod.get_function(\"diffuse_kernel\")", "The above code is a bit of boilerplate we need to compile a kernel using PyCuda. We've also, for the moment, fixed the thread block dimensions at 16 by 16. These dimensions serve as our initial guess for what a good performing pair of thread block dimensions could look like.\nNow that we've setup everything, let's see how long the computation would take using the GPU.", "#call the GPU kernel a 1000 times and measure performance\nt0 = time()\nfor i in range(500):\n diffuse_kernel(u_new, u_old, block=threads, grid=grid)\n diffuse_kernel(u_old, u_new, block=threads, grid=grid)\ndriver.Context.synchronize()\nprint(\"1000 steps of diffuse ona %d x %d grid took\" %(nx,ny), (time()-t0)*1000, \"ms.\")\n\n#copy the result from the GPU to Python for plotting\ngpu_result = u_old.get()\nfig, (ax1, ax2) = pyplot.subplots(1,2)\nax1.imshow(gpu_result)\nax1.set_title(\"GPU Result\")\nax2.imshow(cpu)\nax2.set_title(\"Python Result\")", "That should already be a lot faster than our previous Python implementation, but we can do much better if we optimize our GPU kernel. And that is exactly what the rest of this tutorial is about!\nAlso, if you think the Python boilerplate code to call a GPU kernel was a bit messy, we've got good news for you! From now on, we'll only use the Kernel Tuner to compile and benchmark GPU kernels, which we can do with much cleaner Python code.\nAuto-Tuning with the Kernel Tuner\nRemember that previously we've set the thread block dimensions to 16 by 16. 
But how do we actually know if that is the best performing setting? That is where auto-tuning comes into play. Basically, it is very difficult to provide an answer through performance modeling and as such, we'd rather use the Kernel Tuner to compile and benchmark all possible kernel configurations.\nBut before we continue, we'll increase the problem size, because the GPU is very likely underutilized.", "nx = 4096\nny = 4096\nfield = get_initial_conditions(nx, ny)\nkernel_string = get_kernel_string(nx, ny)", "The above code block has generated new initial conditions and a new string that contains our CUDA kernel using our new domain size.\nTo call the Kernel Tuner, we have to specify the tunable parameters, in our case block_size_x and block_size_y. For this purpose, we'll create an ordered dictionary to store the tunable parameters. The keys will be the name of the tunable parameter, and the corresponding value is the list of possible values for the parameter. For the purpose of this tutorial, we'll use a small number of commonly used values for the thread block dimensions, but feel free to try more!", "from collections import OrderedDict\ntune_params = OrderedDict()\ntune_params[\"block_size_x\"] = [16, 32, 48, 64, 128]\ntune_params[\"block_size_y\"] = [2, 4, 8, 16, 32]", "We also have to tell the Kernel Tuner about the argument list of our CUDA kernel. Because the Kernel Tuner will be calling the CUDA kernel and measure its execution time. For this purpose we create a list in Python, that corresponds with the argument list of the diffuse_kernel CUDA function. This list will only be used as input to the kernel during tuning. The objects in the list should be Numpy arrays or scalars.\nBecause you can specify the arguments as Numpy arrays, the Kernel Tuner will take care of allocating GPU memory and copying the data to the GPU.", "args = [field, field]", "We're almost ready to call the Kernel Tuner, we just need to set how large the problem is we are currently working on by setting a problem_size. The Kernel Tuner knows about thread block dimensions, which it expects to be called block_size_x, block_size_y, and/or block_size_z. From these and the problem_size, the Kernel Tuner will compute the appropiate grid dimensions on the fly.", "problem_size = (nx, ny)", "And that's everything the Kernel Tuner needs to know to be able to start tuning our kernel. Let's give it a try by executing the next code block!", "from kernel_tuner import tune_kernel\nresult = tune_kernel(\"diffuse_kernel\", kernel_string, problem_size, args, tune_params)", "Note that the Kernel Tuner prints a lot of useful information. To ensure you'll be able to tell what was measured in this run the Kernel Tuner always prints the GPU or OpenCL Device name that is being used, as well as the name of the kernel.\nAfter that every line contains the combination of parameters and the time that was measured during benchmarking. The time that is being printed is in milliseconds and is obtained by averaging the execution time of 7 runs of the kernel. Finally, as a matter of convenience, the Kernel Tuner also prints the best performing combination of tunable parameters. However, later on in this tutorial we'll explain how to analyze and store the tuning results using Python.\nLooking at the results printed above, the difference in performance between the different kernel configurations may seem very little. However, on our hardware, the performance of this kernel already varies in the order of 10%. 
Which of course can build up to large differences in the execution time if the kernel is to be executed thousands of times. We can also see that the performance of the best configuration in this set is 5% better than our initially guessed thread block dimensions of 16 by 16.\nIn addtion, you may notice that not all possible combinations of values for block_size_x and block_size_y are among the results. For example, 128x32 is not among the results. This is because some configuration require more threads per thread block than allowed on our GPU. The Kernel Tuner checks the limitations of your GPU at runtime and automatically skips over configurations that use too many threads per block. It will also do this for kernels that cannot be compiled because they use too much shared memory. And likewise for kernels that use too many registers to be launched at runtime. If you'd like to know about which configurations were skipped automatically you can pass the optional parameter verbose=True to tune_kernel.\nHowever, knowing the best performing combination of tunable parameters becomes even more important when we start to further optimize our CUDA kernel. In the next section, we'll add a simple code optimization and show how this affects performance.\nUsing shared memory\nShared memory, is a special type of the memory available in CUDA. Shared memory can be used by threads within the same thread block to exchange and share values. It is in fact, one of the very few ways for threads to communicate on the GPU.\nThe idea is that we'll try improve the performance of our kernel by using shared memory as a software controlled cache. There are already caches on the GPU, but most GPUs only cache accesses to global memory in L2. Shared memory is closer to the multiprocessors where the thread blocks are executed, comparable to an L1 cache.\nHowever, because there are also hardware caches, the performance improvement from this step is expected to not be that great. The more fine-grained control that we get by using a software managed cache, rather than a hardware implemented cache, comes at the cost of some instruction overhead. In fact, performance is quite likely to degrade a little. However, this intermediate step is necessary for the next optimization step we have in mind.", "kernel_string_shared = \"\"\" \n#define nx %d\n#define ny %d\n#define dt 0.225f\n__global__ void diffuse_kernel(float *u_new, float *u) {\n\n int tx = threadIdx.x;\n int ty = threadIdx.y;\n int bx = blockIdx.x * block_size_x;\n int by = blockIdx.y * block_size_y;\n\n __shared__ float sh_u[block_size_y+2][block_size_x+2];\n\n #pragma unroll\n for (int i = ty; i<block_size_y+2; i+=block_size_y) {\n #pragma unroll\n for (int j = tx; j<block_size_x+2; j+=block_size_x) {\n int y = by+i-1;\n int x = bx+j-1;\n if (x>=0 && x<nx && y>=0 && y<ny) {\n sh_u[i][j] = u[y*nx+x];\n }\n }\n }\n __syncthreads();\n \n int x = bx+tx;\n int y = by+ty;\n if (x>0 && x<nx-1 && y>0 && y<ny-1) {\n int i = ty+1;\n int j = tx+1;\n u_new[y*nx+x] = sh_u[i][j] + dt * ( \n sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] +\n sh_u[i][j-1] + sh_u[i-1][j] );\n } \n\n}\n\"\"\" % (nx, ny)", "We can now tune this new kernel using the kernel tuner", "result = tune_kernel(\"diffuse_kernel\", kernel_string_shared, problem_size, args, tune_params)", "Tiling GPU Code\nOne very useful code optimization is called tiling, sometimes also called thread-block-merge. You can look at it in this way, currently we have many thread blocks that together work on the entire domain. 
If we were to use only half of the number of thread blocks, every thread block would need to double the amount of work it performs to cover the entire domain. However, the threads may be able to reuse part of the data and computation that is required to process a single output element for every element beyond the first.\nThis is a code optimization because effectively we are reducing the total number of instructions executed by all threads in all thread blocks. So in a way, were are condensing the total instruction stream while keeping the all the really necessary compute instructions. More importantly, we are increasing data reuse, where previously these values would have been reused from the cache or in the worst-case from GPU memory.\nWe can apply tiling in both the x and y-dimensions. This also introduces two new tunable parameters, namely the tiling factor in x and y, which we will call tile_size_x and tile_size_y. \nThis is what the new kernel looks like:", "kernel_string_tiled = \"\"\" \n#define nx %d\n#define ny %d\n#define dt 0.225f\n__global__ void diffuse_kernel(float *u_new, float *u) {\n\n int tx = threadIdx.x;\n int ty = threadIdx.y;\n int bx = blockIdx.x * block_size_x * tile_size_x;\n int by = blockIdx.y * block_size_y * tile_size_y;\n\n __shared__ float sh_u[block_size_y*tile_size_y+2][block_size_x*tile_size_x+2];\n\n #pragma unroll\n for (int i = ty; i<block_size_y*tile_size_y+2; i+=block_size_y) {\n #pragma unroll\n for (int j = tx; j<block_size_x*tile_size_x+2; j+=block_size_x) {\n int y = by+i-1;\n int x = bx+j-1;\n if (x>=0 && x<nx && y>=0 && y<ny) {\n sh_u[i][j] = u[y*nx+x];\n }\n }\n }\n __syncthreads();\n\n #pragma unroll\n for (int tj=0; tj<tile_size_y; tj++) {\n int i = ty+tj*block_size_y+1;\n int y = by + ty + tj*block_size_y;\n #pragma unroll\n for (int ti=0; ti<tile_size_x; ti++) {\n int j = tx+ti*block_size_x+1;\n int x = bx + tx + ti*block_size_x;\n if (x>0 && x<nx-1 && y>0 && y<ny-1) {\n u_new[y*nx+x] = sh_u[i][j] + dt * ( \n sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] +\n sh_u[i][j-1] + sh_u[i-1][j] );\n }\n }\n \n }\n\n}\n\"\"\" % (nx, ny)", "We can tune our tiled kernel by adding the two new tunable parameters to our dictionary tune_params.\nWe also need to somehow tell the Kernel Tuner to use fewer thread blocks to launch kernels with tile_size_x or tile_size_y larger than one. For this purpose the Kernel Tuner's tune_kernel function supports two optional arguments, called grid_div_x and grid_div_y. These are the grid divisor lists, which are lists of strings containing all the tunable parameters that divide a certain grid dimension. So far, we have been using the default settings for these, in which case the Kernel Tuner only uses the block_size_x and block_size_y tunable parameters to divide the problem_size.\nNote that the Kernel Tuner will replace the values of the tunable parameters inside the strings and use the product of the parameters in the grid divisor list to compute the grid dimension rounded up. You can even use arithmetic operations, inside these strings as they will be evaluated. As such, we could have used [\"block_size_x*tile_size_x\"] to get the same result.\nWe are now ready to call the Kernel Tuner again and tune our tiled kernel. 
Let's execute the following code block, note that it may take a while as the number of kernel configurations that the Kernel Tuner will try has just been increased with a factor of 9!", "tune_params[\"tile_size_x\"] = [1,2,4] #add tile_size_x to the tune_params\ntune_params[\"tile_size_y\"] = [1,2,4] #add tile_size_y to the tune_params\ngrid_div_x = [\"block_size_x\", \"tile_size_x\"] #tile_size_x impacts grid dimensions\ngrid_div_y = [\"block_size_y\", \"tile_size_y\"] #tile_size_y impacts grid dimensions\nresult = tune_kernel(\"diffuse_kernel\", kernel_string_tiled, problem_size, args,\n tune_params, grid_div_x=grid_div_x, grid_div_y=grid_div_y)", "We can see that the number of kernel configurations tried by the Kernel Tuner is growing rather quickly. Also, the best performing configuration quite a bit faster than the best kernel before we started optimizing. On our GTX Titan X, the execution time went from 0.72 ms to 0.53 ms, a performance improvement of 26%!\nNote that the thread block dimensions for this kernel configuration are also different. Without optimizations the best performing kernel used a thread block of 32x2, after we've added tiling the best performing kernel uses thread blocks of size 64x4, which is four times as many threads! Also the amount of work increased with tiling factors 2 in the x-direction and 4 in the y-direction, increasing the amount of work per thread block by a factor of 8. The difference in the area processed per thread block between the naive and the tiled kernel is a factor 32.\nHowever, there are actually several kernel configurations that come close. The following Python code prints all instances with an execution time within 5% of the best performing configuration.\nUsing the best parameters in a production run\nNow that we have determined which parameters are the best for our problems we can use them to simulate the heat diffusion problem. There are several ways to do so depending on the host language you wish to use. \nPython run\nTo use the optimized parameters in a python run, we simply have to modify the kernel code to specify which value to use for the block and tile size. There are of course many different ways to achieve this. 
In simple cases on can define a dictionary of values and replace the string block_size_i and tile_size_j by their values.", "import pycuda.autoinit\n\n# define the optimal parameters\nsize = [nx,ny,1]\nthreads = [128,4,1]\n\n# create a dict of fixed parameters\nfixed_params = OrderedDict()\nfixed_params['block_size_x'] = threads[0]\nfixed_params['block_size_y'] = threads[1]\n\n# select the kernel to use\nkernel_string = kernel_string_shared\n\n# replace the block/tile size\nfor k,v in fixed_params.items():\n kernel_string = kernel_string.replace(k,str(v))", "We also need to determine the size of the grid", "# for regular and shared kernel \ngrid = [int(numpy.ceil(n/t)) for t,n in zip(threads,size)]", "We can then transfer the data initial condition on the two gpu arrays as well as compile the code and get the function we want to use.", "#allocate GPU memory\nu_old = gpuarray.to_gpu(field)\nu_new = gpuarray.to_gpu(field)\n\n# compile the kernel\nmod = compiler.SourceModule(kernel_string)\ndiffuse_kernel = mod.get_function(\"diffuse_kernel\")", "We now just have to use the kernel with these optimized parameters to run the simulation", "#call the GPU kernel a 1000 times and measure performance\nt0 = time()\nfor i in range(500):\n diffuse_kernel(u_new, u_old, block=tuple(threads), grid=tuple(grid))\n diffuse_kernel(u_old, u_new, block=tuple(threads), grid=tuple(grid))\ndriver.Context.synchronize()\nprint(\"1000 steps of diffuse on a %d x %d grid took\" %(nx,ny), (time()-t0)*1000, \"ms.\")\n\n#copy the result from the GPU to Python for plotting\ngpu_result = u_old.get()\npyplot.imshow(gpu_result)", "C run\nIf you wish to incorporate the optimized parameters in the kernel and use it in a C run you can use ifndef statement at the begining of the kerenel as demonstrated in the psedo code below.", "kernel_string = \"\"\" \n\n#ifndef block_size_x \n #define block_size_x <insert optimal value>\n#endif\n\n#ifndef block_size_y \n #define block_size_y <insert optimal value>\n#endif\n\n#define nx %d\n#define ny %d\n#define dt 0.225f\n__global__ void diffuse_kernel(float *u_new, float *u) {\n ......\n } \n\n}\n\"\"\" % (nx, ny)", "This kernel can be used during the tuning since the kernel tuner will prepend #define statements to the kernel. As a result the #ifndef will be bypassed during the tuning. However the same kernel will work just fine on its own in a larger program." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rsignell-usgs/notebook
hfr_start_stop.ipynb
mit
[ "Check the start/stop time of a netcdf/OPeNDAP dataset", "import netCDF4\nimport numpy as np\n\ndef start_stop(url,tvar):\n nc = netCDF4.Dataset(url)\n time_var = nc[tvar]\n first = netCDF4.num2date(time_var[0],time_var.units)\n last = netCDF4.num2date(time_var[-1],time_var.units)\n\n print(first.strftime('%Y-%b-%d %H:%M'))\n print(last.strftime('%Y-%b-%d %H:%M'))\n\nurl='http://hfrnet.ucsd.edu/thredds/dodsC/HFR/USWC/6km/hourly/RTV/HFRADAR,_US_West_Coast,_6km_Resolution,_Hourly_RTV_best.ncd'\ntvar='time'\nstart_stop(url,tvar)\n\nnc = netCDF4.Dataset(url)\nt = nc[tvar][:]\nprint(nc[tvar].units)", "Calculate the average time step", "print np.mean(np.diff(t))", "So we have time steps of about 1 hour\nNow calculate the unique time steps", "print(np.unique(np.diff(t)).data)", "So there are gaps of 2, 3, 6, 9, 10, 14 and 19 hours in the otherwise hourly data", "nc['time'][:]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
beangoben/HistoriaDatos_Higgs
Dia3/4_Estadistica_Basica.ipynb
gpl-2.0
[ "<i class=\"fa fa-diamond\"></i> Primero pimpea tu libreta!", "from IPython.core.display import HTML\nimport os\ndef css_styling():\n \"\"\"Load default custom.css file from ipython profile\"\"\"\n base = os.getcwd()\n styles = \"<style>\\n%s\\n</style>\" % (open(os.path.join(base,'files/custom.css'),'r').read())\n return HTML(styles)\ncss_styling()", "Un poco de estadística", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Hacemos dos listas, la primera contendrá las edades de los chavos de clubes de ciencia y la segusda el número de personas que tienen dicha edad", "Edades = np.array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24])\nFrecuencia = np.array([10, 22, 39, 32, 26, 10, 7, 5, 8, 1])\nprint sum(Frecuencia)\nplt.bar(Edades, Frecuencia)\nplt.show()", "Distribución uniforme\nLotería mexicana", "x1=np.random.rand(50)\nplt.hist(x1)\nplt.show()", "Distribución de Poisson\nNúmero de solicitudes de amistad en facebook en una semana", "s = np.random.poisson(5,20)\nplt.hist(s)\nplt.show()", "Distribución normal\nDistribución de calificaciones en un exámen", "x=np.random.randn(50)\nplt.hist(x)\nplt.show()\n\nx=np.random.randn(100)\nplt.hist(x)\nplt.show()\n\nx=np.random.randn(200)\nplt.hist(x)\nplt.show()", "Una forma de automatizar esto es:", "tams = [1,2,3,4,5,6,7]\n\nfor tam in tams:\n numeros = np.random.randn(10**tam)\n plt.hist(numeros,bins=20 )\n plt.title('%d' %tam)\n plt.show()\n\nnumeros = np.random.normal(loc=2.0,scale=2.0,size=1000)\nplt.hist(numeros)\nplt.show()", "Probabilidad en una distribución normal\n\n$1 \\sigma$ = 68.26%\n$2 \\sigma$ = 95.44%\n$3 \\sigma$ = 99.74%\n$4 \\sigma$ = 99.995%\n$5 \\sigma$ = 99.99995%\nActividades\nGrafica lo siguiente:\n\nCrear 3 distribuciones variando mean\nCrear 3 distribuciones variando std\nCrear 2 distribuciones con cierto sobrelape\n\nCampanas gaussianas en la Naturaleza\nExamenes de salidad en prepas en Polonia:\n\nDistribución normal en 2D", "x = np.random.normal(loc=2.0,scale=2.0,size=100)\ny = np.random.normal(loc=2.0,scale=2.0,size=100)\nplt.scatter(x,y)\nplt.show()", "Actividades\n\nCrear 3 distribuciones variando mean\nCrear 3 distribuciones variando std\nCrear 2 distribuciones con cierto sobrelape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/pt-br/tutorials/load_data/csv.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Carregar dados CSV\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/load_data/csv\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Ver em TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/csv.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Executar em Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/csv.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Ver código fonte no GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/load_data/csv.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Baixar notebook</a>\n </td>\n</table>\n\nEste tutorial fornece um exemplo de como carregar dados CSV de um arquivo em um tf.data.Dataset.\nOs dados usados neste tutorial foram retirados da lista de passageiros do Titanic. O modelo preverá a probabilidade de sobrevivência de um passageiro com base em características como idade, sexo, classe de passagem e se a pessoa estava viajando sozinha.\nSetup", "try:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\n\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport functools\n\nimport numpy as np\nimport tensorflow as tf\n\nTRAIN_DATA_URL = \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\"\nTEST_DATA_URL = \"https://storage.googleapis.com/tf-datasets/titanic/eval.csv\"\n\ntrain_file_path = tf.keras.utils.get_file(\"train.csv\", TRAIN_DATA_URL)\ntest_file_path = tf.keras.utils.get_file(\"eval.csv\", TEST_DATA_URL)\n\n# Facilitar a leitura de valores numpy.\nnp.set_printoptions(precision=3, suppress=True)", "Carregar dados\nPara começar, vejamos a parte superior do arquivo CSV para ver como ele está formatado.", "!head {train_file_path}", "Você pode [carregar isso usando pandas] (pandas.ipynb) e passar as matrizes NumPy para o TensorFlow. Se você precisar escalar até um grande conjunto de arquivos ou precisar de um carregador que se integre ao [TensorFlow e tf.data] (../../guide/data.ipynb), use o tf.data.experimental. função make_csv_dataset:\nA única coluna que você precisa identificar explicitamente é aquela com o valor que o modelo pretende prever.", "LABEL_COLUMN = 'survived'\nLABELS = [0, 1]", "Now read the CSV data from the file and create a dataset. 
\n(For the full documentation, see tf.data.experimental.make_csv_dataset)", "def get_dataset(file_path, **kwargs):\n dataset = tf.data.experimental.make_csv_dataset(\n file_path,\n batch_size=5, # Artificialmente pequeno para facilitar a exibição de exemplos\n label_name=LABEL_COLUMN,\n na_value=\"?\",\n num_epochs=1,\n ignore_errors=True, \n **kwargs)\n return dataset\n\nraw_train_data = get_dataset(train_file_path)\nraw_test_data = get_dataset(test_file_path)\n\ndef show_batch(dataset):\n for batch, label in dataset.take(1):\n for key, value in batch.items():\n print(\"{:20s}: {}\".format(key,value.numpy()))", "Cada item do conjunto de dados é um lote, representado como uma tupla de ( muitos exemplos , * muitos rótulos *). Os dados dos exemplos são organizados em tensores baseados em colunas (em vez de tensores baseados em linhas), cada um com tantos elementos quanto o tamanho do lote (5 neste caso).\nPode ajudar a ver isso por si mesmo.", "show_batch(raw_train_data)", "Como você pode ver, as colunas no CSV são nomeadas. O construtor do conjunto de dados selecionará esses nomes automaticamente. Se o arquivo com o qual você está trabalhando não contém os nomes das colunas na primeira linha, passe-os em uma lista de strings para o argumento column_names na função make_csv_dataset.", "CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']\n\ntemp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)\n\nshow_batch(temp_dataset)", "Este exemplo vai usar todas as colunas disponíveis. Se você precisar omitir algumas colunas do conjunto de dados, crie uma lista apenas das colunas que planeja usar e passe-a para o argumento (opcional) select_columns do construtor.", "SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']\n\ntemp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)\n\nshow_batch(temp_dataset)", "Pré-processamento dos Dados\nUm arquivo CSV pode conter uma variedade de tipos de dados. Normalmente, você deseja converter desses tipos mistos em um vetor de comprimento fixo antes de alimentar os dados em seu modelo.\nO TensorFlow possui um sistema interno para descrever conversões de entrada comuns: tf.feature_column, consulte [este tutorial] (../keras/feature_columns) para detalhes.\nVocê pode pré-processar seus dados usando qualquer ferramenta que desejar (como [nltk] (https://www.nltk.org/) ou [sklearn] (https://scikit-learn.org/stable/)) e apenas passar a saída processada para o TensorFlow.\nA principal vantagem de fazer o pré-processamento dentro do seu modelo é que, quando você exporta o modelo, ele inclui o pré-processamento. 
Dessa forma, você pode passar os dados brutos diretamente para o seu modelo.\nDados contínuos\nSe seus dados já estiverem em um formato numérico apropriado, você poderá compactá-los em um vetor antes de transmiti-los ao modelo:", "SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']\nDEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]\ntemp_dataset = get_dataset(train_file_path, \n select_columns=SELECT_COLUMNS,\n column_defaults = DEFAULTS)\n\nshow_batch(temp_dataset)\n\nexample_batch, labels_batch = next(iter(temp_dataset)) ", "Aqui está uma função simples que agrupará todas as colunas:", "def pack(features, label):\n return tf.stack(list(features.values()), axis=-1), label", "Aplique isso a cada elemento do conjunto de dados:", "packed_dataset = temp_dataset.map(pack)\n\nfor features, labels in packed_dataset.take(1):\n print(features.numpy())\n print()\n print(labels.numpy())", "Se você tiver tipos de dados mistos, poderá separar esses campos numéricos simples. A API tf.feature_column pode lidar com eles, mas isso gera alguma sobrecarga e deve ser evitado, a menos que seja realmente necessário. Volte para o conjunto de dados misto:", "show_batch(raw_train_data)\n\nexample_batch, labels_batch = next(iter(temp_dataset)) ", "Portanto, defina um pré-processador mais geral que selecione uma lista de recursos numéricos e os agrupe em uma única coluna:", "class PackNumericFeatures(object):\n def __init__(self, names):\n self.names = names\n\n def __call__(self, features, labels):\n numeric_features = [features.pop(name) for name in self.names]\n numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]\n numeric_features = tf.stack(numeric_features, axis=-1)\n features['numeric'] = numeric_features\n\n return features, labels\n\nNUMERIC_FEATURES = ['age','n_siblings_spouses','parch', 'fare']\n\npacked_train_data = raw_train_data.map(\n PackNumericFeatures(NUMERIC_FEATURES))\n\npacked_test_data = raw_test_data.map(\n PackNumericFeatures(NUMERIC_FEATURES))\n\nshow_batch(packed_train_data)\n\nexample_batch, labels_batch = next(iter(packed_train_data)) ", "Normalização dos dados\nDados contínuos sempre devem ser normalizados.", "import pandas as pd\ndesc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()\ndesc\n\nMEAN = np.array(desc.T['mean'])\nSTD = np.array(desc.T['std'])\n\ndef normalize_numeric_data(data, mean, std):\n # Center the data\n return (data-mean)/std\n", "Agora crie uma coluna numérica. A API tf.feature_columns.numeric_column aceita um argumento normalizer_fn, que será executado em cada lote.\nLigue o MEAN e oSTD ao normalizador fn usando [functools.partial] (https://docs.python.org/3/library/functools.html#functools.partial)", "# Veja o que você acabou de criar.\nnormalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)\n\nnumeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])\nnumeric_columns = [numeric_column]\nnumeric_column", "Ao treinar o modelo, inclua esta coluna de característica para selecionar e centralizar este bloco de dados numéricos:", "example_batch['numeric']\n\nnumeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)\nnumeric_layer(example_batch).numpy()", "A normalização baseada em média usada aqui requer conhecer os meios de cada coluna antes do tempo.\nDados categóricos\nAlgumas das colunas nos dados CSV são colunas categóricas. 
Ou seja, o conteúdo deve ser um dentre um conjunto limitado de opções.\nUse a API tf.feature_column para criar uma coleção com uma tf.feature_column.indicator_column para cada coluna categórica.", "CATEGORIES = {\n 'sex': ['male', 'female'],\n 'class' : ['First', 'Second', 'Third'],\n 'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],\n 'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],\n 'alone' : ['y', 'n']\n}\n\n\ncategorical_columns = []\nfor feature, vocab in CATEGORIES.items():\n cat_col = tf.feature_column.categorical_column_with_vocabulary_list(\n key=feature, vocabulary_list=vocab)\n categorical_columns.append(tf.feature_column.indicator_column(cat_col))\n\n# Veja o que você acabou de criar.\ncategorical_columns\n\ncategorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)\nprint(categorical_layer(example_batch).numpy()[0])", "Isso fará parte de uma entrada de processamento de dados posteriormente, quando você construir o modelo.\nCamada combinada de pré-processamento\nAdicione as duas coleções de colunas de recursos e passe-as para um tf.keras.layers.DenseFeatures para criar uma camada de entrada que extrairá e pré-processará os dois tipos de entrada:", "preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numeric_columns)\n\nprint(preprocessing_layer(example_batch).numpy()[0])", "Construir o modelo\nCrie um tf.keras.Sequential, começando com o preprocessing_layer.", "model = tf.keras.Sequential([\n preprocessing_layer,\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(1),\n])\n\nmodel.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer='adam',\n metrics=['accuracy'])", "Treinar, avaliar, e prever\nAgora o modelo pode ser instanciado e treinado.", "train_data = packed_train_data.shuffle(500)\ntest_data = packed_test_data\n\nmodel.fit(train_data, epochs=20)", "Depois que o modelo é treinado, você pode verificar sua acurácia no conjunto test_data.", "test_loss, test_accuracy = model.evaluate(test_data)\n\nprint('\\n\\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))", "Use tf.keras.Model.predict para inferir rótulos em um lote ou em um conjunto de dados de lotes.", "predictions = model.predict(test_data)\n\n# Mostrar alguns resultados\nfor prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):\n print(\"Predicted survival: {:.2%}\".format(prediction[0]),\n \" | Actual outcome: \",\n (\"SURVIVED\" if bool(survived) else \"DIED\"))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yangw1234/BigDL
apps/variational-autoencoder/using_variational_autoencoder_and_deep_feature_loss_to_generate_faces.ipynb
apache-2.0
[ "Using Variational Autoencoder and Deep Feature Loss to Generate Faces\nFrom the \"Using Variational Autoencoder to Generate Faces\" example, we see that using VAE, we can generate realistic human faces, but the generated image is a little blury. Though, you can continue to tuning the hyper paramters or using more data to get a better result, in this example, we adopted the approach in this paper. That is, instead of using pixel-by-pixel loss of between the original images and the generated images, we use the feature map generated by a pre-trained CNN network to define a feature perceptual loss. As you will see, the generated images will become more vivid.", "from bigdl.dllib.nn.layer import *\nfrom bigdl.dllib.nn.criterion import *\nfrom bigdl.dllib.optim.optimizer import *\nfrom bigdl.dllib.feature.dataset import mnist\nimport datetime as dt\nfrom glob import glob\nimport os\nimport numpy as np\nfrom utils import *\nimport imageio\n\nimage_size = 148\nZ_DIM = 100\nENCODER_FILTER_NUM = 32\n\n# we use the vgg16 model, it should work on other popular CNN models\n# You can download them here (https://github.com/intel-analytics/analytics-zoo/tree/master/models\n# download the data CelebA, and may repalce with your own data path\nDATA_PATH = os.getenv(\"ANALYTICS_ZOO_HOME\") + \"/apps/variational-autoencoder/img_align_celeba\"\nVGG_PATH = os.getenv(\"ANALYTICS_ZOO_HOME\")+\"/apps/variational-autoencoder/analytics-zoo_vgg-16_imagenet_0.1.0.model\"\n\nfrom bigdl.dllib.nncontext import *\nsc = init_nncontext(\"Variational Autoencoder Example\")\nsc.addFile(os.getenv(\"ANALYTICS_ZOO_HOME\")+\"/apps/variational-autoencoder/utils.py\")", "Define the Model\nWe are uing the same model as \"Using Variational Autoencoder to Generate Faces\" example.", "def conv_bn_lrelu(in_channels, out_channles, kw=4, kh=4, sw=2, sh=2, pw=-1, ph=-1):\n model = Sequential()\n model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))\n model.add(SpatialBatchNormalization(out_channles))\n model.add(LeakyReLU(0.2))\n return model\n\ndef upsample_conv_bn_lrelu(in_channels, out_channles, out_width, out_height, kw=3, kh=3, sw=1, sh=1, pw=-1, ph=-1):\n model = Sequential()\n model.add(ResizeBilinear(out_width, out_height))\n model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))\n model.add(SpatialBatchNormalization(out_channles))\n model.add(LeakyReLU(0.2))\n return model\n\ndef get_encoder_cnn():\n input0 = Input()\n \n #CONV\n conv1 = conv_bn_lrelu(3, ENCODER_FILTER_NUM)(input0) # 32 * 32 * 32\n conv2 = conv_bn_lrelu(ENCODER_FILTER_NUM, ENCODER_FILTER_NUM*2)(conv1) # 16 * 16 * 64\n conv3 = conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM*4)(conv2) # 8 * 8 * 128\n conv4 = conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*8)(conv3) # 4 * 4 * 256\n view = View([4*4*ENCODER_FILTER_NUM*8])(conv4)\n \n # fully connected to generate mean and log-variance\n mean = Linear(4*4*ENCODER_FILTER_NUM*8, Z_DIM)(view)\n log_variance = Linear(4*4*ENCODER_FILTER_NUM*8, Z_DIM)(view)\n \n model = Model([input0], [mean, log_variance])\n return model\n\ndef get_decoder_cnn():\n input0 = Input()\n \n linear = Linear(Z_DIM, 4*4*ENCODER_FILTER_NUM*8)(input0)\n reshape = Reshape([ENCODER_FILTER_NUM*8, 4, 4])(linear)\n bn = SpatialBatchNormalization(ENCODER_FILTER_NUM*8)(reshape)\n \n # upsampling\n up1 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*8, ENCODER_FILTER_NUM*4, 8, 8)(bn) # 8 * 8 * 128\n up2 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*2, 16, 16)(up1) # 16 * 16 
* 64\n up3 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM, 32, 32)(up2) # 32 * 32 * 32\n up4 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM, 3, 64, 64)(up3) # 64 * 64 * 3\n output = Tanh()(up4)\n \n model = Model([input0], [output])\n return model\n\ndef get_autoencoder_cnn():\n input0 = Input()\n encoder = get_encoder_cnn()(input0)\n sampler = GaussianSampler()(encoder)\n \n decoder_model = get_decoder_cnn()\n decoder = decoder_model(sampler)\n \n model = Model([input0], [encoder, decoder])\n return model, decoder_model", "Load the pre-trained CNN model", "def get_vgg():\n # we use the vgg16 model, it should work on other popular CNN models\n # You can download them here (https://github.com/intel-analytics/analytics-zoo/tree/master/models)\n vgg_whole = Model.from_jvalue(Model.loadModel(VGG_PATH).value)\n\n # we only use one feature map here for the sake of simlicity and efficiency\n # You can and other feature to the outputs to mix high-level and low-level\n # feature to get higher quality images\n outputs = [vgg_whole.node(name) for name in [\"relu1_2\"]]\n inputs = [vgg_whole.node(name) for name in [\"data\"]]\n \n outputs[0].remove_next_edges()\n\n vgg_light = Model(inputs, outputs).freeze()\n \n return vgg_light\n \n\nvgg = get_vgg()\n\nmodel, decoder = get_autoencoder_cnn()", "Load the Datasets", "def get_data():\n data_files = glob(os.path.join(DATA_PATH, \"*.jpg\"))\n \n rdd_train_images = sc.parallelize(data_files[:100000]) \\\n .map(lambda path: get_image(path, image_size).transpose(2, 0, 1))\n\n rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray(img, [np.array(0.0), img]))\n return rdd_train_sample\n\ntrain_data = get_data()", "Define the Training Objective", "criterion = ParallelCriterion()\ncriterion.add(KLDCriterion(), 0.005) # You may want to twick this parameter\ncriterion.add(TransformerCriterion(MSECriterion(), vgg, vgg), 1.0)", "Define the Optimizer", "batch_size = 64\n\n\n# Create an Optimizer\noptimizer = Optimizer(\n model=model,\n training_rdd=train_data,\n criterion=criterion,\n optim_method=Adam(0.0005),\n end_trigger=MaxEpoch(1),\n batch_size=batch_size)\n\n\napp_name='vae-'+dt.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntrain_summary = TrainSummary(log_dir='/tmp/vae',\n app_name=app_name)\n\n\noptimizer.set_train_summary(train_summary)\n\nprint (\"saving logs to \",app_name)", "Spin Up the Training\nThis could take a while. It took about 6 hours on a desktop with a intel i7-6700 cpu and 40GB java heap memory. 
You can reduce the training time by using less data (some changes in the \"Load the Dataset\" section), but the performce may not as good.", "redire_spark_logs()\nshow_bigdl_info_logs()\n\ndef gen_image_row():\n decoder.evaluate()\n return np.column_stack([decoder.forward(np.random.randn(1, Z_DIM)).reshape(3, 64,64).transpose(1, 2, 0) for s in range(8)])\n\ndef gen_image():\n return inverse_transform(np.row_stack([gen_image_row() for i in range(8)]))\n\nfor i in range(1, 6):\n optimizer.set_end_when(MaxEpoch(i))\n trained_model = optimizer.optimize()\n image = gen_image()\n if not os.path.exists(\"./images\"):\n os.makedirs(\"./images\")\n if not os.path.exists(\"./models\"):\n os.makedirs(\"./models\")\n # you may change the following directory accordingly and make sure the directory\n # you are writing to exists\n imageio.imwrite(\"./images/image_vgg_%s.png\" % i, image)\n decoder.saveModel(\"./models/decoder_vgg_%s.model\" % i, over_write = True)\n\nimport matplotlib\nmatplotlib.use('Agg')\n%pylab inline\n\nimport numpy as np\nimport datetime as dt\nimport matplotlib.pyplot as plt\n\nloss = np.array(train_summary.read_scalar(\"Loss\"))\n\nplt.figure(figsize = (12,12))\nplt.plot(loss[:,0],loss[:,1],label='loss')\nplt.xlim(0,loss.shape[0]+10)\nplt.grid(True)\nplt.title(\"loss\")", "Random Sample Some Images", "from matplotlib.pyplot import imshow\nimg = gen_image()\nimshow(img)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
asimshankar/tensorflow
tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\").\nGenerating Handwritten Digits with DCGAN\n<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a> \n</td><td>\n<a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a></td></table>\n\nThis tutorial demonstrates how to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN). The code is written in tf.keras with eager execution enabled. \n\nGenerating Handwritten Digits with DCGAN\n\nWhat are GANs?\n\nImport TensorFlow and enable eager execution\nLoad the dataset\nUse tf.data to create batches and shuffle the dataset\n\nCreate the models\n\nThe Generator Model\nThe Discriminator model\n\nDefine the loss functions and the optimizer\n\nGenerator loss\nDiscriminator loss\n\nSet up GANs for Training\nTrain the GANs\nGenerated images\nLearn more about GANs\n\n\nWhat are GANs?\nGANs, or Generative Adversarial Networks, are a framework for estimating generative models. Two models are trained simultaneously by an adversarial process: a Generator, which is responsible for generating data (say, images), and a Discriminator, which is responsible for estimating the probability that an image was drawn from the training data (the image is real), or was produced by the Generator (the image is fake). During training, the Generator becomes progressively better at generating images, until the Discriminator is no longer able to distinguish real images from fake. \n\nWe will demonstrate this process end-to-end on MNIST. Below is an animation that shows a series of images produced by the Generator as it was trained for 50 epochs. Overtime, the generated images become increasingly difficult to distinguish from the training set.\nTo learn more about GANs, we recommend MIT's Intro to Deep Learning course, which includes a lecture on Deep Generative Models (video | slides). Now, let's head to the code!", "# Install imgeio in order to generate an animated gif showing the image generating process\n!pip install imageio", "Import TensorFlow and enable eager execution", "import tensorflow as tf\ntf.enable_eager_execution()\n\nimport glob\nimport imageio\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport PIL\nimport time\n\nfrom IPython import display", "Load the dataset\nWe are going to use the MNIST dataset to train the generator and the discriminator. 
The generator will generate handwritten digits resembling the MNIST data.", "(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()\n\ntrain_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')\ntrain_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]\n\nBUFFER_SIZE = 60000\nBATCH_SIZE = 256", "Use tf.data to create batches and shuffle the dataset", "train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)", "Create the models\nWe will use tf.keras Sequential API to define the generator and discriminator models.\nThe Generator Model\nThe generator is responsible for creating convincing images that are good enough to fool the discriminator. The network architecture for the generator consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image two times in order to reach the desired image size of 28x28x1. We increase the width and height, and reduce the depth as we move through the layers in the network. We use Leaky ReLU activation for each layer except for the last one where we use a tanh activation.", "def make_generator_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))\n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n \n model.add(tf.keras.layers.Reshape((7, 7, 256)))\n assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size\n \n model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))\n assert model.output_shape == (None, 7, 7, 128) \n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n\n model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))\n assert model.output_shape == (None, 14, 14, 64) \n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n\n model.add(tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))\n assert model.output_shape == (None, 28, 28, 1)\n \n return model", "The Discriminator model\nThe discriminator is responsible for distinguishing fake images from real images. 
It's similar to a regular CNN-based image classifier.", "def make_discriminator_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'))\n model.add(tf.keras.layers.LeakyReLU())\n model.add(tf.keras.layers.Dropout(0.3))\n \n model.add(tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))\n model.add(tf.keras.layers.LeakyReLU())\n model.add(tf.keras.layers.Dropout(0.3))\n \n model.add(tf.keras.layers.Flatten())\n model.add(tf.keras.layers.Dense(1))\n \n return model\n\ngenerator = make_generator_model()\ndiscriminator = make_discriminator_model()", "Define the loss functions and the optimizer\nLet's define the loss functions and the optimizers for the generator and the discriminator.\nGenerator loss\nThe generator loss is a sigmoid cross entropy loss of the generated images and an array of ones, since the generator is trying to generate fake images that resemble the real images.", "def generator_loss(generated_output):\n return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output)", "Discriminator loss\nThe discriminator loss function takes two inputs: real images, and generated images. Here is how to calculate the discriminator loss:\n1. Calculate real_loss which is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images).\n2. Calculate generated_loss which is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images).\n3. Calculate the total_loss as the sum of real_loss and generated_loss.", "def discriminator_loss(real_output, generated_output):\n # [1,1,...,1] with real output since it is true and we want our generated examples to look like it\n real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)\n\n # [0,0,...,0] with generated images since they are fake\n generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)\n\n total_loss = real_loss + generated_loss\n\n return total_loss", "The discriminator and the generator optimizers are different since we will train two networks separately.", "generator_optimizer = tf.train.AdamOptimizer(1e-4)\ndiscriminator_optimizer = tf.train.AdamOptimizer(1e-4)", "Checkpoints (Object-based saving)", "checkpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,\n discriminator_optimizer=discriminator_optimizer,\n generator=generator,\n discriminator=discriminator)", "Set up GANs for Training\nNow it's time to put together the generator and discriminator to set up the Generative Adversarial Networks, as you see in the diagam at the beginning of the tutorial.\nDefine training parameters", "EPOCHS = 50\nnoise_dim = 100\nnum_examples_to_generate = 16\n\n# We'll re-use this random vector used to seed the generator so\n# it will be easier to see the improvement over time.\nrandom_vector_for_generation = tf.random_normal([num_examples_to_generate,\n noise_dim])", "Define training method\nWe start by iterating over the dataset. The generator is given a random vector as an input which is processed to output an image looking like a handwritten digit. The discriminator is then shown the real MNIST images as well as the generated images.\nNext, we calculate the generator and the discriminator loss. 
Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables.", "def train_step(images):\n # generating noise from a normal distribution\n noise = tf.random_normal([BATCH_SIZE, noise_dim])\n \n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n generated_images = generator(noise, training=True)\n \n real_output = discriminator(images, training=True)\n generated_output = discriminator(generated_images, training=True)\n \n gen_loss = generator_loss(generated_output)\n disc_loss = discriminator_loss(real_output, generated_output)\n \n gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)\n gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)\n \n generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))\n discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))", "This model takes about ~30 seconds per epoch to train on a single Tesla K80 on Colab, as of October 2018. \nEager execution can be slower than executing the equivalent graph as it can't benefit from whole-program optimizations on the graph, and also incurs overheads of interpreting Python code. By using tf.contrib.eager.defun to create graph functions, we get a ~20 secs/epoch performance boost (from ~50 secs/epoch down to ~30 secs/epoch). This way we get the best of both eager execution (easier for debugging) and graph mode (better performance).", "train_step = tf.contrib.eager.defun(train_step)\n\ndef train(dataset, epochs): \n for epoch in range(epochs):\n start = time.time()\n \n for images in dataset:\n train_step(images)\n\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epoch + 1,\n random_vector_for_generation)\n \n # saving (checkpoint) the model every 15 epochs\n if (epoch + 1) % 15 == 0:\n checkpoint.save(file_prefix = checkpoint_prefix)\n \n print ('Time taken for epoch {} is {} sec'.format(epoch + 1,\n time.time()-start))\n # generating after the final epoch\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epochs,\n random_vector_for_generation)", "Generate and save images", "def generate_and_save_images(model, epoch, test_input):\n # make sure the training parameter is set to False because we\n # don't want to train the batchnorm layer when doing inference.\n predictions = model(test_input, training=False)\n\n fig = plt.figure(figsize=(4,4))\n \n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i+1)\n plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')\n plt.axis('off')\n \n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()", "Train the GANs\nWe will call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).\nAt the beginning of the training, the generated images look like random noise. As training progresses, you can see the generated digits look increasingly real. After 50 epochs, they look very much like the MNIST digits.", "%%time\ntrain(train_dataset, EPOCHS)", "Restore the latest checkpoint", "# restoring the latest checkpoint in checkpoint_dir\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))", "Generated images\nAfter training, its time to generate some images! 
\nThe last step is to plot the generated images and voila!", "# Display a single image using the epoch number\ndef display_image(epoch_no):\n return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))\n\ndisplay_image(EPOCHS)", "Generate a GIF of all the saved images\nWe will use imageio to create an animated gif using all the images saved during training.", "with imageio.get_writer('dcgan.gif', mode='I') as writer:\n filenames = glob.glob('image*.png')\n filenames = sorted(filenames)\n last = -1\n for i,filename in enumerate(filenames):\n frame = 2*(i**0.5)\n if round(frame) > round(last):\n last = frame\n else:\n continue\n image = imageio.imread(filename)\n writer.append_data(image)\n image = imageio.imread(filename)\n writer.append_data(image)\n \n# this is a hack to display the gif inside the notebook\nos.system('cp dcgan.gif dcgan.gif.png')", "Display the animated gif with all the mages generated during the training of GANs.", "display.Image(filename=\"dcgan.gif.png\")", "Download the animated gif\nUncomment the code below to download an animated gif from Colab.", "#from google.colab import files\n#files.download('dcgan.gif')", "Learn more about GANs\nWe hope this tutorial was helpful! As a next step, you might like to experiment with a different dataset, for example the Large-scale Celeb Faces Attributes (CelebA) dataset available on Kaggle.\nTo learn more about GANs:\n\n\nCheck out MIT's lecture (linked above), or this lecture form Stanford's CS231n. \n\n\nWe also recommend the CVPR 2018 Tutorial on GANs, and the NIPS 2016 Tutorial: Generative Adversarial Networks." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gdementen/larray
doc/source/tutorial/tutorial_combine_arrays.ipynb
gpl-3.0
[ "Combining arrays\nImport the LArray library:", "from larray import *\n\n# load the 'demography_eurostat' dataset\ndemography_eurostat = load_example_data('demography_eurostat')\n\n# load 'gender' and 'time' axes\ngender = demography_eurostat.gender\ntime = demography_eurostat.time\n\n# load the 'population' array from the 'demography_eurostat' dataset\npopulation = demography_eurostat.population\n\n# show 'population' array \npopulation\n\n# load the 'population_benelux' array from the 'demography_eurostat' dataset\npopulation_benelux = demography_eurostat.population_benelux\n\n# show 'population_benelux' array \npopulation_benelux", "The LArray library offers several methods and functions to combine arrays:\n\ninsert: inserts an array in another array along an axis\nappend: adds an array at the end of an axis.\nprepend: adds an array at the beginning of an axis.\nextend: extends an array along an axis.\nstack: combines several arrays along a new axis.\n\nInsert", "other_countries = zeros((Axis('country=Luxembourg,Netherlands'), gender, time), dtype=int)\n\n# insert new countries before 'France'\npopulation_new_countries = population.insert(other_countries, before='France')\npopulation_new_countries\n\n# insert new countries after 'France'\npopulation_new_countries = population.insert(other_countries, after='France')\npopulation_new_countries", "See insert for more details and examples.\nAppend\nAppend one element to an axis of an array:", "# append data for 'Luxembourg'\npopulation_new = population.append('country', population_benelux['Luxembourg'], 'Luxembourg')\npopulation_new", "The value being appended can have missing (or even extra) axes as long as common axes are compatible:", "population_lux = Array([-1, 1], gender)\npopulation_lux\n\npopulation_new = population.append('country', population_lux, 'Luxembourg')\npopulation_new", "See append for more details and examples.\nPrepend\nPrepend one element to an axis of an array:", "# append data for 'Luxembourg'\npopulation_new = population.prepend('country', population_benelux['Luxembourg'], 'Luxembourg')\npopulation_new", "See prepend for more details and examples.\nExtend\nExtend an array along an axis with another array with that axis (but other labels)", "population_extended = population.extend('country', population_benelux[['Luxembourg', 'Netherlands']])\npopulation_extended", "See extend for more details and examples.\nStack\nStack several arrays together to create an entirely new dimension", "# imagine you have loaded data for each country in different arrays \n# (e.g. loaded from different Excel sheets)\npopulation_be = population['Belgium']\npopulation_fr = population['France']\npopulation_de = population['Germany']\n\nprint(population_be)\nprint(population_fr)\nprint(population_de)\n\n# create a new array with an extra axis 'country' by stacking the three arrays population_be/fr/de\npopulation_stacked = stack({'Belgium': population_be, 'France': population_fr, 'Germany': population_de}, 'country')\npopulation_stacked", "See stack for more details and examples." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hasecbinusr/pysal
pysal/contrib/clusterpy/clusterpy.ipynb
bsd-3-clause
[ "import pysal.contrib.clusterpy as cp\n%pylab inline\n\nimport numpy as np\nimport pysal as ps\nfrom collections import Counter\n\ncolumbus = cp.loadArcData(ps.examples.get_path('columbus.shp'))\ncolumbus.fieldNames\nn = len(columbus.Wqueen)\n#columbus.generateData('Uniform', 'rook', 1, 1, 10)\ncolumbus.dataOperation(\"CONSTANT = 1\")\ncolumbus.cluster('maxpTabu', ['CRIME', 'CONSTANT'], threshold=4, dissolve=0, std=0)\n\n\nCounter(columbus.region2areas)\n\ncolumbus.cluster('arisel', ['CRIME'], 5, wType='rook', inits=10, dissolve=0)\n#calif.cluster('arisel', ['PCR2002'], 9, wType='rook', inits=10, dissolve=1)\n\n# regionalization solutions are added as a list of region ids at the end\ncolumbus.fieldNames\n\nwarisel = ps.block_weights(columbus.region2areas)\n\nwarisel.neighbors\n\nwregimes = ps.block_weights(columbus.region2areas)\n\ncolumbus.region2areas[5]\n\nwregimes.n\n\nwregimes.neighbors", "Attrribute data from a csv file and a W from a gal file", "mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))\n\nmexico.fieldNames\n\nw = ps.open(ps.examples.get_path('mexico.gal')).read()\n\nw.n\n\ncp.addRook2Layer(ps.examples.get_path('mexico.gal'), mexico)\n\nmexico.Wrook\n\nmexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)\n\n\nmexico.fieldNames\n\nmexico.getVars('pcgdp1940')\n\n# mexico example all together\n\ncsvfile = ps.examples.get_path('mexico.csv')\ngalfile = ps.examples.get_path('mexico.gal')\n\nmexico = cp.importCsvData(csvfile)\ncp.addRook2Layer(galfile, mexico)\nmexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)\n\n\nmexico.region2areas.index(2)\n\nmexico.Wrook[0]\n\nmexico.getVars('State')\n\nregions = np.array(mexico.region2areas)\n\nregions\n\nCounter(regions)", "Attrribute data from a csv file and an external W object", "mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))\n\nw = ps.open(ps.examples.get_path('mexico.gal')).read()\n\ncp.addW2Layer(w, mexico)\n\nmexico.Wrook\n\nmexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)", "Shapefile and mapping results with PySAL Viz", "usf = ps.examples.get_path('us48.shp')\n\nus = cp.loadArcData(usf.split(\".\")[0])\n\nus.Wqueen\n\nus.fieldNames\n\nuscsv = ps.examples.get_path(\"usjoin.csv\")\n\nf = ps.open(uscsv)\npci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T\n\npci\n\nusy = cp.Layer()\n\ncp.addQueen2Layer(ps.examples.get_path('states48.gal'), usy)\n\nnames = [\"Y_%d\"%v for v in range(1929,2010)]\ncp.addArray2Layer(pci, usy, names)\n\nnames\n\nusy.fieldNames\n\nusy.getVars('Y_1929')\n\nusy.Wrook\n\nusy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)\n#mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)\n\n\nus = cp.Layer()\n\ncp.addQueen2Layer(ps.examples.get_path('states48.gal'), us)\n\nuscsv = ps.examples.get_path(\"usjoin.csv\")\n\nf = ps.open(uscsv)\npci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T\nnames = [\"Y_%d\"%v for v in range(1929,2010)]\ncp.addArray2Layer(pci, us, names)\n\nusy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)\n\n\nus_alpha = cp.importCsvData(ps.examples.get_path('usjoin.csv'))\n\nalpha_fips = us_alpha.getVars('STATE_FIPS')\n\nalpha_fips\n\ndbf = ps.open(ps.examples.get_path('us48.dbf'))\n\ndbf.header\n\nstate_fips = dbf.by_col('STATE_FIPS')\nnames = dbf.by_col('STATE_NAME')\n\nnames\n\nstate_fips = map(int, state_fips)\n\nstate_fips\n\n# the csv file has the states ordered alphabetically, but this 
isn't the case for the order in the shapefile so we have to reorder before any choropleths are drawn\nalpha_fips = [i[0] for i in alpha_fips.values()]\nreorder = [ alpha_fips.index(s) for s in state_fips]\n\nregions = usy.region2areas\n\nregions\n\nfrom pysal.contrib.viz import mapping as maps\n\nshp = ps.examples.get_path('us48.shp')\nregions = np.array(regions)\n\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values')\n\nusy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)\n\n\nregions = usy.region2areas\n\nregions = np.array(regions)\n\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values')\n\nnames = [\"Y_%d\"%i for i in range(1929, 2010)]\n#usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)\nusy.cluster('arisel', names, 8, wType='queen', inits=10, dissolve=0)\n\n\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='All Years')\n\nps.version\n\nusy.cluster('arisel', names[:40], 8, wType='queen', inits=10, dissolve=0)\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1929-68')\n\nusy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1969-2009')\n\nusy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)\n\nusy.dataOperation(\"CONSTANT = 1\")\nusy.Wrook = usy.Wqueen\nusy.cluster('maxpTabu', ['Y_1929', 'Y_1929'], threshold=1000, dissolve=0)\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')\n\nCounter(regions)\n\nusy.getVars('Y_1929')\n\nusy.Wrook\n\nusy.cluster('maxpTabu', ['Y_1929', 'CONSTANT'], threshold=8, dissolve=0)\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')\n\nregions\n\nCounter(regions)\n\nvars = names\n\nvars.append('CONSTANT')\n\nvars\n\nusy.cluster('maxpTabu', vars, threshold=8, dissolve=0)\nregions = usy.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929-2009')\n\nCounter(regions)\n\nsouth = cp.loadArcData(ps.examples.get_path(\"south.shp\"))\n\nsouth.fieldNames\n\n# uncomment if you have some time ;->\n#south.cluster('arisel', ['HR70'], 20, wType='queen', inits=10, dissolve=0)\n\n#regions = south.region2areas\n\nshp = ps.examples.get_path('south.shp')\n#maps.plot_choropleth(shp, np.array(regions), 'unique_values')\n\nsouth.dataOperation(\"CONSTANT = 1\")\nsouth.cluster('maxpTabu', ['HR70', 'CONSTANT'], threshold=70, dissolve=0)\nregions = south.region2areas\nregions = np.array(regions)\nmaps.plot_choropleth(shp, regions, 'unique_values', title='maxp HR70 threshold=70')\n\nCounter(regions)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scottkleinman/WE1S
we1s-test/ipython-notebook-test.ipynb
mit
[ "Introduction\nIn this test, we'll repeat the previous one by working directly from this pre-written jupyter notebook. Start by configuring the file paths in the cell below. Make sure that you are using the appropriate file path for you system. If you have Mac, remove the # from the beginning of the second input_path and output_path lines and type it at the beginning of the first input_path and output_path lines, which contain C:\\\\. When you are finished, type Shift+Enter. If nothing happens, your configuration was successful. If you get an error, go back and check your file paths.", "##### Configuration #####\n\n# Configure the filename\nfilename = \"file.txt\"\n\n# Configure the path to input directory\ninput_path = \"C:\\Users\\USERNAME\\Desktop\\we1s-test\\input\"\n# input_path = \"/Users/USERNAME/Desktop/we1s-test/input\"\n\n# Configure the path to output directory\noutput_path = \"C:\\Users\\USERNAME\\Desktop\\we1s-test\\output\"\n# output_path = \"/Users/USERNAME/Desktop/we1s-test/output\"\n\n##### End of Configuration #####", "Now you are ready to run the main part of the code. Click on the cell below and then type Shift+Enter. If you do not get an error, you should receive a message indicating that a new file has been written to your output folder.", "# Import the os package to manage file paths\nimport os\n\n# Create input and out file paths\ninput_file = os.path.join(input_path, filename)\noutput_file = os.path.join(output_path, filename)\n\n# Open the input file and read it\nf = open(input_file)\ntext = f.read()\nf.close()\nprint(\"The input file says: \" + text)\n\n# Convert the text to lower case\ntext = text.lower()\n\n# Open the output file for writing and save the new text to it\noutput_file = os.path.join(output_path, \"file2.txt\")\nf = open(output_file, \"w\")\nf.write(text)\nf.close()\nprint(\"I've just written a new file called 'file2.txt' to your output folder. Check it out!\")", "If you did not receive an error, you have successfully run the jupyter notebook test. You can now exit from jupyter notebooks. To do this, close the jupyter notebooks windows and return to the command or terminal prompt. Type Control/Command+c to interrupt the Python process. You may have to do this several times. Once the process is interrupted, you can type exit followed by enter to exit from the command prompt or terminal." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
mdeff/ntds_2016
project/reports/airbnb_booking/Main Preprocessing.ipynb
mit
[ "----------------------------- AIRBNB CHALLENGE -----------------------------\nWhere will new guests book their first travel experience?\nMalo Grisard, Guillaume Jaume, Cyril Pecoraro - EPFL - 15th of January 2017\n<br>\n<br>\nMain Preprocessing:\nPipeline:\n 1. Data exploration and cleaning\n 2. Machine learning preprocessing\n 3. Machine learning optimization\n 4. Results\nThe purpose of this project is to predict which country a new user's first booking destination will be. We are given a list of users along with their demographics, web session records, and some summary statistics. All the users in this dataset are from the USA.\nThere are 12 possible outcomes of the destination country: 'US', 'FR', 'CA', 'GB', 'ES', 'IT', 'PT', 'NL','DE', 'AU', 'NDF' (no destination found), and 'other'.\nIn this notebook, we explored and cleaned the given data and try to show the most relevant features extracted.\nThe cleaned data are saved into separated files and analysed in the Machine Learning Notebook.\nImportant Remark: Due to size constraints, we were not able to load the file session.csv in the Git, you can upload it directly online from the Kaggle Competition here.", "import pandas as pd\nimport os as os\nimport preprocessing_helper as preprocessing_helper\nimport matplotlib as plt\n% matplotlib inline", "1. Data exploration and cleaning\nThe dataset is composed by several files. First, we are going to explore each of them and clean some variables. For a complete explanation of each file, please see the file DATA.md.\n1.1 file 'train_users_2.csv'\nThis file is the most important file in our dataset as it contains the users, information about them and the country of destination.\nWhen a user has booked a travel through Airbnb, the destination country will be specified. Otherwise, 'NDF' will be indicated.", "filename = \"train_users_2.csv\"\nfolder = 'data'\nfileAdress = os.path.join(folder, filename)\ndf = pd.read_csv(fileAdress)\ndf.head()", "There are missing values in the columns : \n\ndate_first_booking : users that never booked an airbnb apartment\ngender : users that didn't wish to specify their gender\nage : users that didn't wish to specify their age\nfirst_affiliate_tracked : problem of missing data\n\nWe wil go each of these variable and take decisions regarding the missing values", "df.isnull().any()", "Ages\nThere are 2 problems regarding ages in the dataset.\nFirst, many users did not specify an age.\nAlso, some users specified their year of birth instead of age.\nFor the relevancy of the data we will keep users between the age of 15 and 100 years old, and those who specified their age.\nFor the others, we will naively assign a value of -1", "df = preprocessing_helper.cleanAge(df,'k')", "The following graph presents the distribution of ages in the dataset. Also, the irrelevant ages are represented here, with their value of -1.", "preprocessing_helper.plotAge(df)", "Gender\nThe following graph highlights the gender of Airbnb users. Note that around 45% did not specify their age.", "df = preprocessing_helper.cleanGender(df)\npreprocessing_helper.plotGender(df)", "first_affiliate_tracked feature\nSet the first marketing the user interacted with before the signing up to 'Untracked' if not specified.", "df = preprocessing_helper.cleanFirst_affiliate_tracked(df)", "Date_first_booking\n\nThis has a high similarity with the dates where accounts were created. 
Despite the high growth of airbnb bookings throughout the years, it is possible to see that the difference between the months increases over the years as each year parabol curve increases.\nBy studying each year independently, we could see that four peaks arise each month corresponding to a certain day in the week. The following plots will show the bookings distribution over the months and later over the week\nThe most prolific day for Airbnb counted 248 bookings", "df = preprocessing_helper.cleanDate_First_booking(df)\npreprocessing_helper.plotDate_First_booking_years(df)", "It is possible to understand from this histogram that the bookings are pretty well spread over the year. Much less bookings are made during november and december and the months of May and June are the ones where users book the most. For these two months Airbnb counts more than 20000 bookings which corresponds to allmost a quarter of the bookings from our dataset.", "preprocessing_helper.plotDate_First_booking_months(df)", "As for the day where most accounts are created, it seems that tuesday and wednesdays are the days where people book the most appartments on Airbnb.", "preprocessing_helper.plotDate_First_booking_weekdays(df)", "Save cleaned and explored file", "filename = \"cleaned_train_user.csv\"\nfolder = 'cleaned_data'\nfileAdress = os.path.join(folder, filename)\npreprocessing_helper.saveFile(df, fileAdress)", "1.2 file 'test_user.csv'\nThis file has a similar structure than train_user_2.csv, so here, we will just do the cleaning process here.", "# extract file \nfilename = \"test_users.csv\"\nfolder = 'data'\nfileAdress = os.path.join(folder, filename)\ndf = pd.read_csv(fileAdress)\n# process file\ndf = preprocessing_helper.cleanAge(df,'k')\ndf = preprocessing_helper.cleanGender(df)\ndf = preprocessing_helper.cleanFirst_affiliate_tracked(df)\n# save file \nfilename = \"cleaned_test_user.csv\"\nfolder = 'cleaned_data'\nfileAdress = os.path.join(folder, filename)\npreprocessing_helper.saveFile(df, fileAdress)", "1.3 file 'countries.csv'\nThis file presents a summary of the countries presented in the dataset.\nThis is the signification:\n- 'AU' = Australia\n- 'ES' = Spain\n- 'PT' = Portugal\n- 'US' = USA\n- 'FR' = France\n- 'CA' = Canada\n- 'GB' = Great Britain\n- 'IT' = Italy\n- 'NL' = Netherlands\n- 'DE' = Germany\n- 'NDF'= No destination found\nAll the variables are calculated wrt. the US and english. The levenshtein distance is an indication on how far is the language spoken in the destination country compared to english. All the other variables are general geographics elements. This file will not be used in our model as it does not give direct indications regarding the users.", "filename = \"countries.csv\"\nfolder = 'data'\nfileAdress = os.path.join(folder, filename)\ndf = pd.read_csv(fileAdress)\ndf\n\ndf.describe()", "1.4 file 'age_gender_bkts.csv'\nThis file presents demograhpic statistics about each country present in our dataset. This file will not be used in our model.", "filename = \"age_gender_bkts.csv\"\nfolder = 'data'\nfileAdress = os.path.join(folder, filename)\ndf = pd.read_csv(fileAdress)\ndf.head()", "Population total per country\nThe following table shows the population in the country in 2015. These numbers correspond to data that can be found on the web.", "df_country = df.groupby(['country_destination'],as_index=False).sum()\ndf_country", "1.5 file 'sessions.csv'\nThis file keeps a track of each action made by each user (represented by their id). 
For each action (lookup, search etc...), the device type is saved so as the time spend for this action.", "filename = \"sessions.csv\"\nfolder = 'data'\nfileAdress = os.path.join(folder, filename)\ndf = pd.read_csv(fileAdress)\ndf.head()", "NaN users\nAs we can see, there are some missing user_id. Without a user_id, it is impossible to link them with the file train_user.csv. We will delete them as we cannot do anything with them.", "df.isnull().any()\n\ndf = preprocessing_helper.cleanSubset(df, 'user_id') ", "Invalid session time\nIf a session time is NaN, there was probably an error during the session. We are not going to remove the rows correponding, because there are still some data interesting for the actions variable.\nInstead, we are naively going to assign them a value of -1.", "df['secs_elapsed'].fillna(-1, inplace = True)", "Actions\nSome action produce -unknown- for action_type and/or action_detail. Sometimes they produce NaN. We replace the NaN values with -unknown- for action_type,action_detail, action", "df = preprocessing_helper.cleanAction(df)", "As shown in the following, there are no more NaN values.", "df.isnull().any()", "Total number of actions per user\nFrom the session, we can compute the total number of actions per user. Intuitively, we can imagine that a user totalising few actions might be a user that does not book in the end. This value will be used as a new feature for the machine learning.\nNote: The total number of actions is represented on a logarithmic basis.", "# Get total number of action per user_id\ndata_session_number_action = preprocessing_helper.createActionFeature(df)\n\n# Save to .csv file\nfilename = \"total_action_user_id.csv\"\nfolder = 'cleaned_data'\nfileAdress = os.path.join(folder, filename)\npreprocessing_helper.saveFile(data_session_number_action, fileAdress)\n\n# Plot distribution total number of action per user_id\npreprocessing_helper.plotActionFeature(data_session_number_action)", "Device types\nThere are 14 possible devices. Most of the users however are distributed on three main devices.", "preprocessing_helper.plotHist(df['device_type'])", "Time spent on average per user\nThe figure below shows the time spent on average per user. The following plot relates to the Total number of actions one with even clearer two gaussians.\nWe display only time > 20s.\nThis value will also be used as a feature for the machine learning.", "# Get Time spent on average per user_id\ndata_time_mean = preprocessing_helper.createAverageTimeFeature(df)\n\n# Save to .csv file\ndata_time_mean = data_time_mean.rename(columns={'user_id': 'id'})\nfilename = \"time_mean_user_id.csv\"\nfolder = 'cleaned_data'\nfileAdress = os.path.join(folder, filename)\npreprocessing_helper.saveFile(data_time_mean, fileAdress)\n\n# Plot distribution average time of session per user_id\npreprocessing_helper.plotTimeFeature(data_time_mean['secs_elapsed'],'mean')", "Time spent in total per user\nThe figure below shows the total amount of time spent per user. \nWe display only time > 20s\nThis feature is the 3rd and last one used for the machine learning from the file session. 
Intuitively, a long time spent leads to a booking and possibly further destinations.", "# Get Time spent in total per user_id\ndata_time_total = preprocessing_helper.createTotalTimeFeature(df)\n\n# Save to .csv file\ndata_time_total = data_time_total.rename(columns={'user_id': 'id'})\nfilename = \"time_total_user_id.csv\"\nfolder = 'cleaned_data'\nfileAdress = os.path.join(folder, filename)\npreprocessing_helper.saveFile(data_time_total, fileAdress)\n\n# Plot distribution total time of session per user_id\npreprocessing_helper.plotTimeFeature(data_time_total['secs_elapsed'],'total')", "Distribution of time spent\nThis last graph shows the distribution of time spent in seconds per session. \nWe display only time > 20s", "preprocessing_helper.plotTimeFeature(df['secs_elapsed'],'dist')", "Conclusion on the preprocessing\nThrough this notebook, we explored all the files in the dataset and displayed the most relevant statistics. From the file session, we constructed features to reinforce the train_user2 file. \nStarting from the cleaned data generated, we are now able to design a machine learning model. This problem will be addressed in the second notebook Machine Learning." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hanleilei/note
training/submit/PythonExercises1stAnd2nd_sulution.ipynb
cc0-1.0
[ "Python入门 第一周和第二周的练习\n练习\n回答下列粗体文字所描述的问题,如果需要,使用任何合适的方法,以掌握技能,完成自己想要的程序为目标,不用太在意实现的过程。\n 7 的四次方是多少?", "pow(7, 4)", "分割以下字符串 \ns = \"Hi there Sam!\"\n\n 到一个列表中", "s = \"Hi there Sam!\"\ns.split(' ')", "提供了一下两个变量 \nplanet = \"Earth\"\ndiameter = 12742\n\n 使用format()函数输出一下字符串 \nThe diameter of Earth is 12742 kilometers.", "planet = \"Earth\"\ndiameter = 12742\n\"The diameter of {0} is {1} kilometers.\".format(planet, diameter)", "提供了以下嵌套列表,使用索引的方法获取单词‘hello'", "lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]\n\nlst[3][1][2][0]", "提供以下嵌套字典,从中抓去单词 “hello”", "d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}\n\nd['k1'][3]['tricky'][3]['target'][3]", "字典和列表之间的差别是什么??", "# Just answer with text, no code necessary", "编写一个函数,该函数能够获取类似于以下email地址的域名部分 \[email protected]\n\n 因此,对于这个示例,传入 \"[email protected]\" 将返回: domain.com", "'[email protected]'.split('@')[-1]\n\ndef domain(email):\n return email.split('@')[-1]", "创建一个函数,如果输入的字符串中包含‘dog’,(请忽略corn case)统计一下'dog'的个数", "ss = 'This dog runs faster than the other dog dude!'\n\ndef countdog(s):\n return s.lower().split(' ').count('dog')\n\ncountdog(ss)\n\ndef countDog(st):\n count = 0\n for word in st.lower().split():\n if word == 'dog':\n count += 1\n return count", "创建一个函数,判断'dog' 是否包含在输入的字符串中(请同样忽略corn case)", "s = 'I have a dog'\n\ndef judge_dog_in_str(s):\n return 'dog' in s.lower().split(' ')\n\njudge_dog_in_str(s)", "如果你驾驶的过快,交警就会拦下你。编写一个函数来返回以下三种可能的情况之一:\"No ticket\", \"Small ticket\", 或者 \"Big Ticket\". \n 如果速度小于等于60, 结果为\"No Ticket\". 如果速度在61和80之间, 结果为\"Small Ticket\". 如果速度大于81,结果为\"Big Ticket\". 除非这是你的生日(传入一个boolean值),如果是生日当天,就允许超速5公里/小时。(同样,请忽略corn case)。", "def caught_speeding(speed, is_birthday):\n \n if is_birthday:\n speeding = speed - 5\n else:\n speeding = speed\n \n if speeding > 80:\n return 'Big Ticket'\n elif speeding > 60:\n return 'Small Ticket'\n else:\n return 'No Ticket'\n\ncaught_speeding(81,True)\n\ncaught_speeding(81,False)", "计算斐波那契数列,使用生成器实现", "def fib_dyn(n):\n a,b = 1,1\n for i in range(n-1):\n a,b = b,a+b\n return a\n\nfib_dyn(10)\n\ndef fib_recur(n):\n if n == 0:\n return 0\n if n == 1:\n return 1\n else:\n return fib_recur(n-1) + fib_recur(n-2)\nfib_recur(10)\n\ndef fib(max): \n n, a, b = 0, 0, 1 \n while n < max: \n yield b \n # print(b)\n a, b = b, a + b \n n = n + 1 \nprint(list(fib(10))[-1])", "Great job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ko/guide/basic_training_loops.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "기본 훈련 루프\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/basic_training_loops\" class=\"\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/basic_training_loops.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/basic_training_loops.ipynb\" class=\"\"> <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"> GitHub에서 소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/basic_training_loops.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드</a></td>\n</table>\n\n이전 가이드에서 텐서, 변수, 그래디언트 테이프 , 모듈에 관해 배웠습니다. 이 가이드에서는 모델을 훈련하기 위해 이들 요소를 모두 맞춤 조정합니다.\nTensorFlow에는 상용구를 줄이기 위해 유용한 추상화를 제공하는 상위 수준의 신경망 API인 tf.Keras API도 포함되어 있습니다. 그러나 이 가이드에서는 기본 클래스를 사용합니다.\n설정", "import tensorflow as tf", "머신러닝 문제 해결하기\n머신러닝 문제의 해결은 일반적으로 다음 단계로 구성됩니다.\n\n훈련 데이터를 얻습니다.\n모델을 정의합니다.\n손실 함수를 정의합니다.\n훈련 데이터를 실행하여 이상적인 값에서 손실을 계산합니다.\n손실에 대한 기울기를 계산하고 최적화 프로그램 를 사용하여 데이터에 맞게 변수를 조정합니다.\n결과를 평가합니다.\n\n설명을 위해 이 가이드에서는 $W$(가중치) 및 $b$(바이어스)의 두 가지 변수가 있는 간단한 선형 모델 $f(x) = x * W + b$를 개발합니다.\n이것이 가장 기본적인 머신러닝 문제입니다. $x$와 $y$가 주어지면 간단한 선형 회귀를 통해 선의 기울기와 오프셋을 찾습니다.\n데이터\n지도 학습은 입력(일반적으로 x로 표시됨)과 출력(y로 표시, 종종 레이블이라고 함)을 사용합니다. 목표는 입력에서 출력 값을 예측할 수 있도록 쌍을 이룬 입력과 출력에서 학습하는 것입니다.\nTensorFlow에서 데이터의 각 입력은 거의 항상 텐서로 표현되며, 종종 벡터입니다. 지도 학습에서 출력(또는 예측하려는 값)도 텐서입니다.\n다음은 선을 따라 점에 가우시안 (정규 분포) 노이즈를 추가하여 합성된 데이터입니다.", "# The actual line\nTRUE_W = 3.0\nTRUE_B = 2.0\n\nNUM_EXAMPLES = 1000\n\n# A vector of random x values\nx = tf.random.normal(shape=[NUM_EXAMPLES])\n\n# Generate some noise\nnoise = tf.random.normal(shape=[NUM_EXAMPLES])\n\n# Calculate y\ny = x * TRUE_W + TRUE_B + noise\n\n# Plot all the data\nimport matplotlib.pyplot as plt\n\nplt.scatter(x, y, c=\"b\")\nplt.show()", "텐서는 일반적으로 배치 또는 입력과 출력이 함께 쌓인 그룹의 형태로 수집됩니다. 일괄 처리는 몇 가지 훈련 이점을 제공할 수 있으며 가속기 및 벡터화된 계산에서 잘 동작합니다. 데이터세트가 얼마나 작은지를 고려할 때 전체 데이터세트를 단일 배치로 처리할 수 있습니다.\n모델 정의하기\ntf.Variable을 사용하여 모델의 모든 가중치를 나타냅니다. tf.Variable은 값을 저장하고 필요에 따라 텐서 형식으로 제공합니다. 자세한 내용은 변수 가이드를 참조하세요.\ntf.Module을 사용하여 변수와 계산을 캡슐화합니다. 
모든 Python 객체를 사용할 수 있지만 이렇게 하면 쉽게 저장할 수 있습니다.\n여기서 w와 b를 모두 변수로 정의합니다.", "class MyModel(tf.Module):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n # Initialize the weights to `5.0` and the bias to `0.0`\n # In practice, these should be randomly initialized\n self.w = tf.Variable(5.0)\n self.b = tf.Variable(0.0)\n\n def __call__(self, x):\n return self.w * x + self.b\n\nmodel = MyModel()\n\n# List the variables tf.modules's built-in variable aggregation.\nprint(\"Variables:\", model.variables)\n\n# Verify the model works\nassert model(3.0).numpy() == 15.0", "초기 변수는 여기에서 고정된 방식으로 설정되지만 Keras에는 나머지 Keras의 유무에 관계없이 사용할 수 있는 여러 초기화 프로그램이 함께 제공됩니다.\n손실 함수 정의하기\n손실 함수는 주어진 입력에 대한 모델의 출력이 목표 출력과 얼마나 잘 일치하는지 측정합니다. 목표는 훈련 중에 이러한 차이를 최소화하는 것입니다. \"평균 제곱\" 오류라고도 하는 표준 L2 손실을 정의합니다.", "# This computes a single loss value for an entire batch\ndef loss(target_y, predicted_y):\n return tf.reduce_mean(tf.square(target_y - predicted_y))", "모델을 훈련하기 전에 모델의 예측을 빨간색으로, 훈련 데이터를 파란색으로 플롯하여 손실값을 시각화할 수 있습니다.", "plt.scatter(x, y, c=\"b\")\nplt.scatter(x, model(x), c=\"r\")\nplt.show()\n\nprint(\"Current loss: %1.6f\" % loss(y, model(x)).numpy())", "훈련 루프 정의하기\n훈련 루프는 순서대로 3가지 작업을 반복적으로 수행하는 것으로 구성됩니다.\n\n모델을 통해 입력 배치를 전송하여 출력 생성\n출력을 출력(또는 레이블)과 비교하여 손실 계산\n그래디언트 테이프를 사용하여 그래디언트 찾기\n해당 그래디언트로 변수 최적화\n\n이 예제에서는 경사 하강법을 사용하여 모델을 훈련할 수 있습니다.\ntf.keras.optimizers에서 캡처되는 경사 하강법 체계에는 다양한 변형이 있습니다. 하지만 첫 번째 원칙을 준수하는 의미에서, 기본적인 수학을 직접 구현할 것입니다. 자동 미분을 위한 tf.GradientTape 및 값 감소를 위한 tf.assign_sub(tf.assign과 tf.sub를 결합하는 값)의 도움을 받습니다.", "# Given a callable model, inputs, outputs, and a learning rate...\ndef train(model, x, y, learning_rate):\n\n with tf.GradientTape() as t:\n # Trainable variables are automatically tracked by GradientTape\n current_loss = loss(y, model(x))\n\n # Use GradientTape to calculate the gradients with respect to W and b\n dw, db = t.gradient(current_loss, [model.w, model.b])\n\n # Subtract the gradient scaled by the learning rate\n model.w.assign_sub(learning_rate * dw)\n model.b.assign_sub(learning_rate * db)", "훈련을 살펴보려면 훈련 루프를 통해 x 및 y의 같은 배치를 보내고 W 및 b가 발전하는 모습을 확인합니다.", "model = MyModel()\n\n# Collect the history of W-values and b-values to plot later\nWs, bs = [], []\nepochs = range(10)\n\n# Define a training loop\ndef training_loop(model, x, y):\n\n for epoch in epochs:\n # Update the model with the single giant batch\n train(model, x, y, learning_rate=0.1)\n\n # Track this before I update\n Ws.append(model.w.numpy())\n bs.append(model.b.numpy())\n current_loss = loss(y, model(x))\n\n print(\"Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f\" %\n (epoch, Ws[-1], bs[-1], current_loss))\n\n\nprint(\"Starting: W=%1.2f b=%1.2f, loss=%2.5f\" %\n (model.w, model.b, loss(y, model(x))))\n\n# Do the training\ntraining_loop(model, x, y)\n\n# Plot it\nplt.plot(epochs, Ws, \"r\",\n epochs, bs, \"b\")\n\nplt.plot([TRUE_W] * len(epochs), \"r--\",\n [TRUE_B] * len(epochs), \"b--\")\n\nplt.legend([\"W\", \"b\", \"True W\", \"True b\"])\nplt.show()\n\n\n# Visualize how the trained model performs\nplt.scatter(x, y, c=\"b\")\nplt.scatter(x, model(x), c=\"r\")\nplt.show()\n\nprint(\"Current loss: %1.6f\" % loss(model(x), y).numpy())", "같은 솔루션이지만, Keras를 사용한 경우\n위의 코드를 Keras의 해당 코드와 대조해 보면 유용합니다.\ntf.keras.Model을 하위 클래스화하면 모델 정의는 정확히 같게 보입니다. 
Keras 모델은 궁극적으로 모듈에서 상속한다는 것을 기억하세요.", "class MyModelKeras(tf.keras.Model):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n # Initialize the weights to `5.0` and the bias to `0.0`\n # In practice, these should be randomly initialized\n self.w = tf.Variable(5.0)\n self.b = tf.Variable(0.0)\n\n def call(self, x):\n return self.w * x + self.b\n\nkeras_model = MyModelKeras()\n\n# Reuse the training loop with a Keras model\ntraining_loop(keras_model, x, y)\n\n# You can also save a checkpoint using Keras's built-in support\nkeras_model.save_weights(\"my_checkpoint\")", "모델을 생성할 때마다 새로운 훈련 루프를 작성하는 대신 Keras의 내장 기능을 바로 가기로 사용할 수 있습니다. Python 훈련 루프를 작성하거나 디버그하지 않으려는 경우 유용할 수 있습니다.\n그렇게 하려면, model.compile()을 사용하여 매개변수를 설정하고 model.fit()을 사용하여 훈련해야 합니다. L2 손실 및 경사 하강법의 Keras 구현을 바로 가기로 사용하면 코드가 적을 수 있습니다. Keras 손실 및 최적화 프록그램은 이러한 편의성 함수 외부에서 사용할 수 있으며 이전 예제에서 사용할 수 있습니다.", "keras_model = MyModelKeras()\n\n# compile sets the training parameters\nkeras_model.compile(\n # By default, fit() uses tf.function(). You can\n # turn that off for debugging, but it is on now.\n run_eagerly=False,\n\n # Using a built-in optimizer, configuring as an object\n optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),\n\n # Keras comes with built-in MSE error\n # However, you could use the loss function\n # defined above\n loss=tf.keras.losses.mean_squared_error,\n)", "Keras fit 배치 데이터 또는 전체 데이터세트를 NumPy 배열로 예상합니다. NumPy 배열은 배치로 분할되며, 기본 배치 크기는 32입니다.\n이 경우 손으로 쓴 루프의 동작과 일치시키려면 x를 크기 1000의 단일 배치로 전달해야 합니다.", "print(x.shape[0])\nkeras_model.fit(x, y, epochs=10, batch_size=1000)", "Keras는 훈련 전이 아닌 훈련 후 손실을 출력하므로 첫 번째 손실이 더 낮게 나타나지만, 그렇지 않으면 본질적으로 같은 훈련 성능을 보여줍니다.\n다음 단계\n이 가이드에서는 텐서, 변수, 모듈 및 그래디언트 테이프의 핵심 클래스를 사용하여 모델을 빌드하고 훈련하는 방법과 이러한 아이디어가 Keras에 매핑되는 방법을 살펴보았습니다.\n그러나 이것은 매우 단순한 문제입니다. 보다 실용적인 소개는 사용자 정의 훈련 연습을 참조하세요.\n내장 Keras 훈련 루프의 사용에 관한 자세한 내용은 이 가이드를 참조하세요. 훈련 루프 및 Keras에 관한 자세한 내용은 이 가이드를 참조하세요. 사용자 정의 분산 훈련 루프의 작성에 관해서는 이 가이드를 참조하세요." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jmhsi/justin_tinker
data_science/courses/temp/courses/ml1/lesson1-rf.ipynb
apache-2.0
[ "Intro to Random Forests\nAbout this course\nTeaching approach\nThis course is being taught by Jeremy Howard, and was developed by Jeremy along with Rachel Thomas. Rachel has been dealing with a life-threatening illness so will not be teaching as originally planned this year.\nJeremy has worked in a number of different areas - feel free to ask about anything that he might be able to help you with at any time, even if not directly related to the current topic:\n\nManagement consultant (McKinsey; AT Kearney)\nSelf-funded startup entrepreneur (Fastmail: first consumer synchronized email; Optimal Decisions: first optimized insurance pricing)\nVC-funded startup entrepreneur: (Kaggle; Enlitic: first deep-learning medical company)\n\nI'll be using a top-down teaching method, which is different from how most math courses operate. Typically, in a bottom-up approach, you first learn all the separate components you will be using, and then you gradually build them up into more complex structures. The problems with this are that students often lose motivation, don't have a sense of the \"big picture\", and don't know what they'll need.\nIf you took the fast.ai deep learning course, that is what we used. You can hear more about my teaching philosophy in this blog post or in this talk.\nHarvard Professor David Perkins has a book, Making Learning Whole in which he uses baseball as an analogy. We don't require kids to memorize all the rules of baseball and understand all the technical details before we let them play the game. Rather, they start playing with a just general sense of it, and then gradually learn more rules/details as time goes on.\nAll that to say, don't worry if you don't understand everything at first! You're not supposed to. We will start using some \"black boxes\" such as random forests that haven't yet been explained in detail, and then we'll dig into the lower level details later.\nTo start, focus on what things DO, not what they ARE.\nYour practice\nPeople learn by:\n1. doing (coding and building)\n2. explaining what they've learned (by writing or helping others)\nTherefore, we suggest that you practice these skills on Kaggle by:\n1. Entering competitions (doing)\n2. Creating Kaggle kernels (explaining)\nIt's OK if you don't get good competition ranks or any kernel votes at first - that's totally normal! Just try to keep improving every day, and you'll see the results over time.\nTo get better at technical writing, study the top ranked Kaggle kernels from past competitions, and read posts from well-regarded technical bloggers. Some good role models include:\n\nPeter Norvig (more here)\nStephen Merity\nJulia Evans (more here)\nJulia Ferraioli\nEdwin Chen\nSlav Ivanov (fast.ai student)\nBrad Kenstler (fast.ai and USF MSAN student)\n\nBooks\nThe more familiarity you have with numeric programming in Python, the better. If you're looking to improve in this area, we strongly suggest Wes McKinney's Python for Data Analysis, 2nd ed.\nFor machine learning with Python, we recommend:\n\nIntroduction to Machine Learning with Python: From one of the scikit-learn authors, which is the main library we'll be using\nPython Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition: New version of a very successful book. 
A lot of the new material however covers deep learning in Tensorflow, which isn't relevant to this course\nHands-On Machine Learning with Scikit-Learn and TensorFlow\n\nSyllabus in brief\nDepending on time and class interests, we'll cover something like (not necessarily in this order):\n\nTrain vs test\nEffective validation set construction\nTrees and ensembles\nCreating random forests\nInterpreting random forests\nWhat is ML? Why do we use it?\nWhat makes a good ML project?\nStructured vs unstructured data\nExamples of failures/mistakes\nFeature engineering\nDomain specific - dates, URLs, text\nEmbeddings / latent factors\nRegularized models trained with SGD\nGLMs, Elasticnet, etc (NB: see what James covered)\nBasic neural nets\nPyTorch\nBroadcasting, Matrix Multiplication\nTraining loop, backpropagation\nKNN\nCV / bootstrap (Diabetes data set?)\nEthical considerations\n\nSkip:\n\nDimensionality reduction\nInteractions\nMonitoring training\nCollaborative filtering\nMomentum and LR annealing\n\nImports", "%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline\n\nfrom fastai.imports import *\nfrom fastai.structured import *\n\nfrom pandas_summary import DataFrameSummary\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\nfrom IPython.display import display\n\nfrom sklearn import metrics\n\nPATH = \"data/bulldozers/\"\n\n!ls {PATH}", "Introduction to Blue Book for Bulldozers\nAbout...\n...our teaching\nAt fast.ai we have a distinctive teaching philosophy of \"the whole game\". This is different from how most traditional math & technical courses are taught, where you have to learn all the individual elements before you can combine them (Harvard professor David Perkins call this elementitis), but it is similar to how topics like driving and baseball are taught. That is, you can start driving without knowing how an internal combustion engine works, and children begin playing baseball before they learn all the formal rules.\n...our approach to machine learning\nMost machine learning courses will throw at you dozens of different algorithms, with a brief technical description of the math behind them, and maybe a toy example. You're left confused by the enormous range of techniques shown and have little practical understanding of how to apply them.\nThe good news is that modern machine learning can be distilled down to a couple of key techniques that are of very wide applicability. Recent studies have shown that the vast majority of datasets can be best modeled with just two methods:\n\nEnsembles of decision trees (i.e. Random Forests and Gradient Boosting Machines), mainly for structured data (such as you might find in a database table at most companies)\nMulti-layered neural networks learnt with SGD (i.e. shallow and/or deep learning), mainly for unstructured data (such as audio, vision, and natural language)\n\nIn this course we'll be doing a deep dive into random forests, and simple models learnt with SGD. You'll be learning about gradient boosting and deep learning in part 2.\n...this dataset\nWe will be looking at the Blue Book for Bulldozers Kaggle Competition: \"The goal of the contest is to predict the sale price of a particular piece of heavy equiment at auction based on it's usage, equipment type, and configuaration. 
The data is sourced from auction result postings and includes information on usage and equipment configurations.\"\nThis is a very common type of dataset and prediciton problem, and similar to what you may see in your project or workplace.\n...Kaggle Competitions\nKaggle is an awesome resource for aspiring data scientists or anyone looking to improve their machine learning skills. There is nothing like being able to get hands-on practice and receiving real-time feedback to help you improve your skills.\nKaggle provides:\n\nInteresting data sets\nFeedback on how you're doing\nA leader board to see what's good, what's possible, and what's state-of-art.\nBlog posts by winning contestants share useful tips and techniques.\n\nThe data\nLook at the data\nKaggle provides info about some of the fields of our dataset; on the Kaggle Data info page they say the following:\nFor this competition, you are predicting the sale price of bulldozers sold at auctions. The data for this competition is split into three parts:\n\nTrain.csv is the training set, which contains data through the end of 2011.\nValid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012 You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public leaderboard.\nTest.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition.\n\nThe key fields are in train.csv are:\n\nSalesID: the uniue identifier of the sale\nMachineID: the unique identifier of a machine. A machine can be sold multiple times\nsaleprice: what the machine sold for at auction (only provided in train.csv)\nsaledate: the date of the sale\n\nQuestion\nWhat stands out to you from the above description? What needs to be true of our training and validation sets?", "df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False, \n parse_dates=[\"saledate\"])", "In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.", "def display_all(df):\n with pd.option_context(\"display.max_rows\", 1000): \n with pd.option_context(\"display.max_columns\", 1000): \n display(df)\n\ndisplay_all(df_raw.tail().transpose())\n\ndisplay_all(df_raw.describe(include='all').transpose())", "It's important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of the project setup. However, in this case Kaggle tells us what metric to use: RMSLE (root mean squared log error) between the actual and predicted auction prices. Therefore we take the log of the prices, so that RMSE will give us what we need.", "df_raw.SalePrice = np.log(df_raw.SalePrice)", "Initial processing", "m = RandomForestRegressor(n_jobs=-1)\nm.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice)", "This dataset contains a mix of continuous and categorical variables.\nThe following method extracts particular date fields from a complete datetime for the purpose of constructing categoricals. You should always consider this feature extraction step when working with date-time. 
Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities.", "add_datepart(df_raw, 'saledate')\ndf_raw.saleYear.head()", "The categorical variables are currently stored as strings, which is inefficient, and doesn't provide the numeric coding required for a random forest. Therefore we call train_cats to convert strings to pandas categories.", "train_cats(df_raw)", "We can specify the order to use for categorical variables if we wish:", "df_raw.UsageBand.cat.categories\n\ndf_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True, inplace=True)\n\ndf_raw.UsageBand = df_raw.UsageBand.cat.codes", "We're still not quite done - for instance we have lots of missing values, wish we can't pass directly to a random forest.", "display_all(df_raw.isnull().sum().sort_index()/len(df_raw))", "But let's save this file for now, since it's already in format can we be stored and accessed efficiently.", "os.makedirs('tmp', exist_ok=True)\ndf_raw.to_feather('tmp/bulldozers-raw')", "Pre-processing\nIn the future we can simply read it from this fast format.", "df_raw = pd.read_feather('tmp/bulldozers-raw')", "We'll replace categories with their numeric codes, handle missing continuous values, and split the dependent variable into a separate variable.", "df, y, nas = proc_df(df_raw, 'SalePrice')", "We now have something we can pass to a random forest!", "m = RandomForestRegressor(n_jobs=-1)\nm.fit(df, y)\nm.score(df,y)", "todo define r^2\nWow, an r^2 of 0.98 - that's great, right? Well, perhaps not...\nPossibly the most important idea in machine learning is that of having separate training & validation data sets. As motivation, suppose you don't divide up your data, but instead use all of it. And suppose you have lots of parameters:\n<img src=\"images/overfitting2.png\" alt=\"\" style=\"width: 70%\"/>\n<center>\nUnderfitting and Overfitting\n</center>\nThe error for the pictured data points is lowest for the model on the far right (the blue curve passes through the red points almost perfectly), yet it's not the best choice. Why is that? If you were to gather some new data points, they most likely would not be on that curve in the graph on the right, but would be closer to the curve in the middle graph.\nThis illustrates how using all our data can lead to overfitting. A validation set helps diagnose this problem.", "def split_vals(a,n): return a[:n].copy(), a[n:].copy()\n\nn_valid = 12000 # same as Kaggle's test set size\nn_trn = len(df)-n_valid\nraw_train, raw_valid = split_vals(df_raw, n_trn)\nX_train, X_valid = split_vals(df, n_trn)\ny_train, y_valid = split_vals(y, n_trn)\n\nX_train.shape, y_train.shape, X_valid.shape", "Random Forests\nBase model\nLet's try our model again, this time with separate training and validation sets.", "def rmse(x,y): return math.sqrt(((x-y)**2).mean())\n\ndef print_score(m):\n res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),\n m.score(X_train, y_train), m.score(X_valid, y_valid)]\n if hasattr(m, 'oob_score_'): res.append(m.oob_score_)\n print(res)\n\nm = RandomForestRegressor(n_jobs=-1)\n%time m.fit(X_train, y_train)\nprint_score(m)", "An r^2 in the high-80's isn't bad at all (and the RMSLE puts us around rank 100 of 470 on the Kaggle leaderboard), but we can see from the validation set score that we're over-fitting badly. 
To understand this issue, let's simplify things down to a single small tree.\nSpeeding things up", "df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice', subset=30000, na_dict=nas)\nX_train, _ = split_vals(df_trn, 20000)\ny_train, _ = split_vals(y_trn, 20000)\n\nm = RandomForestRegressor(n_jobs=-1)\n%time m.fit(X_train, y_train)\nprint_score(m)", "Single tree", "m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)\n\ndraw_tree(m.estimators_[0], df_trn, precision=3)", "Let's see what happens if we create a bigger tree.", "m = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)", "The training set result looks great! But the validation set is worse than our original model. This is why we need to use bagging of multiple trees to get more generalizable results.\nBagging\nIntro to bagging\nTo learn about bagging in random forests, let's start with our basic model again.", "m = RandomForestRegressor(n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)", "We'll grab the predictions for each individual tree, and look at one example.", "preds = np.stack([t.predict(X_valid) for t in m.estimators_])\npreds[:,0], np.mean(preds[:,0]), y_valid[0]\n\npreds.shape\n\nplt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);", "The shape of this curve suggests that adding more trees isn't going to help us much. Let's check. (Compare this to our original model on a sample)", "m = RandomForestRegressor(n_estimators=20, n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)\n\nm = RandomForestRegressor(n_estimators=40, n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)\n\nm = RandomForestRegressor(n_estimators=80, n_jobs=-1)\nm.fit(X_train, y_train)\nprint_score(m)", "Out-of-bag (OOB) score\nIs our validation set worse than our training set because we're over-fitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, random forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)\nThe idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.\nThis also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.\nThis is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.", "m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)\nm.fit(X_train, y_train)\nprint_score(m)", "This shows that our validation set time difference is making an impact, as is model over-fitting.\nReducing over-fitting\nSubsampling\nIt turns out that one of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: subsampling. Let's return to using our full dataset, so that we can demonstrate the impact of this technique.", "df_trn, y_trn = proc_df(df_raw, 'SalePrice')\nX_train, X_valid = split_vals(df_trn, n_trn)\ny_train, y_valid = split_vals(y_trn, n_trn)", "The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a different random subset per tree. 
That way, given enough trees, the model can still see all the data, but for each individual tree it'll be just as fast as if we had cut down our dataset as before.", "set_rf_samples(20000)\n\nm = RandomForestRegressor(n_jobs=-1, oob_score=True)\n%time m.fit(X_train, y_train)\nprint_score(m)", "Since each additional tree allows the model to see more data, this approach can make additional trees more useful.", "m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)\nm.fit(X_train, y_train)\nprint_score(m)", "Tree building parameters\nWe revert to using a full bootstrap sample in order to show the impact of other over-fitting avoidance methods.", "reset_rf_samples()", "Let's get a baseline for this full set to compare to.", "m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)\nm.fit(X_train, y_train)\nprint_score(m)", "Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with min_samples_leaf) that we require some minimum number of rows in every leaf node. This has two benefits:\n\nThere are less decision rules for each leaf node; simpler models should generalize better\nThe predictions are made by averaging more rows in the leaf node, resulting in less volatility", "m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)\nm.fit(X_train, y_train)\nprint_score(m)", "We can also increase the amount of variation amongst the trees by not only use a sample of rows for each tree, but to also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.\n\nNone\n0.5\n\n'sqrt'\n\n\n1, 3, 5, 10, 25, 100", "m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)\nm.fit(X_train, y_train)\nprint_score(m)", "We can't compare our results directly with the Kaggle competition, since it used a different validation set (and we can no longer to submit to this competition) - but we can at least see that we're getting similar results to the winners based on the dataset we have.\nThe sklearn docs show an example of different max_features methods with increasing numbers of trees - as you see, using a subset of features on each split requires using more trees, but results in better models:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xsolo/machine-learning
neural_network/recognize_hand_written_digits.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nimport scipy.io as sio\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import fetch_mldata\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import accuracy_score\n\n%matplotlib inline", "For this example I am using MNIST dataset of handwritten images", "scaler = StandardScaler()\nmnist = fetch_mldata('MNIST original')\n\n# converting data to be of type float .astype(float) to supress\n# data conversion warrning during scaling\nX= pd.DataFrame(scaler.fit_transform(mnist['data'].astype(float)))\ny= pd.DataFrame(mnist['target'].astype(int))\n\n# This function plots the given sample set of images as a grid with labels \n# if labels are available.\ndef plot_sample(S,labels=None):\n m, n = S.shape;\n example_width = int(np.round(np.sqrt(n)));\n example_height = int((n / example_width));\n \n # Compute number of items to display\n display_rows = int(np.floor(np.sqrt(m)));\n display_cols = int(np.ceil(m / display_rows));\n \n fig = plt.figure()\n for i in range(0,m):\n arr = S[i,:]\n arr = arr.reshape((example_width,example_height))\n ax = fig.add_subplot(display_rows,display_cols , i+1)\n ax.imshow(arr, aspect='auto', cmap=plt.get_cmap('gray'))\n if labels is not None:\n ax.text(0,0, '{}'.format(labels[i]), bbox={'facecolor':'white', 'alpha':0.8,'pad':2})\n ax.axis('off')\n plt.show()", "Let's plot a random 100 images", "samples = X.sample(100)\nplot_sample(samples.as_matrix())", "Now, let use the Neural Network with 1 hidden layers. The number of neurons in each layer is X_train.shape[1] which is 400 in our example (excluding the extra bias unit).", "from sklearn.neural_network import MLPClassifier\nfrom sklearn.model_selection import train_test_split\n\n# since the data we have is one big array, we want to split it into training\n# and testing sets, the split is 70% goes to training and 30% of data for testing\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\nneural_network =(80,)\n\n# for this excersize we are using MLPClassifier with lbfgs optimizer (the family of quasi-Newton methods). In my simple\n# experiments it produces good quality outcome\nclf = MLPClassifier(solver='lbfgs', alpha=1, hidden_layer_sizes=neural_network)\nclf.fit(X_train, y_train[0].ravel())\n\n# So after the classifier is trained, lets see what it predicts on the test data\nprediction = clf.predict(X_test)\n\nquality = np.where(prediction == y_test[0].ravel(),1,0)\nprint (\"Percentage of correct results is {:.04f}\".format(accuracy_score(y_test,prediction)))\n\n\n# I am going to use the same test set of data and will select random 48 examples from it.\n# The top left corner is the prediction from the Neural Network\n# please note that 0 is represented as 10 in this data set\nsamples = X_test.sample(100)\nplot_sample(samples.as_matrix(),clf.predict(samples))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
abhishekraok/GraphMap
notebook/Getting_Started.ipynb
apache-2.0
[ "GraphMap\nGetting Started\nThis notebook shows how to get started using GraphMap", "%pylab inline\nimport sys\nimport os\nsys.path.insert(0,'..')\nimport graphmap", "First let us import the module and create a GraphMap that persists in memory.", "from graphmap.graphmap_main import GraphMap\nfrom graphmap.memory_persistence import MemoryPersistence\nG = GraphMap(MemoryPersistence())", "Let us create two nodes with images of Seattle skyline and Mt. Tacoma from wikimedia.", "from graphmap.graph_helpers import NodeLink\nseattle_skyline_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Space_Needle002.jpg/640px-Space_Needle002.jpg'\nmt_tacoma_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Mount_Rainier_from_the_Silver_Queen_Peak.jpg/1024px-Mount_Rainier_from_the_Silver_Queen_Peak.jpg'\nseattle_node_link = NodeLink('seattle')\nmt_tacoma_node_link = NodeLink('tacoma')\n\nG.create_node(root_node_link=seattle_node_link, image_value_link=seattle_skyline_image_url)\n\nG.create_node(root_node_link=mt_tacoma_node_link, image_value_link=mt_tacoma_image_url)", "Now that we have created the 'seattle' node let's see how it looks", "seattle_pil_image_result = G.get_image_at_quad_key(root_node_link=seattle_node_link, resolution=256, quad_key='')\nmt_tacoma_pil_image_result = G.get_image_at_quad_key(root_node_link=mt_tacoma_node_link, resolution=256, quad_key='')\nimport matplotlib.pyplot as plt\nplt.imshow(seattle_pil_image_result.value)\nplt.figure()\nplt.imshow(mt_tacoma_pil_image_result.value)", "Let us insert the 'tacoma' node into the 'seattle' node at the top right. The quad key we will use is 13. 1 correpsonds to the top right quadrant, inside that we will insert at bottom right hence 3.", "insert_quad_key = '13'\ncreated_node_link_result = G.connect_child(root_node_link=seattle_node_link, \n quad_key=insert_quad_key,\n child_node_link=mt_tacoma_node_link,)\nprint(created_node_link_result)", "Let us see how the new_seattle_node looks after the insertion.", "created_node_link = created_node_link_result.value\nnew_seattle_image_result = G.get_image_at_quad_key(created_node_link, resolution=256, quad_key='')\nnew_seattle_image_result\n\nplt.imshow(new_seattle_image_result.value)", "One can see a nice image of Mt. Tacoma inserted into image of Seattle." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dolittle007/dolittle007.github.io
notebooks/updating_priors.ipynb
gpl-3.0
[ "Updating priors\nIn this notebook, I will show how it is possible to update the priors as new data becomes available. The example is a slightly modified version of the linear regression in the Getting started with PyMC3 notebook.", "import matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom pymc3 import Model, Normal, Slice\nfrom pymc3 import sample\nfrom pymc3 import traceplot\nfrom pymc3.distributions import Interpolated\nfrom theano import as_op\nimport theano.tensor as tt\nimport numpy as np\nfrom scipy import stats\n\n%matplotlib inline", "Generating data", "# Initialize random number generator\nnp.random.seed(123)\n\n# True parameter values\nalpha_true = 5\nbeta0_true = 7\nbeta1_true = 13\n\n# Size of dataset\nsize = 100\n\n# Predictor variable\nX1 = np.random.randn(size)\nX2 = np.random.randn(size) * 0.2\n\n# Simulate outcome variable\nY = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)", "Model specification\nOur initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values.", "basic_model = Model()\n\nwith basic_model:\n \n # Priors for unknown model parameters\n alpha = Normal('alpha', mu=0, sd=1)\n beta0 = Normal('beta0', mu=12, sd=1)\n beta1 = Normal('beta1', mu=18, sd=1)\n \n # Expected value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n \n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 10000 posterior samples\n trace = sample(10000)\n\ntraceplot(trace);", "In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values.\nBecause we draw samples from the posterior distribution (shown on the right in the figure above), we need to estimate their probability density (shown on the left in the figure above). Kernel density estimation (KDE) is a way to achieve this, and we will use this technique here. In any case, it is an empirical distribution that cannot be expressed analytically. Fortunately PyMC3 provides a way to use custom distributions, via Interpolated class.", "def from_posterior(param, samples):\n smin, smax = np.min(samples), np.max(samples)\n width = smax - smin\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n \n # what was never sampled should have a small probability but not 0,\n # so we'll extend the domain and use linear approximation of density on it\n x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])\n y = np.concatenate([[0], y, [0]])\n return Interpolated(param, x, y)", "Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. 
It is still possible to continue using NUTS sampling method because Interpolated class implements calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers.", "traces = [trace]\n\nfor _ in range(10):\n\n # generate more data\n X1 = np.random.randn(size)\n X2 = np.random.randn(size) * 0.2\n Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)\n\n model = Model()\n with model:\n # Priors are posteriors from previous iteration\n alpha = from_posterior('alpha', trace['alpha'])\n beta0 = from_posterior('beta0', trace['beta0'])\n beta1 = from_posterior('beta1', trace['beta1'])\n\n # Expected value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n\n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 10000 posterior samples\n trace = sample(10000)\n traces.append(trace)\n\nprint('Posterior distributions after ' + str(len(traces)) + ' iterations.')\ncmap = mpl.cm.autumn\nfor param in ['alpha', 'beta0', 'beta1']:\n plt.figure(figsize=(8, 2))\n for update_i, trace in enumerate(traces):\n samples = trace[param]\n smin, smax = np.min(samples), np.max(samples)\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n plt.plot(x, y, color=cmap(1 - update_i / len(traces)))\n plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')\n plt.ylabel('Frequency')\n plt.title(param)\n plt.show()", "You can re-execute the last two cells to generate more updates.\nWhat is interesting to note is that the posterior distributions for our parameters tend to get centered on their true value (vertical lines), and the distribution gets thiner and thiner. This means that we get more confident each time, and the (false) belief we had at the beginning gets flushed away by the new data we incorporate." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/applied-machine-learning-intensive
content/04_classification/04_classification_project/colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/04_classification/04_classification_project/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Classification Project\nIn this project you will apply what you have learned about classification and TensorFlow to complete a project from Kaggle. The challenge is to achieve a high accuracy score while trying to predict which passengers survived the Titanic ship crash. After building your model, you will upload your predictions to Kaggle and submit the score that you get.\nThe Titanic Dataset\nKaggle has a dataset containing the passenger list on the Titanic. The data contains passenger features such as age, gender, ticket class, as well as whether or not they survived.\nYour job is to create a binary classifier using TensorFlow to determine if a passenger survived or not. The Survived column lets you know if the person survived. Then, upload your predictions to Kaggle and submit your accuracy score at the end of this Colab, along with a brief conclusion.\nTo get the dataset, you'll need to accept the competition's rules by clicking the \"I understand and accept\" button on the competition rules page. Then upload your kaggle.json file and run the code below.", "! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && cp kaggle.json ~/.kaggle/ && echo 'Done'\n! kaggle competitions download -c titanic\n! ls", "Note: If you see a \"403 - Forbidden\" error above, you still need to click \"I understand and accept\" on the competition rules page.\nThree files are downloaded:\n\ntrain.csv: training data (contains features and targets)\ntest.csv: feature data used to make predictions to send to Kaggle\ngender_submission.csv: an example competition submission file\n\nStep 1: Exploratory Data Analysis\nPerform exploratory data analysis and data preprocessing. Use as many text and code blocks as you need to explore the data. Note any findings. Repair any data issues you find.\nStudent Solution", "# Your code goes here", "Step 2: The Model\nBuild, fit, and evaluate a classification model. Perform any model-specific data processing that you need to perform. If the toolkit you use supports it, create visualizations for loss and accuracy improvements. Use as many text and code blocks as you need to explore the data. Note any findings.\nStudent Solution", "# Your code goes here", "Step 3: Make Predictions and Upload To Kaggle\nIn this step you will make predictions on the features found in the test.csv file and upload them to Kaggle using the Kaggle API. Use as many text and code blocks as you need to explore the data. 
Note any findings.\nStudent Solution", "# Your code goes here", "What was your Kaggle score?\n\nRecord your score here\n\n\nStep 4: Iterate on Your Model\nIn this step you're encouraged to play around with your model settings and to even try different models. See if you can get a better score. Use as many text and code blocks as you need to explore the data. Note any findings.\nStudent Solution", "# Your code goes here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
junhwanjang/DataSchool
Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/2) NumPy 배열 생성과 변형.ipynb
mit
[ "NumPy 배열 생성과 변형\nNumPy의 자료형\nNumPy의 ndarray클래스는 포함하는 모든 데이터가 같은 자료형(data type)이어야 한다. 또한 자료형 자체도 일반 파이썬에서 제공하는 것보다 훨씬 세분화되어 있다.\nNumPy의 자료형은 dtype 이라는 인수로 지정한다. dtype 인수로 지정할 값은 다음 표에 보인것과 같은 dtype 접두사로 시작하는 문자열이고 비트/바이트 수를 의미하는 숫자가 붙을 수도 있다.\n| dtype 접두사 | 설명 | 사용 예 |\n|-|-|-|\n| t | 비트 필드 | t4 (4비트) | \n| b | 불리언 | b (참 혹은 거짓) | \n| i | 정수 | i8 (64비트) | \n| u | 부호 없는 정수 | u8 (64비트) | \n| f | 부동소수점 | f8 (64비트) | \n| c | 복소 부동소수점 | c16 (128비트) | \n| O | 객체 | 0 (객체에 대한 포인터) | \n| S, a | 문자열 | S24 (24 글자) | \n| U | 유니코드 문자열 | U24 (24 유니코드 글자) | \n| V | 기타 | V12 (12바이트의 데이터 블럭) | \nndarray 객체의 dtype 속성으로 자료형을 알 수 있다.", "x = np.array([1, 2, 3])\nx.dtype", "만약 부동소수점을 사용하는 경우에는 무한대를 표현하기 위한 np.inf와 정의할 수 없는 숫자를 나타내는 np.nan 을 사용할 수 있다.", "np.exp(-np.inf)\n\nnp.array([1, 0]) / np.array([0, 0])", "배열 생성", "x = np.array([1, 2, 3])\nx", "앞에서 파이썬 리스트를 NumPy의 ndarray 객체로 변환하여 생성하려면 array 명령을 사용하였다. 그러나 보통은 이러한 기본 객체없이 다음과 같은 명령을 사용하여 바로 ndarray 객체를 생성한다. \n\nzeros, ones\nzeros_like, ones_like\nempty\narange\nlinspace, logspace\nrand, randn\n\n크기가 정해져 있고 모든 값이 0인 배열을 생성하려면 zeros 명령을 사용한다. dtype 인수가 없으면 정수형이 된다.", "a = np.zeros(5) \na", "dtype 인수를 명시하면 해당 자료형 원소를 가진 배열을 만든다.", "b = np.zeros((5,2), dtype=\"f8\")\nb", "문자열 배열도 가능하지면 모든 원소의 문자열 크기가 같아야 한다. 만약 더 큰 크기의 문자열을 할당하면 잘릴 수 있다.", "c = np.zeros(5, dtype=\"S4\")\nc[0] = \"abcd\"\nc[1] = \"ABCDE\"\nc", "1이 아닌 0으로 초기화된 배열을 생성하려면 ones 명령을 사용한다.", "d = np.ones((2,3,4), dtype=\"i8\")\nd", "만약 크기를 튜플(tuple)로 명시하지 않고 특정한 배열 혹은 리스트와 같은 크기의 배열을 생성하고 싶다면 ones_like, zeros_like 명령을 사용한다.", "e = range(10)\nprint(e)\nf = np.ones_like(e, dtype=\"f\")\nf", "배열의 크기가 커지면 배열을 초기화하는데도 시간이 걸린다. 이 시간을 단축하려면 생성만 하고 초기화를 하지 않는 empty 명령을 사용할 수 있다. empty 명령으로 생성된 배열에 어떤 값이 들어있을지는 알 수 없다.", "g = np.empty((4,3))\ng", "arange 명령은 NumPy 버전의 range 명령이라고 볼 수 있다. 해당하는 범위의 숫자 순열을 생성한다.", "np.arange(10) # 0 .. n-1 \n\nnp.arange(3, 21, 2) # start, end (exclusive), step", "linspace 명령이나 logspace 명령은 선형 구간 혹은 로그 구간을 지정한 구간의 수만큼 분할한다.", "np.linspace(0, 100, 5) # start, end, num-points\n\nnp.logspace(0, 4, 4, endpoint=False)", "임의의 난수를 생성하고 싶다면 random 서브패키지의 rand 혹은 randn 명령을 사용한다. rand 명령을 uniform 분포를 따르는 난수를 생성하고 randn 명령을 가우시안 정규 분포를 따르는 난수를 생성한다. 생성할 시드(seed)값을 지정하려면 seed 명령을 사용한다.", "np.random.seed(0)\n\nnp.random.rand(4)\n\nnp.random.randn(3,5)", "배열의 크기 변형\n일단 만들어진 배열의 내부 데이터는 보존한 채로 형태만 바꾸려면 reshape 명령이나 메서드를 사용한다. 예를 들어 12개의 원소를 가진 1차원 행렬은 3x4 형태의 2차원 행렬로 만들 수 있다.", "a = np.arange(12)\na\n\nb = a.reshape(3, 4)\nb", "사용하는 원소의 갯수가 정해저 있기 때문에 reshape 명령의 형태 튜플의 원소 중 하나는 -1이라는 숫자로 대체할 수 있다. -1을 넣으면 해당 숫자는 다를 값에서 계산되어 사용된다.", "a.reshape(2,2,-1)\n\na.reshape(2,-1,2)", "다차원 배열을 무조건 1차원으로 펼치기 위해서는 flatten 명령이나 메서드를 사용한다.", "a.flatten()", "길이가 5인 1차원 배열과 행, 열의 갯수가 (5,1)인 2차원 배열은 데이터는 같아도 엄연히 다른 객체이다.", "x = np.arange(5)\nx\n\ny = x.reshape(5,1)\ny", "이렇게 같은 배열에 대해 차원만 1차원 증가시키는 경우에는 newaxis 명령을 사용하기도 한다.", "z = x[:, np.newaxis]\nz", "배열 연결\n행의 수나 열의 수가 같은 두 개 이상의 배열을 연결하여(concatenate) 더 큰 배열을 만들 때는 다음과 같은 명령을 사용한다.\n\nhstack\nvstack\ndstack\nstack\nr_\ntile\n\nhstack 명령은 행의 수가 같은 두 개 이상의 배열을 옆으로 연결하여 열의 수가 더 많은 배열을 만든다. 연결할 배열은 하나의 리스트에 담아야 한다.", "a1 = np.ones((2, 3))\na1\n\na2 = np.zeros((2, 2))\na2\n\nnp.hstack([a1, a2])", "vstack 명령은 열의 수가 같은 두 개 이상의 배열을 위아래로 연결하여 행의 수가 더 많은 배열을 만든다. 
연결할 배열은 마찬가지로 하나의 리스트에 담아야 한다.", "b1 = np.ones((2, 3))\nb1\n\nb2 = np.zeros((3, 3))\nb2\n\nnp.vstack([b1, b2])", "dstack 명령은 제3의 축 즉, 행이나 열이 아닌 깊이(depth) 방향으로 배열을 합친다.", "c1 = np.ones((2,3))\nc1\n\nc2 = np.zeros((2,3))\nc2\n\nnp.dstack([c1, c2])", "stack 명령은 새로운 차원(축으로) 배열을 연결하며 당연히 연결하고자 하는 배열들의 크기가 모두 같아야 한다.\naxis 인수(디폴트 0)를 사용하여 연결후의 회전 방향을 정한다.", "np.stack([c1, c2])\n\nnp.stack([c1, c2], axis=1)", "r_ 메서드는 hstack 명령과 유사하다. 다만 메서드임에도 불구하고 소괄호(parenthesis, ())를 사용하지 않고 인덱싱과 같이 대괄호(bracket, [])를 사용한다.", "np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]", "tile 명령은 동일한 배열을 반복하여 연결한다.", "a = np.array([0, 1, 2])\nnp.tile(a, 2)\n\nnp.tile(a, (3, 2))", "그리드 생성\n변수가 2개인 2차원 함수의 그래프를 그리거나 표를 작성하려면 많은 좌표를 한꺼번에 생성하여 각 좌표에 대한 함수 값을 계산해야 한다.\n예를 들어 x, y 라는 두 변수를 가진 함수에서 x가 0부터 2까지, y가 0부터 4까지의 사각형 영역에서 변화하는 과정을 보고 싶다면 이 사각형 영역 안의 다음과 같은 (x,y) 쌍 값들에 대해 함수를 계산해야 한다. \n$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \\cdots (2,4) $$\n이러한 과정을 자동으로 해주는 것이 NumPy의 meshgrid 명령이다. meshgrid 명령은 사각형 영역을 구성하는 가로축의 점들과 세로축의 점을 나타내는 두 벡터를 인수로 받아서 이 사각형 영역을 이루는 조합을 출력한다. 단 조합이 된 (x,y)쌍을 x값만을 표시하는 행렬과 y값만을 표시하는 행렬 두 개로 분리하여 출력한다.", "x = np.arange(3)\nx\n\ny = np.arange(5)\ny\n\nX, Y = np.meshgrid(x, y)\n\nX\n\nY\n\n[zip(x, y) for x, y in zip(X, Y)]\n\nplt.scatter(X, Y, linewidths=10);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/guide/checkpoint.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "トレーニングのチェックポイント\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/guide/checkpoint\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td>Google Colab で実行</td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/checkpoint.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/checkpoint.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\n「TensorFlow のモデルを保存する」という言いまわしは通常、次の 2 つのいずれかを意味します。\n\nチェックポイント、または\n保存されたモデル(SavedModel)\n\nチェックポイントは、モデルで使用されるすべてのパラメータ(tf.Variableオブジェクト)の正確な値をキャプチャします。チェックポイントにはモデルで定義された計算のいかなる記述も含まれていないため、通常は、保存されたパラメータ値を使用するソースコードが利用可能な場合に限り有用です。\n一方、SavedModel 形式には、パラメータ値(チェックポイント)に加え、モデルで定義された計算のシリアライズされた記述が含まれています。この形式のモデルは、モデルを作成したソースコードから独立しています。したがって、TensorFlow Serving、TensorFlow Lite、TensorFlow.js、または他のプログラミング言語のプログラム(C、C++、Java、Go、Rust、C# などの TensorFlow API)を介したデプロイに適しています。\nこのガイドでは、チェックポイントの書き込みと読み取りを行う API について説明します。\nセットアップ", "import tensorflow as tf\n\nclass Net(tf.keras.Model):\n \"\"\"A simple linear model.\"\"\"\n\n def __init__(self):\n super(Net, self).__init__()\n self.l1 = tf.keras.layers.Dense(5)\n\n def call(self, x):\n return self.l1(x)\n\nnet = Net()", "tf.kerasトレーニング API から保存する\ntf.kerasの保存と復元に関するガイドをご覧ください。\ntf.keras.Model.save_weightsで TensorFlow チェックポイントを保存します。", "net.save_weights('easy_checkpoint')", "チェックポイントを記述する\nTensorFlow モデルの永続的な状態は、tf.Variableオブジェクトに格納されます。これらは直接作成できますが、多くの場合はtf.keras.layersやtf.keras.Modelなどの高レベル API を介して作成されます。\n変数を管理する最も簡単な方法は、変数を Python オブジェクトにアタッチし、それらのオブジェクトを参照することです。\ntf.train.Checkpoint、tf.keras.layers.Layerおよびtf.keras.Modelのサブクラスは、属性に割り当てられた変数を自動的に追跡します。以下の例では、単純な線形モデルを作成し、モデルのすべての変数の値を含むチェックポイントを記述します。\nModel.save_weightsで、モデルチェックポイントを簡単に保存できます。\n手動チェックポイント\nセットアップ\ntf.train.Checkpoint のすべての機能を実演するために、トイデータセットと最適化ステップを次のように定義します。", "def toy_dataset():\n inputs = tf.range(10.)[:, None]\n labels = inputs * 5. 
+ tf.range(5.)[None, :]\n return tf.data.Dataset.from_tensor_slices(\n dict(x=inputs, y=labels)).repeat().batch(2)\n\ndef train_step(net, example, optimizer):\n \"\"\"Trains `net` on `example` using `optimizer`.\"\"\"\n with tf.GradientTape() as tape:\n output = net(example['x'])\n loss = tf.reduce_mean(tf.abs(output - example['y']))\n variables = net.trainable_variables\n gradients = tape.gradient(loss, variables)\n optimizer.apply_gradients(zip(gradients, variables))\n return loss", "チェックポイントオブジェクトを作成する\nチェックポイントを手動で作成するには、tf.train.Checkpoint オブジェクトを使用します。チェックポイントを設定するオブジェクトは、オブジェクトの属性として設定されます。\ntf.train.CheckpointManagerは、複数のチェックポイントの管理にも役立ちます。", "opt = tf.keras.optimizers.Adam(0.1)\ndataset = toy_dataset()\niterator = iter(dataset)\nckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\nmanager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)", "モデルをトレーニングおよびチェックポイントする\n次のトレーニングループは、モデルとオプティマイザのインスタンスを作成し、それらをtf.train.Checkpointオブジェクトに集めます。それはデータの各バッチのループ内でトレーニングステップを呼び出し、定期的にチェックポイントをディスクに書き込みます。", "def train_and_checkpoint(net, manager):\n ckpt.restore(manager.latest_checkpoint)\n if manager.latest_checkpoint:\n print(\"Restored from {}\".format(manager.latest_checkpoint))\n else:\n print(\"Initializing from scratch.\")\n\n for _ in range(50):\n example = next(iterator)\n loss = train_step(net, example, opt)\n ckpt.step.assign_add(1)\n if int(ckpt.step) % 10 == 0:\n save_path = manager.save()\n print(\"Saved checkpoint for step {}: {}\".format(int(ckpt.step), save_path))\n print(\"loss {:1.2f}\".format(loss.numpy()))\n\ntrain_and_checkpoint(net, manager)", "復元してトレーニングを続ける\n最初のトレーニングサイクルの後、新しいモデルとマネージャーを渡すことができますが、トレーニングはやめた所から再開します。", "opt = tf.keras.optimizers.Adam(0.1)\nnet = Net()\ndataset = toy_dataset()\niterator = iter(dataset)\nckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\nmanager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)\n\ntrain_and_checkpoint(net, manager)", "tf.train.CheckpointManagerオブジェクトは古いチェックポイントを削除します。上記では、最新の 3 つのチェックポイントのみを保持するように構成されています。", "print(manager.checkpoints) # List the three remaining checkpoints", "これらのパス、例えば'./tf_ckpts/ckpt-10'などは、ディスク上のファイルではなく、indexファイルのプレフィックスで、変数値を含む 1 つまたはそれ以上のデータファイルです。これらのプレフィックスは、まとめて単一のcheckpointファイル('./tf_ckpts/checkpoint')にグループ化され、CheckpointManagerがその状態を保存します。", "!ls ./tf_ckpts", "<a id=\"loading_mechanics\"></a>\n読み込みの仕組み\nTensorFlowは、読み込まれたオブジェクトから始めて、名前付きエッジを持つ有向グラフを走査することにより、変数をチェックポイントされた値に合わせます。エッジ名は通常、オブジェクトの属性名に由来しており、self.l1 = tf.keras.layers.Dense(5)の\"l1\"などがその例です。tf.train.Checkpointは、tf.train.Checkpoint(step=...)の\"step\"のように、キーワード引数名を使用します。\n上記の例の依存関係グラフは次のようになります。\n\nオプティマイザは赤、通常の変数は青、オプティマイザスロット変数はオレンジで表されています。tf.train.Checkpoint を表すノードなどは黒で示されています。\nオプティマイザは赤色、通常変数は青色、オプティマイザスロット変数はオレンジ色です。他のノード、例えばtf.train.Checkpointを表すものは黒色です。\ntf.train.Checkpoint オブジェクトで restore を読み出すと、リクエストされた復元がキューに入れられ、Checkpoint オブジェクトから一致するパスが見つかるとすぐに変数値が復元されます。たとえば、ネットワークとレイヤーを介してバイアスのパスを再構築すると、上記で定義したモデルからそのバイアスのみを読み込むことができます。", "to_restore = tf.Variable(tf.zeros([5]))\nprint(to_restore.numpy()) # All zeros\nfake_layer = tf.train.Checkpoint(bias=to_restore)\nfake_net = tf.train.Checkpoint(l1=fake_layer)\nnew_root = tf.train.Checkpoint(net=fake_net)\nstatus = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))\nprint(to_restore.numpy()) # We get the restored value now", "これらの新しいオブジェクトの依存関係グラフは、上で書いたより大きなチェックポイントのはるかに小さなサブグラフです。 これには、バイアスと tf.train.Checkpoint がチェックポイントに番号付けするために使用する保存カウンタのみが含まれます。\n\nrestore 
は、オプションのアサーションを持つステータスオブジェクトを返します。新しい Checkpoint で作成されたすべてのオブジェクトが復元されるため、status.assert_existing_objects_matched がパスとなります。", "status.assert_existing_objects_matched()", "チェックポイントには、レイヤーのカーネルやオプティマイザの変数など、一致しない多くのオブジェクトがあります。status.assert_consumed() は、チェックポイントとプログラムが正確に一致する場合に限りパスするため、ここでは例外がスローされます。\n復元延期 (Deferred restoration)\nTensorFlow のLayerオブジェクトは、入力形状が利用可能な場合、最初の呼び出しまで変数の作成を遅らせる可能性があります。例えば、Denseレイヤーのカーネルの形状はレイヤーの入力形状と出力形状の両方に依存するため、コンストラクタ引数として必要な出力形状は、単独で変数を作成するために充分な情報ではありません。Layerの呼び出しは変数の値も読み取るため、復元は変数の作成とその最初の使用の間で発生する必要があります。\nこのイディオムをサポートするために、tf.train.Checkpoint は一致する変数がまだない場合、復元を延期します。", "deferred_restore = tf.Variable(tf.zeros([1, 5]))\nprint(deferred_restore.numpy()) # Not restored; still zeros\nfake_layer.kernel = deferred_restore\nprint(deferred_restore.numpy()) # Restored", "チェックポイントを手動で検査する\ntf.train.load_checkpoint は、チェックポイントのコンテンツにより低いレベルのアクセスを提供する CheckpointReader を返します。これには各変数のキーからチェックポイントの各変数の形状と dtype へのマッピングが含まれます。変数のキーは上に表示されるグラフのようなオブジェクトパスです。\n注意: チェックポイントへのより高いレベルの構造はありません。変数のパスと値のみが認識されており、models、layers、またはそれらがどのように接続されているかについての概念が一切ありません。", "tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))", "net.l1.kernel の値に関心がある場合は、次のコードを使って値を取得できます。", "key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'\n\nprint(\"Shape:\", shape_from_key[key])\nprint(\"Dtype:\", dtype_from_key[key].name)", "また、変数の値を検査できるようにする get_tensor メソッドも提供されています。", "reader.get_tensor(key)", "オブジェクトの追跡\nself.l1 = tf.keras.layers.Dense(5)のような直接の属性割り当てと同様に、リストとディクショナリを属性に割り当てると、それらの内容を追跡します。\nself.l1 = tf.keras.layers.Dense(5)のような直接の属性割り当てと同様に、リストとディクショナリを属性に割り当てると、それらの内容を追跡します。", "save = tf.train.Checkpoint()\nsave.listed = [tf.Variable(1.)]\nsave.listed.append(tf.Variable(2.))\nsave.mapped = {'one': save.listed[0]}\nsave.mapped['two'] = save.listed[1]\nsave_path = save.save('./tf_list_example')\n\nrestore = tf.train.Checkpoint()\nv2 = tf.Variable(0.)\nassert 0. == v2.numpy() # Not restored yet\nrestore.mapped = {'two': v2}\nrestore.restore(save_path)\nassert 2. == v2.numpy()", "リストとディクショナリのラッパーオブジェクトにお気づきでしょうか。これらのラッパーは基礎的なデータ構造のチェックポイント可能なバージョンです。属性に基づく読み込みと同様に、これらのラッパーは変数の値がコンテナに追加されるとすぐにそれを復元します。", "restore.listed = []\nprint(restore.listed) # ListWrapper([])\nv1 = tf.Variable(0.)\nrestore.listed.append(v1) # Restores v1, from restore() in the previous cell\nassert 1. == v1.numpy()", "追跡可能なオブジェクトには tf.train.Checkpoint、tf.Module およびそのサブクラス ( keras.layers.Layer や keras.Model など)、および認識された Python コンテナが含まれています。\n\ndict (および collections.OrderedDict)\nlist\ntuple (および collections.namedtuple、typing.NamedTuple)\n\n以下のような他のコンテナタイプはサポートされていません。\n\ncollections.defaultdict\nset\n\n以下のような他のすべての Python オブジェクトは無視されます。\n\nint\nstring\nfloat\n\nまとめ\nTensorFlow オブジェクトは、それらが使用する変数の値を保存および復元するための容易で自動的な仕組みを提供します。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IgorWang/MachineLearningPracticer
basic/Cross-Validation and Bootstrap.ipynb
gpl-3.0
[ "Cross-validation and Bootstrap", "import pandas as pd\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#读取数据集\nauto_df = pd.read_csv('data/Auto.csv', na_values = \"?\")\nauto_df.dropna(inplace = True)\nauto_df.head()\n\nfig, ax = plt.subplots()\nax.scatter(x=auto_df['horsepower'],y=auto_df['mpg'])\nax.set_ylabel('mpg')", "Leave One Out Cross Validation(LOOCV)", "from sklearn.linear_model import LinearRegression\nfrom sklearn.cross_validation import LeaveOneOut\nfrom sklearn.metrics import mean_squared_error\n\nclf = LinearRegression()\nloo = LeaveOneOut(len(auto_df))\n#loo提供了训练和测试的索引\nX = auto_df[['horsepower']].values\ny = auto_df['mpg'].values\nn = np.shape(X)[0]\nmses =[]\nfor train, test in loo:\n Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]\n clf.fit(Xtrain,ytrain)\n ypred = clf.predict(Xtest)\n mses.append(mean_squared_error(ytest,ypred))\nnp.mean(mses)\n\ndef loo_shortcut(X,y):\n clf = LinearRegression()\n clf.fit(X,y)\n ypred = clf.predict(X)\n xbar = np.mean(X,axis =0)\n xsum = np.sum(np.power(X-xbar,2))\n nrows = np.shape(X)[0]\n mses = []\n for row in range(0,nrows):\n hi = (1 / nrows) + (np.sum(X[row] - xbar) ** 2 / xsum)\n mse = ((y[row] - ypred[row])/(1-hi))**2\n mses.append(mse)\n return np.mean(mses)\n\nloo_shortcut(auto_df[['horsepower']].values,auto_df['mpg'].values)", "$$CV_{(n)} = \\frac {1} {n} \\sum_{i =1}^n (\\frac{y_i - \\hat y_i}{1- h_i})^2$$\n$$ h_i = \\frac {1}{h} + \\frac{(x_i - \\bar x)^2}{\\sum_{i'=1} ^n (x_i' - \\bar x)^2 }$$", "# LOOCV 应用于同一种模型不同复杂度的选择\nauto_df['horsepower^2'] = auto_df['horsepower'] * auto_df['horsepower']\nauto_df['horsepower^3'] = auto_df['horsepower^2'] * auto_df['horsepower']\nauto_df['horsepower^4'] = auto_df['horsepower^3'] * auto_df['horsepower']\nauto_df['horsepower^5'] = auto_df['horsepower^4'] * auto_df['horsepower']\nauto_df['unit'] = 1\ncolnames = [\"unit\", \"horsepower\", \"horsepower^2\", \"horsepower^3\", \"horsepower^4\", \"horsepower^5\"]\ncv_errors = []\nfor ncols in range(2,6):\n X = auto_df[colnames[0:ncols]]\n y = auto_df['mpg']\n clf = LinearRegression()\n clf.fit(X,y)\n cv_errors.append(loo_shortcut(X.values,y.values))\nplt.plot(range(1,5),cv_errors)\nplt.xlabel('degree')\nplt.ylabel('cv.error')", "K-Fold Cross Validation", "from sklearn.cross_validation import KFold\n\ncv_errors = []\nfor ncols in range(2,6):\n X = auto_df[colnames[0:ncols]].values\n y = auto_df['mpg'].values\n kfold = KFold(len(auto_df),n_folds = 10)\n mses =[]\n for train,test in kfold:\n Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]\n clf.fit(X,y)\n ypred = clf.predict(Xtest)\n mses.append(mean_squared_error(ypred,ytest))\n cv_errors.append(np.mean(mses))\nplt.plot(range(1,5),cv_errors)\nplt.xlabel(\"degree\")\nplt.ylabel('cv.error')", "Bootstrap", "from sklearn.cross_validation import Bootstrap\n\ncv_errors = []\nfor ncols in range(2,6):\n X = auto_df[colnames[0:ncols]].values\n y = auto_df['mpg'].values\n n = len(auto_df)\n bs = Bootstrap(n,train_size=int(0.9*n),test_size=int(0.1*n),n_iter=10,random_state=0)\n mses = []\n for train,test in bs:\n Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]\n clf = LinearRegression()\n clf.fit(X,y)\n ypred = clf.predict(Xtest)\n mses.append(mean_squared_error(ypred,ytest))\n cv_errors.append(np.mean(mses))\nplt.plot(range(1,5),cv_errors)\nplt.xlabel('degree')\nplt.ylabel('cv.error')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sbu-python-summer/python-tutorial
extra/python-io.ipynb
bsd-3-clause
[ "from __future__ import print_function", "One of the main things that we want to do in scientific computing is get data into and out of our programs. In addition to plain text files, there are modules that can read lots of different data formats we might encounter.\nPrint\nWe've already been using print quite a bit, but now we'll look at how to control how information is printed. Note that there is an older and newer way to format print statements -- we'll focus only on the newer way (it's nicer).\nThis is compatible with both python 2 and 3", "x = 1\ny = 0.0000354\nz = 3.0\ns = \"my string\"\n\n\nprint(x)", "We write a string with {} embedded to indicate where variables are to be inserted. Note that {} can take arguments. We use the format() method on the string to match the variables to the {}.", "print(\"x = {}, y = {}, z = {}, s = {}\".format(x, y, z, s))", "Before a semi-colon, we can give an optional index/position/descriptor of the value we want to print.\nAfter the semi-colon we give a format specifier. It has a number field and a type, like f and g to describe how floating point numbers appear and how much precision to show. Other bits are possible as well (like justification).", "print(\"x = {0}, y = {1:10.5g}, z = {2:.3f}, s = {3}\".format(x, y, z, s))", "there are other formatting things, like justification, etc. See the tutorial", "print(\"{:^80}\".format(\"centered string\"))", "File I/O\nas expected, a file is an object. Here we'll use the try, except block to capture exceptions (like if the file cannot be opened).", "f = open(\"./sample.txt\", \"w\")\nprint(f)\n\nf.write(\"this is my first write\\n\")\nf.close()", "we can easily loop over the lines in a file", "f = open(\"./test.txt\", \"r\")\n\nfor line in f:\n print(line.split())\n \nf.close()", "as mentioned earlier, there are lots of string functions. Above we used strip() to remove the trailing whitespace and returns\nCSV Files\ncomma-separated values are an easy way to exchange data -- you can generate these from a spreadsheet program. In the example below, we are assuming that the first line of the spreadsheet/csv file gives the headings that identify the columns. \nNote that there is an amazing amount of variation in terms of what can be in a CSV file and what the format is -- the csv module does a good job sorting this all out for you.", "import csv\n\nreader = csv.reader(open(\"shopping.csv\", \"r\"))\n\nheadings = None\n\nitems = []\nquantity = []\nunit_price = []\ntotal = []\n\nfor row in reader:\n if headings == None:\n # first row\n headings = row\n else:\n items.append(row[headings.index(\"item\")])\n quantity.append(row[headings.index(\"quantity\")])\n unit_price.append(row[headings.index(\"unit price\")])\n total.append(row[headings.index(\"total\")])\n \n\nfor i, q in zip(items, quantity):\n print (\"item: {}, quantity: {}\".format(i, q))\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
peakrisk/peakrisk
posts/latest-weather-pijessie.ipynb
gpl-3.0
[ "Simple quick update latest weather", "# Tell matplotlib to plot in line\n%matplotlib inline\n\n# import pandas\nimport pandas\n\n# seaborn magically adds a layer of goodness on top of Matplotlib\n# mostly this is just changing matplotlib defaults, but it does also\n# provide some higher level plotting methods.\nimport seaborn\n\n# Tell seaborn to set things up\nseaborn.set()\n\ndef smooth(data, thresh=None):\n \n means = data.mean()\n\n if thresh is None:\n sds = data.std()\n else:\n sds = thresh\n \n delta = data - data.shift()\n \n good = delta[abs(delta) < sds]\n\n #print(good.describe())\n \n return delta.where(good, 0.0)\n\n\ninfile = \"../files/pijessie_weather.csv\"\n\n!scp 192.168.0.127:Adafruit_Python_BMP/weather.csv $infile\n\n\"\"\" assume it is csv and let pandas do magic\n\n index_col tells it to use the 'date' column in the data\n as the row index, plotting picks up on this and uses the\n date on the x-axis\n\n The *parse_dates* bit just tells it to try and figure out\n the date/time in the columne labeled 'date'.\n\"\"\"\ndata = pandas.read_csv(infile, index_col='date', parse_dates=['date'])\n#data = smooth(data)\n\n# smooth the data to filter out bad temps and pressures\n#data.altitude = (smooth(data.altitude, 5.0).cumsum() + data.altitude[0])\n#data.temp = (smooth(data.temp, 5.0).cumsum() + data.temp[0])\n\ndata.altitude.plot()", "Last 24 hours:", "# reading is once a minute, so take last 24 * 60 readings\ndef plotem(data, n=-60):\n \n \n if n < 0:\n start = n\n end = len(data)\n else:\n start = 0\n end = n\n \n data[['temp', 'altitude', 'humidity']][n:].plot(subplots=True)\n \nplotem(data, -24*60)\n\ndata.altitude[-8*60:].plot()", "Last week", "# reading is once a minute, so take last 7 * 24 * 60 readings\nplotem(data, -7*24*60)\n\nplotem(data)", "Look at all the data", "data.describe()\n\ndata.tail()", "I currently have two temperature sensors:\n\nDHT22 sensor which gives temperature and humidity.\nBMP180 sensor which gives pressure and temperature.\n\nThe plot below shows the two temperature plots.\nBoth these sensors are currently in my study. For temperature and humidity I would like to have some readings from outside. If I can solder them to a phone jack then I can just run phone cable to where they need to be.\nBelow plots the current values from these sensors. This is handy for calibration.", "data[['temp', 'temp_dht']].plot()", "Dew Point\nThe warmer air is, the more moisture it can hold. The dew point is\nthe temperature at which air would be totally saturated if it had as \nmuch moisture as it currently does. \nGiven the temperature and humidity the dew point can be calculated, the actual formula is\npretty complex.\nIt is explained in more detail here: http://iridl.ldeo.columbia.edu/dochelp/QA/Basic/dewpoint.html\n\nIf you are interested in a simpler calculation that gives an approximation of dew point temperature if you know >the observed temperature and relative humidity, the following formula was proposed in a 2005 article by Mark G. >Lawrence in the Bulletin of the American Meteorological Society:\n\n$$Td = T - ((100 - RH)/5.)$$", "data['dewpoint'] = data.temp - ((100. - data.humidity)/5.)\n\ndata[['temp', 'dewpoint', 'humidity']].plot()\n\ndata[['temp', 'dewpoint', 'humidity']].plot(subplots=True)\n\ndata[['temp', 'dewpoint']].plot()\n\ndata.altitude.plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
brandoncgay/deep-learning
dcgan-svhn/DCGAN_Exercises.ipynb
mit
[ "Deep Convolutional GANs\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.\nYou'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.", "%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\n\n!mkdir data", "Getting the data\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)", "These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.", "trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')", "Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.", "idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)", "Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. 
We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.", "def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x\n\nclass Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(dataset.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y", "Network Inputs\nHere, just creating some placeholders like normal.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "Generator\nHere you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. \n\nExercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.", "def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer\n x1 = tf.layers.dense(z, 4*4*512)\n \n x1 = tf.reshape(x1, )\n \n # Output layer, 32x32x3\n logits = \n \n out = tf.tanh(logits)\n \n return out", "Discriminator\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. 
As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\nYou'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.\n\nExercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.", "def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x =\n \n logits = \n out = \n \n return out, logits", "Model Loss\nCalculating the loss like before, nothing new here.", "def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss", "Optimizers\nNot much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.", "def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return 
d_train_opt, g_train_opt", "Building the model\nHere we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.", "class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=alpha)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)", "Here is a function for displaying generated images.", "def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes", "And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.", "def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples", "Hyperparameters\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.\n\nExercise: Find hyperparameters to train this GAN. 
The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.", "real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.001\nbatch_size = 64\nepochs = 1\nalpha = 0.01\nbeta1 = 0.9\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)\n\n# Load the data and train the network here\ndataset = Dataset(trainset, testset)\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()\n\n_ = view_samples(-1, samples, 6, 12, figsize=(10,5))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
slerch/ppnn
data_exploration/python_data_handling.ipynb
mit
[ "# Table of Contents\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\" id=\"toc-level0\"><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#1.-Reading-and-exploring-the-data\" data-toc-modified-id=\"1.-Reading-and-exploring-the-data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>1. Reading and exploring the data</a></span><ul class=\"toc-item\"><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#1.1-Time\" data-toc-modified-id=\"1.1-Time-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>1.1 Time</a></span></li><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#1.2-Station-variables\" data-toc-modified-id=\"1.2-Station-variables-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>1.2 Station variables</a></span></li><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#1.3-Temperature-forecasts-and-observations\" data-toc-modified-id=\"1.3-Temperature-forecasts-and-observations-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>1.3 Temperature forecasts and observations</a></span></li></ul></li><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#2.-Slicing-the-data\" data-toc-modified-id=\"2.-Slicing-the-data-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>2. Slicing the data</a></span><ul class=\"toc-item\"><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#2.1-Monthly-slices\" data-toc-modified-id=\"2.1-Monthly-slices-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>2.1 Monthly slices</a></span></li></ul></li><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#3.-Compute-the-parametric-and-sample-CRPS-for-the-raw-ensemble-data\" data-toc-modified-id=\"3.-Compute-the-parametric-and-sample-CRPS-for-the-raw-ensemble-data-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>3. Compute the parametric and sample CRPS for the raw ensemble data</a></span><ul class=\"toc-item\"><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#3.1-CRPS-for-a-normal-distribution\" data-toc-modified-id=\"3.1-CRPS-for-a-normal-distribution-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>3.1 CRPS for a normal distribution</a></span></li><li><span><a href=\"http://localhost:8889/notebooks/python_data_handling.ipynb#3.2-Sample-CRPS\" data-toc-modified-id=\"3.2-Sample-CRPS-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>3.2 Sample CRPS</a></span></li></ul></li></ul></div>\n\nReading the netCDF files in Python\nNote: This notebook uses Python 3.\nIn this notebook we will explore the interpolated dataset and test some basic python functions.\nContents\n1. Reading and exploring the data\n2. Slicing the data (to do)\n3. Computing the CRPS from the raw ensemble (to do)\n1. 
Reading and exploring the data\nLet's start by opening the file and checking out what data it contains.", "from netCDF4 import Dataset\n# Define directory where interpolated files are stored.\nDATA_DIR = '/project/meteo/w2w/C7/ppnn_data/' # At LMU\n# DATA_DIR = '/Users/stephanrasp/repositories/ppnn/data/' # Mac\n# Define file name\nfn = 'data_interpolated.nc'\n\n# Open NetCDF rootgroup \nrg = Dataset(DATA_DIR + 'data_interpolated.nc')\n\n# What does the NetCDF file contain?\nrg", "Here is what's in the file:\n- 4 dimensions: station, member, time and nchar\n- Variables:\n - station(station) == station_id\n - member(member)\n - time(time) Time in s since 1.1.1970\n - t2m_fc(time, member, station)\n - t2m_obs(time, station)\n - station_alt(station) Altitute of station in m\n - station_lat(station) Latitude in degrees\n - station_lon(station) Longitude in degrees\n - station_id(station) == station\n - station_loc(station, nchar) Location name\nSo how much training data do we have?", "# Total amount of data\nrg.dimensions['station'].size * rg.dimensions['time'].size / 2\n\n# Rough data amount per month\nrg.dimensions['station'].size * rg.dimensions['time'].size / 2 / 12.\n\n# Per station per month\nrg.dimensions['time'].size / 2 / 12.", "Ok, let's now look at some of the variables.\n1.1 Time", "time = rg.variables['time']\ntime\n\ntime[:5]", "In fact, the time is given in seconds rather than hours.", "# convert back to dates (http://unidata.github.io/netcdf4-python/#section7)\nfrom netCDF4 import num2date\ndates = num2date(time[:],units='seconds since 1970-01-01 00:00 UTC')\ndates[:5]", "So dates are in 12 hour intervals. Which means that since we downloaded 36/48h forecasts: the 12UTC dates correspond to the 36 hour fcs and the following 00UTC dates correspond to the same forecast at 48 hour lead time.\n1.2 Station variables\nStation and station ID are in fact the same and simply contain a number, which does not start at one and is not continuous.", "import numpy as np\n# Check whether the two variables are equal\nnp.array_equal(rg.variables['station'][:], rg.variables['station_id'][:])\n\n# Then just print the first 5\nrg.variables['station'][:5]", "station_alt contains the station altitude in meters.", "rg.variables['station_alt'][:5]\n\nrg.variables['station_loc'][0].data", "Ahhhh, Aachen :D\nSo this leads me to believe that the station numbering is done by name.", "station_lon = rg.variables['station_lon']\nstation_lat = rg.variables['station_lat']\n\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.scatter(station_lon[:], station_lat[:])", "Wohooo, Germany!\n1.3 Temperature forecasts and observations\nOk, so now let's explore the temperature data a little.", "# Let's extract the actual data from the NetCDF array\n# Then we can manipulate it later.\ntfc = rg.variables['t2m_fc'][:]\ntobs = rg.variables['t2m_obs'][:]\n\ntobs[:5, :5].data", "So there are actually missing data in the observations. 
We will need to think about how to deal with those.\nSebastian mentioned that in the current version there are some Celsius/Kelvin inconsistencies.", "plt.plot(np.mean(tfc, axis=(1, 2)))\n\n# Since this will be fixed soon, let's just create a little ad hoc fix\nidx = np.where(np.mean(tfc, axis=(1, 2)) > 100)[0][0]\ntfc[idx:] = tfc[idx:] - 273.15\n\n# Let's create a little function to visualize the ensemble forecast\n# and the corresponding observation\ndef plot_fc_obs_hist(t, s):\n plt.hist(tfc[t, :, s])\n plt.axvline(tobs[t, s], c='r')\n plt.title(num2date(time[t], units='seconds since 1970-01-01 00:00 UTC'))\n plt.show()\n\n# Now let's plot some forecast for random time steps and stations\nplot_fc_obs_hist(0, 0)\nplot_fc_obs_hist(100, 100)\nplot_fc_obs_hist(1000, 200)", "2. Slicing the data\nNow let's see how we can conveniently prepare the data in chunks for post-processing purposes.\n2.1 Monthly slices\nThe goal here is to pick all data points from a given month and for a given time, i.e. 00 or 12 UTC", "# Let's write a handy function which returns the required data\n# from the NetCDF object\ndef get_data_slice(rg, month, utc=0):\n # Get array of datetime objects\n dates = num2date(rg.variables['time'][:],\n units='seconds since 1970-01-01 00:00 UTC')\n # Extract months and hours\n months = np.array([d.month for d in list(dates)])\n hours = np.array([d.hour for d in list(dates)])\n \n # for now I need to include the Kelvin fix\n tfc = rg.variables['t2m_fc'][:]\n idx = np.where(np.mean(tfc, axis=(1, 2)) > 100)[0][0]\n tfc[idx:] = tfc[idx:] - 273.15\n \n # Extract the requested data for the given month and hour\n tobs = rg.variables['t2m_obs'][(months == month) & (hours == utc)]\n tfc = tfc[(months == month) & (hours == utc)]\n return tobs, tfc\n\ntobs_jan_00, tfc_jan_00 = get_data_slice(rg, 1, 0)\n\ntfc_jan_00.shape", "3. Compute the parametric and sample CRPS for the raw ensemble data\n3.1 CRPS for a normal distribution\nFrom Gneiting et al. 2005, EMOS:", "from scipy.stats import norm\n\ndef crps_normal(mu, sigma, y):\n loc = (y - mu) / sigma\n crps = sigma * (loc * (2 * norm.cdf(loc) - 1) + \n 2 * norm.pdf(loc) - 1. / np.sqrt(np.pi))\n return crps\n\n# Get ensemble mean and ensemble standard deviation\ntfc_jan_00_mean = np.mean(tfc_jan_00, axis=1)\ntfc_jan_00_std = np.std(tfc_jan_00, axis=1, ddof=1)\n\n# Compute CRPS using the ensemble mean and variance\ncrps_jan_00 = crps_normal(tfc_jan_00_mean, tfc_jan_00_std, tobs_jan_00)\n\n# Warnings are probably due to missing values\ncrps_jan_00.mean()", "Nice, this corresponds well to the value Sebastian got for the raw ensemble in January.\n3.2 Sample CRPS\nFor this we use the scoringRules package inside enstools.", "import sys\nsys.path.append('/Users/stephanrasp/repositories/enstools')\nimport enstools.scores\n\n??enstools.scores.crps_sample\n\ntfc_jan_00.shape\n\ntobs_jan_00.shape\n\ntfc_jan_00_flat = np.rollaxis(tfc_jan_00, 1, 0)\ntfc_jan_00_flat.shape\n\ntfc_jan_00_flat = tfc_jan_00_flat.reshape(tfc_jan_00_flat.shape[0], -1)\ntfc_jan_00_flat.shape\n\ntobs_jan_00_flat = tobs_jan_00.ravel()\n\nmask = tobs_jan_00_flat.mask\ntobs_jan_00_flat_true = np.array(tobs_jan_00_flat)[~mask]\ntfc_jan_00_flat_true = np.array(tfc_jan_00_flat)[:, ~mask]\n\nnp.isfinite(tobs_jan_00_flat_true)\n\ntfc_jan_00_flat_true.shape\n\nenstools.scores.crps_sample(tobs_jan_00_flat_true, tfc_jan_00_flat_true, mean=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
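A quick cross-check for the sample CRPS in the record above: the ensemble CRPS can be estimated with plain NumPy as E|X - y| - 0.5 E|X - X'|. This is only a sketch; the helper name crps_sample_np is hypothetical (it is not part of enstools), and it assumes forecasts shaped (members, cases) and observations shaped (cases,), as for tfc_jan_00_flat_true and tobs_jan_00_flat_true.

```python
import numpy as np

def crps_sample_np(obs, ens):
    """Ensemble CRPS per case; obs has shape (n,), ens has shape (m, n)."""
    # Mean absolute error of each member against the observation.
    mae = np.mean(np.abs(ens - obs[None, :]), axis=0)
    # Mean absolute difference over all pairs of members.
    spread = np.mean(np.abs(ens[:, None, :] - ens[None, :, :]), axis=(0, 1))
    return mae - 0.5 * spread

# Usage sketch: crps_sample_np(tobs_jan_00_flat_true, tfc_jan_00_flat_true).mean()
# should be close to the enstools.scores.crps_sample value computed above.
```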
phenology/infrastructure
applications/notebooks/laurens/comparisons.ipynb
apache-2.0
[ "Title\nText\nInitialization\nThis section initializes the notebook.\nDependencies\nHere, all necessary libraries are imported.", "#Add all dependencies to PYTHON_PATH\nimport sys\nsys.path.append(\"/usr/lib/spark/python\")\nsys.path.append(\"/usr/lib/spark/python/lib/py4j-0.10.4-src.zip\")\nsys.path.append(\"/usr/lib/python3/dist-packages\")\nsys.path.append(\"/data/local/jupyterhub/modules/python\")\n\n#Define environment variables\nimport os\nos.environ[\"HADOOP_CONF_DIR\"] = \"/etc/hadoop/conf\"\nos.environ[\"PYSPARK_PYTHON\"] = \"python3\"\nos.environ[\"PYSPARK_DRIVER_PYTHON\"] = \"ipython\"\n\nimport subprocess\n\n#Load PySpark to connect to a Spark cluster\nfrom pyspark import SparkConf, SparkContext\nfrom hdfs import InsecureClient\nfrom tempfile import TemporaryFile\n\n#from osgeo import gdal\n#To read GeoTiffs as a ByteArray\nfrom io import BytesIO\nfrom rasterio.io import MemoryFile\n\nimport numpy\nimport numpy as np\nimport pandas\nimport datetime\nimport matplotlib.pyplot as plt\nimport rasterio\nfrom rasterio import plot\nfrom os import listdir\nfrom os.path import isfile, join\nimport scipy.linalg", "Configuration\nThis configuration determines whether functions print logs during the execution.", "debugMode = True", "Connect to Spark\nHere, the Spark context is loaded, which allows for a connection to HDFS.", "appName = \"plot_GeoTiff\"\nmasterURL = \"spark://emma0.emma.nlesc.nl:7077\"\n\n#A context needs to be created if it does not already exist\ntry:\n sc.stop()\nexcept NameError:\n print(\"A new Spark Context will be created.\")\n\nsc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))\nconf = sc.getConf()", "Subtitle", "def getModeAsArray(filePath):\n data = sc.binaryFiles(filePath).take(1)\n byteArray = bytearray(data[0][1])\n memfile = MemoryFile(byteArray)\n dataset = memfile.open()\n array = np.array(dataset.read()[0], dtype=np.float64)\n memfile.close()\n array = array.flatten()\n array = array[~np.isnan(array)]\n return array\n\ndef detemineNorm(array1, array2):\n if array1.shape != array2.shape:\n print(\"Error: shapes are not the same: (\" + str(array1.shape) + \" vs \" + str(array2.shape) + \")\")\n return 0\n value = scipy.linalg.norm(array1 - array2)\n if value > 1:\n value = scipy.linalg.norm(array1 + array2)\n return value\n\ntextFile1 = sc.textFile(\"hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/U.csv\").map(lambda line: (line.split(','))).map(lambda m: [ float(i) for i in m]).collect()\n\narray1 = numpy.array(textFile1, dtype=np.float64)\nvector11 = array1.T[0]\nvector12 = array1.T[1]\nvector13 = array1.T[2]\n\ntextFile2 = sc.textFile(\"hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/U.csv\").map(lambda line: (line.split(','))).map(lambda m: [ np.float64(i) for i in m]).collect()\n\narray2 = numpy.array(textFile2, dtype=np.float64).reshape(37,23926)\nvector21 = array2[0]\nvector22 = array2[1]\nvector23 = array2[2]\n\narray2.shape\n\nprint(detemineNorm(vector11, vector21))\nprint(detemineNorm(vector12, vector22))\nprint(detemineNorm(vector13, vector23))\n\narray1 = getModeAsArray(\"hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/u_tiffs/svd_u_0_26.tif\")\narray2 = getModeAsArray(\"hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU01.tif\")\ndetemineNorm(array1, array2)\n\nprint(detemineNorm(array1, vector11))\nprint(detemineNorm(array1, vector21))\nprint(detemineNorm(array2, vector11))\nprint(detemineNorm(array2, vector21))\n\n~np.in1d(array1, vector21)\n\nfor i in range(10):\n print(\"%.19f %0.19f 
%0.19f\" % (array1[i], array2[i], (array1[i]+array2[i])))", "BloomFinalLowPR and LeafFinalLowPR", "array1 = getModeAsArray(\"hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU01.tif\")\narray2 = getModeAsArray(\"hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_0_3.tif\")\ndetemineNorm(array1, array2)\n\narray1 = getModeAsArray(\"hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU02.tif\")\narray2 = getModeAsArray(\"hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_1_3.tif\")\ndetemineNorm(array1, array2)\n\narray1 = getModeAsArray(\"hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU01.tif\")\narray2 = getModeAsArray(\"hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_0_3.tif\")\ndetemineNorm(array1, array2)", "BloomGridmet and LeafGridmet", "for i in range(37):\n if (i < 9):\n path1 = \"hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU0\"+ str(i+1) + \".tif\"\n else:\n path1 = \"hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU\"+ str(i+1) + \".tif\"\n array1 = getModeAsArray(path1)\n array2 = getModeAsArray(\"hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/u_tiffs/svd_u_\" +str(i) +\"_26.tif\")\n print(detemineNorm(array1, array2))", "End of Notebook" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
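A side note on the comparison logic in the record above: singular vectors from an SVD are only defined up to sign, which is why detemineNorm switches to ||a + b|| once ||a - b|| exceeds 1. A threshold-free variant is to take the smaller of the two distances. This is a sketch only; mode_distance is a hypothetical name, not part of the notebook.

```python
import numpy as np

def mode_distance(a, b):
    """Distance between two SVD modes, ignoring the arbitrary sign flip."""
    if a.shape != b.shape:
        raise ValueError("shapes differ: %s vs %s" % (a.shape, b.shape))
    # Compare against both orientations of b and keep the smaller distance.
    return min(np.linalg.norm(a - b), np.linalg.norm(a + b))
```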
thombashi/sqlitebiter
test/data/pytablewriter_examples.ipynb
mit
[ "from textwrap import dedent\n\nimport pytablewriter\n\ntable_name = \"example_table\"\nheader_list = [\"int\", \"float\", \"str\", \"bool\", \"mix\", \"time\"]\ndata = [\n [0, 0.1, \"hoge\", True, 0, \"2017-01-01 03:04:05+0900\"],\n [2, \"-2.23\", \"foo\", False, None, \"2017-12-23 12:34:51+0900\"],\n [3, 0, \"bar\", \"true\", \"inf\", \"2017-03-03 22:44:55+0900\"],\n [-10, -9.9, \"\", \"FALSE\", \"nan\", \"2017-01-01 00:00:00+0900\"],\n]\n\nfor name in pytablewriter.TableWriterFactory.get_format_name_list():\n print(name)\n\nfor name in pytablewriter.TableWriterFactory.get_extension_list():\n print(name)\n\nwriter = pytablewriter.CsvTableWriter()\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.SpaceAlignedTableWriter()\nwriter.header_list = [\n \"PID\",\n \"USER\",\n \"PR\",\n \"NI\",\n \"VIRT\",\n \"RES\",\n \"SHR\",\n \"S\",\n \"%CPU\",\n \"%MEM\",\n \"TIME+\",\n \"COMMAND\",\n]\nwriter.value_matrix = csv1 = [\n [32866, \"root\", 20, 0, 48344, 3924, 3448, \"R\", 5.6, 0.2, \"0:00.03\", \"top\"],\n [1, \"root\", 20, 0, 212080, 7676, 5876, \"S\", 0, 0.4, \"1:06.56\", \"systemd\"],\n [2, \"root\", 20, 0, 0, 0, 0, \"S\", 0, 0, \"0:01.92\", \"kthreadd\"],\n [4, \"root\", 0, -20, 0, 0, 0, \"S\", 0, 0, \"0:00.00\", \"kworker/0:0H\"],\n]\n\nwriter.write_table()\n\nwriter = pytablewriter.TsvTableWriter()\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.HtmlTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.JavaScriptTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.JsonTableWriter()\n# writer.table_name = \"Timezone\"\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.JsonTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.LatexMatrixWriter()\nwriter.table_name = \"A\"\nwriter.value_matrix = [\n [0.01, 0.00125, 0.0],\n [1.0, 99.9, 0.01],\n [1.2, 999999.123, 0.001],\n]\nwriter.write_table()", "\\begin{equation}\n A = \\left( \\begin{array}{rrr}\n 0.01 & 0.0012 & 0.000 \\\n 1.00 & 99.9000 & 0.010 \\\n 1.20 & 999999.1230 & 0.001 \\\n \\end{array} \\right)\n\\end{equation}", "writer = pytablewriter.LatexMatrixWriter()\nwriter.table_name = \"B\"\nwriter.value_matrix = [\n [\"a_{11}\", \"a_{12}\", \"\\\\ldots\", \"a_{1n}\"],\n [\"a_{21}\", \"a_{22}\", \"\\\\ldots\", \"a_{2n}\"],\n [r\"\\vdots\", \"\\\\vdots\", \"\\\\ddots\", \"\\\\vdots\"],\n [\"a_{n1}\", \"a_{n2}\", \"\\\\ldots\", \"a_{nn}\"],\n]\nwriter.write_table()", "\\begin{equation}\n B = \\left( \\begin{array}{llll}\n a_{11} & a_{12} & \\ldots & a_{1n} \\\n a_{21} & a_{22} & \\ldots & a_{2n} \\\n \\vdots & \\vdots & \\ddots & \\vdots \\\n a_{n1} & a_{n2} & \\ldots & a_{nn} \\\n \\end{array} \\right)\n\\end{equation}", "writer = pytablewriter.LatexTableWriter()\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()", "\\begin{array}{r | r | l | l | l | l} \\hline\n \\verb|int| & \\verb|float| & \\verb|str| & \\verb|bool| & \\verb|mix| & \\verb|time| \\ \\hline\n \\hline\n 0 & 0.10 & hoge & True & 0 & \\verb|2017-01-01 03:04:05+0900| \\ \\hline\n 2 & -2.23 & foo & False & & \\verb|2017-12-23 12:34:51+0900| \\ 
\\hline\n 3 & 0.00 & bar & True & \\infty & \\verb|2017-03-03 22:44:55+0900| \\ \\hline\n -10 & -9.90 & & False & NaN & \\verb|2017-01-01 00:00:00+0900| \\ \\hline\n\\end{array}", "writer = pytablewriter.MarkdownTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.MarkdownTableWriter()\nwriter.table_name = \"write example with a margin\"\nwriter.header_list = header_list\nwriter.value_matrix = data\nwriter.margin = 1 # add a whitespace for both sides of each cell\n\nwriter.write_table()\n\nwriter = pytablewriter.MediaWikiTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.NumpyTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.PandasDataFrameWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.PandasDataFrameWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\nwriter.is_datetime_instance_formatting = False\n\nwriter.write_table()\n\nwriter = pytablewriter.PythonCodeTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.PythonCodeTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\nwriter.is_datetime_instance_formatting = False\n\nwriter.write_table()\n\nwriter = pytablewriter.RstGridTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.RstSimpleTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.RstCsvTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.LtsvTableWriter()\nwriter.header_list = header_list\nwriter.value_matrix = data\n\nwriter.write_table()\n\nwriter = pytablewriter.TomlTableWriter()\nwriter.table_name = table_name\nwriter.header_list = header_list\nwriter.value_matrix = data\n\n\nwriter.write_table()\n\nfrom datetime import datetime\nimport pytablewriter as ptw\n\nwriter = ptw.JavaScriptTableWriter()\nwriter.header_list = [\"header_a\", \"header_b\", \"header_c\"]\nwriter.value_matrix = [\n [-1.1, \"2017-01-02 03:04:05\", datetime(2017, 1, 2, 3, 4, 5)],\n [0.12, \"2017-02-03 04:05:06\", datetime(2017, 2, 3, 4, 5, 6)],\n]\n\nprint(\"// without type hints: column data types detected automatically by default\")\nwriter.table_name = \"without type hint\"\nwriter.write_table()\n\nprint(\"// with type hints: Integer, DateTime, String\")\nwriter.table_name = \"with type hint\"\nwriter.type_hint_list = [ptw.Integer, ptw.DateTime, ptw.String]\nwriter.write_table()\n\nfrom datetime import datetime\nimport pytablewriter as ptw\n\nwriter = ptw.PythonCodeTableWriter()\nwriter.value_matrix = [\n [-1.1, float(\"inf\"), \"2017-01-02 03:04:05\", datetime(2017, 1, 2, 3, 4, 5)],\n [0.12, float(\"nan\"), \"2017-02-03 04:05:06\", datetime(2017, 2, 3, 4, 5, 6)],\n]\n\n# column data types detected automatically by default\nwriter.table_name = \"python variable without 
type hints\"\nwriter.header_list = [\"float\", \"infnan\", \"string\", \"datetime\"]\nwriter.write_table()\n\n# set type hints\nwriter.table_name = \"python variable with type hints\"\nwriter.header_list = [\"hint_int\", \"hint_str\", \"hint_datetime\", \"hint_str\"]\nwriter.type_hint_list = [ptw.Integer, ptw.String, ptw.DateTime, ptw.String]\nwriter.write_table()\n\nwriter = pytablewriter.MarkdownTableWriter()\nwriter.from_csv(\n dedent(\n \"\"\"\\\n \"i\",\"f\",\"c\",\"if\",\"ifc\",\"bool\",\"inf\",\"nan\",\"mix_num\",\"time\"\n 1,1.10,\"aa\",1.0,\"1\",True,Infinity,NaN,1,\"2017-01-01 00:00:00+09:00\"\n 2,2.20,\"bbb\",2.2,\"2.2\",False,Infinity,NaN,Infinity,\"2017-01-02 03:04:05+09:00\"\n 3,3.33,\"cccc\",-3.0,\"ccc\",True,Infinity,NaN,NaN,\"2017-01-01 00:00:00+09:00\"\n \"\"\"\n )\n)\nwriter.write_table()\n\nwriter = pytablewriter.MarkdownTableWriter()\nwriter.table_name = \"ps\"\nwriter.from_csv(\n dedent(\n \"\"\"\\\n USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n root 1 0.0 0.4 77664 8784 ? Ss May11 0:02 /sbin/init\n root 2 0.0 0.0 0 0 ? S May11 0:00 [kthreadd]\n root 4 0.0 0.0 0 0 ? I< May11 0:00 [kworker/0:0H]\n root 6 0.0 0.0 0 0 ? I< May11 0:00 [mm_percpu_wq]\n root 7 0.0 0.0 0 0 ? S May11 0:01 [ksoftirqd/0]\n \"\"\"\n ),\n delimiter=\" \",\n)\nwriter.write_table()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
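The pytablewriter examples in the record above all print to stdout via write_table(). If the rendered table is needed as a string (for example to write it to a file), one option is to redirect the writer's output stream. This sketch assumes the writer exposes a settable stream attribute defaulting to sys.stdout; check the installed pytablewriter version's documentation if it differs.

```python
import io
import pytablewriter

writer = pytablewriter.MarkdownTableWriter()
writer.table_name = "captured"
writer.header_list = ["a", "b"]
writer.value_matrix = [[1, 2], [3, 4]]

buf = io.StringIO()
writer.stream = buf  # assumed attribute; output goes to the buffer instead of stdout
writer.write_table()

markdown_text = buf.getvalue()
print(markdown_text)
```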
ES-DOC/esdoc-jupyterhub
notebooks/cmcc/cmip6/models/cmcc-esm2-sr5/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: CMCC-ESM2-SR5\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-sr5', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. 
Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. 
Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. 
Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. 
Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
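The ES-DOC stubs in the cells above all follow the same fill-in pattern. As a minimal, hedged illustration of how one of the ENUM properties could be completed (the value "E" is simply one of the valid choices listed in the stub, not an authoritative answer for any particular model, and DOC is the document handle created earlier in the notebook):

```python
# Hypothetical completion of one property from the stubs above.
# Assumes the DOC handle set up earlier in this ES-DOC notebook.
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
DOC.set_value("E")  # "E" is one of the listed valid choices; use whichever matches your model
```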
bashtage/statsmodels
examples/notebooks/metaanalysis1.ipynb
bsd-3-clause
[ "Meta-Analysis in statsmodels\nStatsmodels include basic methods for meta-analysis. This notebook illustrates the current usage.\nStatus: The results have been verified against R meta and metafor packages. However, the API is still experimental and will still change. Some options for additional methods that are available in R meta and metafor are missing.\nThe support for meta-analysis has 3 parts:\n\neffect size functions: this currently includes\n effectsize_smd computes effect size and their standard errors for standardized mean difference,\neffectsize_2proportions computes effect sizes for comparing two independent proportions using risk difference, (log) risk ratio, (log) odds-ratio or arcsine square root transformation\nThe combine_effects computes fixed and random effects estimate for the overall mean or effect. The returned results instance includes a forest plot function.\nhelper functions to estimate the random effect variance, tau-squared\n\nThe estimate of the overall effect size in combine_effects can also be performed using WLS or GLM with var_weights.\nFinally, the meta-analysis functions currently do not include the Mantel-Hanszel method. However, the fixed effects results can be computed directly using StratifiedTable as illustrated below.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats, optimize\n\nfrom statsmodels.regression.linear_model import WLS\nfrom statsmodels.genmod.generalized_linear_model import GLM\n\nfrom statsmodels.stats.meta_analysis import (\n effectsize_smd,\n effectsize_2proportions,\n combine_effects,\n _fit_tau_iterative,\n _fit_tau_mm,\n _fit_tau_iter_mm,\n)\n\n# increase line length for pandas\npd.set_option(\"display.width\", 100)", "Example", "data = [\n [\"Carroll\", 94, 22, 60, 92, 20, 60],\n [\"Grant\", 98, 21, 65, 92, 22, 65],\n [\"Peck\", 98, 28, 40, 88, 26, 40],\n [\"Donat\", 94, 19, 200, 82, 17, 200],\n [\"Stewart\", 98, 21, 50, 88, 22, 45],\n [\"Young\", 96, 21, 85, 92, 22, 85],\n]\ncolnames = [\"study\", \"mean_t\", \"sd_t\", \"n_t\", \"mean_c\", \"sd_c\", \"n_c\"]\nrownames = [i[0] for i in data]\ndframe1 = pd.DataFrame(data, columns=colnames)\nrownames\n\nmean2, sd2, nobs2, mean1, sd1, nobs1 = np.asarray(\n dframe1[[\"mean_t\", \"sd_t\", \"n_t\", \"mean_c\", \"sd_c\", \"n_c\"]]\n).T\nrownames = dframe1[\"study\"]\nrownames.tolist()\n\nnp.array(nobs1 + nobs2)", "estimate effect size standardized mean difference", "eff, var_eff = effectsize_smd(mean2, sd2, nobs2, mean1, sd1, nobs1)", "Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau\nMethod option for random effect method_re=\"chi2\" or method_re=\"dl\", both names are accepted.\nThis is commonly referred to as the DerSimonian-Laird method, it is based on a moment estimator based on pearson chi2 from the fixed effects estimate.", "res3 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=True, row_names=rownames)\n# TODO: we still need better information about conf_int of individual samples\n# We don't have enough information in the model for individual confidence intervals\n# if those are not based on normal distribution.\nres3.conf_int_samples(nobs=np.array(nobs1 + nobs2))\nprint(res3.summary_frame())\n\nres3.cache_ci\n\nres3.method_re\n\nfig = res3.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)\n\nres3 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False, row_names=rownames)\n# TODO: we still need better information about conf_int of individual samples\n# We don't have enough information 
in the model for individual confidence intervals\n# if those are not based on normal distribution.\nres3.conf_int_samples(nobs=np.array(nobs1 + nobs2))\nprint(res3.summary_frame())", "Using iterated, Paule-Mandel estimate for random effects variance tau\nThe method commonly referred to as Paule-Mandel estimate is a method of moment estimate for the random effects variance that iterates between mean and variance estimate until convergence.", "res4 = combine_effects(\n eff, var_eff, method_re=\"iterated\", use_t=False, row_names=rownames\n)\nres4_df = res4.summary_frame()\nprint(\"method RE:\", res4.method_re)\nprint(res4.summary_frame())\nfig = res4.plot_forest()", "Example Kacker interlaboratory mean\nIn this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate and overall average.", "eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70, 62.84, 65.90])\nvar_eff = np.array(\n [0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625, 0.0676, 0.0225, 1.8225]\n)\nrownames = [\"PTB\", \"NMi\", \"NIMC\", \"KRISS\", \"LGC\", \"NRC\", \"IRMM\", \"NIST\", \"LNE\"]\n\nres2_DL = combine_effects(eff, var_eff, method_re=\"dl\", use_t=True, row_names=rownames)\nprint(\"method RE:\", res2_DL.method_re)\nprint(res2_DL.summary_frame())\nfig = res2_DL.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)\n\nres2_PM = combine_effects(eff, var_eff, method_re=\"pm\", use_t=True, row_names=rownames)\nprint(\"method RE:\", res2_PM.method_re)\nprint(res2_PM.summary_frame())\nfig = res2_PM.plot_forest()\nfig.set_figheight(6)\nfig.set_figwidth(6)", "Meta-analysis of proportions\nIn the following example the random effect variance tau is estimated to be zero. \nI then change two counts in the data, so the second example has random effects variance greater than zero.", "import io\n\nss = \"\"\"\\\n study,nei,nci,e1i,c1i,e2i,c2i,e3i,c3i,e4i,c4i\n 1,19,22,16.0,20.0,11,12,4.0,8.0,4,3\n 2,34,35,22.0,22.0,18,12,15.0,8.0,15,6\n 3,72,68,44.0,40.0,21,15,10.0,3.0,3,0\n 4,22,20,19.0,12.0,14,5,5.0,4.0,2,3\n 5,70,32,62.0,27.0,42,13,26.0,6.0,15,5\n 6,183,94,130.0,65.0,80,33,47.0,14.0,30,11\n 7,26,50,24.0,30.0,13,18,5.0,10.0,3,9\n 8,61,55,51.0,44.0,37,30,19.0,19.0,11,15\n 9,36,25,30.0,17.0,23,12,13.0,4.0,10,4\n 10,45,35,43.0,35.0,19,14,8.0,4.0,6,0\n 11,246,208,169.0,139.0,106,76,67.0,42.0,51,35\n 12,386,141,279.0,97.0,170,46,97.0,21.0,73,8\n 13,59,32,56.0,30.0,34,17,21.0,9.0,20,7\n 14,45,15,42.0,10.0,18,3,9.0,1.0,9,1\n 15,14,18,14.0,18.0,13,14,12.0,13.0,9,12\n 16,26,19,21.0,15.0,12,10,6.0,4.0,5,1\n 17,74,75,,,42,40,,,23,30\"\"\"\ndf3 = pd.read_csv(io.StringIO(ss))\ndf_12y = df3[[\"e2i\", \"nei\", \"c2i\", \"nci\"]]\n# TODO: currently 1 is reference, switch labels\ncount1, nobs1, count2, nobs2 = df_12y.values.T\ndta = df_12y.values.T\n\neff, var_eff = effectsize_2proportions(*dta, statistic=\"rd\")\n\neff, var_eff\n\nres5 = combine_effects(\n eff, var_eff, method_re=\"iterated\", use_t=False\n) # , row_names=rownames)\nres5_df = res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(\"RE variance tau2:\", res5.tau2)\nprint(res5.summary_frame())\nfig = res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)", "changing data to have positive random effects variance", "dta_c = dta.copy()\ndta_c.T[0, 0] = 18\ndta_c.T[1, 0] = 22\ndta_c.T\n\neff, var_eff = effectsize_2proportions(*dta_c, statistic=\"rd\")\nres5 = combine_effects(\n eff, var_eff, method_re=\"iterated\", use_t=False\n) # , row_names=rownames)\nres5_df = 
res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(res5.summary_frame())\nfig = res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)\n\nres5 = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False)\nres5_df = res5.summary_frame()\nprint(\"method RE:\", res5.method_re)\nprint(res5.summary_frame())\nfig = res5.plot_forest()\nfig.set_figheight(8)\nfig.set_figwidth(6)", "Replicate fixed effect analysis using GLM with var_weights\ncombine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.\nThe scale option in GLM.fit can be used to replicate fixed meta-analysis with fixed and with HKSJ/WLS scale", "from statsmodels.genmod.generalized_linear_model import GLM\n\neff, var_eff = effectsize_2proportions(*dta_c, statistic=\"or\")\nres = combine_effects(eff, var_eff, method_re=\"chi2\", use_t=False)\nres_frame = res.summary_frame()\nprint(res_frame.iloc[-4:])", "We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.", "weights = 1 / var_eff\nmod_glm = GLM(eff, np.ones(len(eff)), var_weights=weights)\nres_glm = mod_glm.fit(scale=1.0)\nprint(res_glm.summary().tables[1])\n\n# check results\nres_glm.scale, res_glm.conf_int() - res_frame.loc[\n \"fixed effect\", [\"ci_low\", \"ci_upp\"]\n].values", "Using HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using pearson chi2, which is also the default for the gaussian family.", "res_glm = mod_glm.fit(scale=\"x2\")\nprint(res_glm.summary().tables[1])\n\n# check results\nres_glm.scale, res_glm.conf_int() - res_frame.loc[\n \"fixed effect\", [\"ci_low\", \"ci_upp\"]\n].values", "Mantel-Hanszel odds-ratio using contingency tables\nThe fixed effect for the log-odds-ratio using the Mantel-Hanszel can be directly computed using StratifiedTable.\nWe need to create a 2 x 2 x k contingency table to be used with StratifiedTable.", "t, nt, c, nc = dta_c\ncounts = np.column_stack([t, nt - t, c, nc - c])\nctables = counts.T.reshape(2, 2, -1)\nctables[:, :, 0]\n\ncounts[0]\n\ndta_c.T[0]\n\nimport statsmodels.stats.api as smstats\n\nst = smstats.StratifiedTable(ctables.astype(np.float64))", "compare pooled log-odds-ratio and standard error to R meta package", "st.logodds_pooled, st.logodds_pooled - 0.4428186730553189 # R meta\n\nst.logodds_pooled_se, st.logodds_pooled_se - 0.08928560091027186 # R meta\n\nst.logodds_pooled_confint()\n\nprint(st.test_equal_odds())\n\nprint(st.test_null_odds())", "check conversion to stratified contingency table\nRow sums of each table are the sample sizes for treatment and control experiments", "ctables.sum(1)\n\nnt, nc", "Results from R meta package\n```\n\nres_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm=\"OR\", Q.Cochrane=FALSE, method=\"MH\", method.tau=\"DL\", hakn=FALSE, backtransf=FALSE)\nres_mb_hk\n logOR 95%-CI %W(fixed) %W(random)\n1 2.7081 [ 0.5265; 4.8896] 0.3 0.7\n2 1.2567 [ 0.2658; 2.2476] 2.1 3.2\n3 0.3749 [-0.3911; 1.1410] 5.4 5.4\n4 1.6582 [ 0.3245; 2.9920] 0.9 1.8\n5 0.7850 [-0.0673; 1.6372] 3.5 4.4\n6 0.3617 [-0.1528; 0.8762] 12.1 11.8\n7 0.5754 [-0.3861; 1.5368] 3.0 3.4\n8 0.2505 [-0.4881; 0.9892] 6.1 5.8\n9 0.6506 [-0.3877; 1.6889] 2.5 3.0\n10 0.0918 [-0.8067; 0.9903] 4.5 3.9\n11 0.2739 [-0.1047; 0.6525] 23.1 21.4\n12 0.4858 [ 0.0804; 0.8911] 18.6 18.8\n13 0.1823 [-0.6830; 1.0476] 4.6 4.2\n14 0.9808 [-0.4178; 2.3795] 1.3 1.6\n15 1.3122 [-1.0055; 3.6299] 0.4 0.6\n16 -0.2595 [-1.4450; 0.9260] 3.1 2.3\n17 0.1384 [-0.5076; 0.7844] 8.5 7.6\n\nNumber of studies combined: 
k = 17\n logOR 95%-CI z p-value\n\nFixed effect model 0.4428 [0.2678; 0.6178] 4.96 < 0.0001\nRandom effects model 0.4295 [0.2504; 0.6086] 4.70 < 0.0001\nQuantifying heterogeneity:\n tau^2 = 0.0017 [0.0000; 0.4589]; tau = 0.0410 [0.0000; 0.6774];\n I^2 = 1.1% [0.0%; 51.6%]; H = 1.01 [1.00; 1.44]\nTest of heterogeneity:\n Q d.f. p-value\n 16.18 16 0.4404\nDetails on meta-analytical method:\n- Mantel-Haenszel method\n- DerSimonian-Laird estimator for tau^2\n- Jackson method for confidence interval of tau^2 and tau\n\nres_mb_hk$TE.fixed\n[1] 0.4428186730553189\nres_mb_hk$seTE.fixed\n[1] 0.08928560091027186\nc(res_mb_hk$lower.fixed, res_mb_hk$upper.fixed)\n[1] 0.2678221109331694 0.6178152351774684\n\n```", "print(st.summary())" ]
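The one-step chi2 / DerSimonian-Laird estimate referred to above (method_re="dl") is a moment estimator that can be sketched directly in NumPy. The function below is an illustrative re-implementation for the interlaboratory example, not statsmodels' internal code, which adds further options and checks:

```python
import numpy as np

def dersimonian_laird_tau2(eff, var_eff):
    # Moment (DerSimonian-Laird) estimate of the between-study variance tau^2,
    # based on Cochran's Q from the fixed-effects (inverse-variance) fit.
    eff = np.asarray(eff, dtype=float)
    w = 1.0 / np.asarray(var_eff, dtype=float)
    mean_fe = np.sum(w * eff) / np.sum(w)
    q = np.sum(w * (eff - mean_fe) ** 2)
    k = eff.shape[0]
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / denom)

# Interlaboratory means and variances from the Kacker example above
eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70, 62.84, 65.90])
var_eff = np.array([0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625, 0.0676, 0.0225, 1.8225])
print(dersimonian_laird_tau2(eff, var_eff))
```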
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
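Once a tau^2 estimate is available, the fixed- and random-effects pooled means reported in the summary frames above are both inverse-variance weighted averages, which is also why they can be replicated with WLS or GLM using var_weights. A minimal sketch of that weighting (confidence intervals, HKSJ scaling and the other combine_effects options are omitted):

```python
import numpy as np

def pooled_effect(eff, var_eff, tau2=0.0):
    # Inverse-variance pooled mean and standard error.
    # tau2 = 0 gives the fixed-effects estimate; a positive tau2
    # (e.g. from a DerSimonian-Laird or Paule-Mandel fit) gives the
    # random-effects estimate.
    w = 1.0 / (np.asarray(var_eff, dtype=float) + tau2)
    mean = np.sum(w * np.asarray(eff, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, se

eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70, 62.84, 65.90])
var_eff = np.array([0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625, 0.0676, 0.0225, 1.8225])
print(pooled_effect(eff, var_eff))             # fixed effect
print(pooled_effect(eff, var_eff, tau2=0.5))   # random effects for an assumed tau^2
```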
ajgpitch/qutip-notebooks
examples/control-pulseoptim-CRAB-2qubitInerac.ipynb
lgpl-3.0
[ "Calculation of control fields for state-to-state transfer of a 2 qubit system using CRAB algorithm\nJonathan Zoller ([email protected])\nExample to demonstrate using the control library to determine control\npulses using the ctrlpulseoptim.optimize_pulse_unitary function.\nThe CRAB algorithm is used to optimize pulse shapes to minimize the fidelity\nerror, which is equivalent maximising the fidelity to an optimal value of 1.\nThe system in this example are two qubits, where the interaction can be\ncontrolled. The target is to perform a pure state transfer from a down-down\nstate to an up-up state.\nThe user can experiment with the timeslicing, by means of changing the\nnumber of timeslots and/or total time for the evolution.\nDifferent initial (starting) pulse types can be tried as well as\nboundaries on the control and a smooth ramping of the pulse when\nswitching the control on and off (at the beginning and close to the end).\nThe initial and final pulses are displayed in a plot\nAn in depth discussion of using methods of this type can be found in [1,2]", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport datetime\n\nfrom qutip import Qobj, identity, sigmax, sigmaz, tensor\nimport random\nimport qutip.logging_utils as logging\nlogger = logging.get_logger()\n#Set this to None or logging.WARN for 'quiet' execution\nlog_level = logging.INFO\n#QuTiP control modules\nimport qutip.control.pulseoptim as cpo\n\nexample_name = '2qubitInteract'", "Defining the physics\nThe dynamics of the system are governed by the combined Hamiltonian:\nH(t) = H_d + sum(u1(t)Hc1 + u2(t)Hc2 + ....)\nThat is the time-dependent Hamiltonian has a constant part (called here the drift) and time vary parts, which are the control Hamiltonians scaled by some functions u_j(t) known as control amplitudes\nIn this example we describe an Ising like Hamiltonian, encompassing random coefficients in the drift part and controlling the interaction of the qubits:\n$ \\hat{H} = \\sum_{i=1}^2 \\alpha_i \\sigma_x^i + \\beta_i \\sigma_z^i + u(t) \\cdot \\sigma_z \\otimes \\sigma_z $\nInitial $\\newcommand{\\ket}[1]{\\left|{#1}\\right\\rangle} \\ket{\\psi_0} = \\text{U_0}$ and target state $\\ket{\\psi_t} = \\text{U_targ}$ are chosen to be:\n$ \\ket{\\psi_0} = \\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\end{pmatrix}$\n$ \\ket{\\psi_t} = \\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\end{pmatrix}$", "random.seed(20)\nalpha = [random.random(),random.random()]\nbeta = [random.random(),random.random()]\n\nSx = sigmax()\nSz = sigmaz()\n\nH_d = (alpha[0]*tensor(Sx,identity(2)) + \n alpha[1]*tensor(identity(2),Sx) +\n beta[0]*tensor(Sz,identity(2)) +\n beta[1]*tensor(identity(2),Sz))\nH_c = [tensor(Sz,Sz)]\n# Number of ctrls\nn_ctrls = len(H_c)\n\nq1_0 = q2_0 = Qobj([[1], [0]])\nq1_targ = q2_targ = Qobj([[0], [1]])\n\npsi_0 = tensor(q1_0, q2_0)\npsi_targ = tensor(q1_targ, q2_targ)", "Defining the time evolution parameters\nTo solve the evolution the control amplitudes are considered constant within piecewise timeslots, hence the evolution during the timeslot can be calculated using U(t_k) = expm(-iH(t_k)dt). 
Combining these for all the timeslots gives the approximation to the evolution from an initial state $\\psi_0$ at t=0 to U(T) at the t=evo_time.\nThe number of timeslots and evo_time have to be chosen such that the timeslot durations (dt) are small compared with the dynamics of the system.", "# Number of time slots\nn_ts = 100\n# Time allowed for the evolution\nevo_time = 18", "Set the conditions which will cause the pulse optimisation to terminate\nAt each iteration the fidelity of the evolution is tested by comparaing the calculated evolution U(T) with the target U_targ. For unitary systems such as this one this is typically:\nf = normalise(overlap(U(T), U_targ)). The maximum fidelity (for a unitary system) calculated this way would be 1, and hence the error is calculated as fid_err = 1 - fidelity. As such the optimisation is considered completed when the fid_err falls below such a target value.\nIn some cases the optimisation either gets stuck in some local minima, or the fid_err_targ is just not achievable, therefore some limits are set to the time/effort allowed to find a solution.\nThe algorithm uses the CRAB algorithm to determine optimized coefficients that lead to a minimal fidelity error. The underlying optimization procedure is set to be the Nelder-Mead downhill simplex. Therefore, when all vertices shrink together, the algorithm will terminate.", "# Fidelity error target\nfid_err_targ = 1e-3\n# Maximum iterations for the optisation algorithm\nmax_iter = 500\n# Maximum (elapsed) time allowed in seconds\nmax_wall_time = 120", "Set the initial pulse type\nThe control amplitudes must be set to some initial values. Typically these are just random values for each control in each timeslot. These do however result in erratic optimised pulses. For this example, a solution will be found for any initial pulse, and so it can be interesting to look at the other initial pulse alternatives.", "# pulse type alternatives: RND|ZERO|LIN|SINE|SQUARE|SAW|TRIANGLE|\np_type = 'DEF'", "Give an extension for output files", "#Set to None to suppress output files\nf_ext = \"{}_n_ts{}_ptype{}.txt\".format(example_name, n_ts, p_type)", "Run the optimisation\nIn this step, the actual optimization is performed. At each iteration the Nelder-Mead algorithm calculates a new set of coefficients that improves the currently worst set among all set of coefficients. For details see [1,2] and a textbook about static search methods. The algorithm continues until one of the termination conditions defined above has been reached. If undesired results are achieved, rerun the algorithm and/or try to change the number of coefficients to be optimized for, as this is a very crucial parameter.", "result = cpo.opt_pulse_crab_unitary(H_d, H_c, psi_0, psi_targ, n_ts, evo_time, \n fid_err_targ=fid_err_targ, \n max_iter=max_iter, max_wall_time=max_wall_time, \n init_coeff_scaling=5.0, num_coeffs=5, \n method_params={'xtol':1e-3},\n guess_pulse_type=None, guess_pulse_action='modulate',\n out_file_ext=f_ext,\n log_level=log_level, gen_stats=True)", "Report the results\nFirstly the performace statistics are reported, which gives a breakdown of the processing times. In this example it can be seen that the majority of time is spent calculating the propagators, i.e. 
exponentiating the combined Hamiltonian.\nThe optimised U(T) is reported as the 'final evolution', which is essentially the string representation of the Qobj that holds the full time evolution at the point when the optimisation is terminated.\nThe key information is in the summary (given last). Here the final fidelity is reported and the reason for termination of the algorithm.", "result.stats.report()\nprint(\"Final evolution\\n{}\\n\".format(result.evo_full_final))\nprint(\"********* Summary *****************\")\nprint(\"Final fidelity error {}\".format(result.fid_err))\nprint(\"Final gradient normal {}\".format(result.grad_norm_final))\nprint(\"Terminated due to {}\".format(result.termination_reason))\nprint(\"Number of iterations {}\".format(result.num_iter))\nprint(\"Completed in {} HH:MM:SS.US\".format(\n datetime.timedelta(seconds=result.wall_time)))", "Plot the initial and final amplitudes\nHere the (random) starting pulse is plotted along with the pulse (control amplitudes) that was found to produce the target gate evolution to within the specified error.", "fig1 = plt.figure()\nax1 = fig1.add_subplot(2, 1, 1)\nax1.set_title(\"Initial Control amps\")\nax1.set_ylabel(\"Control amplitude\")\nax1.step(result.time, \n np.hstack((result.initial_amps[:, 0], result.initial_amps[-1, 0])), \n where='post')\n\nax2 = fig1.add_subplot(2, 1, 2)\nax2.set_title(\"Optimised Control Amplitudes\")\nax2.set_xlabel(\"Time\")\nax2.set_ylabel(\"Control amplitude\")\nax2.step(result.time, \n np.hstack((result.final_amps[:, 0], result.final_amps[-1, 0])), \n where='post')\nplt.tight_layout()\nplt.show()", "Versions", "from qutip.ipynbtools import version_table\n\nversion_table()", "References\n[1] Doria, P., Calarco, T. & Montangero, S.: Optimal Control Technique for Many-Body Quantum Dynamics. Phys. Rev. Lett. 106, 1–4 (2011).\n[2] Caneva, T., Calarco, T. & Montangero, S.: Chopped random-basis quantum optimization. Phys. Rev. A - At. Mol. Opt. Phys. 84, (2011)." ]
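The piecewise-constant evolution described above, U(t_k) = expm(-i H(t_k) dt) chained over all timeslots, can be illustrated without the control library. The sketch below uses plain NumPy/SciPy with the same two-qubit Hamiltonian structure (the alpha and beta values here are placeholders, not the seeded random numbers of the notebook) and evaluates the state-transfer fidelity of an arbitrary trial pulse; qutip's optimisers do this far more efficiently internally.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)

# Drift and control Hamiltonians with the structure used in the notebook
# (placeholder coefficients, chosen only for illustration).
alpha, beta = (0.3, 0.7), (0.5, 0.2)
H_d = (alpha[0] * np.kron(sx, i2) + alpha[1] * np.kron(i2, sx)
       + beta[0] * np.kron(sz, i2) + beta[1] * np.kron(i2, sz))
H_c = np.kron(sz, sz)

psi_0 = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)     # |down, down>
psi_targ = np.kron([0.0, 1.0], [0.0, 1.0]).astype(complex)  # |up, up>

def evolve(amps, evo_time):
    # Chain the piecewise-constant propagators U_k = expm(-i (H_d + u_k H_c) dt).
    dt = evo_time / len(amps)
    psi = psi_0.copy()
    for u in amps:
        psi = expm(-1j * (H_d + u * H_c) * dt) @ psi
    return psi

amps = np.random.uniform(-1.0, 1.0, 100)   # an arbitrary, unoptimised trial pulse
fidelity = np.abs(np.vdot(psi_targ, evolve(amps, 18.0))) ** 2
print("state-transfer fidelity:", fidelity)
```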
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
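The CRAB approach cited above parameterises the pulse in a small, randomised, truncated Fourier basis and optimises only those expansion coefficients (here with Nelder-Mead). The following is a hedged sketch of that idea; qutip's own basis, scaling and guess-pulse handling may differ.

```python
import numpy as np

def crab_pulse(coeffs, times, evo_time, seed=None):
    # Chopped random basis: pairs of coefficients (a_k, b_k) weight sin/cos
    # terms whose frequencies are randomly detuned harmonics of 2*pi/evo_time.
    rng = np.random.default_rng(seed)
    n_pairs = len(coeffs) // 2
    freqs = 2 * np.pi / evo_time * (np.arange(1, n_pairs + 1) + rng.uniform(-0.5, 0.5, n_pairs))
    pulse = np.zeros_like(times, dtype=float)
    for k in range(n_pairs):
        pulse += (coeffs[2 * k] * np.sin(freqs[k] * times)
                  + coeffs[2 * k + 1] * np.cos(freqs[k] * times))
    return pulse

times = np.linspace(0.0, 18.0, 100, endpoint=False)   # matches n_ts=100, evo_time=18
coeffs = np.random.uniform(-1.0, 1.0, 10)             # 5 (sin, cos) pairs, as in num_coeffs=5
print(crab_pulse(coeffs, times, 18.0, seed=0)[:5])
```

A simplex search over coeffs that minimises the fidelity error of the resulting evolution is then all the optimisation loop has to do.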
Heroes-Academy/OOP_Spring_2016
notebooks/giordani/Python_3_OOP_Part_6__Abstract_Base_Classes.ipynb
mit
[ "The Inspection Club\nAs you know, Python leverages polymorphism at its maximum by dealing only with generic references to objects. This makes OOP not an addition to the language but part of its structure from the ground up. Moreover, Python pushes the EAFP appoach, which tries to avoid direct inspection of objects as much as possible.\nIt is however very interesting to read what Guido van Rossum says in PEP 3119: Invocation means interacting with an object by invoking its methods. Usually this is combined with polymorphism, so that invoking a given method may run different code depending on the type of an object. Inspection means the ability for external code (outside of the object's methods) to examine the type or properties of that object, and make decisions on how to treat that object based on that information. [...] In classical OOP theory, invocation is the preferred usage pattern, and inspection is actively discouraged, being considered a relic of an earlier, procedural programming style. However, in practice this view is simply too dogmatic and inflexible, and leads to a kind of design rigidity that is very much at odds with the dynamic nature of a language like Python.\nThe author of Python recognizes that forcing the use of a pure polymorphic approach leads sometimes to solutions that are too complex or even incorrect. In this section I want to show some of the problems that can arise from a pure polymorphic approach and introduce Abstract Base Classes, which aim to solve them. I strongly suggest to read PEP 3119 (as for any other PEP) since it contains a deeper and better explanation of the whole matter. Indeed I think that this PEP is so well written that any further explanation is hardly needed. I am however used to write explanations to check how much I understood about the topic, so I am going to try it this time too.\nE.A.F.P the Extra Test Trial\nThe EAFP coding style requires you to trust the incoming objects to provide the attributes and methods you need, and to manage the possible exceptions, if you know how to do it. Sometimes, however, you need to test if the incoming object matches a complex behaviour. For example, you could be interested in testing if the object acts like a list, but you quickly realize that the amount of methods a list provides is very big and this could lead to odd EAFP code like\npython\ntry:\n obj.append\n obj.count\n obj.extend\n obj.index\n obj.insert\n [...]\nexcept AttributeError:\n [...]\nwhere the methods of the list type are accessed (not called) just to force the object to raise the AttributeError exception if they are not present. This code, however, is not only ugly but also wrong. If you recall the \"Enter the Composition\" section of the third post of this series, you know that in Python you can always customize the __getattr__() method, which is called whenever the requested attribute is not found in the object. So I could write a class that passes the test but actually does not act like a list\n``` python\nclass FakeList:\n def fakemethod(self):\n pass\ndef __getattr__(self, name):\n if name in ['append', 'count', 'extend', 'index', 'insert', ...]:\n return self.fakemethod\n\n```\nThis is obviously just an example, and no one will ever write such a class, but this demonstrates that just accessing methods does not guarantee that a class acts like the one we are expecting.\nThere are many examples that could be done leveraging the highly dynamic nature of Python and its rich object model. 
I would summarize them by saying that sometimes you'd better to check the type of the incoming object.\nIn Python you can obtain the type of an object using the type() built-in function, but to check it you'd better use isinstance(), which returns a boolean value. Let us see an example before moving on", "isinstance([], list)\n\nisinstance(1, int)\n\nclass Door:\n pass\n\nd = Door()\nisinstance(d, Door)\n\nclass EnhancedDoor(Door):\n pass\n\ned = EnhancedDoor()\nisinstance(ed, EnhancedDoor)\n\nisinstance(ed, Door)", "As you can see the function can also walk the class hierarchy, so the check is not so trivial like the one you would obtain by directly using type().\nThe isinstance() function, however, does not completely solve the problem. If we write a class that actually acts like a list but does not inherit from it, isinstance() does not recognize the fact that the two may be considered the same thing. The following code returns False regardless the content of the MyList class", "class MyList:\n pass\n\nml = MyList()\nisinstance(ml, list)", "since isinstance() does not check the content of the class or its behaviour, it just consider the class and its ancestors.\nThe problem, thus, may be summed up with the following question: what is the best way to test that an object exposes a given interface? Here, the word interface is used for its natural meaning, without any reference to other programming solutions, which however address the same problem.\nA good way to address the problem could be to write inside an attribute of the object the list of interfaces it promises to implement, and to agree that any time we want to test the behaviour of an object we simply have to check the content of this attribute. This is exactly the path followed by Python, and it is very important to understand that the whole system is just about a promised behaviour.\nThe solution proposed through PEP 3119 is, in my opinion, very simple and elegant, and it perfectly fits the nature of Python, where things are usually agreed rather than being enforced. Not only, the solution follows the spirit of polymorphism, where information is provided by the object itself and not extracted by the calling code.\nIn the next sections I am going to try and describe this solution in its main building blocks. The matter is complex so my explanation will lack some details: please refer to the forementioned PEP 3119 for a complete description.\nWho Framed the Metaclasses\nAs already described, Python provides two built-ins to inspect objects and classes, which are isinstance() and issubclass() and it would be desirable that a solution to the inspection problem allows the programmer to go on with using those two functions.\nThis means that we need to find a way to inject the \"behaviour promise\" into both classes and instances. This is the reason why metaclasses come in play. Recall what we said about them in the fifth issue of this series: metaclasses are the classes used to build classes, which means that they are the preferred way to change the structure of a class, and, in consequence, of its instances.\nAnother way to do the same job would be to leverage the inheritance mechanism, injecting the behaviour through a dedicated parent class. This solution has many downsides, which I'm am not going to detail. It is enough to say that affecting the class hierarchy may lead to complex situations or subtle bugs. 
Metaclasses may provide here a different entry point for the introduction of a \"virtual base class\" (as PEP 3119 specifies, this is not the same concept as in C++).\nOverriding Places\nAs said, isinstance() and issubclass() are built-in functions, not object methods, so we cannot simply override them providing a different implementation in a given class. So the first part of the solution is to change the behaviour of those two functions to first check if the class or the instance contain a special method, which is __instancecheck__() for isinstance() and __subclasscheck__() for issubclass(). So both built-ins try to run the respective special method, reverting to the standard algorithm if it is not present.\nA note about naming. Methods must accept the object they belong to as the first argument, so the two special methods shall have the form\n``` python\ndef instancecheck(cls, inst):\n [...]\ndef subclasscheck(cls, sub):\n [...]\n```\nwhere cls is the class where they are injected, that is the one representing the promised behaviour. The two built-ins, however, have a reversed argument order, where the behaviour comes after the tested object: when you write isinstance([], list) you want to check if the [] instance has the list behaviour. This is the reason behind the name choice: just calling the methods __isinstance__() and __issubclass__() and passing arguments in a reversed order would have been confusing.\nThis is ABC\nThe proposed solution is thus called Abstract Base Classes, as it provides a way to attach to a concrete class a virtual class with the only purpose of signaling a promised behaviour to anyone inspecting it with isinstance() or issubclass().\nTo help programmers implement Abstract Base Classes, the standard library has been given an abc module, thet contains the ABCMeta class (and other facilities). This class is the one that implements __instancecheck__() and __subclasscheck__() and shall be used as a metaclass to augment a standard class. This latter will then be able to register other classes as implementation of its behaviour.\nSounds complex? An example may clarify the whole matter. The one from the official documentation is rather simple:", "from abc import ABCMeta\n\nclass MyABC(metaclass=ABCMeta):\n pass\n\nMyABC.register(tuple)\n\nassert issubclass(tuple, MyABC)\nassert isinstance((), MyABC)", "Here, the MyABC class is provided the ABCMeta metaclass. This puts the two __isinstancecheck__() and __subclasscheck__() methods inside MyABC so that, when issuing isinstance(), what Python actually ececutes is", "d = {'a': 1}\nisinstance(d, MyABC)\n\nMyABC.__class__.__instancecheck__(MyABC, d)\n\nisinstance((), MyABC)\n\nMyABC.__class__.__instancecheck__(MyABC, ())", "After the definition of MyABC we need a way to signal that a given class is an instance of the Abstract Base Class and this happens through the register() method, provided by the ABCMeta metaclass. Calling MyABC.register(tuple) we record inside MyABC the fact that the tuple class shall be identified as a subclass of MyABC itself. This is analogous to saying that tuple inherits from MyABC but not quite the same. As already said registering a class in an Abstract Base Class with register() does not affect the class hierarchy. Indeed, the whole tuple class is unchanged.\nThe current implementation of ABCs stores the registered types inside the _abc_registry attribute. 
Actually, it stores weak references to the registered types (this part is outside the scope of this article, so I'm not detailing it)", "MyABC._abc_registry.data", "Movie Trivia\nSection titles come from the following movies: The Breakfast Club (1985), E.T. the Extra-Terrestrial (1982), Who Framed Roger Rabbit (1988), Trading Places (1983), This is Spinal Tap (1984).\nSources\nYou will find a lot of documentation in this Reddit post. Most of the information contained in this series comes from those sources.\nFeedback\nFeel free to use the blog Google+ page to comment on the post. The GitHub issues page is the best place to submit corrections." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
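The post above motivates ABCs with the problem of testing whether an object "acts like a list" without touching its class hierarchy. Besides register(), ABCMeta also honours a __subclasshook__() classmethod, which lets an ABC accept any class that structurally provides the required methods. A minimal sketch, modelled on the pattern used by the standard library's collections.abc classes and simplified to a single method check:

```python
from abc import ABCMeta

class SupportsAppend(metaclass=ABCMeta):
    """Virtual base class for anything whose MRO defines an append() method."""

    @classmethod
    def __subclasshook__(cls, C):
        # Consulted by ABCMeta's __subclasscheck__: accept classes that
        # provide 'append' without requiring inheritance or register().
        if cls is SupportsAppend:
            if any("append" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

print(issubclass(list, SupportsAppend))   # True: list defines append()
print(isinstance([], SupportsAppend))     # True
print(issubclass(dict, SupportsAppend))   # False: nothing in dict's MRO defines append()
```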
tpin3694/tpin3694.github.io
neural-networks/defining_activation_functions.ipynb
mit
[ "Title: Defining Activation Functions\nSlug: activation-functions\nSummary: A Overview of Implementing Activation Functions in Your Own Neural Network\nDate: 2018-01-1 09:11\nCategory: Neural Networks\nTags: Basics\nAuthors: Thomas Pinder\nActivation functions are an integral part to a neural network, mapping the weighted input to a range of outputs. It is through the use of an activation function that a neural network can model non-linear mappings and consequently, the choice of activation function is important. In this brief summary, the sigmoid, tanh, ReLU and softmax activation functions will be presented along with an implementation. \nPreliminaries\nWith all activation functions, not only is the function itself needed, but also the functions derivative will be needed for back propogation. Some people prefer to define these in seperate functions, however I prefer to have it wrapped up in one function for conciseness. For all of the following activation functions, the NumPy library should be loaded.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Sigmoid\nThe sigmoid function was once the default choice of activation function when building a network and to some extent it still is. By mapping values into a range between 0 and 1 it lacks the beneficial quality of being zero centered - a property that aids gradient descent during back propogation.", "def activation_sigmoid(x, derivative):\n sigmoid_value = 1/(1+np.exp(-x))\n if not derivative:\n return sigmoid_value\n else:\n return sigmoid_value*(1-sigmoid_value)", "When plotted on a range of -5,5, this gives the following shape.", "x_values = np.arange(-5, 6, 0.1)\ny_sigmoid = activation_sigmoid(x_values, derivative=False)\n\nplt.plot(x_values, y_sigmoid)", "Tanh\ntanh is very similar in shape to the sigmoid, however the defining difference is that tanh ranges from -1 to 1, making it zero centered and consequently a very popular choice. Conveniently, tanh is pre-defined in NumPy, however it is still worthwhile wrapping it up in a function in order to define the derivative of tanh.", "def activation_tanh(x, derivative):\n tanh_value = np.tanh(x)\n if not derivative:\n return tanh_value\n else:\n return 1-tanh_value**2\n\ny_tanh = activation_tanh(x_values, derivative = False)\nplt.plot(x_values, y_tanh)", "ReLU\nThe Rectified Linear Unit is another commonly used activation function with a range from 0 to infinity. A major advantage of the ReLU function is that, unlike the sigmoid and tanh, the gradient of the ReLU function does not vanish as the limits are approached. An additionaly benefit of the ReLU is its enhanced computational efficiency as shown by Krizhevsky et. al. who found the ReLU function to be six times faster than tanh.", "def relu_activation(x, derivative):\n if not derivative:\n return x * (x>0)\n else:\n x[x <= 0] = 0 \n x[x > 0] = 1\n return x\n\ny_relu = relu_activation(x_values, derivative=False)\nplt.plot(x_values, y_relu)", "It is probably worth noting, that the leaky ReLU is a closely related function with the only difference being that values < 0 are not completely set to 0, instead multiplied by 0.01.\nSoftmax\nThe final function to be discussed is the softmax, a function typically used in the final layer of a network. The softmax function reduces the value of each neurone in the final layer to a value in the range of 0 and 1, such that all values in the final layer sum to 1. 
The benefit of this is that in a multi-classification problem, the softmax function will assign a probability to each class, allowing for deeper insight into the performance of the network to be obtained through metrics such as top-n error. Note that the softmax is sometimes written without the subtraction of np.max(x); subtracting the maximum stabilises the function, because the exponent in the softmax can otherwise produce a value larger than what Python's floats can represent (roughly 10 followed by 308 zeros).", "def softmax_activation(x):\n exponent = np.exp(x - np.max(x))\n softmax_value = exponent/np.sum(exponent, axis = 0)\n return softmax_value\ny_softmax = softmax_activation(x_values)\nplt.plot(x_values, y_softmax)\nprint(\"The sum of all softmax probabilities can be confirmed as \" + str(np.sum(y_softmax)))", "Conclusion\nThis brief discussion of the main activation functions used in neural networks should provide you with a good understanding of how each function works and the relationships between them. If in doubt, it is generally advisable to build your network using the ReLU function in the hidden layers and the softmax function in your final layer; however, it is often worth trialing different functions to be sure." ]
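The notebook above wraps a derivative into every activation except softmax. Because each softmax output depends on every input, its derivative is a Jacobian matrix rather than an element-wise function. A short sketch for a 1-D input, reusing the softmax_activation function and the NumPy import defined in the cells above:

```python
def softmax_jacobian(x):
    # Jacobian of softmax for a 1-D input: J = diag(s) - s s^T, with s = softmax(x).
    # In practice the full Jacobian is rarely formed: combined with a
    # cross-entropy loss, the gradient simplifies to (s - target).
    s = softmax_activation(x).reshape(-1, 1)
    return np.diagflat(s) - s @ s.T

print(softmax_jacobian(np.array([1.0, 2.0, 3.0])))
```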
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
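The leaky ReLU is described above only in prose. Following the same (x, derivative) interface as the notebook's other activations, and assuming the conventional 0.01 slope it mentions for negative inputs, a sketch could look like this (it reuses np, plt and x_values from the notebook cells):

```python
def leaky_relu_activation(x, derivative, alpha=0.01):
    # Same interface as the notebook's other activation functions; alpha is the
    # 0.01 negative-side slope mentioned in the text (the sub-gradient at 0 is
    # taken to be alpha here, an arbitrary but common choice).
    x = np.asarray(x, dtype=float)
    if not derivative:
        return np.where(x > 0, x, alpha * x)
    else:
        return np.where(x > 0, 1.0, alpha)

y_leaky = leaky_relu_activation(x_values, derivative=False)
plt.plot(x_values, y_leaky)
```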
jhaber-zz/Charter-school-identities
scripts/analysis_prelim.ipynb
mit
[ "<p style=\"text-align: center;\"> Charter school identities and outcomes in the accountability era:<br/> Preliminary results\n<p style=\"text-align: center;\">April 19th, 2017<br/>By Jaren Haber, PhD Candidate<break/>Dept. of Sociology, UC Berkeley\n\n<p style=\"text-align: center;\">![alt text](http://jaypgreene.files.wordpress.com/2009/12/explosion_600x625600x625.jpg \"Old U.S. Map of charter schools by state\")\n<p style=\"text-align: center;\">(this out-dated graphic courtesy of U.S. News & World Report, 2009)\n\n## Research questions\n**How are charter schools different from each other in terms of ideology? How do these differences shape their survival and their outcomes, and what does this reveal about current educational policy?** \n\n## The corpus\n- Website self-descriptions of all **6,753 charter schools** open in 2014-15 (identified using the NCES Public School Universe Survey)\n- Charter school websites are a publicly visible proclamation of identity attempting to impress parents, regulators, etc.\n- This study the first to use this contemporary, comprehensive data source on U.S. charter school identities\n- Me & research team working on using BeautifulSoup and requests.get to webscrape the full sample\n\n### Motivation\n- Too much focus on test scores in education, too little on organizational aspects\n- Are charter schools innovative? How?\n- How does educational policy shape ed. philosophy? Organization? Outcomes?\n- No one has studied charters' public image as expressed in their OWN words\n\n### Methods\n- NLP: Word frequencies, distinctive words, etc.\n- Supervised: Custom dictionaries\n- Unsupervised: Topic models, word embeddings\n- Later: statistical regression to test, e.g., how progressivist schools in liberal communities have higher performance than they do in other places\n\n## Preliminary analysis: website self-descriptions of non-random sample of 196 schools\n- Early-stage sample: NOT representative!\n- About half randomly selected, half tracked down (many through Internet Archive) because of missing URLs\n- Closed schools over-represented\n\n## Preliminary conclusions: \n### Word counts:\n- Website self-descriptions for schools in mid-sized cities and suburbs tend to be longest, followed by other urban and suburban schools, then schools in towns, and shortest tends to be rural schools\n- Charter schools in cities and suburbs have the highest textual redundancy (lowest ratio of types to tokens)\n\n### Word embeddings:\n- The two educational philosophies I'm interested in--**progressivism** and **essentialism**--can be distinguished using semantic vectors\n- Useful way for creating and checking my dictionaries\n\n### Topic modeling:\n- Urban charter schools' websites emphasize **GOALS** (topic 0)\n- Suburban charter schools' websites emphasize **CURRICULUM** (topic 1) in addition to goals\n\n## Next steps:\n- Working with custom dictionaries, POS tagging\n- Webscraping and parsing HTML to get full sample\n- Match website text with data on test scores and community characteristics (e.g., race, class, political leanings) --> test hypotheses with statistical regression<br/><br/>\n- **More long-term**: Collect longitudinal mission statement data from the Internet Archive --> look at survival and geographic dispersion of identity categories over time (especially pre-NCLB if possible)", "# The keyword categories to help parse website text:\nmission = ['mission',' vision ', 'vision:', 'mission:', 'our purpose', 'our ideals', 'ideals:', 'our cause', 'cause:', 'goals', 
'objective']\ncurriculum = ['curriculum', 'curricular', 'program', 'method', 'pedagogy', 'pedagogical', 'approach', 'model', 'system', 'structure']\nphilosophy = ['philosophy', 'philosophical', 'beliefs', 'believe', 'principles', 'creed', 'credo', 'value', 'moral']\nhistory = ['history', 'our story', 'the story', 'school story', 'background', 'founding', 'founded', 'established', 'establishment', 'our school began', 'we began', 'doors opened', 'school opened']\ngeneral = ['about us', 'our school', 'who we are', 'overview', 'general information', 'our identity', 'profile', 'highlights']", "Initializing Python", "#!/usr/bin/env python\n# -*- coding: UTF-8\n\n# IMPORTING KEY PACKAGES\nimport csv # for reading in CSVs and turning them into dictionaries\nimport re # for regular expressions\nimport os # for navigating file trees\nimport nltk # for natural language processing tools\nimport pandas # for working with dataframes\nimport numpy as np # for working with numbers\n\n# FOR CLEANING, TOKENIZING, AND STEMMING THE TEXT\nfrom nltk import word_tokenize, sent_tokenize # widely used text tokenizer\nfrom nltk.stem.porter import PorterStemmer # an approximate method of stemming words (it just cuts off the ends)\nfrom nltk.corpus import stopwords # for one method of eliminating stop words, to clean the text\nstopenglish = list(stopwords.words(\"english\")) # assign the string of english stopwords to a variable and turn it into a list\nimport string # for one method of eliminating punctuation\npunctuations = list(string.punctuation) # assign the string of common punctuation symbols to a variable and turn it into a list\n\n# FOR ANALYZING WITH THE TEXT\nfrom sklearn.feature_extraction.text import CountVectorizer # to work with document-term matrices, especially\ncountvec = CountVectorizer(tokenizer=nltk.word_tokenize)\nfrom sklearn.feature_extraction.text import TfidfVectorizer # for creating TF-IDFs\ntfidfvec = TfidfVectorizer()\nfrom sklearn.decomposition import LatentDirichletAllocation # for topic modeling\n\nimport gensim # for word embedding models\nfrom scipy.spatial.distance import cosine # for cosine similarity\nfrom sklearn.metrics import pairwise # for pairwise similarity\nfrom sklearn.manifold import MDS, TSNE # for multi-dimensional scaling\n\n# FOR VISUALIZATIONS\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# Visualization parameters\n% pylab inline \n% matplotlib inline\nmatplotlib.style.use('ggplot')", "Reading in preliminary data", "sample = [] # make empty list\nwith open('../data_URAP_etc/mission_data_prelim.csv', 'r', encoding = 'Latin-1')\\\nas csvfile: # open file \n reader = csv.DictReader(csvfile) # create a reader\n for row in reader: # loop through rows\n sample.append(row) # append each row to the list\n\nsample[0]\n\n# Take a look at the most important contents and the variables list\n# in our sample (a list of dictionaries)--let's look at just the first entry\nprint(sample[1][\"SCHNAM\"], \"\\n\", sample[1][\"URL\"], \"\\n\", sample[1][\"WEBTEXT\"], \"\\n\")\nprint(sample[1].keys()) # look at all the variables!\n\n# Read the data in as a pandas dataframe\ndf = pandas.read_csv(\"../data_URAP_etc/mission_data_prelim.csv\", encoding = 'Latin-1')\ndf = df.dropna(subset=[\"WEBTEXT\"]) # drop any schools with no webtext that might have snuck in (none currently)\n\n# Add additional variables for analysis:\n# PCTETH = percentage of enrolled students belonging to a racial minority\n# this includes American Indian, Asian, Hispanic, Black, Hawaiian, or Pacific 
Islander\ndf[\"PCTETH\"] = (df[\"AM\"] + df[\"ASIAN\"] + df[\"HISP\"] + df[\"BLACK\"] + df[\"PACIFIC\"]) / df[\"MEMBER\"]\n\ndf[\"STR\"] = df[\"MEMBER\"] / df[\"FTE\"] # Student/teacher ratio\ndf[\"PCTFRPL\"] = df[\"TOTFRL\"] / df[\"MEMBER\"] # Percent of students receiving FRPL\n\n# Another interesting variable: \n# TYPE = type of school, where 1 = regular, 2 = special ed, 3 = vocational, 4 = other/alternative, 5 = reportable program\n\n## Print the webtext from the first school in the dataframe\nprint(df.iloc[0][\"WEBTEXT\"])", "Descriptive statistics\nHow urban proximity is coded: Lower number = more urban (closer to large city)\nMore specifically, it uses two digits with distinct meanings: \n- the first digit: \n - 1 = city\n - 2 = suburb\n - 3 = town\n - 4 = rural\n- the second digit:\n - 1 = large or fringe\n - 2 = mid-size or distant\n - 3 = small/remote", "print(df.describe()) # get descriptive statistics for all numerical columns\nprint()\nprint(df['ULOCAL'].value_counts()) # frequency counts for categorical data\nprint()\nprint(df['LEVEL'].value_counts()) # treat grade range served as categorical\n# Codes for level/ grade range served: 3 = High school, 2 = Middle school, 1 = Elementary, 4 = Other)\nprint()\nprint(df['LSTATE'].mode()) # find the most common state represented in these data\nprint(df['ULOCAL'].mode()) # find the most urbanicity represented in these data\n# print(df['FTE']).mean() # What's the average number of full-time employees by school?\n# print(df['STR']).mean() # And the average student-teacher ratio?\n\n# here's the number of schools from each state, in a graph:\ngrouped_state = df.groupby('LSTATE')\ngrouped_state['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Schools mostly in CA, TX, AZ, FL--similar to national trend')\nplt.show()\n\n# and here's the number of schools in each urban category, in a graph:\ngrouped_urban = df.groupby('ULOCAL')\ngrouped_urban['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Most schools are in large cities or large suburbs')\nplt.show()", "What these numbers say about the charter schools in the sample:\n\nMost are located in large cities, followed by large suburbs, then medium and small city, and then rural.\nThe means for percent minorities and students receiving free- or reduced-price lunch are both about 60%.\nMost are in CA, TX, AZ, and FL\nMost of the schools in the sample are primary schools\n\nThis means that the sample reflects national averages. In that sense, this sample isn't so bad.\nCleaning, tokenizing, and stemming the text", "# Now we clean the webtext by rendering each word lower-case then removing punctuation. \ndf['webtext_lc'] = df['WEBTEXT'].str.lower() # make the webtext lower case\ndf['webtokens'] = df['webtext_lc'].apply(nltk.word_tokenize) # tokenize the lower-case webtext by word\ndf['webtokens_nopunct'] = df['webtokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)]) # remove punctuation\n\nprint(df.iloc[0][\"webtokens\"]) # the tokenized text without punctuation\n\n# Now we remove stopwords and stem. 
This will improve the results\ndf['webtokens_clean'] = df['webtokens_nopunct'].apply(lambda x: [word for word in x if word not in list(stopenglish)]) # remove stopwords\ndf['webtokens_stemmed'] = df['webtokens_clean'].apply(lambda x: [PorterStemmer().stem(word) for word in x])\n\n# Some analyses require a string version of the webtext without punctuation or numbers.\n# To get this, we join together the cleaned and stemmed tokens created above, and then remove numbers and punctuation:\ndf['webtext_stemmed'] = df['webtokens_stemmed'].apply(lambda x: ' '.join(char for char in x))\ndf['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if char not in punctuations))\ndf['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if not char.isdigit()))\n\ndf['webtext_stemmed'][0]\n\n# Some analyses require tokenized sentences. I'll do this with the list of dictionaries.\n# I'll use cleaned, tokenized sentences (with stopwords) to create both a dictionary variable and a separate list for word2vec\n\nwords_by_sentence = [] # initialize the list of tokenized sentences as an empty list\nfor school in sample:\n school[\"sent_toksclean\"] = []\n school[\"sent_tokens\"] = [word_tokenize(sentence) for sentence in sent_tokenize(school[\"WEBTEXT\"])] \n for sent in school[\"sent_tokens\"]:\n school[\"sent_toksclean\"].append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)]) # for each word: stem, lower-case, and remove punctuations\n words_by_sentence.append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)])\n\nwords_by_sentence[:2]", "Counting document lengths", "# We can also count document lengths. I'll mostly use the version with punctuation removed but including stopwords,\n# because stopwords are also part of these schools' public image/ self-presentation to potential parents, regulators, etc.\n\ndf['webstem_count'] = df['webtokens_stemmed'].apply(len) # find word count without stopwords or punctuation\ndf['webpunct_count'] = df['webtokens_nopunct'].apply(len) # find length with stopwords still in there (but no punctuation)\ndf['webclean_count'] = df['webtokens_clean'].apply(len) # find word count without stopwords or punctuation\n\n# For which urban status are website self-description the longest?\nprint(grouped_urban['webpunct_count'].mean().sort_values(ascending=False))\n\n# here's the mean website self-description word count for schools grouped by urban proximity, in a graph:\ngrouped_urban['webpunct_count'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Schools in mid-sized cities and suburbs have longer self-descriptions than in fringe areas', yerr = grouped_state[\"webpunct_count\"].std())\nplt.show()\n\n# Look at 'FTE' (proxy for # administrators) clustered by urban proximity and whether it explains this\ngrouped_urban['FTE'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Title', yerr = grouped_state[\"FTE\"].std())\nplt.show()\n\n# Now let's calculate the type-token ratio (TTR) for each school, which compares\n# the number of types (unique words used) with the number of words (including repetitions of words).\n\ndf['numtypes'] = df['webtokens_nopunct'].apply(lambda x: len(set(x))) # this is the number of unique words per site\ndf['TTR'] = df['numtypes'] / df['webpunct_count'] # calculate TTR\n\n# here's the mean TTR for schools grouped by urban category:\ngrouped_urban = 
df.groupby('ULOCAL')\ngrouped_urban['TTR'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Charters in cities and suburbs have higher textual redundancy than in fringe areas', yerr = grouped_urban[\"TTR\"].std())\nplt.show()", "(Excessively) Frequent words", "# First, aggregate all the cleaned webtext:\nwebtext_all = []\ndf['webtokens_clean'].apply(lambda x: [webtext_all.append(word) for word in x])\nwebtext_all[:20]\n\n# Now apply the nltk function FreqDist to count the number of times each token occurs.\nword_frequency = nltk.FreqDist(webtext_all)\n\n#print out the 50 most frequent words using the function most_common\nprint(word_frequency.most_common(50))", "### These are prolific, ritual, empty words and will be excluded from topic models!\nDistinctive words (mostly place names)", "sklearn_dtm = countvec.fit_transform(df['webtext_stemmed'])\nprint(sklearn_dtm)\n\n# What are some of the words in the DTM? \nprint(countvec.get_feature_names()[:10])\n\n# now we can create the dtm, but with cells weigthed by the tf-idf score.\ndtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.webtext_stemmed).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)\n\ndtm_tfidf_df[:20] # let's take a look!\n\n# What are the 20 words with the highest TF-IDF scores?\nprint(dtm_tfidf_df.max().sort_values(ascending=False)[:20])", "Like the frequent words above, these highly \"unique\" words are empty of meaning and will be excluded from topic models!\nWord Embeddings with word2vec\nWord2Vec features\n<ul>\n<li>Size: Number of dimensions for word embedding model</li>\n<li>Window: Number of context words to observe in each direction</li>\n<li>min_count: Minimum frequency for words included in model</li>\n<li>sg (Skip-Gram): '0' indicates CBOW model; '1' indicates Skip-Gram</li>\n<li>Alpha: Learning rate (initial); prevents model from over-correcting, enables finer tuning</li>\n<li>Iterations: Number of passes through dataset</li>\n<li>Batch Size: Number of words to sample from data during each pass</li>\n<li>Worker: Set the 'worker' option to ensure reproducibility</li>\n</ul>", "# train the model, using a minimum of 5 words\nmodel = gensim.models.Word2Vec(words_by_sentence, size=100, window=5, \\\n min_count=2, sg=1, alpha=0.025, iter=5, batch_words=10000, workers=1)\n\n# dictionary of words in model (may not work for old gensim)\n# print(len(model.vocab))\n# model.vocab\n\n# Find cosine distance between two given word vectors\nprint(model.similarity('college-prep','align')) # these two are close to essentialism\nprint(model.similarity('emot', 'curios')) # these two are close to progressivism\n\n# create some rough dictionaries for our contrasting educational philosophies\nessentialism = ['excel', 'perform', 'prep', 'rigor', 'standard', 'align', 'comprehens', 'content', \\\n 'data-driven', 'market', 'research', 'research-bas', 'program', 'standards-bas']\nprogressivism = ['inquir', 'curios', 'project', 'teamwork', 'social', 'emot', 'reflect', 'creat',\\\n 'ethic', 'independ', 'discov', 'deep', 'problem-solv', 'natur']\n\n# Let's look at two vectors that demonstrate the binary between these philosophies: align and emot\nprint(model.most_similar('align')) # words core to essentialism\nprint()\nprint(model.most_similar('emot')) # words core to progressivism\n\nprint(model.most_similar('emot')) # words core to progressivism\n\n# Let's work with the binary between progressivism vs. 
essentialism\n# first let's find the 50 words closest to each philosophy using the two 14-term dictionaries defined above\nprog_words = model.most_similar(progressivism, topn=50)\nprog_words = [word for word, similarity in prog_words]\nfor word in progressivism:\n prog_words.append(word)\nprint(prog_words[:20])\n\ness_words = model.most_similar(essentialism, topn=50) # now let's get the 50 most similar words for our essentialist dictionary\ness_words = [word for word, similarity in ess_words]\nfor word in essentialism:\n ess_words.append(word)\nprint(ess_words[:20])\n\n# construct an combined dictionary\nphil_words = ess_words + prog_words\n\n# preparing for visualizing this binary with word2vec\nx = [model.similarity('emot', word) for word in phil_words]\ny = [model.similarity('align', word) for word in phil_words]\n\n# here's a visual of the progressivism/essentialism binary: \n# top-left half is essentialism, bottom-right half is progressivism\n_, ax = plt.subplots(figsize=(20,20))\nax.scatter(x, y, alpha=1, color='b')\nfor i in range(len(phil_words)):\n ax.annotate(phil_words[i], (x[i], y[i]))\nax.set_xlim(.635, 1.005)\nax.set_ylim(.635, 1.005)\nplt.plot([0, 1], [0, 1], linestyle='--');", "Binary of essentialist (top-left) and progressivist (bottom-right) word vectors\nTopic Modeling with scikit-learn\n\nFor documentation on this topic modeling (TM) package, which uses Latent Dirichlet Allocation (LDA), see here.\nAnd for documentation on the vectorizer package, CountVectorizer from scikit-learn, see here.", "####Adopted From: \n#Author: Olivier Grisel <[email protected]>\n# Lars Buitinck\n# Chyi-Kwei Yau <[email protected]>\n# License: BSD 3 clause\n\n# Initialize the variables needed for the topic models\nn_samples = 2000\nn_topics = 3\nn_top_words = 50\n\n# Create helper function that prints out the top words for each topic in a pretty way\ndef print_top_words(model, feature_names, n_top_words):\n for topic_idx, topic in enumerate(model.components_):\n print(\"\\nTopic #%d:\" % topic_idx)\n print(\" \".join([feature_names[i]\n for i in topic.argsort()[:-n_top_words - 1:-1]]))\n print()\n\n# Vectorize our text using CountVectorizer\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=70, min_df=4,\n max_features=None,\n stop_words=stopenglish, lowercase=1\n )\n\ntf = tf_vectorizer.fit_transform(df.WEBTEXT)\n\nprint(\"Fitting LDA models with tf features, \"\n \"n_samples=%d and n_topics=%d...\"\n % (n_samples, n_topics))\n\n# define the lda function, with desired options\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=20,\n learning_method='online',\n learning_offset=80.,\n total_samples=n_samples,\n random_state=0)\n#fit the model\nlda.fit(tf)\n\n# print the top words per topic, using the function defined above.\n\nprint(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)", "These topics seem to mean:\n- topic 0 relates to GOALS,\n- topic 1 relates to CURRICULUM, and \n- topic 2 relates to PHILOSOPHY or learning process (but this topic less clear/ more mottled)", "# Preparation for looking at distribution of topics over schools\ntopic_dist = lda.transform(tf) # transpose topic distribution\ntopic_dist_df = pandas.DataFrame(topic_dist) # turn into a df\ndf_w_topics = topic_dist_df.join(df) # merge with charter MS dataframe\ndf_w_topics[:20] # check out the merged df with topics!\n\ntopic_columns = range(0,n_topics) # Set numerical range of topic columns for 
use in analyses, using n_topics from above\n\n# Which schools are weighted highest for topic 0? How do they trend with regard to urban proximity and student class? \nprint(df_w_topics[['LSTATE', 'ULOCAL', 'PCTETH', 'PCTFRPL', 0, 1, 2]].sort_values(by=[0], ascending=False))\n\n# Preparation for comparing total number of words aligned with each topic\n# To weight each topic by its prevalenced in the corpus, multiply each topic by the word count from above\n\ncol_list = []\nfor num in topic_columns:\n col = \"%d_wc\" % num\n col_list.append(col)\n df_w_topics[col] = df_w_topics[num] * df_w_topics['webpunct_count']\n \ndf_w_topics[:20]\n\n# Now we can see the prevalence of each topic over words for each urban category and state\ngrouped_urban = df_w_topics.groupby('ULOCAL')\nfor e in col_list:\n print(e)\n print(grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum())\n\ngrouped_state = df_w_topics.groupby('LSTATE')\nfor e in col_list:\n print(e)\n print(grouped_state[e].sum()/grouped_state['webpunct_count'].sum())\n\n# Here's the distribution of urban proximity over the three topics:\nfig1 = plt.figure()\nchrt = 0\nfor num in topic_columns:\n chrt += 1 \n ax = fig1.add_subplot(2,3, chrt)\n grouped_urban[num].mean().plot(kind = 'bar', yerr = grouped_urban[num].std(), ylim=0, ax=ax, title=num)\n\nfig1.tight_layout()\nplt.show()\n\n# Here's the distribution of each topic over words, for each urban category:\nfig2 = plt.figure()\nchrt = 0\nfor e in col_list:\n chrt += 1 \n ax2 = fig2.add_subplot(2,3, chrt)\n (grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum()).plot(kind = 'bar', ylim=0, ax=ax2, title=e)\n\nfig2.tight_layout()\nplt.show()" ]
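The topic-weighting step above is easy to misread, so here is a minimal, self-contained sketch of the same logic with made-up numbers (the column names `webpunct_count` and `ULOCAL` follow the dataframe used above; the topic proportions and counts are illustrative): multiply each document's topic proportions by its word count, then divide each group's topic word totals by the group's total word count.

```python
# Sketch of the topic-prevalence weighting used above, on made-up data.
import pandas as pd

doc_topics = pd.DataFrame({
    0: [0.7, 0.2, 0.5],                 # topic proportions per school (each row sums to 1)
    1: [0.2, 0.5, 0.3],
    2: [0.1, 0.3, 0.2],
    'webpunct_count': [400, 250, 300],  # words in each school's self-description
    'ULOCAL': ['11', '21', '11'],       # urban locale code
})

# Expected number of words devoted to each topic in each document
for t in range(3):
    doc_topics[f'{t}_wc'] = doc_topics[t] * doc_topics['webpunct_count']

# Share of each group's words devoted to each topic
grouped = doc_topics.groupby('ULOCAL')
wc_cols = [f'{t}_wc' for t in range(3)]
topic_share = grouped[wc_cols].sum().div(grouped['webpunct_count'].sum(), axis=0)
print(topic_share)
```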
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xR86/ml-stuff
kaggle/machine-learning-with-a-heart/Lab5.ipynb
mit
[ "Lab 5 - Unsupervised Learning <a class=\"tocSkip\">\nuse elbow point for hierarchical and kmeans\nkmeans:\n+ interclass var (WSS within sum of squares) vs no of clusters => elbow\nhierarchical:\n+ use dendrogram height, last 2 clusters heights are relevant\nneed Silhouette Width (https://en.wikipedia.org/wiki/Silhouette_(clustering))\n+ https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html\n+ https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html\n+ https://plot.ly/scikit-learn/plot-kmeans-silhouette-analysis/\n...\n\nhttps://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index\nhttps://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html\nhttps://www.scikit-yb.org/en/latest/api/cluster/elbow.html\n\n...\n\nhttps://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html\nhttps://plot.ly/scikit-learn/plot-cluster-iris/\nhttps://plot.ly/scikit-learn/plot-kmeans-digits/\nhttps://plot.ly/python/3d-point-clustering/\nhttps://community.plot.ly/t/what-colorscales-are-available-in-plotly-and-which-are-the-default/2079\nhttps://plot.ly/python/cmocean-colorscales/\nhttps://matplotlib.org/cmocean/\n\n\nhttp://cs.joensuu.fi/sipu/datasets/\nhttps://towardsdatascience.com/k-means-clustering-implementation-2018-ac5cd1e51d0a\nhttps://github.com/deric/clustering-benchmark/blob/master/README.md\nhttp://neupy.com/2017/12/09/sofm_applications.html\nhttps://wonikjang.github.io/deeplearning_unsupervised_som/2017/06/30/som.html\nhttps://www.kaggle.com/raghavrastogi75/fraud-detection-using-self-organising-maps\nhttps://medium.com/@navdeepsingh_2336/self-organizing-maps-for-machine-learning-algorithms-ad256a395fc5\nhttps://heartbeat.fritz.ai/introduction-to-self-organizing-maps-soms-98e88b568f5d\nImports\nImport dependencies", "from datetime import datetime as dt\n\nimport numpy as np\nimport pandas as pd\n\n# viz libs\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport plotly.graph_objs as go\nimport plotly.figure_factory as ff\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\ninit_notebook_mode(connected=True)\n\nrandom_state=42\nnb_start = dt.now()", "Import data", "features = pd.read_csv('train_values.csv')\nlabels = pd.read_csv('train_labels.csv')\n\nxlab = 'serum_cholesterol_mg_per_dl'\nylab = 'resting_blood_pressure'\n\nprint(labels.head())\nfeatures.head()\n\ncluster_arr = np.array(features[[xlab,ylab]]).reshape(-1,2)\n\ncluster_arr[:5]", "Cluster subsample visualization", "x = features['serum_cholesterol_mg_per_dl']\ny = features['resting_blood_pressure']\n\ntrace = [go.Scatter(\n x = x,\n y = y,\n name = 'data',\n mode = 'markers',\n hoverinfo = 'text',\n text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]\n)]\n\nlayout = go.Layout(\n xaxis = dict({'title': xlab}),\n yaxis = dict({'title': ylab})\n)\n\nfig = go.Figure(data=trace, layout=layout)\niplot(fig, layout)", "Hierarchical Clustering\n\nhttps://scikit-learn.org/stable/modules/clustering.html\nhttps://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster\nhttps://stackabuse.com/hierarchical-clustering-with-python-and-scikit-learn/", "from scipy.cluster.hierarchy import dendrogram, linkage", "Single Link", "plt.figure(figsize=(15, 7))\n\nlinked = linkage(cluster_arr, 'single')\n\n# labelList = range(1, 11)\ndendrogram(linked, \n orientation='top',\n# labels=labelList,\n distance_sort='descending',\n show_leaf_counts=True)\nplt.show() ", "Complete Link", 
"plt.figure(figsize=(15, 7))\n\nlinked = linkage(cluster_arr, 'complete')\n\n# labelList = range(1, 11)\ndendrogram(linked, \n orientation='top',\n# labels=labelList,\n distance_sort='descending',\n show_leaf_counts=True)\nplt.show() ", "Average Link", "plt.figure(figsize=(15, 7))\n\nlinked = linkage(cluster_arr, 'average')\n\n# labelList = range(1, 11)\ndendrogram(linked, \n orientation='top',\n# labels=labelList,\n distance_sort='descending',\n show_leaf_counts=True)\nplt.show() ", "Ward Variance", "plt.figure(figsize=(15, 7))\n\nlinked = linkage(cluster_arr, 'ward')\n\n# labelList = range(1, 11)\ndendrogram(linked, \n orientation='top',\n# labels=labelList,\n distance_sort='descending',\n show_leaf_counts=True)\nplt.show() ", "Density-based clustering\nDBSCAN", "from sklearn.cluster import DBSCAN\n\nclustering = DBSCAN(eps=3, min_samples=2).fit(cluster_arr)\n\nclustering\n\ny_pred = clustering.labels_\n\ny_pred\n\nx = cluster_arr[:, 0]\ny = cluster_arr[:, 1]\n\n# col = ['#F33' if i == 1 else '#33F' for i in y_pred]\n\ntrace = [go.Scatter(\n x = x,\n y = y,\n marker = dict(\n # color = col,\n color = y_pred,\n colorscale='MAGMA',\n colorbar=dict(\n title='Labels'\n ),\n ),\n name = 'data',\n mode = 'markers',\n hoverinfo = 'text',\n text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]\n)]\n\nlayout = go.Layout(\n xaxis = dict({'title': xlab}),\n yaxis = dict({'title': ylab})\n)\n\nfig = go.Figure(data=trace, layout=layout)\niplot(fig, layout)", "Other based on DBSCAN\nK-Means", "from sklearn.cluster import KMeans\n\ny_pred = KMeans(n_clusters=2, random_state=random_state).fit_predict(cluster_arr)\n\ny_pred\n\nx = cluster_arr[:, 0]\ny = cluster_arr[:, 1]\n\n# col = ['#F33' if i == 1 else '#33F' for i in y_pred]\n\ntrace = [go.Scatter(\n x = x,\n y = y,\n marker = dict(\n # color = col,\n color = y_pred,\n colorscale='YlOrRd',\n colorbar=dict(\n title='Labels'\n ),\n ),\n name = 'data',\n mode = 'markers',\n hoverinfo = 'text',\n text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]\n)]\n\nlayout = go.Layout(\n xaxis = dict({'title': xlab}),\n yaxis = dict({'title': ylab})\n)\n\nfig = go.Figure(data=trace, layout=layout)\niplot(fig, layout)\n\nKs = range(2, 11)\nkm = [KMeans(n_clusters=i) for i in Ks] # , verbose=True\n# score = [km[i].fit(cluster_arr).score(cluster_arr) for i in range(len(km))]\n\nfitted = [km[i].fit(cluster_arr) for i in range(len(km))]\nscore = [fitted[i].score(cluster_arr) for i in range(len(km))]\ninertia = [fitted[i].inertia_ for i in range(len(km))]\n\nrelative_diff = [inertia[0]]\nrelative_diff.extend([inertia[i-1] - inertia[i] for i in range(1, len(inertia))])\n\nprint(fitted[:1])\nprint(score[:1])\nprint(inertia[:1])\nprint(relative_diff)\n\nfitted[0]\n\ndir(fitted[0])[:5]\n\ndata = [\n# go.Bar(\n# x = list(Ks),\n# y = score\n# ),\n go.Bar(\n x = list(Ks),\n y = inertia,\n text = ['Diff is: %s' % diff for diff in relative_diff]\n ),\n go.Scatter(\n x = list(Ks),\n y = inertia\n ),\n]\n\n\nlayout = go.Layout(\n xaxis = dict(\n title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))\n ),\n yaxis = dict(\n title = 'Sklearn score / inertia'\n ),\n # barmode='stack'\n)\n\nfig = go.Figure(data=data, layout=layout)\niplot(fig)\n\ndata = [\n go.Bar(\n x = list(Ks),\n y = relative_diff\n ),\n go.Scatter(\n x = list(Ks),\n y = relative_diff\n ),\n]\n\n\nlayout = go.Layout(\n xaxis = dict(\n title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))\n ),\n yaxis = dict(\n title = 'Pairwise difference'\n ),\n # barmode='stack'\n)\n\nfig = go.Figure(data=data, 
layout=layout)\niplot(fig)\n\nnb_end = dt.now()\n\n'Time elapsed: %s' % (nb_end - nb_start)", "Bibliography" ]
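The header of this lab links to silhouette analysis, but the cells above stop at the inertia/elbow plots. Below is a small sketch (not part of the original lab) of how `sklearn.metrics.silhouette_score` could be used to pick the number of clusters. It runs on synthetic blobs so it is self-contained; `X` could be swapped for the `cluster_arr` built earlier.

```python
# Pick k by mean silhouette width on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)  # stand-in for cluster_arr

scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # mean silhouette width, in [-1, 1]

best_k = max(scores, key=scores.get)
print(scores)
print("best k by silhouette:", best_k)
```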
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gwu-libraries/notebooks
20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb
mit
[ "Recipes for processing Twitter data with jq\nThis notebook is a companion to Getting Started Working with Twitter Data Using jq. It focuses on recipes that the Social Feed Manager team has used when preparing datasets of tweets for researchers.\nWe will continue to add additional recipes to this notebook. If you have any suggestions, please contact us.\nThis notebook requires at least jq 1.5. Note that only earlier versions may be available from your package manager; manual installation may be necessary.\nThese recipes can be used with any data source that outputs tweets as line-oriented JSON. Within the context of SFM, this is usually the output of twitter_rest_warc_iter.py or twitter_stream_warc_iter.py within a processing container. Alternatively, Twarc is a commandline tool for retrieving data from the Twitter API that outputs tweets as line-oriented JSON.\nFor the purposes of this notebook, we will use a line-oriented JSON file that was created using Twarc. It contains the user timeline of @SocialFeedMgr. The command used to produce this file was twarc.py --timeline socialfeedmgr &gt; tweets.json.\nFor an explanation of the fields in a tweet see the Tweet Field Guide. For other helpful tweet processing utilities, see twarc utils.\nFor the sake of brevity, some of the examples may only output a subset of the tweets fields and/or a subset of the tweets contained in tweets.json. The following example outputs the tweet id and text of all of the first 5 tweets.", "!head -n5 tweets.json | jq -c '[.id_str, .text]'", "Dates\nFor both filtering and output, it is often necessary to parse and/or normalize the created_at date. The following shows the original created_at date and the date as an ISO 8601 date.", "!head -n5 tweets.json | jq -c '[.created_at, .created_at | strptime(\"%A %B %d %T %z %Y\") | todate]'", "Filtering\nFiltering text\nCase sensitive", "!cat tweets.json | jq -c 'select(.text | contains(\"blog\")) | [.id_str, .text]'\n\n!cat tweets.json | jq -c 'select(.text | contains(\"BLOG\")) | [.id_str, .text]'", "Case insensitive\nTo ignore case, use a regular expression filter with the case-insensitive flag.", "!cat tweets.json | jq -c 'select(.text | test(\"BLog\"; \"i\")) | [.id_str, .text]'", "Filtering on multiple terms (OR)", "!cat tweets.json | jq -c 'select(.text | test(\"BLog|twarc\"; \"i\")) | [.id_str, .text]'", "Filtering on multiple terms (AND)", "!cat tweets.json | jq -c 'select((.text | test(\"BLog\"; \"i\")) and (.text | test(\"twitter\"; \"i\"))) | [.id_str, .text]'", "Filter dates\nThe following shows tweets created after November 5, 2016.", "!cat tweets.json | jq -c 'select((.created_at | strptime(\"%A %B %d %T %z %Y\") | mktime) > (\"2016-11-05T00:00:00Z\" | fromdateiso8601)) | [.id_str, .created_at, (.created_at | strptime(\"%A %B %d %T %z %Y\") | todate)]'", "Is retweet", "!cat tweets.json | jq -c 'select(has(\"retweeted_status\")) | [.id_str, .retweeted_status.id]'", "Is quote", "!cat tweets.json | jq -c 'select(has(\"quoted_status\")) | [.id_str, .quoted_status.id]'", "Output\nTo write output to a file use &gt; &lt;filename&gt;. For example: cat tweets.json | jq -r '.id_str' &gt; tweet_ids.txt\nCSV\nFollowing is a CSV output that has fields similar to the CSV output produced by SFM's export functionality.\nNote that is uses the -r flag for jq instead of the -c flag.\nAlso note that is it is necessary to remove line breaks from the tweet text to prevent it from breaking the CSV. 
This is done with (.text | gsub(\"\\n\";\" \")).", "!head -n5 tweets.json | jq -r '[(.created_at | strptime(\"%A %B %d %T %z %Y\") | todate), .id_str, .user.screen_name, .user.followers_count, .user.friends_count, .retweet_count, .favorite_count, .in_reply_to_screen_name, \"http://twitter.com/\" + .user.screen_name + \"/status/\" + .id_str, (.text | gsub(\"\\n\";\" \")), has(\"retweeted_status\"), has(\"quoted_status\")] | @csv'", "Header row\nThe header row should be written to the output file with &gt; before appending the CSV with &gt;&gt;.", "!echo \"[]\" | jq -r '[\"created_at\",\"twitter_id\",\"screen_name\",\"followers_count\",\"friends_count\",\"retweet_count\",\"favorite_count\",\"in_reply_to_screen_name\",\"twitter_url\",\"text\",\"is_retweet\",\"is_quote\"] | @csv'", "Splitting files\nExcel can load CSV files with over a million rows. However, for practical purposes a much smaller number is recommended.\nThe following uses the split command to split the CSV output into multiple files. Note that the flags accepted may be different in your environment.\ncat tweets.json | jq -r '[.id_str, (.text | gsub(\"\\n\";\" \"))] | @csv' | split --lines=5 -d --additional-suffix=.csv - tweets\nls *.csv\ntweets00.csv tweets01.csv tweets02.csv tweets03.csv tweets04.csv\ntweets05.csv tweets06.csv tweets07.csv tweets08.csv tweets09.csv\n--lines=5 sets the number of lines to include in each file.\n--additional-suffix=.csv sets the file extension.\ntweets is the base name for each file.\nTweet ids\nWhen outputting tweet ids, .id_str should be used instead of .id. See Ed Summers's blog post for an explanation.", "!head -n5 tweets.json | jq -r '.id_str'" ]
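For readers who prefer to stay in Python, here is a rough, unofficial equivalent of the CSV recipe above. It assumes the same line-oriented `tweets.json` produced by twarc; the output file name is arbitrary.

```python
# Python sketch of the jq CSV recipe: one tweet per input line, one CSV row per tweet.
import csv
import json
from datetime import datetime

with open("tweets.json") as infile, open("tweets.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["created_at", "twitter_id", "text"])
    for line in infile:
        tweet = json.loads(line)
        created = datetime.strptime(tweet["created_at"], "%a %b %d %H:%M:%S %z %Y")
        writer.writerow([created.isoformat(),
                         tweet["id_str"],
                         tweet["text"].replace("\n", " ")])  # strip line breaks, as in the jq recipe
```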
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cathalmccabe/PYNQ
boards/Pynq-Z1/base/notebooks/video/hdmi_video_pipeline.ipynb
bsd-3-clause
[ "Video Pipeline Details\nThis notebook goes into detail about the stages of the video pipeline in the base overlay and is written for people who want to create and integrate their own video IP. For most regular input and output use cases the high level wrappers of HDMIIn and HDMIOut should be used.\nBoth the input and output pipelines in the base overlay consist of four stages, an HDMI frontend, a colorspace converter, a pixel format converter, and the video DMA. For the input the stages are arranged Frontend -> Colorspace Converter -> Pixel Format -> VDMA with the order reversed for the output side. The aim of this notebook is to give you enough information to use each stage separately and be able to modify the pipeline for your own ends.\nBefore exploring the pipeline we'll import the entire pynq.lib.video module where all classes relating to the pipelines live. We'll also load the base overlay to serve as an example.\nThe following table shows the IP responsible for each stage in the base overlay which will be referenced throughout the rest of the notebook\n|Stage | Input IP | Output IP |\n|------------------|:---------------------------------------|:-----------------------------------|\n|Frontend (Timing) |video/hdmi_in/frontend/vtc_in |video/hdmi_out/frontend/vtc_out |\n|Frontend (Other) |video/hdmi_in/frontend/axi_gpio_hdmiin|video/hdmi_out/frontend/axi_dynclk|\n|Colour Space |video/hdmi_in/color_convert |video/hdmi_out/color_convert |\n|Pixel Format |video/hdmi_in/pixel_pack |video/hdmi_outpixel_unpack |\n|VDMA |video/axi_vdma |video/axi_vdam |", "from pynq.overlays.base import BaseOverlay\nfrom pynq.lib.video import *\n\nbase = BaseOverlay(\"base.bit\")", "HDMI Frontend\nThe HDMI frontend modules wrap all of the clock and timing logic. The HDMI input frontend can be used independently from the rest of the pipeline by accessing its driver from the base overlay.", "hdmiin_frontend = base.video.hdmi_in.frontend", "Creating the device will signal to the computer that a monitor is connected. Starting the frontend will wait attempt to detect the video mode, blocking until a lock can be achieved. Once the frontend is started the video mode will be available.", "hdmiin_frontend.start()\nhdmiin_frontend.mode", "The HDMI output frontend can be accessed in a similar way.", "hdmiout_frontend = base.video.hdmi_out.frontend", "and the mode must be set prior to starting the output. In this case we are just going to use the same mode as the input.", "hdmiout_frontend.mode = hdmiin_frontend.mode\nhdmiout_frontend.start()", "Note that nothing will be displayed on the screen as no video data is currently being send.\nColorspace conversion\nThe colorspace converter operates on each pixel independently using a 3x4 matrix to transform the pixels. 
The converter is programmed with a list of twelve coefficients in the folling order:\n| |in1 |in2 |in3 | 1 |\n|-----|----|----|----|----|\n|out1 |c1 |c2 |c3 |c10 |\n|out2 |c4 |c5 |c6 |c11 |\n|out3 |c7 |c8 |c9 |c12 |\nEach coefficient should be a floating point number between -2 and +2.\nThe pixels to and from the HDMI frontends are in BGR order so a list of coefficients to convert from the input format to RGB would be:\n[0, 0, 1,\n 0, 1, 0,\n 1, 0, 0,\n 0, 0, 0]\n\nreversing the order of the pixels and not adding any bias.\nThe driver for the colorspace converters has a single property that contains the list of coefficients.", "colorspace_in = base.video.hdmi_in.color_convert\ncolorspace_out = base.video.hdmi_out.color_convert\n\nbgr2rgb = [0, 0, 1,\n 0, 1, 0, \n 1, 0, 0,\n 0, 0, 0]\n\ncolorspace_in.colorspace = bgr2rgb\ncolorspace_out.colorspace = bgr2rgb\n\ncolorspace_in.colorspace", "Pixel format conversion\nThe pixel format converters convert between the 24-bit signal used by the HDMI frontends and the colorspace converters to either an 8, 24, or 32 bit signal. 24-bit mode passes the input straight through, 32-bit pads the additional pixel with 0 and 8-bit mode selects the first channel in the pixel. This is exposed by a single property to set or get the number of bits.", "pixel_in = base.video.hdmi_in.pixel_pack\npixel_out = base.video.hdmi_out.pixel_unpack\n\npixel_in.bits_per_pixel = 8\npixel_out.bits_per_pixel = 8\n\npixel_in.bits_per_pixel", "Video DMA\nThe final element in the pipeline is the video DMA which transfers video frames to and from memory. The VDMA consists of two channels, one for each direction which operate completely independently. To use a channel its mode must be set prior to start being called. After the DMA is started readframe and writeframe transfer frames. Frames are only transferred once with the call blocking if necessary. asyncio coroutines are available as readframe_async and writeframe_async which yield instead of blocking. A frame of the size of the output can be retrieved from the VDMA by calling writechannel.newframe(). This frame is not guaranteed to be initialised to blank so should be completely written before being handed back.", "inputmode = hdmiin_frontend.mode\nframemode = VideoMode(inputmode.width, inputmode.height, 8)\n\nvdma = base.video.axi_vdma\nvdma.readchannel.mode = framemode\nvdma.readchannel.start()\nvdma.writechannel.mode = framemode\nvdma.writechannel.start()\n\nframe = vdma.readchannel.readframe()\nvdma.writechannel.writeframe(frame)", "In this case, because we are only using 8 bits per pixel, only the red channel is read and displayed.\nThe two channels can be tied together which will ensure that the input is always mirrored to the output", "vdma.readchannel.tie(vdma.writechannel)", "Frame Ownership\nThe VDMA driver has a strict method of frame ownership. Any frames returned by readframe or newframe are owned by the user and should be destroyed by the user when no longer needed by calling frame.freebuffer(). Frames handed back to the VDMA with writeframe are no longer owned by the user and should not be touched - the data may disappear at any time.\nCleaning up\nIt is vital to stop the VDMA before reprogramming the bitstream otherwise the memory system of the chip can be placed into an undefined state. If the monitor does not power on when starting the VDMA this is the likely cause.", "vdma.readchannel.stop()\nvdma.writechannel.stop()" ]
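As an additional illustration (not part of the original walkthrough), the same twelve-coefficient layout can express a BGR-to-greyscale conversion by writing BT.601-style luma weights into every output row. This assumes the `base` overlay loaded at the top of the notebook.

```python
# Greyscale via the colorspace converter: every output channel carries the luma value.
colorspace_in = base.video.hdmi_in.color_convert

bgr2gray = [0.114, 0.587, 0.299,   # out1 = 0.114*B + 0.587*G + 0.299*R
            0.114, 0.587, 0.299,   # out2 = same luma value
            0.114, 0.587, 0.299,   # out3 = same luma value
            0.0,   0.0,   0.0]     # no bias terms; all coefficients stay within [-2, 2]

colorspace_in.colorspace = bgr2gray
```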
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yevheniyc/Python
1m_ML_Security/notebooks/day_1/Worksheet 2 - Exploring Two Dimensional Data.ipynb
mit
[ "Worksheet 2: Exploring Two Dimensional Data\nImport the Libraries\nFor this exercise, we will be using:\n* Pandas (http://pandas.pydata.org/pandas-docs/stable/)\n* Numpy (https://docs.scipy.org/doc/numpy/reference/)\n* Matplotlib (http://matplotlib.org/api/pyplot_api.html)", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n%pylab inline", "Exercise 1: Reading various forms of JSON Data\nIn the /data/ folder, you will find a series of .json files called dataN.json, numbered 1-4. Each file contains the following data:\n| |birthday | first_name |last_name |\n|--|-----------|------------|----------|\n|0 |5\\/3\\/67 |Robert |Hernandez |\n|1 |8\\/4\\/84 |Steve |Smith |\n|2 |9\\/13\\/91 |Anne |Raps |\n|3 |4\\/15\\/75 |Alice |Muller |", "#Your code here...\nfile1 = pd.read_json('../../data/data1.json')\nfile2 = pd.read_json('../../data/data2.json')\nfile2 = pd.read_json('../../data/data2.json')\nfile3 = pd.read_json('../../data/data3.json') # add orient=columns\nfile4 = pd.read_json('../../data/data4.json', orient='split')\ncombined = pd.concat([file1, file2.T, file3, file4], ignore_index=True)\ncombined", "Exercise 2:\nIn the data file, there is a webserver file called hackers-access.httpd. For this exercise, you will use this file to answer the following questions:\n1. Which browsers are the top 10 most used browsers in this data?\n2. Which are the top 10 most used operating systems?\nIn order to accomplish this task, do the following:\n1. Write a function which takes a User Agent string as an argument and returns the relevant data. HINT: You might want to use python's user_agents module, the documentation for which is available here: (https://pypi.python.org/pypi/user-agents)\n2. Next, apply this function to the column which contains the user agent string.\n3. Store this series as a new column in the dataframe\n4. Count the occurances of each value in the new columns", "import apache_log_parser\nfrom user_agents import parse\n\n\ndef parse_ua(line):\n parsed_data = parse(line)\n return str(parsed_data).split('/')[1]\n\ndef parse_ua_2(line):\n parsed_data = parse(line)\n return str(parsed_data).split('/')[2]\n\n#Read in the log file\nline_parser = apache_log_parser.make_parser(\"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-agent}i\\\"\")\n\nserver_log = open(\"../../data/hackers-access.httpd\", \"r\")\nparsed_server_data = []\nfor line in server_log:\n data = {}\n data = line_parser(line)\n parsed_server_data.append( data )\n\nserver_df = pd.DataFrame(parsed_server_data)\nserver_df['OS'] = server_df['request_header_user_agent'].apply(parse_ua)\nserver_df['Browser'] = server_df['request_header_user_agent'].apply(parse_ua_2)\nserver_df['OS'].value_counts().head(10)\n\n#Apply the functions to the dataframe\n\n\n#Get the top 10 values\n", "Exercise 3:\nUsing the dailybots.csv film, read the file into a DataFrame and perform the following operations:\n1. Filter the DataFrame to include bots from the Government/Politics Industry.\n2. Calculate the ratio of hosts to orgs and add this as a column to the DataFrame and output the result\n3. Calculate the total number of hosts infected by each BotFam in the Government/Politics Industry. 
You should use the groupby() function which is documented here: (http://pandas.pydata.org/pandas-docs/stable/groupby.html)", "#Your code here...\nbots = pd.read_csv('../../data/dailybots.csv')\ngov_bots = bots[['botfam', 'hosts']][bots['industry'] == 'Government/Politics']\ngov_bots.groupby(['botfam']).size()" ]
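One possible way to finish Exercise 3 (a sketch, not the worksheet's official answer) is shown below. It assumes `dailybots.csv` has `hosts` and `orgs` columns, which the ratio asked for in step 2 implies.

```python
# Possible completion of Exercise 3: ratio column (step 2) and total hosts per botfam (step 3).
import pandas as pd

bots = pd.read_csv('../../data/dailybots.csv')
gov_bots = bots[bots['industry'] == 'Government/Politics'].copy()  # step 1: filter

# Step 2: ratio of hosts to orgs, added as a new column
gov_bots['host_org_ratio'] = gov_bots['hosts'] / gov_bots['orgs']
print(gov_bots[['botfam', 'hosts', 'orgs', 'host_org_ratio']].head())

# Step 3: total hosts per botfam (sum of hosts, not a row count)
print(gov_bots.groupby('botfam')['hosts'].sum().sort_values(ascending=False))
```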
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdhp-docs/python_notebooks
nb_sci_ai/ai_ml_multilayer_perceptron_fr.ipynb
mit
[ "Perceptron Multicouche\nAuteur: Jérémie Decock", "%matplotlib inline\n\n#import nnfigs\n\n# https://github.com/jeremiedecock/neural-network-figures.git\nimport nnfigs.core as nnfig\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport ipywidgets\nfrom ipywidgets import interact\n\n# TODO\n\n# Shallow et Deep learning à lire:\n# - https://www.miximum.fr/blog/introduction-au-deep-learning-2/\n# - https://sciencetonnante.wordpress.com/2016/04/08/le-deep-learning/\n# - https://www.technologies-ebusiness.com/enjeux-et-tendances/le-deep-learning-pas-a-pas\n# - http://scholar.google.fr/scholar_url?url=https://arxiv.org/pdf/1404.7828&hl=fr&sa=X&scisig=AAGBfm07Y2UDlPpbninerh4gxHUj2SJfDQ&nossl=1&oi=scholarr&sqi=2&ved=0ahUKEwjfxMu7jKnUAhUoCsAKHR_RDlkQgAMIKygAMAA", "$\n\\newcommand{\\cur}{i}\n\\newcommand{\\prev}{j}\n\\newcommand{\\prevcur}{{\\cur\\prev}}\n\\newcommand{\\next}{k}\n\\newcommand{\\curnext}{{\\next\\cur}}\n\\newcommand{\\ex}{\\eta}\n\\newcommand{\\pot}{\\rho}\n\\newcommand{\\feature}{x}\n\\newcommand{\\weight}{w}\n\\newcommand{\\wcur}{{\\weight_{\\cur\\prev}}}\n\\newcommand{\\activthres}{\\theta}\n\\newcommand{\\activfunc}{f}\n\\newcommand{\\errfunc}{E}\n\\newcommand{\\learnrate}{\\epsilon}\n\\newcommand{\\learnit}{n}\n\\newcommand{\\sigout}{y}\n\\newcommand{\\sigoutdes}{d}\n\\newcommand{\\weights}{\\boldsymbol{W}}\n\\newcommand{\\errsig}{\\Delta}\n$", "# Notations :\n# - $\\cur$: couche courante\n# - $\\prev$: couche immédiatement en amont de la courche courrante (i.e. vers la couche d'entrée du réseau)\n# - $\\next$: couche immédiatement en aval de la courche courrante (i.e. vers la couche de sortie du réseau)\n# - $\\ex$: exemple (*sample* ou *feature*) courant (i.e. le vecteur des entrées courantes du réseau)\n# - $\\pot_\\cur$: *Potentiel d'activation* du neurone $i$ pour l'exemple courant\n# - $\\wcur$: Poids de la connexion entre le neurone $j$ et le neurone $i$\n# - $\\activthres_\\cur$: *Seuil d'activation* du neurone $i$\n# - $\\activfunc_\\cur$: *Fonction d'activation* du neurone $i$\n# - $\\errfunc$: *Fonction objectif* ou *fonction d'erreur*\n# - $\\learnrate$: *Pas d'apprentissage* ou *Taux d'apprentissage*\n# - $\\learnit$: Numéro d'itération (ou cycle ou époque) du processus d'apprentissage\n# - $\\sigout_\\cur$: Signal de sortie du neurone $i$ pour l'exemple courant\n# - $\\sigoutdes_\\cur$: Sortie désirée (*étiquette*) du neurone $i$ pour l'exemple courant\n# - $\\weights$: Matrice des poids du réseau (en réalité il y a une matrice de taille potentiellement différente par couche)\n# - $\\errsig_i$: *Signal d'erreur* du neurone $i$ pour l'exemple courant", "Introduction\nQu'est-ce qu'un réseau de neurones ?\nUne grosse fonction parametrique.\nPour peu qu'on donne suffisamment de paramètres à cette fonction, elle est capable d'approximer n'importe quelle fonction continue.\nReprésentation schématique d'une fonction paramètrique avec 3 paramètres avec une entrée en une sortie à 1 dimension\n$$\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$x \\mapsto g_{\\boldsymbol{\\omega}}(x)$$\nTODO: image/schéma intuition : entrés -> fonction avec paramètres = table de mixage -> sortie\nÀ quoi ça sert ?\nTODO : expliquer la régression et la classification\nTODO : applications avec références...\nExemples d'application concrètes :\n- Reconnaissance de texte manuscrit\n- Reconnaissance de formes, d'objets, de visages, etc. 
dans des images\n- Reconnaissance de la parole\n- Prédiction de séries temporelles (cours de la bourse, etc.)\n- etc.\nDéfinition du neurone \"formel\"", "STR_CUR = r\"i\" # Couche courante\nSTR_PREV = r\"j\" # Couche immédiatement en amont de la courche courrante (i.e. vers la couche d'entrée du réseau)\nSTR_NEXT = r\"k\" # Couche immédiatement en aval de la courche courrante (i.e. vers la couche de sortie du réseau)\nSTR_EX = r\"\\eta\" # Exemple (*sample* ou *feature*) courant (i.e. le vecteur des entrées courantes du réseau)\nSTR_POT = r\"x\" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\\ex$\nSTR_POT_CUR = r\"x_i\" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\\ex$\nSTR_WEIGHT = r\"w\"\nSTR_WEIGHT_CUR = r\"w_{ij}\" # Poids de la connexion entre le neurone $j$ et le neurone $i$\nSTR_ACTIVTHRES = r\"\\theta\" # *Seuil d'activation* du neurone $i$\nSTR_ACTIVFUNC = r\"f\" # *Fonction d'activation* du neurone $i$\nSTR_ERRFUNC = r\"E\" # *Fonction objectif* ou *fonction d'erreur*\nSTR_LEARNRATE = r\"\\epsilon\" # *Pas d'apprentissage* ou *Taux d'apprentissage*\nSTR_LEARNIT = r\"n\" # Numéro d'itération (ou cycle ou époque) du processus d'apprentissage\nSTR_SIGIN = r\"x\" # Signal de sortie du neurone $i$ pour l'exemple $\\ex$\nSTR_SIGOUT = r\"y\" # Signal de sortie du neurone $i$ pour l'exemple $\\ex$\nSTR_SIGOUT_CUR = r\"y_i\"\nSTR_SIGOUT_PREV = r\"y_j\"\nSTR_SIGOUT_DES = r\"d\" # Sortie désirée (*étiquette*) du neurone $i$ pour l'exemple $\\ex$\nSTR_SIGOUT_DES_CUR = r\"d_i\"\nSTR_WEIGHTS = r\"W\" # Matrice des poids du réseau (en réalité il y a une matrice de taille potentiellement différente par couche)\nSTR_ERRSIG = r\"\\Delta\" # *Signal d'erreur* du neurone $i$ pour l'exemple $\\ex$\n\ndef tex(tex_str):\n return r\"$\" + tex_str + r\"$\"\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -6), (10, 0))\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0))\nnnfig.draw_synapse(ax, (0, 6), (10, 0), label=tex(STR_WEIGHT_CUR), label_position=0.5, fontsize=14)\n\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\n\nnnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)\nplt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)\nplt.text(x=10, y=1.5, s=tex(STR_CUR), fontsize=14)\nplt.text(x=0, y=0, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=-2.5, y=0, s=tex(STR_SIGOUT_PREV), fontsize=14)\nplt.text(x=13, y=0, s=tex(STR_SIGOUT_CUR), fontsize=14)\nplt.text(x=9.2, y=-1.8, s=tex(STR_POT_CUR), fontsize=14)\n\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.show()", "$$\n\\sigout = \\activfunc \\left( \\sum_i \\weight_i \\feature_i \\right)\n$$\n$$\n\\pot_\\cur = \\sum_\\prev \\wcur \\sigout_{\\prev}\n$$\n$$\n\\sigout_{\\cur} = \\activfunc(\\pot_\\cur)\n$$\n$$\n\\weights = \\begin{pmatrix}\n \\weight_{11} & \\cdots & \\weight_{1m} \\\n \\vdots & \\ddots & \\vdots \\\n \\weight_{n1} & \\cdots & \\weight_{nm}\n\\end{pmatrix}\n$$\nAvec :\n- $\\cur$: couche courante\n- $\\prev$: couche immédiatement en amont de la courche courrante (i.e. vers la couche d'entrée du réseau)\n- $\\next$: couche immédiatement en aval de la courche courrante (i.e. vers la couche de sortie du réseau)\n- $\\ex$: exemple (sample ou feature) courant (i.e. 
le vecteur des entrées courantes du réseau)\n- $\\pot_\\cur$: Potentiel d'activation du neurone $i$ pour l'exemple courant\n- $\\wcur$: Poids de la connexion entre le neurone $j$ et le neurone $i$\n- $\\activthres_\\cur$: Seuil d'activation du neurone $i$\n- $\\activfunc_\\cur$: Fonction d'activation du neurone $i$\n- $\\errfunc$: Fonction objectif ou fonction d'erreur\n- $\\learnrate$: Pas d'apprentissage ou Taux d'apprentissage\n- $\\learnit$: Numéro d'itération (ou cycle ou époque) du processus d'apprentissage\n- $\\sigout_\\cur$: Signal de sortie du neurone $i$ pour l'exemple courant\n- $\\sigoutdes_\\cur$: Sortie désirée (étiquette) du neurone $i$ pour l'exemple courant\n- $\\weights$: Matrice des poids du réseau (en réalité il y a une matrice de taille potentiellement différente par couche)\n- $\\errsig_i$: Signal d'erreur du neurone $i$ pour l'exemple courant\nFonction d'activation\nFonction sigmoïde\nLa fonction sigmoïde (en forme de \"S\") est définie par :\n$$f(x) = \\frac{1}{1 + e^{-x}}$$\npour tout réel $x$.\nOn peut la généraliser à toute fonction dont l'expression est :\n$$f(x) = \\frac{1}{1 + e^{-\\lambda x}}$$", "def sigmoid(x, _lambda=1.):\n y = 1. / (1. + np.exp(-_lambda * x))\n return y\n\n%matplotlib inline\n\nx = np.linspace(-5, 5, 300)\n\ny1 = sigmoid(x, 1.)\ny2 = sigmoid(x, 5.)\ny3 = sigmoid(x, 0.5)\n\nplt.plot(x, y1, label=r\"$\\lambda=1$\")\nplt.plot(x, y2, label=r\"$\\lambda=5$\")\nplt.plot(x, y3, label=r\"$\\lambda=0.5$\")\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.legend()\n\nplt.title(\"Fonction sigmoïde\")\nplt.axis([-5, 5, -0.5, 2]);", "Tangente hyperbolique", "def tanh(x):\n y = np.tanh(x)\n return y\n\nx = np.linspace(-5, 5, 300)\ny = tanh(x)\n\nplt.plot(x, y)\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.title(\"Fonction tangente hyperbolique\")\nplt.axis([-5, 5, -2, 2]);", "Fonction logistique\nFonctions ayant pour expression\n$$\nf(t) = K \\frac{1}{1+ae^{-\\lambda t}}\n$$\noù $K$ et $\\lambda$ sont des réels positifs et $a$ un réel quelconque.\nLes fonctions sigmoïdes sont un cas particulier de fonctions logistique avec $a > 0$.", "def logistique(x, a=1., k=1., _lambda=1.):\n y = k / (1. 
+ a * np.exp(-_lambda * x))\n return y\n\n%matplotlib inline\n\nx = np.linspace(-5, 5, 300)\n\ny1 = logistique(x, a=1.)\ny2 = logistique(x, a=2.)\ny3 = logistique(x, a=0.5)\n\nplt.plot(x, y1, label=r\"$a=1$\")\nplt.plot(x, y2, label=r\"$a=2$\")\nplt.plot(x, y3, label=r\"$a=0.5$\")\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.legend()\n\nplt.title(\"Fonction logistique\")\nplt.axis([-5, 5, -0.5, 2]);", "Le terme de biais\nTODO", "fig, ax = nnfig.init_figure(size_x=8, size_y=6)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\n#nnfig.draw_synapse(ax, (0,2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_0\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_1\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + \"_2\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_3\"), label_position=0.3, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))\n\n# Neuron ######################################\n\n# Layer 1 (input)\n#nnfig.draw_neuron(ax, (0,2*VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\n#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\n\n#plt.text(x=-1.7, y=2*VSPACE, s=tex(\"1\"), fontsize=12)\nplt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + \"_1\"), fontsize=12)\nplt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + \"_2\"), fontsize=12)\nplt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + \"_3\"), fontsize=12)\n\n# Layer 2\n#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)\n#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=HSPACE+2.5, y=-0.3,\n s=tex(STR_SIGOUT),\n fontsize=12)\n\nplt.show()\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=6)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\nnnfig.draw_synapse(ax, (0,2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_0\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_1\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + \"_2\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_3\"), label_position=0.3, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0,2*VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\n#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\n\nplt.text(x=-1.7, y=2*VSPACE, s=tex(\"1\"), fontsize=12)\nplt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + \"_1\"), fontsize=12)\nplt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + \"_2\"), fontsize=12)\nplt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + \"_3\"), 
fontsize=12)\n\n# Layer 2\n#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)\n#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=HSPACE+2.5, y=-0.3,\n s=tex(STR_SIGOUT),\n fontsize=12)\n\nplt.show()", "Exemple", "fig, ax = nnfig.init_figure(size_x=8, size_y=6)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\nnnfig.draw_synapse(ax, (0,2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_0\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_1\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + \"_2\"), label_position=0.3)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + \"_3\"), label_position=0.3, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0,2*VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\n#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\n\nplt.text(x=-1.7, y=2*VSPACE, s=tex(\"1\"), fontsize=12)\nplt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + \"_1\"), fontsize=12)\nplt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + \"_2\"), fontsize=12)\nplt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + \"_3\"), fontsize=12)\n\n# Layer 2\n#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)\n#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=HSPACE+2.5, y=-0.3,\n s=tex(STR_SIGOUT),\n fontsize=12)\n\nplt.show()", "Pour vecteur d'entrée = ... 
et un vecteur de poids arbitrairement fixé à ...\net un neurone défini avec la fonction sigmoïde, \non peut calculer la valeur de sortie du neurone :\nOn a:\n$$\n\\sum_i \\weight_i \\feature_i = \\dots\n$$\ndonc\n$$\ny = \\frac{1}{1 + e^{-\\dots}} \n$$", "@interact(w1=(-10., 10., 0.5), w2=(-10., 10., 0.5))\ndef nn1(wb1=0., w1=10.):\n x = np.linspace(-10., 10., 100)\n xb = np.ones(x.shape)\n\n s1 = wb1 * xb + w1 * x\n y = sigmoid(s1)\n\n plt.plot(x, y)", "Définition d'un réseau de neurones\nDisposition des neurones en couches et couches cachées\nTODO\nExemple : réseau de neurones à 1 couche \"cachée\"", "fig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\n# Layer 1-2\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_1\"), label_position=0.4)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_3\"), label_position=0.25, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_2\"), label_position=0.25)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_4\"), label_position=0.4, label_offset_y=-0.8)\n\n# Layer 2-3\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, 0), label=tex(STR_WEIGHT + \"_5\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, 0), label=tex(STR_WEIGHT + \"_6\"), label_position=0.4, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (2*HSPACE, 0), (2*HSPACE + 2, 0))\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Layer 3\nnnfig.draw_neuron(ax, (2*HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\n#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\nplt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + \"_1\"), fontsize=12)\nplt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + \"_2\"), fontsize=12)\n\n# Layer 2\n#plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_1\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_1\"), fontsize=12)\n\n#plt.text(x=HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + \"_2\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + \"_2\"), fontsize=12)\n\n# Layer 3\n#plt.text(x=2*HSPACE-1.25, y=1.5, s=tex(STR_POT + \"_o\"), fontsize=12)\n#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=2*HSPACE+2.5, y=-0.3,\n s=tex(STR_SIGOUT),\n fontsize=12)\n\nplt.show()", "TODO: il manque les biais...\n$$\n\\sigout =\n\\activfunc \\left(\n\\weight_5 ~ \\underbrace{\\activfunc \\left(\\weight_1 \\feature_1 + \\weight_3 \\feature_2 \\right)}{\\sigout_1}\n+\n\\weight_6 ~ \\underbrace{\\activfunc \\left(\\weight_2 \\feature_1 + \\weight_4 \\feature_2 \\right)}{\\sigout_2}\n\\right)\n$$", "@interact(wb1=(-10., 10., 0.5), w1=(-10., 10., 0.5), wb2=(-10., 10., 0.5), w2=(-10., 10., 0.5))\ndef nn1(wb1=0.1, w1=0.1, wb2=0.1, w2=0.1):\n x = np.linspace(-10., 10., 100)\n xb = np.ones(x.shape)\n\n s1 = wb1 * xb + w1 * x\n y1 = sigmoid(s1)\n \n s2 = wb2 * xb + w2 * x\n y2 = sigmoid(s2)\n \n s = wb2 * xb + w2 * x\n y = sigmoid(s)\n\n plt.plot(x, y)", "Exemple : réseau de 
neurones à 2 couches \"cachée\"", "fig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\n# Layer 1-2\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_1\"), label_position=0.4)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_3\"), label_position=0.25, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_2\"), label_position=0.25)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_4\"), label_position=0.4, label_offset_y=-0.8)\n\n# Layer 2-3\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_5\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_7\"), label_position=0.25, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_6\"), label_position=0.25)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_8\"), label_position=0.4, label_offset_y=-0.8)\n\n# Layer 3-4\nnnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_9\"), label_position=0.4)\nnnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_{10}\"), label_position=0.4, label_offset_y=-0.8)\n\nnnfig.draw_synapse(ax, (3*HSPACE, 0), (3*HSPACE + 2, 0))\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Layer 3\nnnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Layer 4\nnnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\n#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\nplt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + \"_1\"), fontsize=12)\nplt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + \"_2\"), fontsize=12)\n\n# Layer 2\n#plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_1\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_1\"), fontsize=12)\n\n#plt.text(x=HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + \"_2\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + \"_2\"), fontsize=12)\n\n# Layer 3\n#plt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_3\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_3\"), fontsize=12)\n\n#plt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + \"_4\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + \"_4\"), fontsize=12)\n\n# Layer 4\n#plt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + \"_o\"), fontsize=12)\n#plt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=3*HSPACE+2.5, y=-0.3,\n s=tex(STR_SIGOUT),\n fontsize=12)\n\nplt.show()", "TODO: il manque le biais...\n$\n\\newcommand{\\yone}{\\underbrace{\\activfunc \\left(\\weight_1 \\feature_1 + \\weight_3 \\feature_2 \\right)}{\\sigout_1}}\n\\newcommand{\\ytwo}{\\underbrace{\\activfunc \\left(\\weight_2 
\\feature_1 + \\weight_4 \\feature_2 \\right)}{\\sigout_2}}\n\\newcommand{\\ythree}{\\underbrace{\\activfunc \\left(\\weight_5 \\yone + \\weight_7 \\ytwo \\right)}{\\sigout_3}}\n\\newcommand{\\yfour}{\\underbrace{\\activfunc \\left(\\weight_6 \\yone + \\weight_8 \\ytwo \\right)}{\\sigout_4}}\n$\n$$\n\\sigout =\n\\activfunc \\left(\n\\weight_9 ~ \\ythree\n+\n\\weight_{10} ~ \\yfour\n\\right)\n$$\nPouvoir expressif d'un réseau de neurones\nTODO\nApprentissage\nFonction objectif (ou fonction d'erreur)\nFonction objectif: $\\errfunc \\left( \\weights \\right)$\nTypiquement, la fonction objectif (fonction d'erreur) est la somme du carré de l'erreur de chaque neurone de sortie.\n$$\n\\errfunc = \\frac12 \\sum_{\\cur \\in \\Omega} \\left[ \\sigout_\\cur - \\sigoutdes_\\cur \\right]^2\n$$\n$\\Omega$: l'ensemble des neurones de sortie\nLe $\\frac12$, c'est juste pour simplifier les calculs de la dérivée.", "fig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -6), (10, 0))\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0))\nnnfig.draw_synapse(ax, (0, 6), (10, 0))\n\nnnfig.draw_synapse(ax, (0, -6), (10, -4))\nnnfig.draw_synapse(ax, (0, -2), (10, -4))\nnnfig.draw_synapse(ax, (0, 2), (10, -4))\nnnfig.draw_synapse(ax, (0, 6), (10, -4))\n\nnnfig.draw_synapse(ax, (0, -6), (10, 4))\nnnfig.draw_synapse(ax, (0, -2), (10, 4))\nnnfig.draw_synapse(ax, (0, 2), (10, 4))\nnnfig.draw_synapse(ax, (0, 6), (10, 4))\n\nnnfig.draw_synapse(ax, (10, -4), (12, -4))\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\nnnfig.draw_synapse(ax, (10, 4), (12, 4))\n\nnnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)\n\nnnfig.draw_neuron(ax, (10, -4), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (10, 4), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)\nplt.text(x=10, y=7.5, s=tex(STR_CUR), fontsize=14)\n\nplt.text(x=0, y=0, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=9.7, y=-6.1, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=9.7, y=5.8, s=r\"$\\vdots$\", fontsize=14)\n\nplt.text(x=12.5, y=4, s=tex(STR_SIGOUT + \"_1\"), fontsize=14)\nplt.text(x=12.5, y=0, s=tex(STR_SIGOUT + \"_2\"), fontsize=14)\nplt.text(x=12.5, y=-4, s=tex(STR_SIGOUT + \"_3\"), fontsize=14)\n\nplt.text(x=16, y=4, s=tex(STR_ERRFUNC + \"_1 = \" + STR_SIGOUT + \"_1 - \" + STR_SIGOUT_DES + \"_1\"), fontsize=14)\nplt.text(x=16, y=0, s=tex(STR_ERRFUNC + \"_2 = \" + STR_SIGOUT + \"_2 - \" + STR_SIGOUT_DES + \"_2\"), fontsize=14)\nplt.text(x=16, y=-4, s=tex(STR_ERRFUNC + \"_3 = \" + STR_SIGOUT + \"_3 - \" + STR_SIGOUT_DES + \"_3\"), fontsize=14)\n\nplt.text(x=16, y=-8, s=tex(STR_ERRFUNC + \" = 1/2 ( \" + STR_ERRFUNC + \"^2_1 + \" + STR_ERRFUNC + \"^2_2 + \" + STR_ERRFUNC + \"^2_3 + \\dots )\"), fontsize=14)\n\nplt.show()", "Mise à jours des poids\n$$\n\\weights_{\\learnit + 1} = \\weights_{\\learnit} - \\underbrace{\\learnrate \\nabla_{\\weights} \\errfunc \\left( \\weights_{\\learnit} \\right)}\n$$\n$- \\learnrate \\nabla_{\\weights} \\errfunc \\left( \\weights_{\\learnit} \\right)$: descend dans la direction opposée au gradient (plus forte pente)\navec $\\nabla_{\\weights} \\errfunc \\left( \\weights_{\\learnit} \\right)$: gradient de la fonction objectif au point $\\weights$\n$\\learnrate > 0$: pas (ou taux) 
d'apprentissage\n$$\n\\begin{align}\n\\delta_{\\wcur} & = \\wcur_{\\learnit + 1} - \\wcur_{\\learnit} \\\n & = - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur}\n\\end{align}\n$$\n$$\n\\Leftrightarrow \\wcur_{\\learnit + 1} = \\wcur_{\\learnit} - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur}\n$$\nChaque présentation de l'ensemble des exemples = un cycle (ou une époque) d'apprentissage\nCritère d'arrêt de l'apprentissage: quand la valeur de la fonction objectif se stabilise (ou que le problème est résolu avec la précision souhaitée)\nDérivée des principales fonctions d'activation\nFonction sigmoïde\nFonction dérivée :\n$$\nf'(x) = \\frac{\\lambda e^{-\\lambda x}}{(1+e^{-\\lambda x})^{2}}\n$$\nqui peut aussi être défini par\n$$\n\\frac{\\mathrm{d} y}{\\mathrm{d} x} = \\lambda y (1-y)\n$$\noù $y$ varie de 0 à 1.", "def d_sigmoid(x, _lambda=1.):\n e = np.exp(-_lambda * x)\n y = _lambda * e / np.power(1 + e, 2)\n return y\n\n%matplotlib inline\n\nx = np.linspace(-5, 5, 300)\n\ny1 = d_sigmoid(x, 1.)\ny2 = d_sigmoid(x, 5.)\ny3 = d_sigmoid(x, 0.5)\n\nplt.plot(x, y1, label=r\"$\\lambda=1$\")\nplt.plot(x, y2, label=r\"$\\lambda=5$\")\nplt.plot(x, y3, label=r\"$\\lambda=0.5$\")\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.legend()\n\nplt.title(\"Fonction dérivée de la sigmoïde\")\nplt.axis([-5, 5, -0.5, 2]);", "Tangente hyperbolique\nDérivée :\n$$\n\\tanh '= \\frac{1}{\\cosh^{2}} = 1-\\tanh^{2}\n$$", "def d_tanh(x):\n y = 1. - np.power(np.tanh(x), 2)\n return y\n\nx = np.linspace(-5, 5, 300)\ny = d_tanh(x)\n\nplt.plot(x, y)\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.title(\"Fonction dérivée de la tangente hyperbolique\")\nplt.axis([-5, 5, -2, 2]);\n\n# TODO\n# - \"généralement le minimum local suffit\" (preuve ???)\n# - \"dans le cas contraire, le plus simple est de recommencer plusieurs fois l'apprentissage avec des poids initiaux différents et de conserver la meilleure matrice $\\weights$ (celle qui minimise $\\errfunc$)\"\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(4, 4))\n\nx = np.arange(10, 30, 0.1)\ny = (x - 20)**2 + 2\n\nax.set_xlabel(r\"Poids $\" + STR_WEIGHTS + \"$\", fontsize=14)\nax.set_ylabel(r\"Fonction objectif $\" + STR_ERRFUNC + \"$\", fontsize=14)\n\n# See http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params\nax.tick_params(axis='both', # changes apply to the x and y axis\n which='both', # both major and minor ticks are affected\n bottom='on', # ticks along the bottom edge are on\n top='off', # ticks along the top edge are off\n left='on', # ticks along the left edge are on\n right='off', # ticks along the right edge are off\n labelbottom='off', # labels along the bottom edge are off\n labelleft='off') # labels along the lefleft are off\n\nax.set_xlim(left=10, right=25)\nax.set_ylim(bottom=0, top=5)\n\nax.plot(x, y);", "Apprentissage incrémentiel (ou partiel) (ang. incremental learning):\non ajuste les poids $\\weights$ après la présentation d'un seul exemple\n(\"ce n'est pas une véritable descente de gradient\").\nC'est mieux pour éviter les minimums locaux, surtout si les exemples sont\nmélangés au début de chaque itération", "# *Apprentissage différé* (ang. 
*batch learning*):\n# TODO\n# Est-ce que la fonction objectif $\\errfunc$ est une fonction multivariée\n# ou est-ce une aggrégation des erreurs de chaque exemple ?\n\n# **TODO: règle du delta / règle du delta généralisée**", "Rétropropagation du gradient\nRétropropagation du gradient:\nune méthode pour calculer efficacement le gradient de la fonction objectif $\\errfunc$.\nIntuition:\nLa rétropropagation du gradient n'est qu'une méthode parmis d'autre pour résoudre le probème d'optimisation des poids $\\weight$. On pourrait très bien résoudre ce problème d'optimisation avec des algorithmes évolutionnistes par exemple.\nEn fait, l'intérêt de la méthode de la rétropropagation du gradient (et ce qui explique sa notoriété) est qu'elle formule le problème d'optimisation des poids avec une écriture analytique particulièrement efficace qui élimine astucieusement un grand nombre de calculs redondants (un peu à la manière de ce qui se fait en programmation dynamique): quand on decide d'optimiser les poids via une descente de gradient, certains termes (les signaux d'erreurs $\\errsig$) apparaissent un grand nombre de fois dans l'écriture analytique complète du gradient. La méthode de la retropropagation du gradient fait en sorte que ces termes ne soient calculés qu'une seule fois.\nÀ noter qu'on aurrait très bien pu résoudre le problème avec une descente de gradient oú le gradient $\\frac{\\partial \\errfunc}{\\partial\\wcur_{\\learnit}}$ serait calculé via une approximation numérique (méthode des différences finies par exemple) mais ce serait beaucoup plus lent et beaucoup moins efficace...\nPrincipe:\non modifie les poids à l'aide des signaux d'erreur $\\errsig$.\n$$\n\\wcur_{\\learnit + 1} = \\wcur_{\\learnit} \\underbrace{- \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur_{\\learnit}}}{\\delta\\prevcur}\n$$\n$$\n\\begin{align}\n\\delta_\\prevcur & = - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur(\\learnit)} \\\n & = - \\learnrate \\errsig_\\cur \\sigout\\prev\n\\end{align}\n$$\n\nDans le cas de l'apprentissage différé (batch), on calcule pour chaque exemple l'erreur correspondante. 
Leur contribution individuelle aux modifications des poids sont additionnées\nL'apprentissage suppervisé fonctionne mieux avec des neurones de sortie linéaires (fonction d'activation $\\activfunc$ = fonction identitée) \"car les signaux d'erreurs se transmettent mieux\".\nDes données d'entrée binaires doivent être choisies dans ${-1,1}$ plutôt que ${0,1}$ car un signal nul ne contribu pas à l'apprentissage.", "# TODO\n#Voc:\n#- *erreur marginale*: **TODO**", "Note intéressante de Jürgen Schmidhuber : http://people.idsia.ch/~juergen/who-invented-backpropagation.html\nSignaux d'erreur $\\errsig_\\cur$ pour les neurones de sortie $(\\cur \\in \\Omega)$\n$$\n\\errsig_\\cur = \\activfunc'(\\pot_\\cur)[\\sigout_\\cur - \\sigoutdes_\\cur]\n$$\nSignaux d'erreur $\\errsig_\\cur$ pour les neurones cachés $(\\cur \\not\\in \\Omega)$\n$$\n\\errsig_\\cur = \\activfunc'(\\pot_\\cur) \\sum_\\next \\weight_\\curnext \\errsig_\\next\n$$", "fig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0), label=tex(STR_WEIGHT + \"_{\" + STR_NEXT + STR_CUR + \"}\"), label_position=0.5, fontsize=14)\n\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\n\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\n\nplt.text(x=0, y=3.5, s=tex(STR_CUR), fontsize=14)\nplt.text(x=10, y=3.5, s=tex(STR_NEXT), fontsize=14)\nplt.text(x=0, y=-0.2, s=r\"$\\vdots$\", fontsize=14)\n\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.show()", "Plus de détail : calcul de $\\errsig_\\cur$\nDans l'exemple suivant on ne s'intéresse qu'aux poids $\\weight_1$, $\\weight_2$, $\\weight_3$, $\\weight_4$ et $\\weight_5$ pour simplifier la demonstration.", "fig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\n# Layer 1-2\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_1\"), label_position=0.4)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), color=\"lightgray\")\n\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), color=\"lightgray\")\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), color=\"lightgray\")\n\n# Layer 2-3\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_2\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), color=\"lightgray\")\n\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_3\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), color=\"lightgray\")\n\n# Layer 3-4\nnnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_4\"), label_position=0.4)\nnnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_5\"), label_position=0.4, label_offset_y=-0.8)\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True, line_color=\"lightgray\")\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\", line_color=\"lightgray\")\n\n# Layer 3\nnnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Layer 4\nnnfig.draw_neuron(ax, 
(3*HSPACE, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\nplt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\n\n# Layer 2\nplt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_1\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_1\"), fontsize=12)\n\n# Layer 3\nplt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_2\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_2\"), fontsize=12)\n\nplt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + \"_3\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + \"_3\"), fontsize=12)\n\n# Layer 4\nplt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + \"_o\"), fontsize=12)\nplt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=3*HSPACE+2, y=-0.3,\n s=tex(STR_ERRFUNC + \" = (\" + STR_SIGOUT + \"_o - \" + STR_SIGOUT_DES + \"_o)^2/2\"),\n fontsize=12)\n\nplt.show()", "Attention: $\\weight_1$ influe $\\pot_2$ et $\\pot_3$ en plus de $\\pot_1$ et $\\pot_o$.\nCalcul de la dérivée partielle de l'erreur par rapport au poid synaptique $\\weight_4$\nrappel:\n$$\n\\begin{align}\n\\errfunc &= \\frac12 \\left( \\sigout_o - \\sigoutdes_o \\right)^2 \\tag{1} \\\n\\sigout_o &= \\activfunc(\\pot_o) \\tag{2} \\\n\\pot_o &= \\sigout_2 \\weight_4 + \\sigout_3 \\weight_5 \\tag{3} \\\n\\end{align}\n$$\nc'est à dire:\n$$\n\\errfunc = \\frac12 \\left( \\activfunc \\left( \\sigout_2 \\weight_4 + \\sigout_3 \\weight_5 \\right) - \\sigoutdes_o \\right)^2\n$$\ndonc, en appliquant les règles de derivation de fonctions composées, on a:\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_4} =\n\\frac{\\partial \\pot_o}{\\partial \\weight_4}\n\\underbrace{\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n}_{\\errsig_o}\n$$\nRappel: dérivation des fonctions composées (parfois appelé règle de dérivation en chaîne ou règle de la chaîne)\n$$\n\\frac{\\mathrm{d} y}{\\mathrm{d} x} = \\frac{\\mathrm{d} y}{\\mathrm{d} u} \\cdot \\frac{\\mathrm{d} u}{\\mathrm {d} x}\n$$\nde (1), (2) et (3) on déduit:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_o}{\\partial \\weight_4} &= \\sigout_2 \\\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o} &= \\activfunc'(\\pot_o) \\\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} &= \\sigout_o - \\sigoutdes_o \\\n\\end{align}\n$$\nle signal d'erreur s'écrit donc:\n$$\n\\begin{align}\n\\errsig_o &=\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} \\\n&= \\activfunc'(\\pot_o) [\\sigout_o - \\sigoutdes_o]\n\\end{align}\n$$\nCalcul de la dérivée partielle de l'erreur par rapport au poid synaptique $\\weight_5$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_5} =\n\\frac{\\partial \\pot_o}{\\partial \\weight_5}\n\\underbrace{\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n}_{\\errsig_o}\n$$\navec:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_o}{\\partial \\weight_5} &= \\sigout_3 \\\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o} &= \\activfunc'(\\pot_o) \\\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} &= \\sigout_o - \\sigoutdes_o \\\n\\errsig_o &=\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} \\\n&= \\activfunc'(\\pot_o) [\\sigout_o - \\sigoutdes_o]\n\\end{align}\n$$\nCalcul de la dérivée partielle de l'erreur par rapport au poid synaptique 
$\\weight_2$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_2} =\n\\frac{\\partial \\pot_2}{\\partial \\weight_2}\n%\n\\underbrace{\n \\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n \\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n \\underbrace{\n \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n }{\\errsig_o}\n}{\\errsig_2}\n$$\navec:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_2}{\\partial \\weight_2} &= \\sigout_1 \\\n\\frac{\\partial \\sigout_2}{\\partial \\pot_2} &= \\activfunc'(\\pot_2) \\\n\\frac{\\partial \\pot_o}{\\partial \\sigout_2} &= \\weight_4 \\\n\\errsig_2 &=\n\\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n\\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n\\errsig_o \\\n&= \\activfunc'(\\pot_2) \\weight_4 \\errsig_o\n\\end{align}\n$$\nCalcul de la dérivée partielle de l'erreur par rapport au poid synaptique $\\weight_3$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_3} =\n\\frac{\\partial \\pot_3}{\\partial \\weight_3}\n%\n\\underbrace{\n \\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n \\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n \\underbrace{\n \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n }{\\errsig_o}\n}{\\errsig_3}\n$$\navec:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_3}{\\partial \\weight_3} &= \\sigout_1 \\\n\\frac{\\partial \\sigout_3}{\\partial \\pot_3} &= \\activfunc'(\\pot_3) \\\n\\frac{\\partial \\pot_o}{\\partial \\sigout_3} &= \\weight_5 \\\n\\errsig_3 &= \n\\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n\\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n\\errsig_o \\\n&= \\activfunc'(\\pot_3) \\weight_5 \\errsig_o\n\\end{align}\n$$\nCalcul de la dérivée partielle de l'erreur par rapport au poid synaptique $\\weight_1$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_1} =\n\\frac{\\partial \\pot_1}{\\partial \\weight_1}\n%\n\\underbrace{\n \\frac{\\partial \\sigout_1}{\\partial \\pot_1}\n \\left(\n \\frac{\\partial \\pot_2}{\\partial \\sigout_1} % err?\n \\underbrace{\n \\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n \\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n \\underbrace{\n \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n }{\\errsig_o}\n }{\\errsig_2}\n +\n \\frac{\\partial \\pot_3}{\\partial \\sigout_1} % err?\n \\underbrace{\n \\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n \\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n \\underbrace{\n \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n }{\\errsig_o}\n }{\\errsig_3}\n \\right)\n}_{\\errsig_1}\n$$\navec:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_1}{\\partial \\weight_1} &= \\sigout_i \\\n\\frac{\\partial \\sigout_1}{\\partial \\pot_1} &= \\activfunc'(\\pot_1) \\\n\\frac{\\partial \\pot_2}{\\partial \\sigout_1} &= \\weight_2 \\\n\\frac{\\partial \\pot_3}{\\partial \\sigout_1} &= \\weight_3 \\\n\\errsig_1 &=\n\\frac{\\partial \\sigout_1}{\\partial \\pot_1}\n\\left(\n\\frac{\\partial \\pot_2}{\\partial \\sigout_1}\n\\errsig_2\n+\n\\frac{\\partial \\pot_3}{\\partial \\sigout_1}\n\\errsig_3\n\\right) \\\n&= \n\\activfunc'(\\pot_1) \\left( \\weight_2 \\errsig_2 + \\weight_3 \\errsig_3 \\right)\n\\end{align}\n$$\nPython implementation", "# Define the activation function and its derivative\nactivation_function = tanh\nd_activation_function = d_tanh\n\ndef init_weights(num_input_cells, num_output_cells, num_cell_per_hidden_layer, num_hidden_layers=1):\n \"\"\"\n The 
returned `weights` object is a list of weight matrices,\n where weight matrix at index $i$ represents the weights between\n layer $i$ and layer $i+1$.\n \n Numpy array shapes for e.g. num_input_cells=2, num_output_cells=2,\n num_cell_per_hidden_layer=3 (without taking account bias):\n - in: (2,)\n - in+bias: (3,)\n - w[0]: (3,3)\n - w[0]+bias: (3,4)\n - w[1]: (3,2)\n - w[1]+bias: (4,2)\n - out: (2,)\n \"\"\"\n \n # TODO:\n # - faut-il que wij soit positif ?\n # - loi normale plus appropriée que loi uniforme ?\n # - quel sigma conseillé ?\n \n W = []\n \n # Weights between the input layer and the first hidden layer\n W.append(np.random.uniform(low=0., high=1., size=(num_input_cells + 1, num_cell_per_hidden_layer + 1)))\n \n # Weights between hidden layers (if there are more than one hidden layer)\n for layer in range(num_hidden_layers - 1):\n W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_cell_per_hidden_layer + 1)))\n \n # Weights between the last hidden layer and the output layer\n W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_output_cells)))\n \n return W\n\ndef evaluate_network(weights, input_signal): # TODO: find a better name\n \n # Add the bias on the input layer\n input_signal = np.concatenate([input_signal, [-1]])\n \n assert input_signal.ndim == 1\n assert input_signal.shape[0] == weights[0].shape[0]\n \n # Compute the output of the first hidden layer\n p = np.dot(input_signal, weights[0])\n output_hidden_layer = activation_function(p)\n \n # Compute the output of the intermediate hidden layers\n # TODO: check this\n num_layers = len(weights)\n for n in range(num_layers - 2):\n p = np.dot(output_hidden_layer, weights[n + 1])\n output_hidden_layer = activation_function(p)\n \n # Compute the output of the output layer\n p = np.dot(output_hidden_layer, weights[-1])\n output_signal = activation_function(p)\n \n return output_signal\n\ndef compute_gradient():\n # TODO\n pass\n\nweights = init_weights(num_input_cells=2, num_output_cells=2, num_cell_per_hidden_layer=3, num_hidden_layers=1)\nprint(weights)\n#print(weights[0].shape)\n#print(weights[1].shape)\n\ninput_signal = np.array([.1, .2])\ninput_signal\n\nevaluate_network(weights, input_signal)", "Divers\nLe PMC peut approximer n'importe quelle fonction continue avec une précision arbitraire suivant le nombre de neurones présents sur la couche cachée.\nInitialisation des poids: généralement des petites valeurs aléatoires", "# TODO: la différence entre:\n# * réseau bouclé\n# * réseau récurent", "Notes de la documentation sklearn\n\nfeatures : les données d'entrée du réseau (i.e. les entrées de la 1ere couche du réseau)\n\"nombre de features\" = taille du vecteur d'entrées\nloss function: fonction objectif (ou fonction d'erreur)\nfitting: processus d'apprentissage (training)\nsample: exemple\n\nLes biais sont stockés dans une liste de vecteurs plutôt qu'une liste de scalaires... pourquoi ???\nAvantages des PMC:\n- capables d'apprendre des modèles non linéaires\n- capables d'apprendre des modèles en temps réel (apprentissage on-line)\nInconvenients des PMC:\n- les PMC avec une ou plusieurs couches cachées ont une fonction objectif non-convexe avec des minimas locaux. 
Par conséquent, le résultat du processus d'apprentissage peut varier d'une execution à l'autre suivant la valeur des poids initiaux et l'obtention d'un réseau optimal n'est pas garanti\n- pour obtenir un résultat satisfaisant, il est souvant nécessaire de régler (plus ou moins empiriquement) de nombreux meta-paramètres (nombres de couches cachées, nombre de neurones sur les couches cachées, nombres d'itérations, ...)\n- une mauvaise normalisation des données d'entrée a un impact très négatif sur la qualité du résultat (\"mal conditionné\" ???)\nCross-Entropy Loss Function: ...\nSoftmax: ...\nMulti-label classification: ... modèle de classifieur qui permet a un exemple d'appartenir à plusieurs classes" ]
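The Python implementation above stops at a `compute_gradient()` stub. The sketch below is one hypothetical way to complete it for the single-hidden-layer network returned by `init_weights(..., num_hidden_layers=1)`; it is not part of the original notebook, and the helper names (`compute_gradient`, `d_tanh`) are assumptions. It simply applies the error signals derived earlier: delta_out = f'(p_out)(s_out − s*) at the output layer, delta_hidden = f'(p_hidden) · W · delta_out for the hidden layer, and dE/dw = delta of the downstream neuron times the output of the upstream neuron.

```python
import numpy as np

def d_tanh(x):
    # Derivative of tanh, as defined earlier in the notebook: 1 - tanh(x)^2
    return 1.0 - np.tanh(x) ** 2

def compute_gradient(weights, input_signal, desired_output):
    """Backpropagation for the single-hidden-layer layout of init_weights().

    weights[0] has shape (n_in + 1, n_hidden + 1) and weights[1] has shape
    (n_hidden + 1, n_out); the extra "+1" slots carry the bias, exactly as
    in evaluate_network(). Returns (gradients, error), where gradients[k]
    has the same shape as weights[k].
    """
    # Forward pass, mirroring evaluate_network() but keeping the potentials p
    x = np.concatenate([input_signal, [-1.0]])   # append the bias input
    p_hidden = np.dot(x, weights[0])             # potentials of the hidden layer
    s_hidden = np.tanh(p_hidden)                 # outputs of the hidden layer
    p_out = np.dot(s_hidden, weights[1])         # potentials of the output layer
    s_out = np.tanh(p_out)                       # network output

    error = 0.5 * np.sum((s_out - desired_output) ** 2)

    # Error signals (the deltas derived in the backpropagation section)
    delta_out = d_tanh(p_out) * (s_out - desired_output)
    delta_hidden = d_tanh(p_hidden) * np.dot(weights[1], delta_out)

    # dE/dw = delta of the downstream neuron * output of the upstream neuron
    gradients = [np.outer(x, delta_hidden), np.outer(s_hidden, delta_out)]
    return gradients, error

# One gradient-descent step, reusing `weights` and `input_signal` from the notebook
target = np.array([0.3, -0.2])                   # an arbitrary desired output for the demo
gradients, error = compute_gradient(weights, input_signal, target)
eta = 0.1                                        # learning rate
weights = [w - eta * g for w, g in zip(weights, gradients)]
```

A full training loop would simply repeat this update over (shuffled) examples until the error stabilises, as described in the learning section above.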
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bismayan/MaterialsMachineLearning
notebooks/old_ICSD_Notebooks/Understanding ICSD data.ipynb
mit
[ "from __future__ import division, print_function\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pymatgen.core import Element, Composition\n\n%matplotlib inline\n\nimport csv\n\nwith open(\"ICSD/icsd-ternaries.csv\", \"r\") as f:\n csv_reader = csv.reader(f, dialect = csv.excel_tab)\n data = [line for line in csv_reader]\n\nformulas = [line[2] for line in data]\ncompositions = [Composition(f) for f in formulas]", "Structure Types\nStructure types are assigned by hand by ICSD curators.", "# How many ternaries have been assigned a structure type?\nstructure_types = [line[3] for line in data if line[3] is not '']\nunique_structure_types = set(structure_types)\nprint(\"There are {} ICSD ternaries entries.\".format(len(data)))\nprint(\"Structure types are assigned for {} entries.\".format(len(structure_types)))\nprint(\"There are {} unique structure types.\".format(len(unique_structure_types)))", "Filter for stoichiometric compounds only:", "def is_stoichiometric(composition):\n return np.all(np.mod(composition.values(), 1) == 0)\n\nstoichiometric_compositions = [c for c in compositions if is_stoichiometric(c)]\nprint(\"Number of stoichiometric compositions: {}\".format(len(stoichiometric_compositions)))\n\nternaries = set(c.formula for c in stoichiometric_compositions)\n\nlen(ternaries)\n\ndata_stoichiometric = [x for x in data if is_stoichiometric(Composition(x[2]))]\n\nfrom collections import Counter\n\nstruct_type_freq = Counter(x[3] for x in data_stoichiometric if x[3] is not '')\n\nplt.loglog(range(1, len(struct_type_freq)+1),\n sorted(struct_type_freq.values(), reverse = True), 'o')\n\nsorted(struct_type_freq.items(), key = lambda x: x[1], reverse = True)\n\nlen(set([x[2] for x in data if x[3] == 'Perovskite-GdFeO3']))\n\nuniq_phases = set()\nfor row in data_stoichiometric:\n spacegroup, formula, struct_type = row[1:4]\n phase = (spacegroup, Composition(formula).formula, struct_type)\n uniq_phases.add(phase)\n\nuniq_struct_type_freq = Counter(x[2] for x in uniq_phases if x[2] is not '')\nuniq_struct_type_freq_sorted = sorted(uniq_struc_type_freq.items(), key = lambda x: x[1], reverse = True)\n\nplt.loglog(range(1, len(uniq_struct_type_freq_sorted)+1),\n [x[1] for x in uniq_struct_type_freq_sorted], 'o')\n\nuniq_struct_type_freq_sorted\n\nfor struct_type,freq in uniq_struct_type_freq_sorted[:10]:\n print(\"{} : {}\".format(struct_type, freq))\n fffs = [p[1] for p in uniq_phases if p[2] == struct_type]\n fmt = \" \".join([\"{:14}\"]*5)\n print(fmt.format(*fffs[0:5]))\n print(fmt.format(*fffs[5:10]))\n print(fmt.format(*fffs[10:15]))\n print(fmt.format(*fffs[15:20]))", "Long Formulas", "# What are the longest formulas?\nfor formula in sorted(formulas, key = lambda x: len(x), reverse = True)[:20]:\n print(formula)", "Two key insights:\n1. Just because there are three elements in the formula\n doesn't mean the compound is fundamentally a ternary.\n There are doped binaries which masquerade as ternaries.\n And there are doped ternaries which masquerade as quaternaries,\n or even quintenaries. Because I only asked for compositions\n with 3 elements, this data is missing.\n2. ICSD has strategically placed parentheses in the formulas\n which give hints as to logical groupings. 
For example:\n (Ho1.3 Ti0.7) ((Ti0.64 Ho1.36) O6.67)\n is in fact in the pyrochlore family, A2B2O7.\nIntermetallics\nHow many intermetallics does the ICSD database contain?", "def filter_in_set(compound, universe):\n return all((e in universe) for e in Composition(compound))\n\ntransition_metals = [e for e in Element if e.is_transition_metal]\ntm_ternaries = [c for c in formulas if filter_in_set(c, transition_metals)]\nprint(\"Number of intermetallics:\", len(tm_ternaries))\n\nunique_tm_ternaries = set([Composition(c).formula for c in tm_ternaries])\nprint(\"Number of unique intermetallics:\", len(unique_tm_ternaries))\n\nunique_tm_ternaries" ]
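A possible follow-up to the structure-type counts above (not part of the original notebook): grouping the unique stoichiometric ternaries by pymatgen's anonymized formula gives a rough, automatic proxy for prototype families such as the pyrochlore A2B2O7 group mentioned earlier. This assumes the installed pymatgen version exposes `Composition.anonymized_formula`.

```python
from collections import Counter

# Count anonymized stoichiometries (e.g. 'A2B2C7') over the unique ternaries
anon_counts = Counter(Composition(f).anonymized_formula for f in ternaries)

# The most common anonymous stoichiometries, a crude stand-in for structure prototypes
for anon, freq in sorted(anon_counts.items(), key=lambda x: x[1], reverse=True)[:10]:
    print("{:12s} {}".format(anon, freq))
```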
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.24/_downloads/e23ed246a9a354f899dfb3ce3b06e194/10_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "Overview of MEG/EEG analysis with MNE-Python\nThis tutorial covers the basic EEG/MEG pipeline for event-related analysis:\nloading data, epoching, averaging, plotting, and estimating cortical activity\nfrom sensor data. It introduces the core MNE-Python data structures\n~mne.io.Raw, ~mne.Epochs, ~mne.Evoked, and ~mne.SourceEstimate, and\ncovers a lot of ground fairly quickly (at the expense of depth). Subsequent\ntutorials address each of these topics in greater detail.\nWe begin by importing the necessary Python modules:", "import os\nimport numpy as np\nimport mne", "Loading data\nMNE-Python data structures are based around the FIF file format from\nNeuromag, but there are reader functions for a wide variety of other\ndata formats &lt;data-formats&gt;. MNE-Python also has interfaces to a\nvariety of publicly available datasets &lt;datasets&gt;,\nwhich MNE-Python can download and manage for you.\nWe'll start this tutorial by loading one of the example datasets (called\n\"sample-dataset\"), which contains EEG and MEG data from one subject\nperforming an audiovisual experiment, along with structural MRI scans for\nthat subject. The mne.datasets.sample.data_path function will automatically\ndownload the dataset if it isn't found in one of the expected locations, then\nreturn the directory path to the dataset (see the documentation of\n~mne.datasets.sample.data_path for a list of places it checks before\ndownloading). Note also that for this tutorial to run smoothly on our\nservers, we're using a filtered and downsampled version of the data\n(:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version\n(:file:sample_audvis_raw.fif) is also included in the sample dataset and\ncould be substituted here when running the tutorial locally.", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)", "By default, ~mne.io.read_raw_fif displays some information about the file\nit's loading; for example, here it tells us that there are four \"projection\nitems\" in the file along with the recorded data; those are :term:SSP\nprojectors &lt;projector&gt; calculated to remove environmental noise from the MEG\nsignals, plus a projector to mean-reference the EEG channels; these are\ndiscussed in the tutorial tut-projectors-background. In addition to\nthe information displayed during loading, you can get a glimpse of the basic\ndetails of a ~mne.io.Raw object by printing it; even more is available by\nprinting its info attribute (a dictionary-like object &lt;mne.Info&gt; that\nis preserved across ~mne.io.Raw, ~mne.Epochs, and ~mne.Evoked objects).\nThe info data structure keeps track of channel locations, applied\nfilters, projectors, etc. Notice especially the chs entry, showing that\nMNE-Python detects different sensor types and handles each appropriately. See\ntut-info-class for more on the ~mne.Info class.", "print(raw)\nprint(raw.info)", "~mne.io.Raw objects also have several built-in plotting methods; here we\nshow the power spectral density (PSD) for each sensor type with\n~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with\n~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below 50 Hz\n(since our data are low-pass filtered at 40 Hz). 
In interactive Python\nsessions, ~mne.io.Raw.plot is interactive and allows scrolling, scaling,\nbad channel marking, annotations, projector toggling, etc.", "raw.plot_psd(fmax=50)\nraw.plot(duration=5, n_channels=30)", "Preprocessing\nMNE-Python supports a variety of preprocessing approaches and techniques\n(maxwell filtering, signal-space projection, independent components analysis,\nfiltering, downsampling, etc); see the full list of capabilities in the\n:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean\nup our data by performing independent components analysis\n(~mne.preprocessing.ICA); for brevity we'll skip the steps that helped us\ndetermined which components best capture the artifacts (see\ntut-artifact-ica for a detailed walk-through of that process).", "# set up and fit the ICA\nica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)\nica.fit(raw)\nica.exclude = [1, 2] # details on how we picked these are omitted here\nica.plot_properties(raw, picks=ica.exclude)", "Once we're confident about which component(s) we want to remove, we pass them\nas the exclude parameter and then apply the ICA to the raw signal. The\n~mne.preprocessing.ICA.apply method requires the raw data to be loaded into\nmemory (by default it's only read from disk as-needed), so we'll use\n~mne.io.Raw.load_data first. We'll also make a copy of the ~mne.io.Raw\nobject so we can compare the signal before and after artifact removal\nside-by-side:", "orig_raw = raw.copy()\nraw.load_data()\nica.apply(raw)\n\n# show some frontal channels to clearly illustrate the artifact removal\nchs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',\n 'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',\n 'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',\n 'EEG 007', 'EEG 008']\nchan_idxs = [raw.ch_names.index(ch) for ch in chs]\norig_raw.plot(order=chan_idxs, start=12, duration=4)\nraw.plot(order=chan_idxs, start=12, duration=4)", "Detecting experimental events\nThe sample dataset includes several :term:\"STIM\" channels &lt;stim channel&gt;\nthat recorded electrical signals sent from the stimulus delivery computer (as\nbrief DC shifts / squarewave pulses). These pulses (often called \"triggers\")\nare used in this dataset to mark experimental events: stimulus onset,\nstimulus type, and participant response (button press). The individual STIM\nchannels are combined onto a single channel, in such a way that voltage\nlevels on that channel can be unambiguously decoded as a particular event\ntype. On older Neuromag systems (such as that used to record the sample data)\nthis summation channel was called STI 014, so we can pass that channel\nname to the mne.find_events function to recover the timing and identity of\nthe stimulus events.", "events = mne.find_events(raw, stim_channel='STI 014')\nprint(events[:5]) # show the first 5", "The resulting events array is an ordinary 3-column :class:NumPy array\n&lt;numpy.ndarray&gt;, with sample number in the first column and integer event ID\nin the last column; the middle column is usually ignored. Rather than keeping\ntrack of integer event IDs, we can provide an event dictionary that maps\nthe integer IDs to experimental conditions or events. 
In this dataset, the\nmapping looks like this:\n+----------+----------------------------------------------------------+\n| Event ID | Condition |\n+==========+==========================================================+\n| 1 | auditory stimulus (tone) to the left ear |\n+----------+----------------------------------------------------------+\n| 2 | auditory stimulus (tone) to the right ear |\n+----------+----------------------------------------------------------+\n| 3 | visual stimulus (checkerboard) to the left visual field |\n+----------+----------------------------------------------------------+\n| 4 | visual stimulus (checkerboard) to the right visual field |\n+----------+----------------------------------------------------------+\n| 5 | smiley face (catch trial) |\n+----------+----------------------------------------------------------+\n| 32 | subject button press |\n+----------+----------------------------------------------------------+", "event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'smiley': 5, 'buttonpress': 32}", "Event dictionaries like this one are used when extracting epochs from\ncontinuous data; the / character in the dictionary keys allows pooling\nacross conditions by requesting partial condition descriptors (i.e.,\nrequesting 'auditory' will select all epochs with Event IDs 1 and 2;\nrequesting 'left' will select all epochs with Event IDs 1 and 3). An\nexample of this is shown in the next section. There is also a convenient\n~mne.viz.plot_events function for visualizing the distribution of events\nacross the duration of the recording (to make sure event detection worked as\nexpected). Here we'll also make use of the ~mne.Info attribute to get the\nsampling frequency of the recording (so our x-axis will be in seconds instead\nof in samples).", "fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],\n first_samp=raw.first_samp)", "For paradigms that are not event-related (e.g., analysis of resting-state\ndata), you can extract regularly spaced (possibly overlapping) spans of data\nby creating events using mne.make_fixed_length_events and then proceeding\nwith epoching as described in the next section.\nEpoching continuous data\nThe ~mne.io.Raw object and the events array are the bare minimum needed to\ncreate an ~mne.Epochs object, which we create with the ~mne.Epochs class\nconstructor. Here we'll also specify some data quality constraints: we'll\nreject any epoch where peak-to-peak signal amplitude is beyond reasonable\nlimits for that channel type. This is done with a rejection dictionary; you\nmay include or omit thresholds for any of the channel types present in your\ndata. The values given here are reasonable for this particular dataset, but\nmay need to be adapted for different hardware or recording conditions. For a\nmore automated approach, consider using the autoreject package_.", "reject_criteria = dict(mag=4000e-15, # 4000 fT\n grad=4000e-13, # 4000 fT/cm\n eeg=150e-6, # 150 µV\n eog=250e-6) # 250 µV", "We'll also pass the event dictionary as the event_id parameter (so we can\nwork with easy-to-pool event labels instead of the integer event IDs), and\nspecify tmin and tmax (the time relative to each event at which to\nstart and end each epoch). 
As mentioned above, by default ~mne.io.Raw and\n~mne.Epochs data aren't loaded into memory (they're accessed from disk only\nwhen needed), but here we'll force loading into memory using the\npreload=True parameter so that we can see the results of the rejection\ncriteria being applied:", "epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,\n reject=reject_criteria, preload=True)", "Next we'll pool across left/right stimulus presentations so we can compare\nauditory versus visual responses. To avoid biasing our signals to the left or\nright, we'll use ~mne.Epochs.equalize_event_counts first to randomly sample\nepochs from each condition to match the number of epochs present in the\ncondition with the fewest good epochs.", "conds_we_care_about = ['auditory/left', 'auditory/right',\n 'visual/left', 'visual/right']\nepochs.equalize_event_counts(conds_we_care_about) # this operates in-place\naud_epochs = epochs['auditory']\nvis_epochs = epochs['visual']\ndel raw, epochs # free up memory", "Like ~mne.io.Raw objects, ~mne.Epochs objects also have a number of\nbuilt-in plotting methods. One is ~mne.Epochs.plot_image, which shows each\nepoch as one row of an image map, with color representing signal magnitude;\nthe average evoked response and the sensor location are shown below the\nimage:", "aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])", "<div class=\"alert alert-info\"><h4>Note</h4><p>Both `~mne.io.Raw` and `~mne.Epochs` objects have `~mne.Epochs.get_data`\n methods that return the underlying data as a\n :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``\n parameter for subselecting which channel(s) to return; ``raw.get_data()``\n has additional parameters for restricting the time domain. The resulting\n matrices have dimension ``(n_channels, n_times)`` for `~mne.io.Raw` and\n ``(n_epochs, n_channels, n_times)`` for `~mne.Epochs`.</p></div>\n\nTime-frequency analysis\nThe :mod:mne.time_frequency submodule provides implementations of several\nalgorithms to compute time-frequency representations, power spectral density,\nand cross-spectral density. Here, for example, we'll compute for the auditory\nepochs the induced power at different frequencies and times, using Morlet\nwavelets. On this dataset the result is not especially informative (it just\nshows the evoked \"auditory N100\" response); see here\n&lt;inter-trial-coherence&gt; for a more extended example on a dataset with richer\nfrequency content.", "frequencies = np.arange(7, 30, 3)\npower = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,\n freqs=frequencies, decim=3)\npower.plot(['MEG 1332'])", "Estimating evoked responses\nNow that we have our conditions in aud_epochs and vis_epochs, we can\nget an estimate of evoked responses to auditory versus visual stimuli by\naveraging together the epochs in each condition. This is as simple as calling\nthe ~mne.Epochs.average method on the ~mne.Epochs object, and then using\na function from the :mod:mne.viz module to compare the global field power\nfor each sensor type of the two ~mne.Evoked objects:", "aud_evoked = aud_epochs.average()\nvis_evoked = vis_epochs.average()\n\nmne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),\n legend='upper left', show_sensors='upper right')", "We can also get a more detailed view of each ~mne.Evoked object using other\nplotting methods such as ~mne.Evoked.plot_joint or\n~mne.Evoked.plot_topomap. 
Here we'll examine just the EEG channels, and see\nthe classic auditory evoked N100-P200 pattern over dorso-frontal electrodes,\nthen plot scalp topographies at some additional arbitrary times:", "aud_evoked.plot_joint(picks='eeg')\naud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')", "Evoked objects can also be combined to show contrasts between conditions,\nusing the mne.combine_evoked function. A simple difference can be\ngenerated by passing weights=[1, -1]. We'll then plot the difference wave\nat each sensor using ~mne.Evoked.plot_topo:", "evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])\nevoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)", "Inverse modeling\nFinally, we can estimate the origins of the evoked activity by projecting the\nsensor data into this subject's :term:source space (a set of points either\non the cortical surface or within the cortical volume of that subject, as\nestimated by structural MRI scans). MNE-Python supports lots of ways of doing\nthis (dynamic statistical parametric mapping, dipole fitting, beamformers,\netc.); here we'll use minimum-norm estimation (MNE) to generate a continuous\nmap of activation constrained to the cortical surface. MNE uses a linear\n:term:inverse operator to project EEG+MEG sensor measurements into the\nsource space. The inverse operator is computed from the\n:term:forward solution for this subject and an estimate of the\ncovariance of sensor measurements &lt;tut-compute-covariance&gt;. For this\ntutorial we'll skip those computational steps and load a pre-computed inverse\noperator from disk (it's included with the sample data\n&lt;sample-dataset&gt;). Because this \"inverse problem\" is underdetermined (there\nis no unique solution), here we further constrain the solution by providing a\nregularization parameter specifying the relative smoothness of the current\nestimates in terms of a signal-to-noise ratio (where \"noise\" here is akin to\nbaseline activity level across all of cortex).", "# load inverse operator\ninverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis-meg-oct-6-meg-inv.fif')\ninv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)\n# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)\nsnr = 3.\nlambda2 = 1. / snr ** 2\n# generate the source time course (STC)\nstc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,\n lambda2=lambda2,\n method='MNE') # or dSPM, sLORETA, eLORETA", "Finally, in order to plot the source estimate on the subject's cortical\nsurface we'll also need the path to the sample subject's structural MRI files\n(the subjects_dir):", "# path to subjects' MRI files\nsubjects_dir = os.path.join(sample_data_folder, 'subjects')\n# plot the STC\nstc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],\n subjects_dir=subjects_dir)", "The remaining tutorials have much more detail on each of these topics (as\nwell as many other capabilities of MNE-Python not mentioned here:\nconnectivity analysis, encoding/decoding models, lots more visualization\noptions, etc). Read on to learn more!\n.. LINKS" ]
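To make the note about `get_data()` above concrete, here is a small sketch that is not part of the original tutorial. It assumes a recent MNE version in which both `Epochs.get_data()` and `Epochs.average()` accept a `picks` argument, and it should be run at the end of the pipeline while `aud_epochs` still exists.

```python
import numpy as np

# The NumPy arrays underneath the objects used above
eeg_epochs_array = aud_epochs.get_data(picks='eeg')   # shape: (n_epochs, n_channels, n_times)
print(eeg_epochs_array.shape)

# Averaging over the epochs axis by hand matches Epochs.average() on the same picks
eeg_evoked = aud_epochs.average(picks='eeg')          # Evoked.data has shape (n_channels, n_times)
print(np.allclose(eeg_epochs_array.mean(axis=0), eeg_evoked.data))
```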
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AstroHackWeek/AstroHackWeek2014
day1/Version Control.ipynb
bsd-3-clause
[ "Version control for fun and profit: the tool you didn't know you needed. From personal workflows to open collaboration\nNote: this tutorial was particularly modeled, and therefore owes a lot, to the excellent materials offered in:\n\n\"Git for Scientists: A Tutorial\" by John McDonnell \nEmanuele Olivetti's lecture notes and exercises from the G-Node summer school on Advanced Scientific Programming in Python.\n\nIn particular I've reused the excellent images from the Pro Git book that John had already selected and downloaded, as well as some of his outline. But this version of the tutorial aims to be 100% reproducible by being executed directly as an IPython notebook and is hosted itself on github so that others can more easily make improvements to it by collaborating on Github. Many thanks to John and Emanuele for making their materials available online.\nAfter writing this document, I discovered J.R. Johansson's tutorial on version control that is also written as a fully reproducible notebook and is also aimed at a scientific audience. It has a similar spirit to this one, and is part of his excellent series Lectures on Scientific Computing with Python that is entirely available as IPython Notebooks.\nWikipedia\n“Revision control, also known as version control, source control\nor software configuration management (SCM), is the\nmanagement of changes to documents, programs, and other\ninformation stored as computer files.”\nReproducibility?\n\nTracking and recreating every step of your work\nIn the software world: it's called Version Control!\n\nWhat do (good) version control tools give you?\n\nPeace of mind (backups)\nFreedom (exploratory branching)\nCollaboration (synchronization)\n\nGit is an enabling technology: Use version control for everything\n\nPaper writing (never get paper_v5_john_jane_final_oct22_really_final.tex by email again!)\nGrant writing\nEveryday research\nTeaching (never accept an emailed homework assignment again!)\n\nTeaching courses with Git\n<!-- offline: \n<img src=\"files/fig/indefero_projects_notes.png\" width=\"100%\">\n-->\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/indefero_projects_notes.png\" width=\"100%\">\nAnnotated history of each student's worfklow (and backup!)\n<!-- offline: \n <img src=\"files/fig/indefero_projects1.png\" width=\"100%\">\n -->\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/indefero_projects1.png\" width=\"100%\">\nThe plan for this tutorial\nThis tutorial is structured in the following way: we will begin with a brief overview of key concepts you need to understand in order for git to really make sense. We will then dive into hands-on work: after a brief interlude into necessary configuration we will discuss 5 \"stages of git\" with scenarios of increasing sophistication and complexity, introducing the necessary commands for each stage:\n\nLocal, single-user, linear workflow\nSingle local user, branching\nUsing remotes as a single user\nRemotes for collaborating in a small team\nFull-contact github: distributed collaboration with large teams\n\nIn reality, this tutorial only covers stages 1-4, since for #5 there are many software develoment-oriented tutorials and documents of very high quality online. But most scientists start working alone with a few files or with a small team, so I feel it's important to build first the key concepts and practices based on problems scientists encounter in their everyday life and without the jargon of the software world. 
Once you've become familiar with 1-4, the excellent tutorials that exist about collaborating on github on open-source projects should make sense.\nVery high level picture: an overview of key concepts\nThe commit: a snapshot of work at a point in time\n<!-- offline: \n![](fig/commit_anatomy.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/commit_anatomy.png\">\nCredit: ProGit book, by Scott Chacon, CC License.", "ls", "A repository: a group of linked commits\n<!-- offline: \n![](files/fig/threecommits.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/threecommits.png\" >\nNote: these form a Directed Acyclic Graph (DAG), with nodes identified by their hash.\nA hash: a fingerprint of the content of each commit and its parent", "import sha\n\n# Our first commit\ndata1 = 'This is the start of my paper2.'\nmeta1 = 'date: 1/1/12'\nhash1 = sha.sha(data1 + meta1).hexdigest()\nprint 'Hash:', hash1\n\n# Our second commit, linked to the first\ndata2 = 'Some more text in my paper...'\nmeta2 = 'date: 1/2/12'\n# Note we add the parent hash here!\nhash2 = sha.sha(data2 + meta2 + hash1).hexdigest()\nprint 'Hash:', hash2", "And this is pretty much the essence of Git!\nFirst things first: git must be configured before first use\nThe minimal amount of configuration for git to work without pestering you is to tell it who you are:", "%%bash\ngit config --global user.name \"Fernando Perez\"\ngit config --global user.email \"[email protected]\"", "And how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files):", "%%bash\n# Put here your preferred editor. If this is not set, git will honor\n# the $EDITOR environment variable\ngit config --global core.editor /usr/bin/jed # my lightweight unix editor\n\n# On Windows Notepad will do in a pinch, I recommend Notepad++ as a free alternative\n# On the mac, you can set nano or emacs as a basic option\n\n# And while we're at it, we also turn on the use of color, which is very useful\ngit config --global color.ui \"auto\"", "Set git to use the credential memory cache so we don't have to retype passwords too frequently. On Linux, you should run the following (note that this requires git version 1.7.10 or newer):", "%%bash \ngit config --global credential.helper cache\n# Set the cache to timeout after 2 hours (setting is in seconds)\ngit config --global credential.helper 'cache --timeout=7200'", "Github offers in its help pages instructions on how to configure the credentials helper for Mac OSX and Windows.\nStage 1: Local, single-user, linear workflow\nSimply type git to see a full list of all the 'core' commands. We'll now go through most of these via small practical exercises:", "!git", "git init: create an empty repository", "%%bash\nrm -rf test\ngit init test", "Note: all these cells below are meant to be run by you in a terminal where you change once to the test directory and continue working there.\nSince we are putting all of them here in a single notebook for the purposes of the tutorial, they will all be prepended with the first two lines:\n%%bash\ncd test\n\nthat tell IPython to do that each time. But you should ignore those two lines and type the rest of each cell yourself in your terminal.\nLet's look at what git did:", "%%bash\ncd test\n\nls\n\n%%bash\ncd test\n\nls -la\n\n%%bash\ncd test\n\nls -l .git", "Now let's edit our first file in the test directory with a text editor... 
I'm doing it programatically here for automation purposes, but you'd normally be editing by hand", "%%bash\ncd test\n\necho \"My first bit of text\" > file1.txt", "git add: tell git about this new file", "%%bash\ncd test\n\ngit add file1.txt", "We can now ask git about what happened with status:", "%%bash\ncd test\n\ngit status", "git commit: permanently record our changes in git's database\nFor now, we are always going to call git commit either with the -a option or with specific filenames (git commit file1 file2...). This delays the discussion of an aspect of git called the index (often referred to also as the 'staging area') that we will cover later. Most everyday work in regular scientific practice doesn't require understanding the extra moving parts that the index involves, so on a first round we'll bypass it. Later on we will discuss how to use it to achieve more fine-grained control of what and how git records our actions.", "%%bash\ncd test\n\ngit commit -a -m\"This is our first commit\"", "In the commit above, we used the -m flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string: git only tries to facilitate best practices, it's not your nanny).\ngit log: what has been committed so far", "%%bash\ncd test\n\ngit log", "git diff: what have I changed?\nLet's do a little bit more work... Again, in practice you'll be editing the files by hand, here we do it via shell commands for the sake of automation (and therefore the reproducibility of this tutorial!)", "%%bash\ncd test\n\necho \"And now some more text...\" >> file1.txt", "And now we can ask git what is different:", "%%bash\ncd test\n\ngit diff", "The cycle of git virtue: work, commit, work, commit, ...", "%%bash\ncd test\n\ngit commit -a -m\"I have made great progress on this critical matter.\"", "git log revisited\nFirst, let's see what the log shows us now:", "%%bash\ncd test\n\ngit log", "Sometimes it's handy to see a very summarized version of the log:", "%%bash\ncd test\n\ngit log --oneline --topo-order --graph", "Git supports aliases: new names given to command combinations. Let's make this handy shortlog an alias, so we only have to type git slog and see this compact log:", "%%bash\ncd test\n\n# We create our alias (this saves it in git's permanent configuration file):\ngit config --global alias.slog \"log --oneline --topo-order --graph\"\n\n# And now we can use it\ngit slog", "git mv and rm: moving and removing files\nWhile git add is used to add fils to the list git tracks, we must also tell it if we want their names to change or for it to stop tracking them. In familiar Unix fashion, the mv and rm git commands do precisely this:", "%%bash\ncd test\n\ngit mv file1.txt file-newname.txt\ngit status", "Note that these changes must be committed too, to become permanent! In git's world, until something hasn't been committed, it isn't permanently recorded anywhere.", "%%bash\ncd test\n\ngit commit -a -m\"I like this new name better\"\necho \"Let's look at the log again:\"\ngit slog", "And git rm works in a similar fashion.\nExercise\nAdd a new file file2.txt, commit it, make some changes to it, commit them again, and then remove it (and don't forget to commit this last step!).\nLocal user, branching\nWhat is a branch? 
Simply a label for the 'current' commit in a sequence of ongoing commits:\n<!-- offline: \n![](files/fig/masterbranch.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/masterbranch.png\" >\nThere can be multiple branches alive at any point in time; the working directory is the state of a special pointer called HEAD. In this example there are two branches, master and testing, and testing is the currently active branch since it's what HEAD points to:\n<!-- offline: \n![](files/fig/HEAD_testing.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/HEAD_testing.png\" >\nOnce new commits are made on a branch, HEAD and the branch label move with the new commits:\n<!-- offline: \n![](files/fig/branchcommit.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/branchcommit.png\" >\nThis allows the history of both branches to diverge:\n<!-- offline: \n![](files/fig/mergescenario.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/mergescenario.png\" >\nBut based on this graph structure, git can compute the necessary information to merge the divergent branches back and continue with a unified line of development:\n<!-- offline: \n![](files/fig/mergeaftermath.png)\n-->\n\n<img src=\"https://raw.github.com/fperez/reprosw/master/fig/mergeaftermath.png\" >\nLet's now illustrate all of this with a concrete example. Let's get our bearings first:", "%%bash\ncd test\n\ngit status\nls", "We are now going to try two different routes of development: on the master branch we will add one file and on the experiment branch, which we will create, we will add a different one. We will then merge the experimental branch into master.", "%%bash\ncd test\n\ngit branch experiment\ngit checkout experiment\n\n%%bash\ncd test\n\necho \"Some crazy idea\" > experiment.txt\ngit add experiment.txt\ngit commit -a -m\"Trying something new\"\ngit slog\n\n%%bash\ncd test\n\ngit checkout master\ngit slog\n\n%%bash\ncd test\n\necho \"All the while, more work goes on in master...\" >> file-newname.txt\ngit commit -a -m\"The mainline keeps moving\"\ngit slog\n\n%%bash\ncd test\n\nls\n\n%%bash\ncd test\n\ngit merge experiment\ngit slog", "Using remotes as a single user\nWe are now going to introduce the concept of a remote repository: a pointer to another copy of the repository that lives on a different location. This can be simply a different path on the filesystem or a server on the internet.\nFor this discussion, we'll be using remotes hosted on the GitHub.com service, but you can equally use other services like BitBucket or Gitorious as well as host your own.", "%%bash\ncd test\n\nls\necho \"Let's see if we have any remote repositories here:\"\ngit remote -v", "Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. 
That option is useful when you're starting first at Github and don't have a repo made already on a local computer.\nWe can now follow the instructions from the next page:", "%%bash\ncd test\n\ngit remote add origin https://github.com/fperez/test.git\ngit push -u origin master", "Let's see the remote situation again:", "%%bash\ncd test\n\ngit remote -v", "We can now see this repository publicly on github.\nLet's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory...", "%%bash\n\n# Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer\ngit clone https://github.com/fperez/test.git test2\ncd test2\npwd\ngit remote -v", "Let's now make some changes in one 'computer' and synchronize them on the second.", "%%bash\ncd test2 # working on computer #2\n\necho \"More new content on my experiment\" >> experiment.txt\ngit commit -a -m\"More work, on machine #2\"", "Now we put this new work up on the github server so it's available from the internet", "%%bash\ncd test2\n\ngit push", "Now let's fetch that work from machine #1:", "%%bash\ncd test\n\ngit pull", "An important aside: conflict management\nWhile git is very good at merging, if two different branches modify the same file in the same location, it simply can't decide which change should prevail. At that point, human intervention is necessary to make the decision. Git will help you by marking the location in the file that has a problem, but it's up to you to resolve the conflict. Let's see how that works by intentionally creating a conflict.\nWe start by creating a branch and making a change to our experiment file:", "%%bash\ncd test\n\ngit branch trouble\ngit checkout trouble\necho \"This is going to be a problem...\" >> experiment.txt\ngit commit -a -m\"Changes in the trouble branch\"", "And now we go back to the master branch, where we change the same file:", "%%bash\ncd test\n\ngit checkout master\necho \"More work on the master branch...\" >> experiment.txt\ngit commit -a -m\"Mainline work\"", "So now let's see what happens if we try to merge the trouble branch into master:", "%%bash\ncd test\n\ngit merge trouble", "Let's see what git has put into our file:", "%%bash\ncd test\n\ncat experiment.txt", "At this point, we go into the file with a text editor, decide which changes to keep, and make a new commit that records our decision. I've now made the edits, in this case I decided that both pieces of text were useful, but integrated them with some changes:", "%%bash\ncd test\n\ncat experiment.txt", "Let's then make our new commit:", "%%bash\ncd test\n\ngit commit -a -m\"Completed merge of trouble, fixing conflicts along the way\"\ngit slog", "Note: While it's a good idea to understand the basics of fixing merge conflicts by hand, in some cases you may find the use of an automated tool useful. Git supports multiple merge tools: a merge tool is a piece of software that conforms to a basic interface and knows how to merge two files into a new one. Since these are typically graphical tools, there are various to choose from for the different operating systems, and as long as they obey a basic command structure, git can work with any of them.\nCollaborating on github with a small team\nSingle remote with shared access: we are going to set up a shared collaboration with one partner (the person sitting next to you). 
This will show the basic workflow of collaborating on a project with a small team where everyone has write privileges to the same repository. \nNote for SVN users: this is similar to the classic SVN workflow, with the distinction that commit and push are separate steps. SVN, having no local repository, commits directly to the shared central resource, so to a first approximation you can think of svn commit as being synonymous with git commit; git push.\nWe will have two people, let's call them Alice and Bob, sharing a repository. Alice will be the owner of the repo and she will give Bob write privileges. \nWe begin with a simple synchronization example, much like we just did above, but now between two people instead of one person. Otherwise it's the same:\n\nBob clones Alice's repository.\nBob makes changes to a file and commits them locally.\nBob pushes his changes to github.\nAlice pulls Bob's changes into her own repository.\n\nNext, we will have both parties make non-conflicting changes each, and commit them locally. Then both try to push their changes:\n\nAlice adds a new file, alice.txt to the repo and commits.\nBob adds bob.txt and commits.\nAlice pushes to github.\nBob tries to push to github. What happens here?\n\nThe problem is that Bob's changes create a commit that conflicts with Alice's, so git refuses to apply them. It forces Bob to first do the merge on his machine, so that if there is a conflict in the merge, Bob deals with the conflict manually (git could try to do the merge on the server, but in that case if there's a conflict, the server repo would be left in a conflicted state without a human to fix things up). The solution is for Bob to first pull the changes (pull in git is really fetch+merge), and then push again.\nFull-contact github: distributed collaboration with large teams\nMultiple remotes and merging based on pull request workflow: this is beyond the scope of this brief tutorial, so we'll simply discuss how it works very briefly, illustrating it with the activity on the IPython github repository.\nOther useful commands\n\nshow\nreflog\nrebase\ntag\n\nGit resources\nIntroductory materials\nThere are lots of good tutorials and introductions for Git, which you\ncan easily find yourself; this is just a short list of things I've found\nuseful. For a beginner, I would recommend the following 'core' reading list, and\nbelow I mention a few extra resources:\n\n\nThe smallest, and in the style of this tuorial: git - the simple guide\ncontains 'just the basics'. Very quick read.\n\n\nThe concise Git Reference: compact but with\n all the key ideas. If you only read one document, make it this one.\n\n\nIn my own experience, the most useful resource was Understanding Git\nConceptually.\nGit has a reputation for being hard to use, but I have found that with a\nclear view of what is actually a very simple internal design, its\nbehavior is remarkably consistent, simple and comprehensible.\n\n\nFor more detail, see the start of the excellent Pro\n Git online book, or similarly the early\n parts of the Git community book. Pro\n Git's chapters are very short and well illustrated; the community\n book tends to have more detail and has nice screencasts at the end\n of some sections.\n\n\nIf you are really impatient and just want a quick start, this visual git tutorial\nmay be sufficient. 
It is nicely illustrated with diagrams that show what happens on the filesystem.\nFor windows users, an Illustrated Guide to Git on Windows is useful in that\nit contains also some information about handling SSH (necessary to interface with git hosted on remote servers when collaborating) as well\nas screenshots of the Windows interface.\nCheat sheets\n: Two different\n cheat\n sheets\n in PDF format that can be printed for frequent reference.\nBeyond the basics\nAt some point, it will pay off to understand how git itself is built. These two documents, written in a similar spirit, \nare probably the most useful descriptions of the Git architecture short of diving into the actual implementation. They walk you through\nhow you would go about building a version control system with a little story. By the end you realize that Git's model is almost\nan inevitable outcome of the proposed constraints:\n\nThe Git parable by Tom Preston-Werner.\nGit foundations by Matthew Brett.\n\nGit ready\n: A great website of posts on specific git-related topics, organized\n by difficulty.\nQGit: an excellent Git GUI\n: Git ships by default with gitk and git-gui, a pair of Tk graphical\n clients to browse a repo and to operate in it. I personally have\n found qgit to be nicer and\n easier to use. It is available on modern linux distros, and since it\n is based on Qt, it should run on OSX and Windows.\nGit Magic\n: Another book-size guide that has useful snippets.\nThe learning center at Github\n: Guides on a number of topics, some specific to github hosting but\n much of it of general value.\nA port of the Hg book's beginning\n: The Mercurial book has a reputation\n for clarity, so Carl Worth decided to\n port its introductory chapter\n to Git. It's a nicely written intro, which is possible in good\n measure because of how similar the underlying models of Hg and Git\n ultimately are.\nIntermediate tips\n: A set of tips that contains some very valuable nuggets, once you're\n past the basics.\nFinally, if you prefer a video presentation, this 1-hour tutorial prepared by the GitHub educational team will walk you through the entire process:", "from IPython.display import YouTubeVideo\nYouTubeVideo('U8GBXvdmHT4')", "For SVN users\nIf you want a bit more background on why the model of version control\nused by Git and Mercurial (known as distributed version control) is such\na good idea, I encourage you to read this very well written\npost by Joel\nSpolsky on the topic. After that post, Joel created a very nice\nMercurial tutorial, whose first page\napplies equally well to git and is a very good 're-education' for anyone\ncoming from an SVN (or similar) background.\nIn practice, I think you are better off following Joel's advice and\nunderstanding git on its own merits instead of trying to bang SVN\nconcepts into git shapes. But for the occasional translation from SVN to\nGit of a specific idiom, the Git - SVN Crash\nCourse can be handy.\nA few useful tips for common tasks\nBetter shell support\nAdding git branch info to your bash prompt and tab completion for git commands and branches is extremely useful. I suggest you at least copy:\n\ngit-completion.bash\ngit-prompt.sh\n\nYou can then source both of these files in your ~/.bashrc and then set your prompt (I'll assume you named them as the originals but starting with a . 
at the front of the name):\nsource $HOME/.git-completion.bash\nsource $HOME/.git-prompt.sh\nPS1='[\\u@\\h \\W$(__git_ps1 \" (%s)\")]\\$ ' # adjust this to your prompt liking\n\nSee the comments in both of those files for lots of extra functionality they offer.\nEmbedding Git information in LaTeX documents\n(Sent by Yaroslav Halchenko)\nsu\nI use a Make rule:\n# Helper if interested in providing proper version tag within the manuscript\nrevision.tex: ../misc/revision.tex.in ../.git/index\n GITID=$$(git log -1 | grep -e '^commit' -e '^Date:' | sed -e 's/^[^ ]* *//g' | tr '\\n' ' '); \\\n echo $$GITID; \\\n sed -e \"s/GITID/$$GITID/g\" $&lt; &gt;| $@\n\nin the top level Makefile.common which is included in all\nsubdirectories which actually contain papers (hence all those\n../.git). The revision.tex.in file is simply:\n% Embed GIT ID revision and date\n\\def\\revision{GITID}\n\nThe corresponding paper.pdf depends on revision.tex and includes the\nline \\input{revision} to load up the actual revision mark.\ngit export\nGit doesn't have a native export command, but this works just fine:\ngit archive --prefix=fperez.org/ master | gzip &gt; ~/tmp/source.tgz" ]
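For readers who prefer to stay in Python rather than Make, roughly the same revision-stamping idea can be scripted as below. This is only an illustrative sketch, not part of the original recipe: the output file name mirrors the revision.tex used above, and the git call relies on the standard --format option of git log.

```python
import subprocess

# ask git for the hash and date of the current commit
# (the same information the Make rule above extracts with grep/sed)
git_id = subprocess.check_output(
    ['git', 'log', '-1', '--format=%H %cd'],
    universal_newlines=True).strip()

# write a tiny LaTeX file defining \revision, analogous to revision.tex.in above
with open('revision.tex', 'w') as f:
    f.write('% Embed GIT ID revision and date\n')
    f.write('\\def\\revision{{{}}}\n'.format(git_id))

print('Wrote revision.tex for commit:', git_id)
```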
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
datapythonista/datapythonista.github.io
docs/Bayesian inference tutorial.ipynb
apache-2.0
[ "Bayesian inference tutorial: a hello world example\nThe goal is to find a statistical model with its parameters that explains the data.\nSo, let's assume we've got some data, regarding the height of Python developers.\nThis is our data:", "x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]", "Deciding a model\nThe first thing once we've got some data is decide which is the model that generated the data. In this case we decide that the height of Python developers comes from a normal distribution.\nA normal distribution has two parameters, the mean $\\mu$ and the standard deviation $\\sigma$ (or the variance $\\sigma^2$ which is equivalent, as it's just the square of the standard deviation).\nDeciding which model to use can be obvious in few cases, but it'll be the most complex part of the statistical inference problem in many others. Some of the obvious cases are:\n* The Normal distribution when modelling natural phenomena like human heights.\n* The Beta distribution when modelling probability distributions.\n* The Poisson distribution when modelling the frequency of events occurring.\nIn many cases we will use a combination of different distributions to explain how our data was generated.\nEach of these distribution has parameters, \\alpha and \\beta for the Beta distribution, \\lambda for the Poisson, or $\\mu$ and $\\sigma$ for the normal distribution of our example.\nThe goal of inference is to find the best values for these parameters.\nEvaluating a set of parameters\nBefore trying to find the best parameters, let's choose some arbitrary parameters, and let's evaluate them.\nFor example, we can choose the values $\\mu=175$ and $\\sigma=5$. And to evaluate them, we'll use the Bayes formula:\n$$P(\\theta|x) = \\frac{P(x|\\theta) \\cdot P(\\theta)}{P(x)}$$\nGiven a model, a normal distribution in this case, $P(\\theta|x)$ is the probability that the parameters $\\theta$ (which are $\\mu$ and $\\sigma$ in this case) given the data $x$.\nThe higher the probability of the parameters given the data, the better they are. So, this value is the score we will use to decide which are the best parameters $\\mu$ and $\\sigma$ for our data $x$, assuming data comes from a normal distribution.\nParts of the problem\nTo recap, we have:\n* Data $x$: [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]\n* A model: the normal distribution\n* The parameters of the model: $\\mu$ and $\\sigma$\nAnd we're interested in finding the best values for $\\mu$ and $\\sigma$ for the data $x$, for example $\\mu=175$ and $\\sigma=5$.\nBayes formula\nBack to Bayes formula for conditional probability:\n$$P(\\theta|x) = \\frac{P(x|\\theta) \\cdot P(\\theta)}{P(x)}$$\nWe already mentioned that $P(\\theta|x)$ is the probability of the parameter values we're checking given the data $x$. And assuming our data is generated by the model we decided, the normal distribution. And this is the value we're interested in maximizing. In Bayesian terminology, $P(\\theta|x)$ is known as the posterior.\nThe posterior is a function of three other values.\n$P(x|\\theta)$: the likelihood, which is the probability of obtaining the data $x$ if the parameters $\\sigma$ were the values we're checking (e.g. $\\mu=175$ and $\\sigma=5$). 
And always assuming our data is generated by our model, the normal distribution.\n$P(\\theta)$: the prior, which is our knowledge about the parameters before seeing any data.\n$P(x)$: the evidence, which is the probability of the data, not given any specific set of parameters $\\sigma$, but given the model we choose, the normal distribution in the example.\nLikelihood\nThe likelihood is the probability of obtaining the data $x$ from the choosen model (e.g. the normal distribution) and for a specific set of parameters $\\theta$ (e.g. $\\mu=175$ and $\\sigma=5$).\nIt is often represented as $\\mathcal{L}(\\theta|x)$ (note that the order of $\\theta$ and $x$ is reversed to when the probability notation is used).\nIn the case of a normal distribution, the formula to compute the probability given $x$ (its probability density function) is:\n$$P(x|\\theta) = P(x| \\mu, \\sigma) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} \\cdot e^{-\\frac{(x - \\mu)^2}{2 \\sigma^2}}$$\nIf we plot it, we obtain the famous normal bell curve (we use $\\mu=0$ and $\\sigma=1$ in the plot):", "import numpy\nimport scipy.stats\nfrom matplotlib import pyplot\n\nmu = 0.\nsigma = 1.\n\nx = numpy.linspace(-10., 10., 201)\nlikelihood = scipy.stats.norm.pdf(x, mu, sigma)\n\npyplot.plot(x, likelihood)\npyplot.xlabel('x')\npyplot.ylabel('Likelihood')\npyplot.title('Normal distribution with $\\mu=0$ and $\\sigma=1$');", "Following the example, we wanted to score how good are the parameters $\\mu=175$ and $\\sigma=5$ for our data. So far we choosen these parameters arbitrarily, but we'll choose them in a smarter way later on.\nIf we take the probability density function (p.d.f.) of the normal distribution and we compute for the first data point of $x$ 183, we have:\n$$P(x| \\mu, \\sigma) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} \\cdot e^{-\\frac{(x - \\mu)^2}{2 \\sigma^2}}$$\nwhere $\\mu=175$, $\\sigma=5$ and $x=183$, so:\n$$P(x=183| \\mu=175, \\sigma=5) = \\frac{1}{\\sqrt{2 \\cdot \\pi \\cdot 5^2}} \\cdot e^{-\\frac{(183 - 175)^2}{2 \\cdot 5^2}}$$\nIf we do the math:", "import math\n\n1. / math.sqrt(2 * math.pi * (5 **2)) * math.exp(-((183 - 175) ** 2) / (2 * (5 ** 2)))", "This is the probability that 183 was generated by a normal distribution with mean 175 and standard deviation 5.\nWith scipy we can easily compute the likelihood of all values in our data:", "import scipy.stats\n\nmu = 175\nsigma = 5\n\nx = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]\n\nscipy.stats.norm.pdf(x, mu, sigma)", "Prior\nThe prior is our knowledge of the parameters before we observe the data. It's probably the most subjective part of Bayesian inference, and different approaches can be used.\nWe can use informed priors, and try to give the model as much information as possible. Or use uninformed priors, and let the process find the parameters using mainly the data.\nIn our case, we can start thinking on which are the possible values for a normal distribution.\nFor the mean, the range is between $-\\infty$ and $\\infty$. But we can of course do better than this.\nWe're interested on the mean of Python developers height. And it's easy to see that the minimum possible height is $0$. And for the maximum, we can start by considering the maximum known human height. This is 272 cms, the maximum measured height of Robert Pershing Wadlow, born in 1918. We can be very confident that the mean of the height of Python developers is in the range $0$ to $272$. 
So, a first option for an uninformed prior could be all the values in this range with equal probability.", "import numpy\nimport scipy.stats\nfrom matplotlib import pyplot\n\nmean_height = numpy.linspace(0, 272, 273)\nprobability = scipy.stats.uniform.pdf(mean_height, 0, 272)\n\npyplot.plot(mean_height, probability)\npyplot.xlabel('Mean height')\npyplot.ylabel('Probability')\npyplot.title('Uninformed prior for Python developers height');", "This could work, but we can do better. Just having 10 data points, the amount of information that we can learn from them is quite limited. And we may use these 10 data points to discover something we already know. That the probability of the mean height being 0 is nil, as it is the probability of the maximum ever observed height. And that the probability of a value like 175 cms is much higher than the probability of a value like 120 cms.\nIf we know all this before observing any data, why not use it? This is exactly what a prior is. The tricky part is defining the exact prior.\nIn this case, we don't know the mean of the height of Python developers, but we can check the mean of the height of the world population, which is arond 165. This doesn't need to be the value we're looking for. It's known that there are more male than female Python programmers. And male height is higher, so the value we're looking for will probably be higher. Also, height changes from country to country, and Python programmers are not equally distributed around the world. But we will use our data to try to find the value that contains all these biases. The prior is just a starting point that will help find the value faster.\nSo, let's use the mean of the world population as the mean of our prior, and we'll take the standard deviation of the world population, 7 cms, and we'll use the double of it. Multiplying it by 2 is arbitrary, but we'll make our prior a bit less informed. As mentioned before, choosing a prior is quite subjective.\nNote that it's not necessary to use a normal distribution for the prior. We were considering a uniform distribution before. But in this case it can make sense, as we're more sure than the mean we're looking for will be close to the mean of the human population.", "import numpy\nimport scipy.stats\nfrom matplotlib import pyplot\n\nworld_height_mean = 165\nworld_height_standard_deviation = 7\n\nmean_height = numpy.linspace(0, 272, 273)\nprobability = scipy.stats.norm.pdf(mean_height, world_height_mean, world_height_standard_deviation * 2)\n\npyplot.plot(mean_height, probability)\npyplot.xlabel('Mean height')\npyplot.ylabel('Probability')\npyplot.title('Informed prior for Python developers height');", "Evidence\nThe evidence is the probability of the data $P(x)$. The whole Bayesian formula assumes the model we choose, so it can be seen as the probability of the model coming from a normal distribution (or any distribution or combination of them we're using for the problem).\nWe can see the probability of the data coming from a normal distribution like the sum of the probabilities of the data coming from each of the possible parameters.\nIf we consider the height a discrete variable, and the range of its values $0$ to $272$. And we ignore that the normal has the standard deviation parameter, this could be expressed as:\n$$P(x) = \\sum_{i=0}^{272} P(x|\\mu_i)$$\nEach of the probabilities $P(\\mu_i)$ is a likelihood, and we've already seen how to compute them.\nIn practise, we can't ignore the simplifications we made. 
We first need to consider the standard deviation. Then we need to consider that both are continuous and not discrete. Being continuous means that instead of a sum, we have an integral. And finally, we will consider the interval $-\\infty$ to $\\infty$ instead of $0$ to $272$.\nThe actual equation considering these things is:\n$$P(x) = \\int_{-\\infty}^{\\infty} P(x|\\theta) \\cdot d\\theta$$\nMathematically, this equation is more complex than the previous one, but conceptually they are the same.\nGrid-based Bayesian inference\nTODO\nMCMC\nTODO" ]
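Since the grid-based and MCMC sections above are still marked TODO, here is a minimal sketch of how the grid approach could look for this exact example. It reuses the data x and the informed prior on $\mu$ (normal with mean 165 and standard deviation $2 \times 7 = 14$) defined earlier; the flat prior on $\sigma$ between 1 and 20 cm and the grid ranges and resolution are extra assumptions made only for this sketch.

```python
import numpy as np
import scipy.stats

x = np.array([183, 168, 177, 170, 175, 177, 178, 166, 174, 178])

# grid of candidate parameter values (ranges and spacing are arbitrary choices)
mu_grid = np.linspace(150, 200, 201)
sigma_grid = np.linspace(1, 20, 191)
mu, sigma = np.meshgrid(mu_grid, sigma_grid, indexing='ij')

# log-likelihood of the whole data set for every (mu, sigma) pair
log_likelihood = scipy.stats.norm.logpdf(
    x[:, None, None], mu[None, :, :], sigma[None, :, :]).sum(axis=0)

# priors: informed normal prior on mu (as above), flat prior on sigma (assumption)
log_prior = (scipy.stats.norm.logpdf(mu, 165, 2 * 7)
             + scipy.stats.uniform.logpdf(sigma, 1, 19))

# unnormalised log-posterior, normalised numerically on the grid
log_posterior = log_likelihood + log_prior
posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()

# marginal posterior of mu and its mean
posterior_mu = posterior.sum(axis=1)
print('posterior mean of mu:', np.sum(mu_grid * posterior_mu))
```

The same grid can be shown as a heat map to inspect the joint posterior, and refining the grid (or switching to MCMC) only changes how the posterior is explored, not the Bayes formula being evaluated.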
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LimeeZ/phys292-2015-work
assignments/assignment04/TheoryAndPracticeEx02.ipynb
mit
[ "Theory and Practice of Visualization Exercise 2\nImports", "from IPython.display import Image", "Violations of graphical excellence and integrity\nFind a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.\n\nCNN\nFox News\nTime\n\nUpload the image for the visualization to this directory and display the image inline in this notebook.", "# Add your filename and uncomment the following line:\nImage(filename='StockPicture.png')", "Describe in detail the ways in which the visualization violates graphical integrity and excellence:\nI do not even know what is going on in this grpah. The x and y axes are not labeled. I wish there was a grid to see the slopes relative to one another. I have no idea what \"Open\" means. I guess the frame is necessary, but it looks ugly. It's also small... I even tried going to the link for just the image but it was still tiny. I just wish this graph were bigger!" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
infilect/ml-course1
week2/vgg_transfer_imagenet_to_flower/transfer_learning_solution.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. \nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $244 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. 
The network implemented here has this structure (copied from the source code:\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n images = np.concatenate(batch)\n\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. 
The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn.preprocessing import LabelBinarizer\n\nlb = LabelBinarizer()\nlb.fit(labels)\n\nlabels_vecs = lb.transform(labels)", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. 
Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n \nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.", "epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {inputs_: x,\n labels_: y}\n loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration: {}\".format(iteration),\n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x,\n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Validation Acc: {:.4f}\".format(val_acc))\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. 
You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nwith tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
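One optional extension is to look at which flower classes get confused with one another, rather than just the overall accuracy. The sketch below assumes the placeholders (inputs_, predicted), the saver, the checkpoints directory and the test arrays defined above are still available in the notebook; the use of scikit-learn's confusion_matrix is an addition and not part of the original notebook.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    test_probs = sess.run(predicted, feed_dict={inputs_: test_x})

true_classes = np.argmax(test_y, axis=1)
pred_classes = np.argmax(test_probs, axis=1)

# rows are true classes, columns are predicted classes, in the order of lb.classes_
cm = confusion_matrix(true_classes, pred_classes)
print(lb.classes_)
print(cm)
```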
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
beangoben/quantum_solar
Dia2/2_Intro_Matplotlib.ipynb
mit
[ "Intro a Matplotlib\nMatplotlib = Libreria para graficas cosas matematicas\nQue es Matplotlib?\n\nMatplotlin es un libreria para crear imagenes 2D de manera facil.\nChecate mas en :\n\nPagina oficial : http://matplotlib.org/\nGalleria de ejemplo: http://matplotlib.org/gallery.html\nUna libreria mas avanzada que usa matplotlib, Seaborn: http://stanford.edu/~mwaskom/software/seaborn/\nLibreria de visualizacion interactiva: http://bokeh.pydata.org/\nBuenisimo Tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/\n\nPara usar matplotlib, solo tiene que importar el modulo ..tambien te conviene importar numpy pues es muy util", "import numpy as np # modulo de computo numerico\nimport matplotlib.pyplot as plt # modulo de graficas\nimport pandas as pd # modulo de datos\nimport seaborn as sns\n# esta linea hace que las graficas salgan en el notebook\n%matplotlib inline", "Graficas chidas!", "def awesome_settings():\n # awesome plot options\n sns.set_style(\"white\")\n sns.set_style(\"ticks\")\n sns.set_context(\"paper\", font_scale=2)\n sns.set_palette(sns.color_palette('Set2'))\n # image stuff\n plt.rcParams['figure.figsize'] = (12.0, 6.0)\n plt.rcParams['savefig.dpi'] = 60\n plt.rcParams['lines.linewidth'] = 3\n\n return\n\n%config InlineBackend.figure_format='retina'\nawesome_settings()", "1 Crear graficas (plt.plot)\nUn ejemplo \"complejo\"\nCrear graficas es muy facil en matplotlib, aqui va un ejemplo complicado..si entiendes este pedazo de codigo puedes entender el resto.", "# datos\nx = np.linspace(0.0, 2.0, 40)\ny1 = np.sin(2*np.pi*x)\ny2 = 0.5*x+0.1\ny3 = 0.5*x**2+0.5*x+0.1\n\n# a graficas\nplt.plot(x,y1,'--',label='Seno')\nplt.plot(x,y2,'-',label='Linea')\nplt.plot(x,y3,'.',label='Cuadratica')\n\n# estilo\nplt.xlabel('y')\nplt.ylabel('x')\nplt.title('Unas grafiquitas')\nplt.legend(loc='best')\nsns.despine()\nplt.show()", "Ahora por pedazos\nPodemos usar la funcion np.linspace para crear valores en un rango, por ejemplo si queremos 100 numeros entre 0 y 10 usamos:\nY podemos graficar dos cosas al mismo tiempo:\nQue tal si queremos distinguir cada linea? Pues usamos legend(), de leyenda..tambien tenemos que agregarles nombres a cada plot\nTambien podemos hacer mas cosas, como dibujar solamente los puntos, o las lineas con los puntos usando linestyle:\nActividad: Haz muchas graficas\nGrafica las siguientes curvas:\n\nUsa x dentro del rango $[-2,2]$. \n$e^{-x^2}$\n$x^2$\n$ cos(2 x) $\nPonle nombre a cada curva, usa leyendas, titulos y demas informacion.\n\nPero ademas podemos meter mas informacion, por ejemplo dar colores cada punto, o darle tamanos diferentes:\nHistogramas (plt.hist)\nLos histogramas nos muestran distribuciones de datos, la forma de los datos, nos muestran el numero de datos de diferentes tipos:", "mu, sigma = 100, 15\nx = mu + sigma*np.random.randn(10000)\nn, bins, patches = plt.hist(x, 50, normed=1)\nplt.ylabel('Porcentaje')\nplt.xlabel('IQ')\nplt.title('Distribucion de IQ entre 10k personas')\nplt.xlim([0,200])\nsns.despine()\nplt.show()", "Actividad: Convergencia de distirbucion normal\nQueremos que grafiques:\n\nUna distribucion normal creada con $10^n$ numeros aleatorios donde $n=1,2,3,4,5,6$\nPoner nombre a cada histograma.\nTitulo, leyenda y toda la demas informacion.\nCambia plt.hist por sns.distplot y ve la diferencia." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
drphilmarshall/StatisticalMethods
tutorials/Week1/GithubAndGoals.ipynb
gpl-2.0
[ "Week 1 Tutorial\nGitHub Workflow and Goals for the Class\nGetting Started\nIdeally, you have already work through the Getting Started page on the course GitHub repository. You will need a computer that is running git, Jupyter notebook, and has all the required packages installed in order to do the homework, and some of the in-class exercises. (The exercises are intended to be collaborative, so don't worry if you don't have a laptop - but do sit next to someone who does!) If you haven't installed the required software, do it now (although the myriad of python packages can wait).\nTo run the tutorial notebooks in class, make sure you have forked and git clone'd the course repository. You might need to git pull to get the current tutorial, since we probably uploaded it just before class.\nMega-important!\nAfter pulling down the tutorial notebook, immediately make a copy. Then do not modify the original. Do your work in the copy. This will prevent the possibility of git conflicts should the version-controlled file change at any point in the future. (The same exhortation applies to homeworks.)\nTo modify the notebook, you'll need to have it open and running in Jupyter notebook locally. At this point, the URL in your browser window should say something like \"http://localhost:8890/notebooks/some_other_stuff/tutorial.ipynb\"\nThis Week's \"Tutorial\"\n\n\nMake sure you have read and understood the Homework instructions, have forked and cloned the 2019 homework repo, and have done any other necessary computer setup. If not, or if you need technical help, this is a great time for it.\n\n\nThe cells below contain an absurdly simple chunk of python code for you to complete, demonstrating the way that these tutorial notebooks will generally contain a mix of completed and incompleted code. Your job is to complete the code such that running the notebook will result in a string being printed out. Specifically, the string should be a brief statement of what you hope to learn from this class.\n\n\nOnce you've produced a functional notebook, submit your solution to the Tutorial1 folder of the private repo per the usual procedure for submitting homework assignments. (Note that we will not do this for any other tutorials; this is just to make sure that everyone knows how to use the repository.)\n\n\nPreliminaries\nThe first code cell will usually contain some import statements in addition to the following definitions.\nThe REPLACE_WITH_YOUR_SOLUTION and/or REMOVE_THIS_LINE functions will show up anywhere you need to add your own code to complete the tutorial. Trying to run those cells as-is will produce a reminder.", "class SolutionMissingError(Exception):\n def __init__(self):\n Exception.__init__(self,\"You need to complete the solution for this code to work!\")\ndef REPLACE_WITH_YOUR_SOLUTION():\n raise SolutionMissingError\nREMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION", "This crazy try-except construction is our way of making sure the notebooks will work when completed without actually providing complete code. You can either write your code directly in the except block, or delete the try, exec and except lines entirely (remembering to unindent the remaining lines in that case, because python).", "try:\n exec(open('Solution/goals.py').read())\nexcept IOError:\n my_goals = REPLACE_WITH_YOUR_SOLUTION()", "This cell just prints out the string my_goals.", "print(my_goals)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
pkreissl/espresso
doc/tutorials/active_matter/active_matter.ipynb
gpl-3.0
[ "Active Matter\nTable of Contents\n\nIntroduction\nActive particles\nEnhanced Diffusion\nRectification\nHydrodynamics of self-propelled particles\nFurther reading\n\nIntroduction\nIn this tutorial we explore the ways to simulate self-propulsion in the\nsimulation software package ESPResSo. We consider three examples that illustrate\nthe properties of these systems. First, we study the concept of enhanced\ndiffusion of a self-propelled particle. Second, we investigate rectification in\nan asymmetric geometry. Finally, we determine the flow field around a\nself-propelled particle using lattice-Boltzmann simulations (LB). These three\nsubsections should give insight into the basics of simulating active matter\nwith ESPResSo. This tutorial assumes basic knowledge of Python and ESPResSo,\nas well as the use of lattice-Boltzmann within ESPResSo. It is therefore\nrecommended to go through the relevant tutorials first, before attempting this one.\nActive particles\nActive matter is a term that describes a class of systems, in which energy is\nconstantly consumed to perform work. These systems are therefore highly\nout-of-equilibrium (thermodynamically) and (can) thus defy description using\nthe standard framework of statistical mechanics. Active systems are, however,\nubiquitous. On our length scale, we encounter flocks of\nbirds, schools of fish, and, of course, humans;\non the mesoscopic level examples are found in bacteria, sperm, and algae;\nand on the nanoscopic level, transport along the cytoskeleton is achieved by\nmyosin motors. This exemplifies the range of length scales\nwhich the field of active matter encompasses, as well as its diversity. Recent\nyears have seen a huge increase in studies into systems consisting of\nself-propelled particles, in particular artificial ones in the colloidal\nregime.\nThese self-propelled colloids show promise as physical model systems for\ncomplex biological behavior (bacteria moving collectively) and could be used to\nanswer fundamental questions concerning out-of-equilibrium statistical\nphysics.\nSimulations can also play an important role in this regard, as the\nparameters are more easily tunable and the results ‘cleaner’ than in\nexperiments. The above should give you some idea of the importance of\nthe field of active matter and why you should be interested in\nperforming simulations in it.\nActive Particles in ESPResSo\nThe <tt>ENGINE</tt> feature offers intuitive syntax for adding self-propulsion to\na particle. The propulsion will occur along the vector that defines the\norientation of the particle (henceforth referred to as ‘director’). In ESPResSo\nthe orientation of the particle is defined by a quaternion; this in turn\ndefines a rotation matrix that acts on the particle's initial orientation\n(along the z-axis), which then defines the particles current orientation\nthrough the matrix-oriented vector.\nWithin the <tt>ENGINE</tt> feature there are two ways of setting up a self-propelled\nparticle, with and without hydrodynamic interactions. The particle without\nhydrodynamic interactions will be discussed first, as it is the simplest case.\nSelf-Propulsion without Hydrodynamics\nFor this type of self-propulsion the Langevin thermostat can be used. The\nLangevin thermostat imposes a velocity-dependent friction on a particle.\nWhen a constant force is applied along the director,\nthe friction causes the particle to attain a terminal velocity, due to the balance\nof driving and friction force, see <a href='#fig:balance'>Fig.&nbsp;1</a>. 
The exponent with\nwhich the particle's velocity relaxes towards this value depends on the\nstrength of the friction and the mass of the particle. The <tt>ENGINE</tt>\nfeature implies that rotation of the particles (the <tt>ROTATION</tt> feature) is\ncompiled into ESPResSo. The particle can thus reorient due to external torques or\ndue to thermal fluctuations, whenever the rotational degrees of freedom are\nthermalized. Note that the rotation of the particles has to be enabled\nexplicitly via their <tt>ROTATION</tt> property. This ‘engine’ building block can\nbe connected to other particles, e.g., via the virtual sites (rigid\nbody) to construct complex self-propelled objects.\n<a id='fig:balance'></a>\n<figure><img src=\"figures/friction.svg\" style=\"float: center; width: 40%\"/>\n<center>\n<figcaption>Fig. 1: A balance of the driving force in the\ndirection defined by the ‘director’ unit vector and the friction due to\nthe Langevin thermostat results in a constant terminal\nvelocity.</figcaption>\n</center>\n</figure>\n\nEnhanced Diffusion\nFirst we import the necessary modules, define the parameters and set up the system.", "import tqdm\nimport numpy as np\nimport espressomd.observables\nimport espressomd.accumulators\nimport matplotlib.pyplot as plt\nplt.rcParams.update({'font.size': 18})\n%matplotlib inline\n\n\nespressomd.assert_features(\n [\"ENGINE\", \"ROTATION\", \"MASS\", \"ROTATIONAL_INERTIA\", \"CUDA\"])\n\nED_PARAMS = {'time_step': 0.01,\n 'box_l': 3*[10.],\n 'skin': 0.4,\n 'active_velocity': 5,\n 'kT': 1,\n 'gamma': 1,\n 'gamma_rotation': 1,\n 'mass': 0.1,\n 'rinertia': 3*[1.],\n 'corr_tmax': 100}\nED_N_SAMPLING_STEPS = 5000000\n\nsystem = espressomd.System(box_l=ED_PARAMS['box_l'])\nsystem.cell_system.skin = ED_PARAMS['skin']\nsystem.time_step = ED_PARAMS['time_step']", "Exercise\n\nSet up a Langevin thermostat for translation and rotation of the particles.\n\npython\nsystem.thermostat.set_langevin(kT=ED_PARAMS['kT'],\n gamma=ED_PARAMS['gamma'],\n gamma_rotation=ED_PARAMS['gamma_rotation'],\n seed=42)\nThe configuration for the Langevin-based swimming is exposed as an attribute of\nthe <tt>ParticleHandle</tt> class of ESPResSo, which represents a particle in the\nsimulation. 
You can either set up the self-propulsion during the creation of a\nparticle or at a later stage.\nExercise\n\nSet up one active and one passive particle, call them part_act and part_pass (Hint: see the docs)\nUse ED_PARAMS for the necessary parameters\n\npython\npart_act = system.part.add(pos=[5.0, 5.0, 5.0], swimming={'v_swim': ED_PARAMS['active_velocity']},\n mass=ED_PARAMS['mass'], rotation=3 * [True], rinertia=ED_PARAMS['rinertia'])\npart_pass = system.part.add(pos=[5.0, 5.0, 5.0],\n mass=ED_PARAMS['mass'], rotation=3 * [True], rinertia=ED_PARAMS['rinertia'])\nNext we set up three ESPResSo correlators for the Mean Square Displacement (MSD), Velocity Autocorrelation Function (VACF) and the Angular Velocity Autocorrelation Function (AVACF).", "pos_obs = espressomd.observables.ParticlePositions(\n ids=[part_act.id, part_pass.id])\nmsd = espressomd.accumulators.Correlator(obs1=pos_obs,\n corr_operation=\"square_distance_componentwise\",\n delta_N=1,\n tau_max=ED_PARAMS['corr_tmax'],\n tau_lin=16)\nsystem.auto_update_accumulators.add(msd)\n\nvel_obs = espressomd.observables.ParticleVelocities(\n ids=[part_act.id, part_pass.id])\nvacf = espressomd.accumulators.Correlator(obs1=vel_obs,\n corr_operation=\"componentwise_product\",\n delta_N=1,\n tau_max=ED_PARAMS['corr_tmax'],\n tau_lin=16)\nsystem.auto_update_accumulators.add(vacf)\n\nang_obs = espressomd.observables.ParticleAngularVelocities(\n ids=[part_act.id, part_pass.id])\navacf = espressomd.accumulators.Correlator(obs1=ang_obs,\n corr_operation=\"componentwise_product\",\n delta_N=1,\n tau_max=ED_PARAMS['corr_tmax'],\n tau_lin=16)\nsystem.auto_update_accumulators.add(avacf)", "No more setup needed! We can run the simulation and plot our observables.", "for i in tqdm.tqdm(range(100)):\n system.integrator.run(int(ED_N_SAMPLING_STEPS/100))\n\nsystem.auto_update_accumulators.remove(msd)\nmsd.finalize()\nsystem.auto_update_accumulators.remove(vacf)\nvacf.finalize()\nsystem.auto_update_accumulators.remove(avacf)\navacf.finalize()\n\ntaus_msd = msd.lag_times()\nmsd_result = msd.result()\nmsd_result = np.sum(msd_result, axis=2)\n\ntaus_vacf = vacf.lag_times()\nvacf_result = np.sum(vacf.result(), axis=2)\n\ntaus_avacf = avacf.lag_times()\navacf_result = np.sum(avacf.result(), axis=2)\n\nfig_msd = plt.figure(figsize=(10, 6))\nplt.plot(taus_msd, msd_result[:, 0], label='active')\nplt.plot(taus_msd, msd_result[:, 1], label='passive')\nplt.xlim((taus_msd[1], None))\nplt.loglog()\nplt.xlabel('t')\nplt.ylabel('MSD(t)')\nplt.legend()\nplt.show()", "The Mean Square Displacement of an active particle is characterized by a longer ballistic regime and an increased diffusion coefficient for longer lag times. 
In the overdamped limit it is given by\n$$\n\\langle r^{2}(t) \\rangle = 6 D t + \\frac{v^{2} \\tau_{R}^{2}}{2} \\left[ \\frac{2 t}{\\tau_{R}} + \\exp\\left( \\frac{-2t}{\\tau_{R}} \\right) - 1 \\right],\n$$\nwhere $\\tau_{R} = \\frac{8\\pi\\eta R^{3}}{k_{B} T}$ is the characteristic time scale for rotational diffusion and $ D = \\frac{k_B T}{\\gamma}$ is the translational diffusion coefficient.\nFor small times ($t \\ll \\tau_{R}$) the motion is ballistic\n$$\\langle r^{2}(t) \\rangle = 6 D t + v^{2} t^{2},$$\nwhile for long times ($t \\gg \\tau_{R}$) the motion is diffusive\n$$\\langle r^{2}(t) \\rangle = (6 D + v^{2}\\tau_{R}) t.$$\nNote that no matter the strength of the activity, provided it is some finite value, the crossover between ballistic motion and enhanced diffusion is controlled by the rotational diffusion time.\nThe passive particle also displays a crossover from a ballistic to a diffusive motion. However, the crossover time $\\tau_{C}=\\frac{m}{\\gamma}$ is not determined by the rotational motion but instead by the mass of the particles. \nFrom the longterm MSD of the active particles we can define an effective diffusion coefficient $D_{\\mathrm{eff}} = D + v^{2}\\tau_{R}/6$. One can, of course, also connect this increased diffusion with an effective temperature. However, this apparent equivalence can lead to problems when one then attempts to apply statistical mechanics to such systems at the effective temperature. That is, there is typically more to being out-of-equilibrium than can be captured by a simple remapping of equilibrium parameters, as we will see in the second part of the tutorial.\nFrom the autocorrelation functions of the velocity and the angular velocity we can see that the activity does not influence the rotational diffusion. Yet the directed motion for $t<\\tau_{R}$ leads to an enhanced correlation of the velocity.", "def acf_stable_regime(x, y):\n    \"\"\"\n    Remove the noisy tail in autocorrelation functions of finite time series.\n    \"\"\"\n    cut = np.argmax(y <= 0.) - 2\n    assert cut >= 1\n    return (x[1:cut], y[1:cut])\n\nfig_vacf = plt.figure(figsize=(10, 6))\nplt.plot(*acf_stable_regime(taus_vacf, vacf_result[:, 0]), label='active')\nplt.plot(*acf_stable_regime(taus_vacf, vacf_result[:, 1]), label='passive')\nplt.xlim((taus_vacf[1], None))\nplt.loglog()\nplt.xlabel('t')\nplt.ylabel('VACF(t)')\nplt.legend()\nplt.show()\n\nfig_avacf = plt.figure(figsize=(10, 6))\nplt.plot(*acf_stable_regime(taus_avacf, avacf_result[:, 0]), label='active')\nplt.plot(*acf_stable_regime(taus_avacf, avacf_result[:, 1]), label='passive')\nplt.xlim((taus_avacf[1], None))\nplt.loglog()\nplt.xlabel('t')\nplt.ylabel('AVACF(t)')\nplt.legend()\nplt.show()", "Before we go to the second part, it is important to clear the state of the system.", "def clear_system(system):\n    system.part.clear()\n    system.thermostat.turn_off()\n    system.constraints.clear()\n    system.auto_update_accumulators.clear()\n    system.time = 0.\n\nclear_system(system)", "Rectification\nIn the second part of this tutorial you will consider the ‘rectifying’ properties of certain\nasymmetric geometries on active systems. Rectification can be best understood by\nconsidering a system of passive particles first. In an equilibrium system,\nfor which the particles are confined to an asymmetric box with hard walls, we know that the\nparticle density is homogeneous throughout. 
However, in an out-of-equilibrium setting one can have a\nheterogeneous distribution of particles, which limits the applicability of an\n‘effective’ temperature description.\nThe geometry we will use is a cylindrical system with a funnel dividing\ntwo halves of the box as shown in <a href='#fig:geometry'>Fig.&nbsp;2</a>.\n<a id='fig:geometry'></a>\n<figure><img src=\"figures/geometry.svg\" style=\"float: center; width: 75%\"/>\n<center>\n<figcaption>Fig. 2: Sketch of the rectifying geometry which we\nsimulate for this tutorial.</figcaption>\n</center>\n</figure>", "import espressomd.shapes\nimport espressomd.math\n\nRECT_PARAMS = {'length': 100,\n 'radius': 20,\n 'funnel_inner_radius': 3,\n 'funnel_angle': np.pi / 4.0,\n 'funnel_thickness': 0.1,\n 'n_particles': 500,\n 'active_velocity': 7,\n 'time_step': 0.01,\n 'wca_sigma': 0.5,\n 'wca_epsilon': 0.1,\n 'skin': 0.4,\n 'kT': 0.1,\n 'gamma': 1.,\n 'gamma_rotation': 1}\n\nRECT_STEPS_PER_SAMPLE = 100\nRECT_N_SAMPLES = 500\n\nTYPES = {'particles': 0,\n 'boundaries': 1}\n\nbox_l = np.array(\n [RECT_PARAMS['length'], 2*RECT_PARAMS['radius'], 2*RECT_PARAMS['radius']])\nsystem.box_l = box_l\nsystem.cell_system.skin = RECT_PARAMS['skin']\nsystem.time_step = RECT_PARAMS['time_step']\nsystem.thermostat.set_langevin(\n kT=RECT_PARAMS['kT'], gamma=RECT_PARAMS['gamma'], gamma_rotation=RECT_PARAMS['gamma_rotation'], seed=42)\n\ncylinder = espressomd.shapes.Cylinder(\n center=0.5 * box_l,\n axis=[1, 0, 0], radius=RECT_PARAMS['radius'], length=RECT_PARAMS['length'], direction=-1)\nsystem.constraints.add(shape=cylinder, particle_type=TYPES['boundaries'])\n\n# Setup walls\nwall = espressomd.shapes.Wall(dist=0, normal=[1, 0, 0])\nsystem.constraints.add(shape=wall, particle_type=TYPES['boundaries'])\n\nwall = espressomd.shapes.Wall(dist=-RECT_PARAMS['length'], normal=[-1, 0, 0])\nsystem.constraints.add(shape=wall, particle_type=TYPES['boundaries'])\n\nfunnel_length = (RECT_PARAMS['radius']-RECT_PARAMS['funnel_inner_radius']\n )/np.tan(RECT_PARAMS['funnel_angle'])", "Exercise\n\nUsing funnel_length and the geometric parameters in RECT_PARAMS, set up the funnel cone (Hint: Conical Frustum)\n\n```python\nctp = espressomd.math.CylindricalTransformationParameters(\n axis=[1, 0, 0], center=box_l/2.)\nhollow_cone = espressomd.shapes.HollowConicalFrustum(\n cyl_transform_params=ctp,\n r1=RECT_PARAMS['funnel_inner_radius'], r2=RECT_PARAMS['radius'],\n thickness=RECT_PARAMS['funnel_thickness'],\n length=funnel_length,\n direction=1)\nsystem.constraints.add(shape=hollow_cone, particle_type=TYPES['boundaries'])\n```\nExercise\n\nSet up a WCA potential between the walls and the particles using the parameters in RECT_PARAMS\n\npython\nsystem.non_bonded_inter[TYPES['particles'], TYPES['boundaries']].wca.set_params(\n epsilon=RECT_PARAMS['wca_epsilon'], sigma=RECT_PARAMS['wca_sigma'])\nESPResSo uses quaternions to describe the rotational state of particles. Here we provide a convenience method to calculate quaternions from spherical coordinates.\nExercise\n\nPlace an equal number of swimming particles (the total number should be RECT_PARAMS['n_particles']) in the left and the right part of the box such that the center of mass is exactly in the middle. 
(Hint: Particles do not interact so you can put multiple in the same position)\nParticles must be created with a random orientation\n\n```python\nfor i in range(RECT_PARAMS['n_particles']):\n pos = box_l / 2\n pos[0] += (-1)**i * 0.25 * RECT_PARAMS['length']\n# https://mathworld.wolfram.com/SpherePointPicking.html\ntheta = np.arccos(2. * np.random.random() - 1)\nphi = 2. * np.pi * np.random.random()\ndirector = [np.sin(theta) * np.cos(phi),\n np.sin(theta) * np.sin(phi),\n np.cos(theta)]\n\nsystem.part.add(pos=pos, swimming={'v_swim': RECT_PARAMS['active_velocity']},\n director=director, rotation=3*[True])\n\n```", "com_deviations = list()\ntimes = list()", "Exercise\n\nRun the simulation using RECT_N_SAMPLES and RECT_STEPS_PER_SAMPLE and calculate the deviation of the center of mass from the center of the box in each sample step. (Hint: Center of mass)\nSave the result and the corresponding time of the system in the lists given above.\n\npython\nfor _ in tqdm.tqdm(range(RECT_N_SAMPLES)):\n system.integrator.run(RECT_STEPS_PER_SAMPLE)\n com_deviations.append(system.galilei.system_CMS()[0] - 0.5 * box_l[0])\n times.append(system.time)", "def moving_average(data, window_size):\n return np.convolve(data, np.ones(window_size), 'same') / window_size\n\nsmoothing_window = 10\ncom_smoothed = moving_average(com_deviations, smoothing_window)\nfig_rect = plt.figure(figsize=(10, 6))\nplt.plot(times[smoothing_window:-smoothing_window],\n com_smoothed[smoothing_window:-smoothing_window])\nplt.xlabel('t')\nplt.ylabel('center of mass deviation')\nplt.show()", "Even though the potential energy inside the geometry is 0 in every part of the accessible region, the active particles are clearly not Boltzmann distributed (homogeneous density). Instead, they get funneled into the right half, showing the inapplicability of equilibrium statistical mechanics.", "clear_system(system)", "Hydrodynamics of self-propelled particles\nIn situations where hydrodynamic interactions between swimmers or swimmers and\nobjects are of importance, we use the lattice-Boltzmann (LB) method to propagate the\nfluid's momentum diffusion. We recommend the GPU-based variant of LB in ESPResSo,\nsince it is much faster. Moreover, the current implementation of the CPU\nself-propulsion is limited to one CPU. This is because the ghost-node structure\nof the ESPResSo cell-list code does not allow for straightforward MPI parallelization\nof the swimmer objects across several CPUs.\nOf particular importance for self-propulsion at low Reynolds number is the fact\nthat active systems (bacteria, sperm, algae, but also artificial chemically\npowered swimmers) are force free. That is, the flow field around one of these\nobjects does not contain a monopolar (Stokeslet) contribution. In the case of a\nsperm cell, see <a href='#fig:pusher-puller'>Fig.&nbsp;3</a>(a), the reasoning is as follows.\nThe whip-like tail pushes against the fluid and the fluid pushes against the\ntail, at the same time the head experiences drag, pushing against the fluid and\nbeing pushed back against by the fluid. This ensures that both the swimmer and\nthe fluid experience no net force. However, due to the asymmetry of the\ndistribution of forces around the swimmer, the fluid flow still causes net\nmotion. When there is no net force on the fluid, the lowest-order multipole\nthat can be present is a hydrodynamic dipole. Since a dipole has an\norientation, there are two types of swimmer: pushers and pullers. 
The\ndistinction is made by whether the particle pulls fluid in from the front and\nback, and pushes it out towards its side (puller), or vice versa (pusher), see\n<a href='#fig:pusher-puller'>Fig.&nbsp;3</a>(c,d).\n<a id='fig:pusher-puller'></a>\n<figure><img src=\"figures/pusher-puller.svg\" style=\"float: center; width: 75%\"/>\n<center>\n<figcaption>Fig. 3: (a) Illustration of a sperm cell modeled\nusing our two-point swimmer code. The head is represented by a solid particle,\non which a force is acting (black arrow). In the fluid a counter force is\napplied (white arrow). This generates a pusher-type particle. (b) Illustration\nof the puller-type Chlamydomonas algae, also represented by our two-point\nswimmer. (c,d) Sketch of the flow-lines around the swimmers: (c) pusher and (d)\npuller.</figcaption>\n</center>\n</figure>\n\nFor the setup of the swimming particles with hydrodynamics we cannot use the v_swim argument anymore because it is not trivial to determine the friction acting on the particle. Instead, we have to provide the keys f_swim and dipole_length. Together they determine what the dipole strength and the terminal velocity of the swimmer is.\nOne should be careful, however, the dipole_length should be at least one\ngrid spacing, since use is made of the LB interpolation scheme. If the length\nis less than one grid spacing, you can easily run into discretization artifacts\nor cause the particle not to move. This dipole length together with the\ndirector and the keyword <tt>pusher/puller</tt> determines where the counter\nforce on the fluid is applied to make the system force free, see\n<a href='#fig:pusher-puller'>Fig.&nbsp;3</a>(a) for an illustration of the setup. That is to\nsay, a force of magnitude f_swim is applied to the particle (leading\nto a Stokeslet in the fluid, due to friction) and a counter force is applied to\ncompensate for this in the fluid (resulting in an extended dipole flow field,\ndue to the second monopole). For a puller the counter force is applied in front\nof the particle and for a pusher it is in the back\n(<a href='#fig:pusher-puller'>Fig.&nbsp;3</a>(b)).\nFinally, there are a few caveats to the swimming setup with hydrodynamic\ninteractions. First, the stability of this algorithm is governed by the\nstability limitations of the LB method. Second, since the particle is\nessentially a point particle, there is no rotation caused by the fluid\nflow, e.g., a swimmer in a Poiseuille flow. If the thermostat is\nswitched on, the rotational degrees of freedom will also be thermalized, but\nthere is still no contribution of rotation due to ‘external’ flow fields.\nIt is recommended to use an alternative means of obtaining rotations in your LB\nswimming simulations. 
For example, by constructing a raspberry\nparticle.", "import espressomd.lb\n\nHYDRO_PARAMS = {'box_l': 3*[25],\n 'time_step': 0.01,\n 'skin': 1,\n 'agrid': 1,\n 'dens': 1,\n 'visc': 1,\n 'gamma': 1,\n 'mass': 5,\n 'dipole_length': 2,\n 'active_force': 0.1,\n 'mode': 'pusher'}\n\nHYDRO_N_STEPS = 2000\n\nsystem.box_l = HYDRO_PARAMS['box_l']\nsystem.cell_system.skin = HYDRO_PARAMS['skin']\nsystem.time_step = HYDRO_PARAMS['time_step']\nsystem.min_global_cut = HYDRO_PARAMS['dipole_length']", "Exercise\n\nUsing HYDRO_PARAMS, set up a lattice-Boltzmann fluid and activate it as a thermostat (Hint: lattice-Boltzmann)\n\npython\nlbf = espressomd.lb.LBFluidGPU(agrid=HYDRO_PARAMS['agrid'], dens=HYDRO_PARAMS['dens'],\n visc=HYDRO_PARAMS['visc'], tau=HYDRO_PARAMS['time_step'])\nsystem.actors.add(lbf)\nsystem.thermostat.set_lb(LB_fluid=lbf, gamma=HYDRO_PARAMS['gamma'], seed=42)", "box_l = np.array(HYDRO_PARAMS['box_l'])\npos = box_l/2.\npos[2] = -10.", "Exercise\n\nUsing HYDRO_PARAMS, place particle at pos that swims in z-direction. The particle handle should be called particle.\n\npython\nparticle = system.part.add(\n pos=pos, mass=HYDRO_PARAMS['mass'], rotation=3*[False],\n swimming={'f_swim': HYDRO_PARAMS['active_force'],\n 'mode': HYDRO_PARAMS['mode'],\n 'dipole_length': HYDRO_PARAMS['dipole_length']})", "system.integrator.run(HYDRO_N_STEPS)\n\nvels = np.squeeze(lbf[:, int(system.box_l[1]/2), :].velocity)\nvel_abs = np.linalg.norm(vels, axis=2)\n\nlb_shape = lbf.shape\nxs, zs = np.meshgrid(np.linspace(0.5, box_l[0] - 0.5, num=lb_shape[0]),\n np.linspace(0.5, box_l[2] - 0.5, num=lb_shape[2]))\n\nfig_vels, ax_vels = plt.subplots(figsize=(10, 6))\nim = plt.pcolormesh(vel_abs.T, cmap='YlOrRd')\nplt.quiver(xs, zs, vels[:, :, 0].T, vels[:, :, 2].T, angles='xy', scale=0.005)\ncirc = plt.Circle(particle.pos_folded[[0, 2]], 0.5, color='blue')\nax_vels.add_patch(circ)\nax_vels.set_aspect('equal')\nplt.xlabel('x')\nplt.ylabel('z')\ncb = plt.colorbar(im, label=r'$|v_{\\mathrm{fluid}}|$')\nplt.show()", "We can also export the particle and fluid data to .vtk format to display the results with a visualization software like ParaView.", "lbf.write_vtk_velocity('./fluid.vtk')\nsystem.part.writevtk('./particle.vtk')", "The result of such a visualization could look like <a href='#fig:flow_field'>Fig.&nbsp;4</a>.\n<a id='fig:flow_field'></a>\n<figure><img src=\"figures/flow_field.svg\" style=\"float: center; width: 40%\"/>\n<center>\n<figcaption>Fig. 4: Snapshot of the flow field around a pusher particle visualized with ParaView.</figcaption>\n</center>\n</figure>\n\nFurther reading\n<a id='[1]'></a>[1] M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, M. Viale, and V. Zdravkovic. Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. Proc. Natl. Acad. Sci., 105:1232, 2008.\n<a id='[2]'></a>[2] Y. Katz, K. Tunstrøm, C.C. Ioannou, C. Huepe, and I.D. Couzin. Inferring the structure and dynamics of interactions in schooling fish. Proc. Nat. Acad. Sci., 108(46):18720, 2011.\n<a id='[3]'></a>[3] D. Helbing, I. Farkas, and T. Vicsek. Simulating dynamical features of escape panic. Nature, 407:487, 2000.\n<a id='[4]'></a>[4] J. Zhang, W. Klingsch, A. Schadschneider, and A. Seyfried. Experimental study of pedestrian flow through a T-junction. In V. V. Kozlov, A. P. Buslaev, A. S. Bugaev, M. V. Yashina, A. Schadschneider, and M. 
Schreckenberg, editors, Traffic and Granular Flow ’11, page 241. Springer (Berlin/Heidelberg), 2013.\n<a id='[5]'></a>[5] J.L. Silverberg, M. Bierbaum, J.P. Sethna, and I. Cohen. Collective motion of humans in mosh and circle pits at heavy metal concerts. Phys. Rev. Lett., 110:228701, 2013.\n<a id='[6]'></a>[6] A. Sokolov, I.S. Aranson, J.O. Kessler, and R.E. Goldstein. Concentration dependence of the collective dynamics of swimming bacteria. Phys. Rev. Lett., 98:158102, 2007.\n<a id='[7]'></a>[7] J. Schwarz-Linek, C. Valeriani, A. Cacciuto, M. E. Cates, D. Marenduzzo, A. N. Morozov, and W. C. K. Poon. Phase separation and rotor self-assembly in active particle suspensions. Proc. Nat. Acad. Sci., 109:4052, 2012.\n<a id='[8]'></a>[8] M. Reufer, R. Besseling, J. Schwarz-Linek, V.A. Martinez, A.N. Morozov, J. Arlt, D. Trubitsyn, F.B. Ward, and W.C.K. Poon. Switching of swimming modes in magnetospirillium gryphiswaldense. Biophys. J., 106:37, 2014.\n<a id='[9]'></a>[9] D.M. Woolley. Motility of spermatozoa at surfaces. Reproduction, 126:259, 2003.\n<a id='[10]'></a>[10] I.H. Riedel, K. Kruse, and J. Howard. A self-organized vortex array of hydrodynamically entrained sperm cells. Science, 309(5732):300, 2005.\n<a id='[11]'></a>[11] R. Ma, G.S. Klindt, I.H. Riedel-Kruse, F. Jülicher, and B.M. Friedrich. Active phase and amplitude fluctuations of flagellar beating. Phys. Rev. Lett., 113:048101, 2014.\n<a id='[12]'></a>[12] M. Polin, I. Tuval, K. Drescher, J.P. Gollub, and R.E. Goldstein. Chlamydomonas swims with two “gears” in a eukaryotic version of run-and-tumble locomotion. Science, 325:487, 2009.\n<a id='[13]'></a>[13] V.F. Geyer, F. Jülicher, J. Howard, and B.M. Friedrich. Cell-body rocking is a dominant mechanism for flagellar synchronization in a swimming alga. Proc. Nat. Acad. Sci., 110:18058, 2013.\n<a id='[14]'></a>[14] D. Mizuno, C. Tardin, C.F. Schmidt, and F.C. MacKintosh. Nonequilibrium mechanics of active cytoskeletal networks. Science, 315:370, 2007.\n<a id='[15]'></a>[15] R.F. Ismagilov, A. Schwartz, N. Bowden, and G.M. Whitesides. Autonomous movement and self-assembly. Angew. Chem. Int. Ed., 41:652, 2002.\n<a id='[16]'></a>[16] W. F. Paxton, K. C. Kistler, C. C. Olmeda, A. Sen, S. K. St. Angelo, Y. Cao, T. E. Mallouk, P. E. Lammert, and V. H. Crespi. Catalytic nanomotors: Autonomous movement of striped nanorods. J. Am. Chem. Soc., 126:13424, 2004.\n<a id='[17]'></a>[17] Y. Wang, R. M. Hernandez, D. J. Bartlett, J. M. Bingham, T. R. Kline, A. Sen, and T. E. Mallouk. Bipolar electrochemical mechanism for the propulsion of catalytic nanomotors in hydrogen peroxide solutions. Langmuir, 22:10451, 2006.\n<a id='[18]'></a>[18] A. Brown and W. Poon. Ionic effects in self-propelled Pt-coated Janus swimmers. Soft Matter, 10:4016–4027, 2014.\n<a id='[19]'></a>[19] S. Ebbens, D.A. Gregory, G. Dunderdale, J.R. Howse, Y. Ibrahim, T.B. Liverpool, and R. Golestanian. Electrokinetic effects in catalytic platinum-insulator Janus swimmers. Euro. Phys. Lett., 106:58003, 2014.\n<a id='[20]'></a>[20] S. Ebbens, M.-H. Tu, J. R. Howse, and R. Golestanian. Size dependence of the propulsion velocity for catalytic Janus-sphere swimmers. Phys. Rev. E, 85:020401, 2012.\n<a id='[21]'></a>[21] J. R. Howse, R. A. L. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian. Self-motile colloidal particles: From directed propulsion to random walk. Phys. Rev. Lett., 99:048102, 2007.\n<a id='[22]'></a>[22] L. F. Valadares, Y.-G. Tao, N. S. Zacharia, V. Kitaev, F. Galembeck, R. Kapral, and G. A. Ozin. 
Catalytic nanomotors: Self-propelled sphere dimers. Small, 6:565, 2010.\n<a id='[23]'></a>[23] J. Simmchen, V. Magdanz, S. Sanchez, S. Chokmaviroj, D. Ruiz-Molina, A. Baeza, and O.G. Schmidt. Effect of surfactants on the performance of tubular and spherical micromotors – a comparative study. RSC Adv., 4:20334, 2014.\n<a id='[24]'></a>[24] H.-R. Jiang, N. Yoshinaga, and M. Sano. Active motion of a Janus particle by self-thermophoresis in a defocused laser beam. Phys. Rev. Lett., 105:268302, 2010.\n<a id='[25]'></a>[25] L. Baraban, R. Streubel, D. Makarov, L. Han, D. Karnaushenko, O. G. Schmidt, and G. Cuniberti. Fuel-free locomotion of Janus motors: Magnetically induced thermophoresis. ACS Nano, 7:1360, 2013.\n<a id='[26]'></a>[26] I. Buttinoni, G. Volpe, F. Kümmel, G. Volpe, and C. Bechinger. Active Brownian motion tunable by light. J. Phys.: Condens. Matter, 24:284129, 2012.\n<a id='[27]'></a>[27] A. A. Solovev, Y. Mei, E. Bermúdez Ureña, G. Huang, and O. G. Schmidt. Catalytic microtubular jet engines self-propelled by accumulated gas bubbles. Small, 5:1688, 2009.\n<a id='[28]'></a>[28] Y. Mei, A. A. Solovev, S. Sanchez, and O. G. Schmidt. Rolled-up nanotech on polymers: from basic perception to self-propelled catalytic microengines. Chem. Soc. Rev., 40:2109, 2011.\n<a id='[29]'></a>[29] M.E. Cates. Diffusive transport without detailed balance in motile bacteria: does microbiology need statistical physics? Rep. Prog. Phys., 75:042601, 2012.\n<a id='[30]'></a>[30] M.E. Cates and J. Tailleur. Motility-induced phase separation. Annu. Rev. Condens. Matter Phys., 6:219, 2015.\n<a id='[31]'></a>[31] A. Arnold et al. Espresso user guide. User Guide: ESPResSo git repository, 3.4-dev-1404-g32d3874:1, 2015.\n<a id='[32]'></a>[32] H. J. Limbach, A. Arnold, B. A. Mann, and C. Holm. ESPResSo – an extensible simulation package for research on soft matter systems. Comp. Phys. Comm., 174:704, 2006.\n<a id='[33]'></a>[33] A. Arnold, O. Lenz, S. Kesselheim, R. Weeber, F. Fahrenberger, D. Roehm, P. Košovan, and C. Holm. ESPResSo 3.1 — Molecular dynamics software for coarse-grained models. In M. Griebel and M. A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations VI, volume 89 of Lecture Notes in Computational Science and Engineering. Springer, 2013.\n<a id='[34]'></a>[34] S. E. Ilse, C. Holm, and J. de Graaf. Surface roughness stabilizes the clustering of self-propelled triangles. The Journal of Chemical Physics, 145(13):134904, 2016.\n<a id='[35]'></a>[35] V. Lobaskin and B. Dünweg. A new model of simulating colloidal dynamics. New J. Phys., 6:54, 2004.\n<a id='[36]'></a>[36] A. Chatterji and J. Horbach. Combining molecular dynamics with lattice Boltzmann: A hybrid method for the simulation of (charged) colloidal systems. J. Chem. Phys., 122:184903, 2005.\n<a id='[37]'></a>[37] L.P. Fischer, T. Peter, C. Holm, and J. de Graaf. The raspberry model for hydrodynamic interactions revisited. I. Periodic arrays of spheres and dumbbells. J. Chem. Phys., 143:084107, 2015.\n<a id='[38]'></a>[38] J. de Graaf, T. Peter, L.P. Fischer, and C. Holm. The raspberry model for hydrodynamic interactions revisited. II. The effect of confinement. J. Chem. Phys., 143:084108, 2015.\n<a id='[39]'></a>[39] J. de Graaf, A. JTM Mathijssen, M. Fabritius, H. Menke, C. Holm, and T. N Shendruk. Understanding the onset of oscillatory swimming in microchannels. Soft Matter, 12(21):4704–4708, 2016.\n<a id='[40]'></a>[40] A. Einstein. Eine neue Bestimmung der Moleküldimension. Ann. 
Phys., 19:289, 1906.\n<a id='[42]'></a>[42] I. Berdakin, Y. Jeyaram, V.V. Moshchalkov, L. Venken, S. Dierckx, S.J. Vanderleyden, A.V. Silhanek, C.A. Condat, and V.I. Marconi. Influence of swimming strategy on microorganism separation by asymmetric obstacles. Phys. Rev. E, 87:052702, 2013.\n<a id='[43]'></a>[43] I. Berdakin, A.V. Silhanek, H.N. Moyano, V.I. Marconi, and C.A. Condat. Quantifying the sorting efficiency of self-propelled run-and-tumble swimmers by geometrical ratchets. Central Euro. J. Phys., 12:1653, 2013.\n<a id='[44]'></a>[44] S.E. Spagnolie and E. Lauga. Hydrodynamics of self-propulsion near a boundary: predictions and accuracy of far-field approximations. J. Fluid Mech., 700:105, 2012.\n<a id='[45]'></a>[45] A. Morozov and D. Marenduzzo. Enhanced diffusion of tracer particles in dilute bacterial suspensions. Soft Matter, 10:2748, 2014.\n<a id='[46]'></a>[46] A. Zöttl and H. Stark. Hydrodynamics determines collective motion and phase behavior of active colloids in quasi-two-dimensional confinement. Phys. Rev. Lett., 112:118101, 2014." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
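The ballistic-to-diffusive crossover discussed in the active-matter record above can be checked numerically without ESPResSo. The sketch below evaluates the analytical active-Brownian-particle MSD against its short- and long-time limits; the values chosen for D, v and tau_R are arbitrary illustrative inputs, not parameters from the tutorial.

```python
import numpy as np

# Arbitrary illustrative parameters (reduced units), not values from the tutorial.
D = 1.0       # translational diffusion coefficient
v = 5.0       # self-propulsion speed
tau_R = 2.0   # rotational diffusion time

def msd_active(t):
    """Analytical MSD of a free, overdamped active Brownian particle."""
    return 6.0 * D * t + 0.5 * v**2 * tau_R**2 * (
        2.0 * t / tau_R + np.exp(-2.0 * t / tau_R) - 1.0)

t = np.logspace(-3, 3, 7) * tau_R
ballistic = 6.0 * D * t + v**2 * t**2        # t << tau_R limit
diffusive = (6.0 * D + v**2 * tau_R) * t     # t >> tau_R limit

for ti, m, b, d in zip(t, msd_active(t), ballistic, diffusive):
    print("t/tau_R = {:8.3f}   MSD = {:10.4g}   ballistic = {:10.4g}   diffusive = {:10.4g}"
          .format(ti / tau_R, m, b, d))
```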
vinhqdang/my_mooc
coursera/machine_learning_specialization/1_foundation/Document retrieval.ipynb
mit
[ "Document retrieval from wikipedia data\nFire up GraphLab Create", "import graphlab", "Load some text data - from wikipedia, pages on people", "people = graphlab.SFrame('people_wiki.gl/')", "Data contains: link to wikipedia article, name of person, text of article.", "people.head()\n\nlen(people)", "Explore the dataset and checkout the text it contains\nExploring the entry for president Obama", "obama = people[people['name'] == 'Barack Obama']\n\nobama\n\nobama['text']", "Exploring the entry for actor George Clooney", "clooney = people[people['name'] == 'George Clooney']\nclooney['text']", "Get the word counts for Obama article", "obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])\n\nprint obama['word_count']", "Sort the word counts for the Obama article\nTurning dictonary of word counts into a table", "obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])", "Sorting the word counts to show most common words at the top", "obama_word_count_table.head()\n\nobama_word_count_table.sort('count',ascending=False)", "Most common words include uninformative words like \"the\", \"in\", \"and\",...\nCompute TF-IDF for the corpus\nTo give more weight to informative words, we weigh them by their TF-IDF scores.", "people['word_count'] = graphlab.text_analytics.count_words(people['text'])\npeople.head()\n\ntfidf = graphlab.text_analytics.tf_idf(people['word_count'])\ntfidf\n\npeople['tfidf'] = tfidf['docs']", "Examine the TF-IDF for the Obama article", "obama = people[people['name'] == 'Barack Obama']\n\nobama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)", "Words with highest TF-IDF are much more informative.\nManually compute distances between a few people\nLet's manually compare the distances between the articles for a few famous people.", "clinton = people[people['name'] == 'Bill Clinton']\n\nbeckham = people[people['name'] == 'David Beckham']", "Is Obama closer to Clinton than to Beckham?\nWe will use cosine distance, which is given by\n(1-cosine_similarity) \nand find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.", "graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])\n\ngraphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])", "Build a nearest neighbor model for document retrieval\nWe now create a nearest-neighbors model and apply it to document retrieval.", "knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')", "Applying the nearest-neighbors model for retrieval\nWho is closest to Obama?", "knn_model.query(obama)", "As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. \nOther examples of document retrieval", "swift = people[people['name'] == 'Taylor Swift']\n\nknn_model.query(swift)\n\njolie = people[people['name'] == 'Angelina Jolie']\n\nknn_model.query(jolie)\n\narnold = people[people['name'] == 'Arnold Schwarzenegger']\n\nknn_model.query(arnold)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
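The document-retrieval record above relies on GraphLab Create, which is no longer generally available. As a rough, hypothetical equivalent of the same TF-IDF / cosine-distance / nearest-neighbour pipeline, here is a scikit-learn sketch; the three-document toy corpus and the names are made up for illustration and are not the Wikipedia people dataset used in the notebook.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.neighbors import NearestNeighbors

# Toy stand-in corpus; in the notebook this would be the Wikipedia 'text' column.
docs = [
    "Barack Obama served as president of the United States.",
    "Bill Clinton was a president of the United States.",
    "David Beckham is a retired English footballer.",
]
names = ["Obama", "Clinton", "Beckham"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# Cosine distance between the Obama document and the other two
print("Obama-Clinton:", cosine_distances(X[0], X[1])[0, 0])
print("Obama-Beckham:", cosine_distances(X[0], X[2])[0, 0])

# Nearest-neighbour retrieval, analogous to graphlab.nearest_neighbors.create / query
nn = NearestNeighbors(metric="cosine").fit(X)
dist, idx = nn.kneighbors(X[0], n_neighbors=2)
print([(names[i], d) for i, d in zip(idx[0], dist[0])])
```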
lorisercole/thermocepstrum
examples/example_cepstrum_singlecomp_silica.ipynb
gpl-3.0
[ "Example 1: Cepstral Analysis of solid amorphous Silica\nThis example shows the basic usage of sportran to compute the thermal conductivity of a classical MD simulation of a-SiO$_2$.", "import numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\ntry:\n import sportran as st\nexcept ImportError:\n from sys import path\n path.append('..')\n import sportran as st\n\nc = plt.rcParams['axes.prop_cycle'].by_key()['color']\n\n%matplotlib notebook", "1. Load trajectory\nRead the heat current from a simple column-formatted file. The desired columns are selected based on their header (e.g. with LAMMPS format).\nFor other input formats see corresponding the example.", "jfile = st.i_o.TableFile('./data/Silica.dat', group_vectors=True)\n\njfile.read_datalines(start_step=0, NSTEPS=0, select_ckeys=['flux1'])", "2. Heat Current\nDefine a HeatCurrent from the trajectory, with the correct parameters.", "DT_FS = 1.0 # time step [fs]\nTEMPERATURE = 1065.705630 # temperature [K]\nVOLUME = 3130.431110818 # volume [A^3]\n\nj = st.HeatCurrent(jfile.data['flux1'], UNITS= 'metal', DT_FS=DT_FS,\n TEMPERATURE=TEMPERATURE, VOLUME=VOLUME)\n\n# trajectory\nf = plt.figure()\nax = plt.plot(j.timeseries()/1000., j.traj);\nplt.xlim([0, 1.0])\nplt.xlabel(r'$t$ [ps]')\nplt.ylabel(r'$J$ [eV A/ps]');", "Compute the Power Spectral Density and filter it for visualization.", "# Periodogram with given filtering window width\nax = j.plot_periodogram(PSD_FILTER_W=0.5, kappa_units=True)\nprint(j.Nyquist_f_THz)\nplt.xlim([0, 50])\nax[0].set_ylim([0, 150]);\nax[1].set_ylim([12, 18]);", "3. Resampling\nIf the Nyquist frequency is very high (i.e. the sampling time is small), such that the log-spectrum goes to low values, you may want resample your time series to obtain a maximum frequency $f^$.\nBefore performing that operation, the time series is automatically filtered to reduce the amount of aliasing introduced. Ideally you do not want to go too low in $f^$. In an intermediate region the results should not change. \nTo perform resampling you can choose the resampling frequency $f^$ or the resampling step (TSKIP). If you choose $f^$, the code will try to choose the closest value allowed.\nThe resulting PSD is visualized to ensure that the low-frequency region is not affected.", "FSTAR_THZ = 28.0\njf,ax = j.resample(fstar_THz=FSTAR_THZ, plot=True, freq_units='thz')\nplt.xlim([0, 80])\nax[1].set_ylim([12,18]);\n\nax = jf.plot_periodogram(PSD_FILTER_W=0.1)\nax[1].set_ylim([12, 18]);", "4. Cepstral Analysis\nPerform Cepstral Analysis. The code will:\n 1. the parameters describing the theoretical distribution of the PSD are computed\n 2. the Cepstral coefficients are computed by Fourier transforming the log(PSD)\n 3. the Akaike Information Criterion is applied\n 4. 
the resulting $\\kappa$ is returned", "jf.cepstral_analysis()\n\n# Cepstral Coefficients\nprint('c_k = ', jf.dct.logpsdK)\n\nax = jf.plot_ck()\nax.set_xlim([0, 50])\nax.set_ylim([-0.5, 0.5])\nax.grid();\n\n# AIC function\nf = plt.figure()\nplt.plot(jf.dct.aic, '.-', c=c[0])\nplt.xlim([0, 200])\nplt.ylim([2800, 3000]);\n\nprint('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))\nprint('AIC_min = {:f}'.format(jf.dct.aic_min))", "Plot the thermal conductivity $\\kappa$ as a function of the cutoff $P^*$", "# L_0 as a function of cutoff K\nax = jf.plot_L0_Pstar()\nax.set_xlim([0, 200])\nax.set_ylim([12.5, 14.5]);\n\nprint('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))\nprint('AIC_min = {:f}'.format(jf.dct.aic_min))\n\n# kappa as a function of cutoff K\nax = jf.plot_kappa_Pstar()\nax.set_xlim([0,200])\nax.set_ylim([0, 5.0]);\n\nprint('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))\nprint('AIC_min = {:f}'.format(jf.dct.aic_min))", "Print the results :)", "results = jf.cepstral_log\nprint(results)", "You can now visualize the filtered PSD...", "# filtered log-PSD\nax = j.plot_periodogram(0.5, kappa_units=True)\nax = jf.plot_periodogram(0.5, axes=ax, kappa_units=True)\nax = jf.plot_cepstral_spectrum(axes=ax, kappa_units=True)\nax[0].axvline(x = jf.Nyquist_f_THz, ls='--', c='r')\nax[1].axvline(x = jf.Nyquist_f_THz, ls='--', c='r')\nplt.xlim([0., 50.])\nax[1].set_ylim([12,18])\nax[0].legend(['original', 'resampled', 'cepstrum-filtered'])\nax[1].legend(['original', 'resampled', 'cepstrum-filtered']);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
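The core idea behind the cepstral analysis used in the record above (Fourier-transform the log-periodogram, keep only the first P* "cepstral" coefficients, transform back) can be illustrated with plain numpy. This is only a schematic sketch on a synthetic signal, not sportran's implementation, which additionally handles physical units, statistical corrections and the AIC-based choice of P*; the signal, the smoothing and P_star = 12 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stationary signal standing in for a single heat-flux component
N = 2**14
x = rng.normal(size=N)
x = np.convolve(x, np.ones(8) / 8.0, mode="same")   # mild smoothing -> coloured spectrum

# Two-sided periodogram and its logarithm
X = np.fft.fft(x)
psd = np.abs(X)**2 / N
log_psd = np.log(psd + 1e-30)

# "Cepstral" coefficients: inverse FFT of the log-periodogram (real by symmetry)
c = np.real(np.fft.ifft(log_psd))

# Low-pass filter in the cepstral domain: keep the first P* coefficients
P_star = 12
window = np.zeros(N)
window[:P_star] = 1.0
window[-(P_star - 1):] = 1.0          # keep the mirrored coefficients as well
log_psd_smooth = np.real(np.fft.fft(c * window))

print("raw log-PSD at f=0       :", log_psd[0])
print("cepstrum-filtered at f=0 :", log_psd_smooth[0])
```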
ondrejiayc/StatisticalMethods
examples/Cepheids/PeriodMagnitudeRelation.ipynb
gpl-2.0
[ "A Period - Magnitude Relation in Cepheid Stars\n\n\nCepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).\n\n\nA lot of monitoring data - repeated imaging and subsequent \"photometry\" of the star - can provide a measurement of the absolute magnitude (if we know the distance to it's host galaxy) and the period of the oscillation.\n\n\nLet's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).\n\n\nOur goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.", "from __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15.0, 8.0) ", "A Look at Each Host Galaxy's Cepheids\nLet's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.", "# First, we need to know what's in the data file.\n\n!head -15 R11ceph.dat\n\nclass Cepheids(object):\n \n def __init__(self,filename):\n # Read in the data and store it in this master array:\n self.data = np.loadtxt(filename)\n self.hosts = self.data[:,1].astype('int').astype('str')\n # We'll need the plotting setup to be the same each time we make a plot:\n colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']\n self.colors = dict(zip(self.list_hosts(), colornames))\n self.xlimits = np.array([0.3,2.3])\n self.ylimits = np.array([30.0,17.0])\n return\n \n def list_hosts(self):\n # The list of (9) unique galaxy host names:\n return np.unique(self.hosts)\n \n def select(self,ID):\n # Pull out one galaxy's data from the master array:\n index = (self.hosts == str(ID))\n self.mobs = self.data[index,2]\n self.merr = self.data[index,3]\n self.logP = np.log10(self.data[index,4])\n return\n \n def plot(self,X):\n # Plot all the points in the dataset for host galaxy X.\n ID = str(X)\n self.select(ID)\n plt.rc('xtick', labelsize=16) \n plt.rc('ytick', labelsize=16)\n plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)\n plt.xlabel('$\\\\log_{10} P / {\\\\rm days}$',fontsize=20)\n plt.ylabel('${\\\\rm magnitude (AB)}$',fontsize=20)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)\n return\n\n def overlay_straight_line_with(self,a=0.0,b=24.0):\n # Overlay a straight line with gradient a and intercept b.\n x = self.xlimits\n y = a*x + b\n plt.plot(x, y, 'k-', alpha=0.5, lw=2)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n return\n \n def add_legend(self):\n plt.legend(loc='upper left')\n return\n\n\ndata = Cepheids('R11ceph.dat')\nprint(data.colors)", "OK, now we are all set up! Let's plot one of the datasets.", "data.plot(4258)\n\n# for ID in data.list_hosts():\n# data.plot(ID)\n \ndata.overlay_straight_line_with(a=-2.0,b=24.0)\n\ndata.add_legend()", "Q: Is the Cepheid Period-Luminosity relation likely to be well-modeled by a power law ?\nIs it easy to find straight lines that \"fit\" all the data from each host? 
And do we get the same \"fit\" for each host?\nInferring the Period-Magnitude Relation\n\nLet's try inferring the parameters $a$ and $b$ of the following linear relation:\n\n$m = a\\;\\log_{10} P + b$\n\nWe have data consisting of observed magnitudes with quoted uncertainties, of the form \n\n$m^{\\rm obs} = 24.51 \\pm 0.31$ at $\\log_{10} P = \\log_{10} (13.0/{\\rm days})$\n\nLet's draw the PGM together, on the whiteboard, imagining our way through what we would do to generate a mock dataset like the one we have.\n\nQ: What is the PDF for $m$, ${\\rm Pr}(m|a,b,H)$?\nQ: What are reasonable assumptions about the sampling distribution for the $k^{\\rm th}$ datapoint, ${\\rm Pr}(m^{\\rm obs}_k|m,H)$?\nQ: What is the conditional PDF ${\\rm Pr}(m_k|a,b,\\log{P_k},H)$?\nQ: What is the resulting joint likelihood, ${\\rm Pr}(m^{\\rm obs}|a,b,H)$?\nQ: What could be reasonable assumptions for the prior ${\\rm Pr}(a,b|H)$?\nWe should now be able to code up functions for the log likelihood, log prior and log posterior, such that we can evaluate them on a 2D parameter grid. Let's fill them in:", "def log_likelihood(logP,mobs,merr,a,b):\n return 0.0 # m given a,b? mobs given m? Combining all data points?\n\ndef log_prior(a,b):\n return 0.0 # Ranges? Functions?\n\ndef log_posterior(logP,mobs,merr,a,b):\n return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)", "Now, let's set up a suitable parameter grid and compute the posterior PDF!", "# Select a Cepheid dataset:\ndata.select(4258)\n\n# Set up parameter grids:\nnpix = 100\namin,amax = -4.0,-2.0\nbmin,bmax = 25.0,27.0\nagrid = np.linspace(amin,amax,npix)\nbgrid = np.linspace(bmin,bmax,npix)\nlogprob = np.zeros([npix,npix])\n\n# Loop over parameters, computing unnormlized log posterior PDF:\nfor i,a in enumerate(agrid):\n for j,b in enumerate(bgrid):\n logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)\n\n# Normalize and exponentiate to get posterior density:\nZ = np.max(logprob)\nprob = np.exp(logprob - Z)\nnorm = np.sum(prob)\nprob /= norm", "Now, plot, with confidence contours:", "sorted = np.sort(prob.flatten())\nC = sorted.cumsum()\n\n# Find the pixel values that lie at the levels that contain\n# 68% and 95% of the probability:\nlvl68 = np.min(sorted[C > (1.0 - 0.68)])\nlvl95 = np.min(sorted[C > (1.0 - 0.95)])\n\nplt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])\nplt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])\nplt.grid()\nplt.xlabel('slope a')\nplt.ylabel('intercept b / AB magnitudes')", "Are these inferred parameters sensible? 
\n\n\nLet's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.", "data.plot(4258)\n\ndata.overlay_straight_line_with(a=-3.0,b=26.3)\n\ndata.add_legend()", "Summarizing our Inferences\nLet's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and 68% credible interval.", "prob_a_given_data = np.sum(prob,axis=0)\nprob_b_given_data = np.sum(prob,axis=1)\n\nprint(prob_a_given_data.shape, np.sum(prob_a_given_data))\n\n# Plot 1D distributions:\n\nfig,ax = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(15, 6)\nplt.subplots_adjust(wspace=0.2)\n\nleft = ax[0].plot(agrid, prob_a_given_data)\nax[0].set_title('${\\\\rm Pr}(a|d)$')\nax[0].set_xlabel('slope $a$')\nax[0].set_ylabel('Posterior probability density')\n\nright = ax[1].plot(bgrid, prob_b_given_data)\nax[1].set_title('${\\\\rm Pr}(a|d)$')\nax[0].set_xlabel('intercept $b$ / AB magnitudes')\nax[1].set_ylabel('Posterior probability density')\n\n# Compress each PDF into a median and 68% credible interval, and report:\n\ndef compress_1D_pdf(x,pr,ci=68,dp=1):\n \n # Interpret credible interval request:\n low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68\n high = 1.0 - low # 0.84 for ci=68\n\n # Find cumulative distribution and compute percentiles:\n cumulant = pr.cumsum()\n pctlow = x[cumulant>low].min()\n median = x[cumulant>0.50].min()\n pcthigh = x[cumulant>high].min()\n \n # Convert to error bars, and format a string:\n errplus = np.abs(pcthigh - median)\n errminus = np.abs(median - pctlow)\n \n report = \"$ \"+str(round(median,dp))+\"^{+\"+str(round(errplus,dp))+\"}_{-\"+str(round(errminus,dp))+\"} $\"\n \n return report\n\nprint(\"a = \",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))\n\nprint(\"b = \",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))", "Notes\n\n\nIn this simple case, our report makes sense: the medians of both 1D marginalized PDFs lie within the region of high 2D posterior PDF. This will not always be the case.\n\n\nThe marginalized posterior for $x$ has a well-defined meaning, regardless of the higher dimensional structure of the joint posterior: it is ${\\rm Pr}(x|d,H)$, the PDF for $x$ given the data and the model, and accounting for the uncertainty in all other parameters.\n\n\nThe high degree of symmetry in this problem is due to the posterior being a bivariate Gaussian. We could have derived the posterior PDF analytically - but in general this will not be possible. The homework invites you to explore various other analytic and numerical possibilities in this simple inference scenario." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
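The Cepheid record above leaves log_likelihood and log_prior as placeholders returning 0.0. One possible way to fill them in, assuming independent Gaussian measurement errors and flat priors (the prior ranges below are arbitrary and wider than the grid used in the notebook), is sketched here with a quick self-check on synthetic data.

```python
import numpy as np

def log_likelihood(logP, mobs, merr, a, b):
    # Model: m = a * log10(P) + b, with a Gaussian sampling distribution per point
    m_model = a * logP + b
    return -0.5 * np.sum(((mobs - m_model) / merr)**2 + np.log(2.0 * np.pi * merr**2))

def log_prior(a, b, amin=-10.0, amax=10.0, bmin=10.0, bmax=40.0):
    # Uniform (flat) prior over a generous rectangle; -inf outside it
    if amin < a < amax and bmin < b < bmax:
        return 0.0
    return -np.inf

def log_posterior(logP, mobs, merr, a, b):
    lp = log_prior(a, b)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(logP, mobs, merr, a, b)

# Quick self-check on synthetic data resembling the notebook's setup
logP = np.linspace(0.5, 2.0, 20)
merr = 0.1 * np.ones(20)
mobs = -3.0 * logP + 26.3 + merr * np.random.randn(20)
print(log_posterior(logP, mobs, merr, -3.0, 26.3))
```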
wtsi-medical-genomics/team-code
python-club/notebooks/python-club-10.ipynb
gpl-2.0
[ "Chapter 10\nLists\nA sequence of elements of any type.", "L = [1,2,3]\nM = ['a', 'b', 'c']\nN = [1, 'a', 2, [32, 64]]", "Lists are mutable while strings are immutable. We can never change a string, only reassign it to something else.", "S = 'abc'\n#S[1] = 'z' # <== Doesn't work!\nL = ['a', 'b', 'c']\nL[1] = 'z'\nprint L", "Some common list procedures:\nreduce\nConvert a sequence (eg list) into a single element. Examples: sum, mean\nmap\nApply some function to each element of a sequence. Examples: making every element in a list positive, capitalizing all elements of a list\nfilter\nSelecting some elements of a sequence according to some condition. Examples: selecting only positive numbers from a list, selecting only elements of a list of strings that have length greater than 10.\nEverything in Python is an object. Think of an object as the underlying data. Objects have individuality. For example,", "a = 23\nb = 23\na is b\n\nlist1 = [1,2,3]\nlist2 = [1,2,3]\nlist1 is list2", "list1 and list2 are equivalent (same values) but not identical (same object). In order to make these two lists identical we can alias the object.", "list2 = list1\nlist1 is list2", "Now both names/variables point at the same object (reference the same object).", "list1[0] = 1234\nprint list1\nprint list2\n\nBack to the strings,\n\nb = 'abc'\na = b\na is b", "Let's try to change b by assigning to a (they reference the same object after all)", "a = 'xyz'\nprint a\nprint b", "What happened is that we have reassigned a to a new object, that is they no longer point at the same object.", "a is b\n\nid(b)", "All exercises from Downey, Allen. Think Python. Green Tea Press, 2014. http://www.greenteapress.com/thinkpython/\nExercise 10.1 (Dan)\nWrite a function called nested_sum that takes a nested list of integers and add up the elements from all of the nested lists.\nExercise 10.2\nUse capitalize_all to write a function named capitalize_nested that takes a nested list of strings and returns a new nested list with all strings capitalized.\ndef capitalize_all(t):\n res = []\n for s in t:\n res.append(s.capitalize())\n return res\nExercise 10.3\nWrite a function that takes a list of numbers and returns the cumulative sum; that is, a new list where the ith element is the sum of the first i+1 elements from the original list. For example, the cumulative sum of [1, 2, 3] is [1, 3, 6].\nExercise 10.4 (Wendy)\nWrite a function called middle that takes a list and returns a new list that contains all but the first and last elements. So middle([1,2,3,4]) should return [2,3].\nExercise 10.5 (Wendy)\nWrite a function called chop that takes a list, modifies it by removing the first and last elements, and returns None.\nExercise 10.6 (Sarah)\nWrite a function called is_sorted that takes a list as a parameter and returns True if the list is sorted in ascending order and False otherwise. You can assume (as a precondition) that the elements of the list can be compared with the relational operators &lt;, &gt;, etc.\nFor example, is_sorted([1,2,2]) should return True and is_sorted(['b','a']) should return False.\nExercise 10.7 (Sarah)\nTwo words are anagrams if you can rearrange the letters from one to spell the other. Write a function called is_anagram that takes two strings and returns True if they are anagrams.\nExercise 10.8\nThe (so-called) Birthday Paradox:\n* Write a function called has_duplicates that takes a list and returns True if there is any element that appears more than once. 
It should not modify the original list.\n* If there are 23 students in your class, what are the chances that two of you have the same birthday? You can estimate this probability by generating random samples of 23 birthdays and checking for matches. Hint: you can generate random birthdays with the randint function in the random module.\nExercise 10.9 (Masa)\nWrite a function called remove_duplicates that takes a list and returns a new list with only the unique elements from the original. Hint: they don’t have to be in the same order.\nExercise 10.10\nWrite a function that reads the file words.txt and builds a list with one element per word. Write two versions of this function, one using the append method and the other using the idiom t = t + [x]. Which one takes longer to run? Why?\nHint: use the time module to measure elapsed time. \nExercise 10.11 (Liu)\nTo check whether a word is in the word list, you could use the in operator, but it would be slow because it searches through the words in order.\nBecause the words are in alphabetical order, we can speed things up with a bisection search (also known as binary search), which is similar to what you do when you look a word up in the dictionary. You start in the middle and check to see whether the word you are looking for comes before the word in the middle of the list. If so, then you search the first half of the list the same way. Otherwise you search the second half.\nEither way, you cut the remaining search space in half. If the word list has 113,809 words, it will take about 17 steps to find the word or conclude that it’s not there.\nWrite a function called bisect that takes a sorted list and a target value and returns the index of the value in the list, if it’s there, or None if it’s not.\nExercise 10.12\nTwo words are a “reverse pair” if each is the reverse of the other. Write a program that finds all the reverse pairs in the word list. \nExercise 10.13\nTwo words “interlock” if taking alternating letters from each forms a new word. For example, “shoe” and “cold” interlock to form “schooled.”\n1. Write a program that finds all pairs of words that interlock. Hint: don’t enumerate all pairs!\n2. Can you find any words that are three-way interlocked; that is, every third letter forms a word, starting from the first, second or third?\nCredit: This exercise is inspired by an example at http://puzzlers.org." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
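The reduce / map / filter patterns described at the start of the chapter above, together with Exercise 10.1, can be illustrated with a few short functions. This is only one possible set of solutions (nested_sum here assumes the list is nested one level deep), written so it runs under both Python 2 and Python 3.

```python
def nested_sum(t):
    """Exercise 10.1: add up all integers in a list nested one level deep."""
    total = 0
    for item in t:
        if isinstance(item, list):
            total += sum(item)
        else:
            total += item
    return total

def capitalize_all(t):
    return [s.capitalize() for s in t]        # a "map": same length, transformed

def only_positive(t):
    return [x for x in t if x > 0]            # a "filter": selects a subset

print(nested_sum([1, 2, [3, 4], 5]))          # 15 -- a "reduce": list -> single value
print(capitalize_all(['a', 'b', 'c']))        # ['A', 'B', 'C']
print(only_positive([-2, 0, 3, 7]))           # [3, 7]
```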
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
mit
[ "문서 전처리\n모든 데이터 분석 모형은 숫자로 구성된 고정 차원 벡터를 독립 변수로 하고 있으므로 문서(document)를 분석을 하는 경우에도 숫자로 구성된 특징 벡터(feature vector)를 문서로부터 추출하는 과정이 필요하다. 이러한 과정을 문서 전처리(document preprocessing)라고 한다.\nBOW (Bag of Words)\n문서를 숫자 벡터로 변환하는 가장 기본적인 방법은 BOW (Bag of Words) 이다. BOW 방법에서는 전체 문서 ${D_1, D_2, \\ldots, D_n}$ 를 구성하는 고정된 단어장(vocabulary) ${W_1, W_2, \\ldots, W_m}$ 를 만들고 $D_i$라는 개별 문서에 단어장에 해당하는 단어들이 포함되어 있는지를 표시하는 방법이다.\n$$ \\text{ if word $W_j$ in document $D_i$ }, \\;\\; \\rightarrow x_{ij} = 1 $$ \nScikit-Learn 의 문서 전처리 기능\nScikit-Learn 의 feature_extraction.text 서브 패키지는 다음과 같은 문서 전처리용 클래스를 제공한다.\n\nCountVectorizer: \n문서 집합으로부터 단어의 수를 세어 카운트 행렬을 만든다.\nTfidfVectorizer: \n문서 집합으로부터 단어의 수를 세고 TF-IDF 방식으로 단어의 가중치를 조정한 카운트 행렬을 만든다.\nHashingVectorizer: \nhashing trick 을 사용하여 빠르게 카운트 행렬을 만든다.", "from sklearn.feature_extraction.text import CountVectorizer\ncorpus = [\n 'This is the first document.',\n 'This is the second second document.',\n 'And the third one.',\n 'Is this the first document?',\n 'The last document?', \n]\nvect = CountVectorizer()\nvect.fit(corpus)\nvect.vocabulary_\n\nvect.transform(['This is the second document.']).toarray()\n\nvect.transform(['Something completely new.']).toarray()\n\nvect.transform(corpus).toarray()", "문서 처리 옵션\nCountVectorizer는 다양한 인수를 가진다. 그 중 중요한 것들은 다음과 같다.\n\nstop_words : 문자열 {‘english’}, 리스트 또는 None (디폴트)\nstop words 목록.‘english’이면 영어용 스탑 워드 사용.\nanalyzer : 문자열 {‘word’, ‘char’, ‘char_wb’} 또는 함수\n단어 n-그램, 문자 n-그램, 단어 내의 문자 n-그램 \ntokenizer : 함수 또는 None (디폴트)\n토큰 생성 함수 .\ntoken_pattern : string\n토큰 정의용 정규 표현식 \nngram_range : (min_n, max_n) 튜플\nn-그램 범위 \nmax_df : 정수 또는 [0.0, 1.0] 사이의 실수. 디폴트 1\n단어장에 포함되기 위한 최대 빈도\nmin_df : 정수 또는 [0.0, 1.0] 사이의 실수. 디폴트 1\n단어장에 포함되기 위한 최소 빈도 \nvocabulary : 사전이나 리스트\n단어장\n\nStop Words\nStop Words 는 문서에서 단어장을 생성할 때 무시할 수 있는 단어를 말한다. 보통 영어의 관사나 접속사, 한국어의 조사 등이 여기에 해당한다. stop_words 인수로 조절할 수 있다.", "vect = CountVectorizer(stop_words=[\"and\", \"is\", \"the\", \"this\"]).fit(corpus)\nvect.vocabulary_\n\nvect = CountVectorizer(stop_words=\"english\").fit(corpus)\nvect.vocabulary_", "토큰(token)\n토큰은 문서에서 단어장을 생성할 때 하나의 단어가 되는 단위를 말한다. analyzer, tokenizer, token_pattern 등의 인수로 조절할 수 있다.\n문서를 보고 어떤 언어인지 맞추는 방법은 토큰으로 사용빈도를 보고 맞춘다. 예를 들어 제일 많이 나오는 char를 e로 잡고 그 다음 뭐 인지를 패턴화해서 맞추는 방식으로", "vect = CountVectorizer(analyzer=\"char\").fit(corpus) #토큰 1개가 vocaburary로 인식. 원래 기본은 word이지만 char가 들어갈 수 있다.\nvect.vocabulary_\n\nimport nltk\nnltk.download(\"punkt\")\n\nvect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus)\nvect.vocabulary_\n\nvect = CountVectorizer(token_pattern=\"t\\w+\").fit(corpus)\nvect.vocabulary_", "n-그램\nn-그램은 단어장 생성에 사용할 토큰의 크기를 결정한다. 1-그램은 토큰 하나만 단어로 사용하며 2-그램은 두 개의 연결된 토큰을 하나의 단어로 사용한다.", "vect = CountVectorizer(ngram_range=(2,2)).fit(corpus)\nvect.vocabulary_\n\nvect = CountVectorizer(ngram_range=(1,2), token_pattern=\"t\\w+\").fit(corpus)\nvect.vocabulary_", "빈도수\nmax_df, min_df 인수를 사용하여 문서에서 토큰이 나타난 횟수를 기준으로 단어장을 구성할 수도 있다. 토큰의 빈도가 max_df로 지정한 값을 초과 하거나 min_df로 지정한 값보다 작은 경우에는 무시한다. 인수 값은 정수인 경우 횟수, 부동소수점인 경우 비중을 뜻한다.", "vect = CountVectorizer(max_df=4, min_df=2).fit(corpus)\nvect.vocabulary_, vect.stop_words_\n\nvect.transform(corpus).toarray()\n\nvect.transform(corpus).toarray().sum(axis=0)", "TF-IDF\nTF-IDF(Term Frequency – Inverse Document Frequency) 인코딩은 단어를 갯수 그대로 카운트하지 않고 모든 문서에 공통적으로 들어있는 단어의 경우 문서 구별 능력이 떨어진다고 보아 가중치를 축소하는 방법이다. 
\n구제적으로는 문서 $d$(document)와 단어 $t$ 에 대해 다음과 같이 계산한다.\n$$ \\text{tf-idf}(d, t) = \\text{tf}(d, t) \\cdot \\text{idf}(d, t) $$\n여기에서\n\n$\\text{tf}(d, t)$: 단어의 빈도수\n$\\text{idf}(d, t)$ : inverse document frequency \n\n$$ \\text{idf}(d, t) = \\log \\dfrac{n_d}{1 + \\text{df}(t)} $$\n\n$n_d$ : 전체 문서의 수\n$\\text{df}(t)$: 단어 $t$를 가진 문서의 수\n\n1을 더하는 이유는 스무딩 하기 위해서. 너무 커지기 때문에 log를 취해서 스케일링 했음. df(t)가 크면 idf가 작게 된다. idf는 가중치를 축소시키기도 하고 확대시키기도 한다.", "from sklearn.feature_extraction.text import TfidfVectorizer\n\ntfidv = TfidfVectorizer().fit(corpus)\ntfidv.transform(corpus).toarray()", "Hashing Trick\nCountVectorizer는 모든 작업을 in-memory 상에서 수행하므로 데이터 양이 커지면 속도가 느려지거나 실행이 불가능해진다. 이 때 \nHashingVectorizer를 사용하면 Hashing Trick을 사용하여 메모리 및 실행 시간을 줄일 수 있다. 하지만 사용 빈도로는 이게 더 잘 안 쓰인다.", "from sklearn.datasets import fetch_20newsgroups\ntwenty = fetch_20newsgroups()\nlen(twenty.data)\n\n%time CountVectorizer().fit(twenty.data).transform(twenty.data)\n\nfrom sklearn.feature_extraction.text import HashingVectorizer\nhv = HashingVectorizer(n_features=10)\n\n%time hv.transform(twenty.data)", "형태소 분석기 이용", "corpus = [\"imaging\", \"image\", \"imagination\", \"imagine\", \"buys\", \"buying\", \"bought\"]\nvect = CountVectorizer().fit(corpus)\nvect.vocabulary_\n\nfrom sklearn.datasets import fetch_20newsgroups\ntwenty = fetch_20newsgroups()\ndocs = twenty.data[:100]\n\nvect = CountVectorizer(stop_words=\"english\", token_pattern=\"wri\\w+\").fit(docs)\nvect.vocabulary_\n\nfrom nltk.stem import SnowballStemmer\n\nclass StemTokenizer(object):\n def __init__(self):\n self.s = SnowballStemmer('english')\n self.t = CountVectorizer(stop_words=\"english\", token_pattern=\"wri\\w+\").build_tokenizer()\n def __call__(self, doc):\n return [self.s.stem(t) for t in self.t(doc)]\n\nvect = CountVectorizer(tokenizer=StemTokenizer()).fit(docs)\nvect.vocabulary_", "예", "import json\nimport string\nfrom konlpy.utils import pprint\nfrom konlpy.tag import Hannanum\nhannanum = Hannanum()\n\nreq = urllib2.Request(\"https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/\")\nopener = urllib2.build_opener()\nf = opener.open(req)\njson = json.loads(f.read())\ncell = [\"\\n\".join(c[\"source\"]) for c in json[\"cells\"] if c[\"cell_type\"] == u\"markdown\"]\ndocs = [w for w in hannanum.nouns(\" \".join(cell)) if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))]\n\nvect = CountVectorizer().fit(docs)\ncount = vect.transform(docs).toarray().sum(axis=0)\nplt.bar(range(len(count)), count)\nplt.show()\n\npprint(zip(vect.get_feature_names(), count))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
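The TF-IDF formula quoted in the preprocessing record above, idf(t) = log(n_d / (1 + df(t))), can be computed by hand on the same toy corpus to see how common words are down-weighted. Note that this follows the formula as written there, not scikit-learn's TfidfVectorizer, which uses a smoothed variant plus normalization, so the numbers will not match its output; the crude lowercase/strip-punctuation tokenizer below is an arbitrary simplification.

```python
import numpy as np

corpus = [
    'This is the first document.',
    'This is the second second document.',
    'And the third one.',
    'Is this the first document?',
    'The last document?',
]
tokenized = [doc.lower().replace('.', '').replace('?', '').split() for doc in corpus]
vocab = sorted(set(w for doc in tokenized for w in doc))

n_d = len(corpus)
df = {w: sum(w in doc for doc in tokenized) for w in vocab}          # document frequency
idf = {w: np.log(n_d / (1.0 + df[w])) for w in vocab}                # formula from the text

tf = np.array([[doc.count(w) for w in vocab] for doc in tokenized], dtype=float)
tfidf = tf * np.array([idf[w] for w in vocab])

print(vocab)
print(np.round(tfidf, 3))
```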
julienchastang/unidata-python-workshop
notebooks/XArray/XArray Introduction.ipynb
mit
[ "<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>XArray Introduction</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png\" alt=\"NumPy Logo\" style=\"height: 250px;\"></div>\n\nOverview:\n\nTeaching: 25 minutes\nExercises: 20 minutes\n\nQuestions\n\nWhat is XArray?\nHow does XArray fit in with Numpy and Pandas?\n\nObjectives\n\nCreate a DataArray.\nOpen netCDF data using XArray\nSubset the data.\n\nXArray\nXArray expands on the capabilities on NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM). \nDataArray\nThe DataArray is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:\n\nCoordinate names and values are stored with the data, making slicing and indexing much more powerful\nIt has a built-in container for attributes", "# Convention for import to get shortened namespace\nimport numpy as np\nimport xarray as xr\n\n# Create some sample \"temperature\" data\ndata = 283 + 5 * np.random.randn(5, 3, 4)\ndata", "Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.", "temp = xr.DataArray(data)\ntemp", "We can also pass in our own dimension names:", "temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])\ntemp", "This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.", "# Use pandas to create an array of datetimes\nimport pandas as pd\ntimes = pd.date_range('2018-01-01', periods=5)\ntimes\n\n# Sample lon/lats\nlons = np.linspace(-120, -60, 4)\nlats = np.linspace(25, 55, 3)", "When we create the DataArray instance, we pass in the arrays we just created:", "temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])\ntemp", "...and we can also set some attribute metadata:", "temp.attrs['units'] = 'kelvin'\ntemp.attrs['standard_name'] = 'air_temperature'\n\ntemp", "Notice what happens if we perform a mathematical operaton with the DataArray: the coordinate values persist, but the attributes are lost. 
This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.", "# For example, convert Kelvin to Celsius\ntemp - 273.15", "Selection\nWe can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).", "temp.sel(time='2018-01-02')", ".sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:", "from datetime import timedelta\ntemp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))", "Exercise\n.interp() works similarly to .sel(). Using .interp(), get an interpolated time series \"forecast\" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href=\"http://xarray.pydata.org/en/stable/interpolation.html\">interp</a>).", "# Your code goes here\n", "Solution", "# %load solutions/interp_solution.py\n", "Slicing with Selection", "temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))", ".loc\nAll of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.", "# As done above\ntemp.loc['2018-01-02']\n\ntemp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]\n\n# This *doesn't* work however:\n#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']", "Opening netCDF data\nWith its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).", "# Open sample North American Reanalysis data in netCDF format\nds = xr.open_dataset('../../data/NARR_19930313_0000.nc')\nds", "This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:", "ds.isobaric1", "or", "ds['isobaric1']", "Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:", "ds_1000 = ds.sel(isobaric1=1000.0)\nds_1000\n\nds_1000.Temperature_isobaric", "Aggregation operations\nNot only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:", "u_winds = ds['u-component_of_wind_isobaric']\nu_winds.std(dim=['x', 'y'])", "Exercise\nUsing the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:\n* x: -182km to 424km\n* y: -1450km to -990km\n(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)\nSolution", "# %load solutions/mean_profile.py\n", "Resources\nThere is much more in the XArray library. To learn more, visit the XArray Documentation" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
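The XArray record above loads its exercise answers from solution files that are not shown (e.g. solutions/interp_solution.py). A sketch of what the .interp() exercise solution might look like is given below, rebuilt on a small synthetic DataArray so the snippet is self-contained; the coordinates mirror those created in the notebook, and xarray's interp requires scipy to be installed.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Rebuild a small synthetic "temperature" DataArray like the one in the notebook
times = pd.date_range('2018-01-01', periods=5)
lats = np.linspace(25, 55, 3)
lons = np.linspace(-120, -60, 4)
data = 283 + 5 * np.random.randn(5, 3, 4)
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])

# Interpolated "forecast" time series for Boulder (40N, 105W)
boulder = temp.interp(lat=40.0, lon=-105.0)
print(boulder.values)
```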
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
apache-2.0
[ "Parameter Server Training\nLearning Objectives\n\nInstantiate a ParameterServerStrategy\nTraining with Model.fit\nTraining with Custom Training Loop\nDefine and run an evaluation loop\n\nIntroduction\nParameter server training\nis a common data-parallel method to scale up model training on multiple\nmachines. A parameter server training cluster consists of workers and parameter\nservers. Variables are created on parameter servers and they are read and updated by workers in each step. By default, workers read and update these variables independently without synchronizing with each other. This is why sometimes parameter server-style training is called asynchronous training.\nIn TF2, parameter server training is powered by the\ntf.distribute.experimental.ParameterServerStrategy class, which distributes\nthe training steps to a cluster that scales up to thousands of workers\n(accompanied by parameter servers). There are two main supported training APIs:\nKeras Training API, also known as Model.fit, and Custom Training Loop (CTL).\nModel.fit is recommended when users prefer a high-level abstraction\nand handling of training, while CTL is recommended when users prefer to define the details of their training\nloop.\nRegardless of the API of choice, distributed training in TF2 involves a\n\"cluster\" with several \"jobs\", and each of the jobs may have one or more\n\"tasks\". When using parameter server training, it is recommended to have one\ncoordinator job (which has the job name chief), multiple worker jobs (job name\nworker), and multiple parameter server jobs (job name ps).\nWhile the coordinator creates resources, dispatches training tasks, writes\ncheckpoints, and deals with task failures, workers and parameter servers run tf.distribute.Server that listen for requests from the coordinator.\nParameter server training with Model.fit API\nParameter server training with Model.fit API requires the coordinator to use a tf.distribute.experimental.ParameterServerStrategy object, and a tf.keras.utils.experimental.DatasetCreator as the input. Similar to Model.fit usage with no strategy, or with other strategies, the workflow\ninvolves creating and compiling the model, preparing the callbacks, followed by a Model.fit call.\nParameter server training with custom training loop (CTL) API\nWith CTLs, the tf.distribute.experimental.coordinator.ClusterCoordinator\nclass is the key component used for the coordinator. The ClusterCoordinator\nclass needs to work in conjunction with a tf.distribute.Strategy object. This\ntf.distribute.Strategy object is needed to provide the information of the cluster and is used to define a training step as we have seen in custom training with MirroredStrategy. The ClusterCoordinator object then dispatches the execution of these training\nsteps to remote workers. For parameter server training, the ClusterCoordinator\nneeds to work with a tf.distribute.experimental.ParameterServerStrategy.\nThe most important API provided by the ClusterCoordinator object is schedule. The schedule API enqueues a tf.function and returns a future-like RemoteValue immediately. The queued functions will be dispatched to remote workers in background threads and their RemoteValues will be filled asynchronously. Since schedule doesn’t require worker assignment, the tf.function passed in can be executed on any available worker. If the worker it is executed on becomes unavailable before its completion, the function will be retried on another available worker. 
Because of this fact and the fact that function execution is not atomic, a function may be executed more than once.\nIn addition to dispatching remote functions, the ClusterCoordinator also helps\nto create datasets on all the workers and rebuild these datasets when a worker recovers from failure.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook\nSetup\nThe tutorial will branch into CTL or Model.fit paths, and you can choose the\none that fits your need. Sections other than \"Training with X\" are appliable to\nboth paths.", "!pip install -q portpicker\n!pip install --upgrade tensorflow==2.6", "NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.", "import multiprocessing\nimport os\nimport random\nimport portpicker\nimport tensorflow as tf\nimport tensorflow.keras as keras\nimport tensorflow.keras.layers.experimental.preprocessing as kpl", "This notebook uses TF2.x.\nPlease check your tensorflow version using the cell below.", "# Show the currently installed version of TensorFlow\nprint(\"TensorFlow version: \",tf.version.VERSION)", "Cluster Setup\nAs mentioned above, a parameter server training cluster requires a coordinator task that runs your training program, one or several workers and parameter server tasks that run TensorFlow servers, i.e. tf.distribute.Server, and possibly an additional evaluation task that runs side-car evaluation (see the side-car evaluation section below). The\nrequirements to set them up are:\n\nThe coordinator task needs to know the addresses and ports of all other TensorFlow servers except the evaluator.\nThe workers and parameter servers need to know which port they need to listen to. For the sake of simplicity, we usually pass in the complete cluster information when we create TensorFlow servers on these tasks.\nThe evaluator task doesn’t have to know the setup of the training cluster. If it does, it should not attempt to connect to the training cluster.\nWorkers and parameter servers should have task types as “worker” and “ps” respectively. The coordinator should use “chief” as the task type for legacy reasons.\n\nIn this tutorial, we will create an in-process cluster so that the whole parameter server training can be run in colab. We will introduce how to set up real clusters in a later section.\nIn-process cluster\nIn this tutorial, we will start a bunch of TensorFlow servers in advance and\nconnect to them later. 
Note that this is only for the purpose of this tutorial's\ndemonstration, and in real training the servers will be started on worker and ps\nmachines.", "def create_in_process_cluster(num_workers, num_ps):\n \"\"\"Creates and starts local servers and returns the cluster_resolver.\"\"\"\n worker_ports = [portpicker.pick_unused_port() for _ in range(num_workers)]\n ps_ports = [portpicker.pick_unused_port() for _ in range(num_ps)]\n\n cluster_dict = {}\n cluster_dict[\"worker\"] = [\"localhost:%s\" % port for port in worker_ports]\n if num_ps > 0:\n cluster_dict[\"ps\"] = [\"localhost:%s\" % port for port in ps_ports]\n\n cluster_spec = tf.train.ClusterSpec(cluster_dict)\n\n # Workers need some inter_ops threads to work properly.\n worker_config = tf.compat.v1.ConfigProto()\n if multiprocessing.cpu_count() < num_workers + 1:\n worker_config.inter_op_parallelism_threads = num_workers + 1\n\n for i in range(num_workers):\n tf.distribute.Server(\n cluster_spec, job_name=\"worker\", task_index=i, config=worker_config,\n protocol=\"grpc\")\n\n for i in range(num_ps):\n tf.distribute.Server(\n cluster_spec, job_name=\"ps\", task_index=i, protocol=\"grpc\")\n\n cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(\n cluster_spec, rpc_layer=\"grpc\")\n return cluster_resolver\n\n# Set the environment variable to allow reporting worker and ps failure to the\n# coordinator. This is a workaround and won't be necessary in the future.\nos.environ[\"GRPC_FAIL_FAST\"] = \"use_caller\"\n\nNUM_WORKERS = 3\nNUM_PS = 2\ncluster_resolver = create_in_process_cluster(NUM_WORKERS, NUM_PS)", "The in-process cluster setup is frequently used in our unit testing. Here is\none example.\nInstantiate a ParameterServerStrategy\nBefore we dive into the training code, let's instantiate a ParameterServerStrategy object. Note that this is needed regardless of whether you are proceeding with a custom training loop or Model.fit. variable_partitioner argument will be explained in the next section.", "variable_partitioner = (\n tf.distribute.experimental.partitioners.FixedShardsPartitioner(\n num_shards=NUM_PS))\n\nstrategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver,\n variable_partitioner=variable_partitioner)", "In order to use GPUs for training, allocate GPUs visible to each worker.\nParameterServerStrategy will use all the available GPUs on each worker,\nwith the restriction that all workers should have the same number of GPUs\navailable. \nVariable sharding\nVariable sharding refers to splitting a variable into multiple smaller\nvariables. We call these smaller variables shards. Variable sharding may be\nuseful to distribute the network load when accessing these shards. It is also\nuseful to distribute computation and storage of a normal variable across\nmultiple parameter servers.\nTo enable variable sharding, you can pass in a variable_partitioner when\nconstructing a ParameterServerStrategy object. The variable_partitioner will\nbe invoked every time when a variable is created and it is expected to return\nthe number of shards along each dimension of the variable. Some out-of-box\nvariable_partitioners are provided such as\ntf.distribute.experimental.partitioners.FixedShardsPartitioner.\nWhen a variable_partitioner is passed in and if you create a variable directly\nunder strategy.scope(), it will become a container type with a variables\nproperty which provides access to the list of shards. 
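For illustration, here is a minimal sketch of what such a sharded variable looks like (this snippet is not part of the original notebook; it assumes the `strategy` object built above with `FixedShardsPartitioner(num_shards=2)`, and the shard count and shapes in the comments are illustrative):
```Python
with strategy.scope():
  v = tf.Variable(initial_value=tf.random.uniform(shape=(10, 4)))

# With a FixedShardsPartitioner of 2 shards, `v` is a sharded container;
# its `variables` property exposes the individual shards.
print(len(v.variables))                        # expected: 2
print([shard.shape for shard in v.variables])  # e.g. [(5, 4), (5, 4)]
```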
In most cases, this\ncontainer will be automatically converted to a Tensor by concatenating all the\nshards. As a result, it can be used as a normal variable. On the other hand,\nsome TensorFlow methods such as tf.nn.embedding_lookup provide efficient\nimplementation for this container type and in these methods automatic\nconcatenation will be avoided.\nPlease see the API docstring of ParameterServerStrategy for more details.\nTraining with Model.fit\n<a id=\"training_with_modelfit\"></a>\nKeras provides an easy-to-use training API via Model.fit that handles the\ntraining loop under the hood, with the flexbility of overridable train_step,\nand callbacks which provide functionalities such as checkpoint saving, or\nsummary saving for TensorBoard. With Model.fit, the same training code can be\nused for other strategies with a simple swap of the strategy object.\nInput data\nModel.fit with parameter server training requires that the input data be\nprovided in a callable that takes a single argument of type\ntf.distribute.InputContext, and returns a tf.data.Dataset. Then, create a\ntf.keras.utils.experimental.DatasetCreator object that takes such callable,\nand an optional tf.distribute.InputOptions object via input_options\nargument. Note that it is recommended to shuffle and repeat the data with\nparameter server training, and specify steps_per_epoch in fit call so the library knows the\nepoch boundaries.\nPlease see\nDistributed Input\nguide for more information about the InputContext argument.", "def dataset_fn(input_context):\n global_batch_size = 64\n batch_size = input_context.get_per_replica_batch_size(global_batch_size)\n x = tf.random.uniform((10, 10))\n y = tf.random.uniform((10,))\n dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(10).repeat()\n dataset = dataset.shard(\n input_context.num_input_pipelines, input_context.input_pipeline_id)\n dataset = dataset.batch(batch_size)\n dataset = dataset.prefetch(2)\n return dataset\n\ndc = tf.keras.utils.experimental.DatasetCreator(dataset_fn)", "The code in dataset_fn will be invoked on the input device, which is usually\nthe CPU, on each of the worker machines.\nModel construction and compiling\nNow, you will create a tf.keras.Model with the APIs of choice (a trivial\ntf.keras.models.Sequential model is being used as a demonstration here),\nfollowed by a Model.compile call to incorporate components such as optimizer,\nmetrics, or parameters such as steps_per_execution:", "# TODO\nwith strategy.scope():\n model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n\nmodel.compile(tf.keras.optimizers.SGD(), loss='mse', steps_per_execution=10)", "Callbacks and training\n<a id=\"callbacks-and-training\"> </a>\nBefore you call model.fit for the actual training, let's prepare the needed\ncallbacks for common tasks such as:\n\n\nModelCheckpoint - to save the model weights.\n\n\nBackupAndRestore - to make sure the training progress is automatically\n backed up, and recovered if the cluster experiences unavailability (such as\n abort or preemption), or\n\n\nTensorBoard - to save the progress reports into summary files which get\n visualized in TensorBoard tool.\n\n\nNote that due to performance consideration, custom callbacks cannot have batch\nlevel callbacks overridden when used with ParameterServerStrategy. Please\nmodify your custom callbacks to make them epoch level calls, and adjust\nsteps_per_epoch to a suitable value. 
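For example, a custom callback that would normally log per batch can be converted to epoch-level calls along these lines (a sketch only; the class name and printed values are illustrative and not part of the original notebook):
```Python
class EpochLossLogger(tf.keras.callbacks.Callback):
  # Override epoch-level hooks instead of on_train_batch_end, which custom
  # callbacks cannot use with ParameterServerStrategy.
  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    print("Finished epoch %d, loss=%s" % (epoch, logs.get("loss")))
```
Such a callback could simply be appended to the `callbacks` list passed to the `Model.fit` call below.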
In addition, steps_per_epoch is a\nrequired argument for Model.fit when used with ParameterServerStrategy.", "working_dir = '/tmp/my_working_dir'\nlog_dir = os.path.join(working_dir, 'log')\nckpt_filepath = os.path.join(working_dir, 'ckpt')\nbackup_dir = os.path.join(working_dir, 'backup')\ncallbacks = [\n tf.keras.callbacks.TensorBoard(log_dir=log_dir),\n tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_filepath),\n #tf.keras.callbacks.experimental.BackupAndRestore(backup_dir=backup_dir),\n]\nmodel.fit(dc, epochs=5, steps_per_epoch=20, callbacks=callbacks)", "Direct usage with ClusterCoordinator (optional)\nEven if you choose Model.fit training path, you can optionally instantiate a\nClusterCoordinator object to schedule other functions you would like to be\nexecuted on the workers. See below\nTraining with Custom Training Loop\nsection for more details and examples.\nTraining with Custom Training Loop\n<a id=\"training_with_custom_training_loop\"> </a>\nCustom training loop with tf.distribute.Strategy \nprovides great flexibility to define training loops. With the ParameterServerStrategy defined above, you will use a\nClusterCoordinator to dispatch the execution of training steps to remote\nworkers.\nThen, you will create a model, define a dataset and a step function as we have\nseen in the training loop with other tf.distribute.Strategys. You can find\nmore details in this\ntutorial.\nTo ensure efficient dataset prefetching, use the recommended \ndistributed dataset creation APIs mentioned in the\nDispatch Training steps to remote workers\nsection below. Also, make sure to call strategy.run inside worker_fn \nto take full advantage of GPUs allocated on workers. Rest of the steps \nare the same for training with or without GPUs.\nLet’s create these components in following steps:\nSetup the data\nFirst, write a function that creates a dataset that includes preprocessing logic implemented by Keras preprocessing layers. 
We will create these layers outside the dataset_fn but apply the transformation inside the dataset_fn since you will wrap the dataset_fn into a tf.function which doesn't allow variables to be created inside it.", "feature_vocab = [\n \"avenger\", \"ironman\", \"batman\", \"hulk\", \"spiderman\", \"kingkong\",\n \"wonder_woman\"\n]\nlabel_vocab = [\"yes\", \"no\"]\n\nwith strategy.scope():\n feature_lookup_layer = kpl.StringLookup(vocabulary=feature_vocab)\n\n label_lookup_layer = kpl.StringLookup(vocabulary=label_vocab,\n num_oov_indices=0,\n mask_token=None)\n\n raw_feature_input = keras.layers.Input(\n shape=(3,), dtype=tf.string, name=\"feature\")\n feature_id_input = feature_lookup_layer(raw_feature_input)\n feature_preprocess_stage = keras.Model(\n {\"features\": raw_feature_input}, feature_id_input)\n\n raw_label_input = keras.layers.Input(\n shape=(1,), dtype=tf.string, name=\"label\")\n label_id_input = label_lookup_layer(raw_label_input)\n label_preprocess_stage = keras.Model({\"label\": raw_label_input}, label_id_input)", "Generate toy examples in a dataset:", "def feature_and_label_gen(num_examples=200):\n examples = {\"features\": [], \"label\": []}\n for _ in range(num_examples):\n features = random.sample(feature_vocab, 3)\n label = [\"yes\"] if \"avenger\" in features else [\"no\"]\n examples[\"features\"].append(features)\n examples[\"label\"].append(label)\n return examples\n\nexamples = feature_and_label_gen()", "Then we create the training dataset wrapped in a dataset_fn:", "def dataset_fn(_):\n raw_dataset = tf.data.Dataset.from_tensor_slices(examples)\n\n# TODO\n train_dataset = raw_dataset.map(\n lambda x: (\n {\"features\": feature_preprocess_stage(x[\"features\"])},\n label_preprocess_stage(x[\"label\"])\n )).shuffle(200).batch(32).repeat()\n return train_dataset", "Build the model\nSecond, we create the model and other objects. Make sure to create all variables\nunder strategy.scope.", "# These variables created under the `strategy.scope` will be placed on parameter\n# servers in a round-robin fashion.\nwith strategy.scope():\n # Create the model. 
The input needs to be compatible with KPLs.\n # TODO\n model_input = keras.layers.Input(\n shape=(3,), dtype=tf.int64, name=\"model_input\")\n\n emb_layer = keras.layers.Embedding(\n input_dim=len(feature_lookup_layer.get_vocabulary()), output_dim=20)\n emb_output = tf.reduce_mean(emb_layer(model_input), axis=1)\n dense_output = keras.layers.Dense(units=1, activation=\"sigmoid\")(emb_output)\n model = keras.Model({\"features\": model_input}, dense_output)\n\n optimizer = keras.optimizers.RMSprop(learning_rate=0.1)\n accuracy = keras.metrics.Accuracy()", "Let's confirm that the use of FixedShardsPartitioner split all variables into two shards and each shard was assigned to different parameter servers:", "assert len(emb_layer.weights) == 2\n#assert emb_layer.weights[0].shape == (4, 20)\nassert emb_layer.weights[1].shape == (4, 20)\nassert emb_layer.weights[0].device == \"/job:ps/replica:0/task:0/device:CPU:0\"\nassert emb_layer.weights[1].device == \"/job:ps/replica:0/task:1/device:CPU:0\"", "Define the training step\nThird, create the training step wrapped into a tf.function:", "@tf.function\ndef step_fn(iterator):\n\n def replica_fn(batch_data, labels):\n with tf.GradientTape() as tape:\n pred = model(batch_data, training=True)\n per_example_loss = keras.losses.BinaryCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)(labels, pred)\n loss = tf.nn.compute_average_loss(per_example_loss)\n gradients = tape.gradient(loss, model.trainable_variables)\n\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n\n actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)\n accuracy.update_state(labels, actual_pred)\n return loss\n\n batch_data, labels = next(iterator)\n losses = strategy.run(replica_fn, args=(batch_data, labels))\n return strategy.reduce(tf.distribute.ReduceOp.SUM, losses, axis=None)", "In the above step function, calling strategy.run and strategy.reduce in the\nstep_fn can support multiple GPUs per worker. If the workers have GPUs\nallocated, strategy.run will distribute the datasets on multiple replicas.\nDispatch training steps to remote workers\n<a id=\"dispatch_training_steps_to_remote_workers\"> </a>\nAfter all the computations are defined by ParameterServerStrategy, we will use\nthe ClusterCoordinator class to create resources and distribute the training\nsteps to remote workers.\nLet’s first create a ClusterCoordinator object and pass in the strategy\nobject:", "coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy)", "Then we create a per-worker dataset and an iterator. In the per_worker_dataset_fn below, wrapping the dataset_fn into\nstrategy.distribute_datasets_from_function is recommended to allow efficient\nprefetching to GPUs seamlessly.", "@tf.function\ndef per_worker_dataset_fn():\n return strategy.distribute_datasets_from_function(dataset_fn)\n\n# TODO\nper_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn)\nper_worker_iterator = iter(per_worker_dataset)", "The final step is to distribute the computation to remote workers using schedule. The schedule method enqueues a tf.function and returns a future-like RemoteValue immediately. The queued functions will be dispatched to remote workers in background threads and the RemoteValue will be filled asynchronously. 
The join method can be used to wait until all scheduled functions are excuted.", "num_epoches = 4\nsteps_per_epoch = 5\nfor i in range(num_epoches):\n accuracy.reset_states()\n for _ in range(steps_per_epoch):\n coordinator.schedule(step_fn, args=(per_worker_iterator,))\n # Wait at epoch boundaries.\n coordinator.join()\n print (\"Finished epoch %d, accuracy is %f.\" % (i, accuracy.result().numpy()))", "Here is how you can fetch the result of a RemoteValue:", "# TODO\nloss = coordinator.schedule(step_fn, args=(per_worker_iterator,))\nprint (\"Final loss is %f\" % loss.fetch())", "Alternatively, you can launch all steps and do something while waiting for\ncompletion:\nPython\nfor _ in range(total_steps):\n coordinator.schedule(step_fn, args=(per_worker_iterator,))\nwhile not coordinator.done():\n time.sleep(10)\n # Do something like logging metrics or writing checkpoints.\nFor the complete training and serving workflow for this particular example,\nplease check out this\ntest.\nMore about dataset creation\nThe dataset in the above code is created using the create_per_worker_dataset\nAPI. It creates one dataset per worker and returns a container object. You can\ncall iter method on it to create a per-worker iterator. The per-worker\niterator contains one iterator per worker and the corresponding slice of a\nworker will be substituted in the input argument of the function passed to the\nschedule method before the function is executed on a particular worker.\nCurrently the schedule method assumes workers are equivalent and thus assumes\nthe datasets on different workers are the same except they may be shuffled\ndifferently if they contain a\ndataset.shuffle\noperation. Because of this, we also recommend the datasets to be repeated\nindefinitely and schedule a finite number of steps instead of relying on the\nOutOfRangeError from a dataset.\nAnother important note is that tf.data datasets don’t support implicit\nserialization and deserialization across task boundaries. So it is important to\ncreate the whole dataset inside the function passed to\ncreate_per_worker_dataset.\nEvaluation\nThere are more than one way to define and run an evaluation loop in distributed training. Each has its own pros and cons as described below. The inline evaluation method is recommended if you don't have a preference.\nInline evaluation\nIn this method the coordinator alternates between training and evaluation and thus we call it inline evaluation. There are several benefits of inline evaluation. For example, it can support large evaluation models and evaluation datasets that a single task cannot hold. 
For another example, the evaluation results can be used to make decisions for training next epoch.\nThere are two ways to implement inline evaluation:\n\nDirect evaluation - For small models and evaluation datasets the coordinator can run evaluation directly on the distributed model with the evaluation dataset on the coordinator:", "eval_dataset = tf.data.Dataset.from_tensor_slices(\n feature_and_label_gen(num_examples=16)).map(\n lambda x: (\n {\"features\": feature_preprocess_stage(x[\"features\"])},\n label_preprocess_stage(x[\"label\"])\n )).batch(8)\n\neval_accuracy = keras.metrics.Accuracy()\nfor batch_data, labels in eval_dataset:\n pred = model(batch_data, training=False)\n actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)\n eval_accuracy.update_state(labels, actual_pred)\n\nprint (\"Evaluation accuracy: %f\" % eval_accuracy.result())", "Distributed evaluation - For large models or datasets that are infeasible to run directly on the coordinator, the coordinator task can distribute evaluation tasks to the workers via the schedule/join methods:", "with strategy.scope():\n # Define the eval metric on parameter servers.\n eval_accuracy = keras.metrics.Accuracy()\n\[email protected]\ndef eval_step(iterator):\n def replica_fn(batch_data, labels):\n pred = model(batch_data, training=False)\n actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)\n eval_accuracy.update_state(labels, actual_pred)\n batch_data, labels = next(iterator)\n strategy.run(replica_fn, args=(batch_data, labels))\n\ndef eval_dataset_fn():\n return tf.data.Dataset.from_tensor_slices(\n feature_and_label_gen(num_examples=16)).map(\n lambda x: (\n {\"features\": feature_preprocess_stage(x[\"features\"])},\n label_preprocess_stage(x[\"label\"])\n )).shuffle(16).repeat().batch(8)\n\nper_worker_eval_dataset = coordinator.create_per_worker_dataset(eval_dataset_fn)\nper_worker_eval_iterator = iter(per_worker_eval_dataset)\n\neval_steps_per_epoch = 2\nfor _ in range(eval_steps_per_epoch):\n coordinator.schedule(eval_step, args=(per_worker_eval_iterator,))\ncoordinator.join()\nprint (\"Evaluation accuracy: %f\" % eval_accuracy.result())", "Note: currently the schedule/join methods don’t support visitation guarantee or exactly-once semantics. In other words, there is no guarantee that all evaluation examples in a dataset will be evaluated exactly once; some may not be visited and some may be evaluated multiple times. Visitation guarantee on evaluation dataset is being worked on.\nSide-car evaluation\nAnother method is called side-car evaluation which is to create a dedicated evaluator task that repeatedly reads checkpoints and runs evaluation on a latest checkpoint. It allows your training program to finish early if you don't need to change your training loop based on evaluation results. However, it requires an additional evaluator task and periodic checkpointing to trigger evaluation. 
Following is a possible side-car evaluation loop:\n```Python\ncheckpoint_dir = ...\neval_model = ...\neval_data = ...\ncheckpoint = tf.train.Checkpoint(model=eval_model)\nfor latest_checkpoint in tf.train.checkpoints_iterator(\n checkpoint_dir):\n try:\n checkpoint.restore(latest_checkpoint).expect_partial()\n except (tf.errors.OpError,) as e:\n # checkpoint may be deleted by training when it is about to read it.\n continue\n# Optionally add callbacks to write summaries.\n eval_model.evaluate(eval_data)\n# Evaluation finishes when it has evaluated the last epoch.\n if latest_checkpoint.endswith('-{}'.format(train_epoches)):\n break\n```\nClusters in Real-world\n<a id=\"real_clusters\"></a>\nNote: this section is not necessary for running the tutorial code in this page.\nIn a real production environment, you will run all tasks in different processes\non different machines. The simplest way to configure cluster information on each\ntask is to set \"TF_CONFIG\" environment variables and use a\ntf.distribute.cluster_resolver.TFConfigClusterResolver to parse \"TF_CONFIG\".\nFor a general description about \"TF_CONFIG\" environment variables, please see\nthe distributed training guide.\nIf you start your training tasks using Kubernetes or other configuration templates, it is very likely that these templates have already set “TF_CONFIG” for you.\nSet “TF_CONFIG” environment variable\nSuppose you have 3 workers and 2 parameter servers, the “TF_CONFIG” of worker 1\ncan be:\nPython\nos.environ[\"TF_CONFIG\"] = json.dumps({\n \"cluster\": {\n \"worker\": [\"host1:port\", \"host2:port\", \"host3:port\"],\n \"ps\": [\"host4:port\", \"host5:port\"],\n \"chief\": [\"host6:port\"]\n },\n \"task\": {\"type\": \"worker\", \"index\": 1}\n})\nThe “TF_CONFIG” of the evaluator can be:\nPython\nos.environ[\"TF_CONFIG\"] = json.dumps({\n \"cluster\": {\n \"evaluator\": [\"host7:port\"]\n },\n \"task\": {\"type\": \"evaluator\", \"index\": 0}\n})\nThe “cluster” part in the above “TF_CONFIG” string for the evaluator is\noptional.\nIf you use the same binary for all tasks\nIf you prefer to run all these tasks using a single binary, you will need to let\nyour program branch into different roles at the very beginning:\nPython\ncluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()\nif cluster_resolver.task_type in (\"worker\", \"ps\"):\n # start a TensorFlow server and wait.\nelif cluster_resolver.task_type == \"evaluator\":\n # run side-car evaluation\nelse:\n # run the coordinator.\nThe following code starts a TensorFlow server and waits:\n```Python\nSet the environment variable to allow reporting worker and ps failure to the\ncoordinator. This is a workaround and won't be necessary in the future.\nos.environ[\"GRPC_FAIL_FAST\"] = \"use_caller\"\nserver = tf.distribute.Server(\n cluster_resolver.cluster_spec(),\n job_name=cluster_resolver.task_type,\n task_index=cluster_resolver.task_id,\n protocol=cluster_resolver.rpc_layer or \"grpc\",\n start=True)\nserver.join()\n```\nHandling Task Failure\nWorker failure\nClusterCoordinator or Model.fit provides built-in fault tolerance for worker\nfailure. Upon worker recovery, the previously provided dataset function (either\nto create_per_worker_dataset for CTL, or DatasetCreator for Model.fit)\nwill be invoked on the workers to re-create the datasets.\nParameter server or coordinator failure\nHowever, when the coordinator sees a parameter server error, it will raise an UnavailableError or AbortedError immediately. 
You can restart the coordinator in this case. The coordinator itself can also become unavailable. Therefore, certain tooling is recommended in order to not lose the training progress:\n\n\nFor Model.fit, you should use a BackupAndRestore callback, which handles\n the progress saving and restoration automatically. See\n Callbacks and training section above for an\n example.\n\n\nFor CTLs, you should checkpoint the model variables periodically and load\n model variables from a checkpoint, if any, before training starts. The\n training progress can be inferred approximately from optimizer.iterations\n if an optimizer is checkpointed:\n\n\n```Python\ncheckpoint_manager = tf.train.CheckpointManager(\n tf.train.Checkpoint(model=model, optimizer=optimizer),\n checkpoint_dir,\n max_to_keep=3)\nif checkpoint_manager.latest_checkpoint:\n checkpoint = checkpoint_manager.checkpoint\n checkpoint.restore(\n checkpoint_manager.latest_checkpoint).assert_existing_objects_matched()\nglobal_steps = int(optimizer.iterations.numpy())\nstarting_epoch = global_steps // steps_per_epoch\nfor _ in range(starting_epoch, num_epoches):\n for _ in range(steps_per_epoch):\n coordinator.schedule(step_fn, args=(per_worker_iterator,))\n coordinator.join()\n checkpoint_manager.save()\n```\nFetching a RemoteValue\nFetching a RemoteValue is guaranteed to succeed if a function is executed\nsuccessfully. This is because currently the return value is immediately copied\nto the coordinator after a function is executed. If there is any worker failure\nduring the copy, the function will be retried on another available worker.\nTherefore, if you want to optimize for performance, you can schedule functions\nwithout a return value.\nError Reporting\nOnce the coordinator sees an error such as UnavailableError from parameter\nservers or other application errors such as an InvalidArgument from\ntf.debugging.check_numerics, it will cancel all pending and queued functions\nbefore raising the error. Fetching their corresponding RemoteValues will raise\na CancelledError.\nAfter an error is raised, the coordinator will not raise the same error or any\nerror from cancelled functions.\nPerformance Improvement\nThere are several possible reasons if you see performance issues when you train\nwith ParameterServerStrategy and ClusterResolver.\nOne common reason is parameter servers have unbalanced load and some\nheavily-loaded parameter servers have reached capacity. There can also be\nmultiple root causes. Some simple methods to mitigate this issue are to\n\nshard your large model variables via specifying a variable_partitioner\n when constructing a ParameterServerStrategy.\navoid creating a hotspot variable that is required by all parameter servers\n in a single step if possible. For example, use a constant learning rate\n or subclass tf.keras.optimizers.schedules.LearningRateSchedule in\n optimizers since the default behavior is that the learning rate will become\n a variable placed on a particular parameter server and requested by all\n other parameter servers in each step.\nshuffle your large vocabularies before passing them to Keras preprocessing\n layers.\n\nAnother possible reason for performance issues is the coordinator. Our first\nimplementation of schedule/join is Python-based and thus may have threading\noverhead. 
Also the latency between the coordinator and the workers can be large.\nIf this is the case,\n\n\nFor Model.fit, you can set the steps_per_execution argument provided at\n Model.compile to a value larger than 1.\n\n\nFor CTLs, you can pack multiple steps into a single tf.function:\n\n\n```\nsteps_per_invocation = 10\n@tf.function\ndef step_fn(iterator):\n  for _ in range(steps_per_invocation):\n    features, labels = next(iterator)\n    def replica_fn(features, labels):\n      ...\n    strategy.run(replica_fn, args=(features, labels))\n```\nAs we continue to optimize the library, we hope most users don’t have to\nmanually pack steps in the future.\nIn addition, a small trick for performance improvement is to schedule functions\nwithout a return value as explained in the handling task failure section above.\nKnown Limitations\nMost of the known limitations are covered in the sections above. This section\nprovides a summary.\nParameterServerStrategy general\n\nos.environ[\"GRPC_FAIL_FAST\"] = \"use_caller\" is needed on every task, including the coordinator, to make fault tolerance work properly. \nSynchronous parameter server training is not supported.\nIt is usually necessary to pack multiple steps into a single function to achieve optimal performance.\nIt is not supported to load a saved_model containing sharded variables via tf.saved_model.load. Note loading such a saved_model using TensorFlow Serving is expected to work.\nIt is not supported to load a checkpoint containing sharded optimizer slot variables into a different number of shards.\nIt is not supported to recover from parameter server failure without restarting the coordinator task.\n\nModel.fit specifics\n\nsteps_per_epoch argument is required in Model.fit. You can select a\n value that provides appropriate intervals in an epoch.\nParameterServerStrategy does not have support for custom callbacks that\n have batch-level calls for performance reasons. You should convert those\n calls into epoch-level calls with suitably picked steps_per_epoch, so that\n they are called every steps_per_epoch number of steps. Built-in callbacks\n are not affected: their batch-level calls have been modified to be\n performant. Supporting batch-level calls for ParameterServerStrategy is\n being planned.\nFor the same reason, unlike other strategies, progress bar and metrics are\n logged only at epoch boundaries.\nInput for Model.fit only takes the type DatasetCreator.\nrun_eagerly is not supported.\nEvaluation in Model.fit is not yet supported. This is one of the\n priorities.\nModel.evaluate and Model.predict are not yet supported.\n\nCustom Training Loop specifics\n\nClusterCoordinator.schedule doesn't support visitation guarantees for a dataset.\nWhen ClusterCoordinator.create_per_worker_dataset is used, the whole dataset must be created inside the function passed to it.\ntf.data.Options is ignored in a dataset created by ClusterCoordinator.create_per_worker_dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
c22n/ion-channel-ABC
docs/examples/human-atrial/nygren_ical_unified.ipynb
gpl-3.0
[ "ABC calibration of $I_\\text{CaL}$ in Nygren model to unified dataset.", "import os, tempfile\nimport logging\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nfrom ionchannelABC import theoretical_population_size\nfrom ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor\nfrom ionchannelABC.experiment import setup\nfrom ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom\nimport myokit\n\nfrom pyabc import Distribution, RV, History, ABCSMC\nfrom pyabc.epsilon import MedianEpsilon\nfrom pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler\nfrom pyabc.populationstrategy import ConstantPopulationSize", "Initial set-up\nLoad experiments used for unified dataset calibration:\n - Steady-state activation [Li1997]\n - Activation time constant [Li1997]\n - Steady-state inactivation [Li1997]\n - Inactivation time constant (fast+slow) [Li1997]\n - Recovery time constant (fast+slow) [Li1997]", "from experiments.ical_li import (li_act_and_tau, # combines steady-state activation and time constant\n li_inact_1000,\n li_inact_kin_80,\n li_recov)\n\nmodelfile = 'models/nygren_ical.mmt'", "Plot steady-state and time constant functions of original model", "from ionchannelABC.visualization import plot_variables\n\nsns.set_context('talk')\n\nV = np.arange(-140, 50, 0.01)\n\nnyg_par_map = {'di': 'ical.d_inf',\n 'f1i': 'ical.f_inf',\n 'f2i': 'ical.f_inf',\n 'dt': 'ical.tau_d',\n 'f1t': 'ical.tau_f_1',\n 'f2t': 'ical.tau_f_2'}\n\nf, ax = plot_variables(V, nyg_par_map, 'models/nygren_ical.mmt', figshape=(3,2))", "Activation gate ($d$) calibration\nCombine model and experiments to produce:\n - observations dataframe\n - model function to run experiments and return traces\n - summary statistics function to accept traces", "observations, model, summary_statistics = setup(modelfile,\n li_act_and_tau)\n\nassert len(observations)==len(summary_statistics(model({})))\n\ng = plot_sim_results(modelfile,\n li_act_and_tau)", "Set up prior ranges for each parameter in the model.\nSee the modelfile for further information on specific parameters. 
Prepending `log_' has the effect of setting the parameter in log space.", "limits = {'ical.p1': (-100, 100),\n 'ical.p2': (0, 50),\n 'log_ical.p3': (-7, 3),\n 'ical.p4': (-100, 100),\n 'ical.p5': (0, 50),\n 'log_ical.p6': (-7, 3)}\nprior = Distribution(**{key: RV(\"uniform\", a, b - a)\n for key, (a,b) in limits.items()})\n\n# Test this works correctly with set-up functions\nassert len(observations) == len(summary_statistics(model(prior.rvs())))", "Run ABC calibration", "db_path = (\"sqlite:///\" + os.path.join(tempfile.gettempdir(), \"nygren_ical_dgate_unified.db\"))\n\nlogging.basicConfig()\nabc_logger = logging.getLogger('ABC')\nabc_logger.setLevel(logging.DEBUG)\neps_logger = logging.getLogger('Epsilon')\neps_logger.setLevel(logging.DEBUG)", "Test theoretical number of particles for approximately 2 particles per dimension in the initial sampling of the parameter hyperspace.", "pop_size = theoretical_population_size(2, len(limits))\nprint(\"Theoretical minimum population size is {} particles\".format(pop_size))", "Initialise ABCSMC (see pyABC documentation for further details).\nIonChannelDistance calculates the weighting applied to each datapoint based on the experimental variance.", "abc = ABCSMC(models=model,\n parameter_priors=prior,\n distance_function=IonChannelDistance(\n exp_id=list(observations.exp_id),\n variance=list(observations.variance),\n delta=0.05),\n population_size=ConstantPopulationSize(2000),\n summary_statistics=summary_statistics,\n transitions=EfficientMultivariateNormalTransition(),\n eps=MedianEpsilon(initial_epsilon=100),\n sampler=MulticoreEvalParallelSampler(n_procs=8),\n acceptor=IonChannelAcceptor())\n\nobs = observations.to_dict()['y']\nobs = {str(k): v for k, v in obs.items()}\n\nabc_id = abc.new(db_path, obs)", "Run calibration with stopping criterion of particle 1\\% acceptance rate.", "history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)", "Analysis of results", "history = History(db_path)\n\nhistory.all_runs()\n\ndf, w = history.get_distribution(m=0)\n\ndf.describe()\n\nsns.set_context('poster')\n\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['legend.fontsize'] = 14\n\ng = plot_sim_results(modelfile,\n li_act_and_tau,\n df=df, w=w)\n\nplt.tight_layout()\n\nimport pandas as pd\nN = 100\n\nnyg_par_samples = df.sample(n=N, weights=w, replace=True)\nnyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))])\nnyg_par_samples = nyg_par_samples.to_dict(orient='records')\n\nsns.set_context('talk')\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['legend.fontsize'] = 14\n\nV = np.arange(-140, 50, 0.01)\n\nnyg_par_map = {'di': 'ical.d_inf',\n 'f1i': 'ical.f_inf',\n 'f2i': 'ical.f_inf',\n 'dt': 'ical.tau_d',\n 'f1t': 'ical.tau_f_1',\n 'f2t': 'ical.tau_f_2'}\n\nf, ax = plot_variables(V, nyg_par_map, \n 'models/nygren_ical.mmt', \n [nyg_par_samples],\n figshape=(3,2))\n\nfrom ionchannelABC.visualization import plot_kde_matrix_custom\nimport myokit\nimport numpy as np\n\nm,_,_ = myokit.load(modelfile)\n\noriginals = {}\nfor name in limits.keys():\n if name.startswith(\"log\"):\n name_ = name[4:]\n else:\n name_ = name\n val = m.value(name_)\n if name.startswith(\"log\"):\n val_ = np.log10(val)\n else:\n val_ = val\n originals[name] = val_\n\nsns.set_context('paper')\ng = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rmenegaux/bqplot
examples/Mark Interactions.ipynb
apache-2.0
[ "from __future__ import print_function\nfrom bqplot import *\nfrom IPython.display import display\nimport numpy as np\nimport pandas as pd", "Scatter Chart\nScatter Chart Selections\nClick a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the ctrl (or command key on Mac) and clicking another point. Clicking the background will reset the selection.", "x_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(20)\ny_data = np.random.randn(20)\n\nscatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'],\n interactions={'click': 'select'},\n selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},\n unselected_style={'opacity': 0.5})\n\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[scatter_chart], axes=[ax_x, ax_y])\ndisplay(fig)\n\nscatter_chart.selected", "Alternately, the selected attribute can be directly set on the Python side (try running the cell below):", "scatter_chart.selected = [1, 2, 3]", "Scatter Chart Interactions and Tooltips", "from ipywidgets import *\n\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(20)\ny_data = np.random.randn(20)\n\ndd = Dropdown(options=['First', 'Second', 'Third', 'Fourth'])\nscatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'],\n names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True,\n labels=['Blue'])\nins = Button(icon='fa-legal')\nscatter_chart.tooltip = ins\n\nscatter_chart2 = Scatter(x=x_data, y=np.random.randn(20), \n scales= {'x': x_sc, 'y': y_sc}, default_colors=['orangered'],\n tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False, \n display_legend=True, labels=['Red'])\n\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[scatter_chart, scatter_chart2], axes=[ax_x, ax_y])\ndisplay(fig)\n\ndef print_event(self, target):\n print(target)\n\n# Adding call back to scatter events\n# print custom mssg on hover and background click of Blue Scatter\nscatter_chart.on_hover(print_event)\nscatter_chart.on_background_click(print_event)\n\n# print custom mssg on click of an element or legend of Red Scatter\nscatter_chart2.on_element_click(print_event)\nscatter_chart2.on_legend_click(print_event)\n\n# Adding figure as tooltip\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(10)\ny_data = np.random.randn(10)\n\nlc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc})\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\ntooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], min_height=400, min_width=400)\n\nscatter_chart.tooltip = tooltip_fig\n\n# Changing interaction from hover to click for tooltip\nscatter_chart.interactions = {'click': 'tooltip'}", "Line Chart", "# Adding default tooltip to Line Chart\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(100)\ny_data = np.random.randn(3, 100)\n\ndef_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num'])\nline_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, \n tooltip=def_tt, display_legend=True, labels=[\"line 1\", \"line 2\", \"line 3\"] )\n\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[line_chart], 
axes=[ax_x, ax_y])\ndisplay(fig)\n\n# Adding call back to print event when legend or the line is clicked\nline_chart.on_legend_click(print_event)\nline_chart.on_element_click(print_event)", "Bar Chart", "# Adding interaction to select bar on click for Bar Chart\nx_sc = OrdinalScale()\ny_sc = LinearScale()\n\nx_data = np.arange(10)\ny_data = np.random.randn(2, 10)\n\nbar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc},\n interactions={'click': 'select'},\n selected_style={'stroke': 'orange', 'fill': 'red'},\n labels=['Level 1', 'Level 2'],\n display_legend=True)\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[bar_chart], axes=[ax_x, ax_y])\ndisplay(fig)\n\n# Adding a tooltip on hover in addition to select on click\ndef_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f'])\nbar_chart.tooltip=def_tt\nbar_chart.interactions = {\n 'legend_hover': 'highlight_axes',\n 'hover': 'tooltip', \n 'click': 'select',\n}\n\n# Changing tooltip to be on click\nbar_chart.interactions = {'click': 'tooltip'}\n\n# Call back on legend being clicked\nbar_chart.type='grouped'\nbar_chart.on_legend_click(print_event)", "Histogram", "# Adding tooltip for Histogram\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nsample_data = np.random.randn(100)\n\ndef_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint'])\nhist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc},\n tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True)\nax_x = Axis(scale=x_sc, tick_format='0.2f')\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[hist], axes=[ax_x, ax_y])\ndisplay(fig)\n\n# Changing tooltip to be displayed on click\nhist.interactions = {'click': 'tooltip'}\n\n# Changing tooltip to be on click of legend\nhist.interactions = {'legend_click': 'tooltip'}", "Pie Chart", "pie_data = np.abs(np.random.randn(10))\n\nsc = ColorScale(scheme='Reds')\ntooltip_widget = Tooltip(fields=['size', 'index', 'color'], formats=['0.2f', '', '0.2f'])\npie = Pie(sizes=pie_data, scales={'color': sc}, color=np.random.randn(10), \n tooltip=tooltip_widget, interactions = {'click': 'tooltip'}, selected_style={'fill': 'red'})\n\npie.selected_style = {\"opacity\": \"1\", \"stroke\": \"white\", \"stroke-width\": \"2\"}\npie.unselected_style = {\"opacity\": \"0.2\"}\n\nFigure(marks=[pie])\n\n# Changing interaction to select on click and tooltip on hover\npie.interactions = {'click': 'select', 'hover': 'tooltip'}" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eds-uga/cbio4835-sp17
lectures/Lecture24.ipynb
mit
[ "Lecture 24: Visualization with matplotlib\nCBIO (CSCI) 4835/6835: Introduction to Computational Biology\nOverview and Objectives\nData visualization is one of, if not the, most important method of communicating data science results. It's analogous to writing: if you can't visualize your results, you'll be hard-pressed to convince anyone else of them. By the end of this lecture, you should be able to\n\nDefine and describe some types of plots and what kinds of data they're used to visualize\nUse the basic functionality of matplotlib to generate figures\nCustomize the look and feel of figures to suit particular formats\n\nPart 1: Introduction to matplotlib\nThe Matplotlib package as we know it was originally conceived and designed by John Hunter in 2002, originally built as an IPython plugin to enable Matlab-style plotting.\nIPython's creator, Fernando Perez, was at the time finishing his PhD and didn't have time to fully vet John's patch. So John took his fledgling plotting library and ran with it, releasing Matplotlib version 0.1 in 2003 and setting the stage for what would be the most flexible and cross-platform Python plotting library to date.\nMatplotlib can run on a wide variety of operating systems and make use of a wide variety of graphical backends. Hence, despite some developers complaining that it can feel bloated and clunky, it easily maintains the largest active user base and team of developers, ensuring it will remain relevant in some sense for quite some time yet.\nYou've seen snippets of matplotlib in action in several assignments and lectures, but we haven't really formalized it yet. Like NumPy, matplotlib follows some use conventions.", "import matplotlib as mpl\nimport matplotlib.pyplot as plt", "By far, we'll use the plt object from the second import the most; that contains the main plotting library.\nPlotting in a script\nLet's say you're coding a standalone Python application, contained in a file myapp.py. You'll need to explicitly tell matplotlib to generate a figure and display it, via the show() command.\n<img src=\"https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/script.png\" width=\"50%\" />\nThen you can run the code from the command line:\n<pre>$ python myapp.py</pre>\n\nBeware: plt.show() does a lot of things under-the-hood, including interacting with your operating system's graphical backend.\nMatplotlib hides all these details from you, but as a consequence you should be careful to only use plt.show() once per Python session.\nMultiple uses of show() can lead to unpredictable behavior that depends entirely on what backend is in use, so try your best to avoid it.\nPlotting in a shell (e.g., IPython)\nRemember back to our first lecture, when you learned how to fire up a Python prompt on the terminal? You can plot in that shell just as you can in a script!\n<img src=\"https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/shell.png\" width=\"75%\" />\nIn addition, you can enter \"matplotlib mode\" by using the %matplotlib magic command in the IPython shell. You'll notice in the above screenshot that the prompt is hovering below line [6], but no line [7] has emerged. That's because the shell is currently not in matplotlib mode, so it will wait indefinitely until you close the figure on the right.\nBy contrast, in matplotlib mode, you'll immediately get the next line of the prompt while the figure is still open. You can then edit the properties of the figure dynamically to update the plot. 
To force an update, you can use the command plt.draw().\nPlotting in a notebook (e.g., Jupyter)\nThis is probably the mode you're most familiar with: plotting in a notebook, such as the one you're viewing right now.\nSince matplotlib's default is to render its graphics in an external window, for plotting in a notebook you will have to specify otherwise, as it's impossible to do this in a browser. You'll once again make use of the %matplotlib magic command, this time with the inline argument added to tell matplotlib to embed the figures into the notebook itself.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.random.random(10)\ny = np.random.random(10)\nplt.plot(x, y)", "Note that you do NOT need to use plt.show()! When in \"inline\" mode, matplotlib will automatically render whatever the \"active\" figure is as soon as you issue some kind of plotting command.\nSaving plots to files\nSometimes you'll want to save the plots you're making to files for use later, perhaps as part of a presentation to demonstrate to your bosses what you've accomplished.\nIn this case, you once again won't use the plt.show() command, but instead substitute in the plt.savefig() command.\n<img src=\"https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/savefig.png\" width=\"75%\" />\nAn image file will be created (in this case, fig.png) on the filesystem with the plot.\nMatplotlib is designed to operate nicely with lots of different output formats; PNG was just the example used here.\nThe output format is inferred from the filename used in savefig(). You can see all the other formats matplotlib supports with the command", "fig = plt.figure()\nfig.canvas.get_supported_filetypes()", "Part 2: Basics of plotting\nOk, let's dive in with some plotting examples and how-tos!\nThe most basic kind of plot you can make is the line plot. This kind of plot uses (x, y) coordinate pairs and implicitly draws lines between them. Here's an example:", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.array([4, 5, 6])\ny = np.array([9, 4, 7])\nplt.plot(x, y)", "Matplotlib sees we've created points at (4, 9), (5, 4), and (6, 7), and it connects each of these in turn with a line, producing the above plot. It also automatically scales the x and y axes of the plot so all the data fit visibly inside.\nAn important side note: matplotlib is stateful, which means it has some memory of what commands you've issued. So if you want to, say, include multiple different plots on the same figure, all you need to do is issue additional plotting commands.", "x1 = np.array([4, 5, 6])\ny1 = np.array([9, 4, 7])\nplt.plot(x1, y1)\nx2 = np.array([1, 2, 4])\ny2 = np.array([4, 6, 9])\nplt.plot(x2, y2)", "They'll even be plotted in different colors. How nice!\nLine plots are nice, but let's say I really want a scatter plot of my data; there's no real concept of a line, but instead I have disparate data points in 2D space that I want to visualize. There's a function for that!", "x = np.array([4, 5, 6])\ny = np.array([9, 4, 7])\nplt.scatter(x, y)", "We use the plt.scatter() function, which operates pretty much the same way as plt.plot(), except it puts dots in for each data point without drawing lines between them.\nAnother very useful plot, especially in scientific circles, is the errorbar plot. 
This is a lot like the line plot, except each data point comes with an errorbar to quantify uncertainty or variance present in each datum.", "# This is a great function that gives me 50 evenly-spaced values from 0 to 10.\nx = np.linspace(0, 10, 50)\n\ndy = 0.8 # The error rate.\ny = np.sin(x) + dy * np.random.random(50) # Adds a little bit of noise.\n\nplt.errorbar(x, y, yerr = dy)", "You use the yerr argument of the function plt.errorbar() in order to specify what your error rate in the y-direction is. There's also an xerr optional argument, if your error is actually in the x-direction.\nWhat about the histograms we built from the color channels of the images in last week's lectures? We can use matplotlib's hist() function for this.", "x = np.random.normal(size = 100)\n_ = plt.hist(x, bins = 20)", "plt.hist() has only 1 required argument: a list of numbers.\nHowever, the optional bins argument is very useful, as it dictates how many bins you want to use to divide up the data in the required argument. Too many bins and every bar in the histogram will have a count of 1; too few bins and all your data will end up in just a single bar!\nHere's too few bins:", "_ = plt.hist(x, bins = 2)", "And too many:", "_ = plt.hist(x, bins = 200)", "Picking the number of bins for histograms is an art unto itself that usually requires a lot of trial-and-error, hence the importance of having a good visualization setup!\nAnother point on histograms, specifically its lone required argument: matplotlib expects a 1D array.\nThis is important if you're trying to visualize, say, the pixel intensities of an image channel. Images are always either 2D (grayscale) or 3D (color, RGB).\nAs such, if you feed an image object directly into the hist method, matplotlib will complain:", "import matplotlib.image as mpimg\nimg = mpimg.imread(\"Lecture22/image1.png\") # Our good friend!\nchannel = img[:, :, 0] # The \"R\" channel\n\n_ = plt.hist(channel)", "Offhand, I don't know what this is, but it definitely is not the intensity histogram we were hoping for.\nHere's the magical way around it: all NumPy arrays (which images objects are!) have a flatten() method.\nThis function is dead simple: no matter how many dimensions the NumPy array has, whether it's a grayscale image (2D), a color image (3D), or million-dimensional tensor, it completely flattens the whole thing out into a long 1D list of numbers.", "print(channel.shape) # Before\nflat = channel.flatten()\nprint(flat.shape) # After", "Then just feed the flattened array into the hist method:", "_ = plt.hist(flat)", "The last type of plot we'll discuss here isn't really a \"plot\" in the sense as the previous ones have been, but it is no less important: showing images!", "img = mpimg.imread(\"Lecture22/image1.png\")\nplt.imshow(img)", "The plt.imshow() method takes as input a matrix and renders it as an image. If the matrix is 3D, it considers this to be an image in RGB format (width, height, and 3 color dimensions) and uses that information to determine colors. If the matrix is only 2D, it will consider it to be grayscale.\nIt doesn't even have be a \"true\" image. Often you want to look at a matrix that you're building, just to get a \"feel\" for the structure of it. imshow() is great for this as well.", "matrix = np.random.random((100, 100))\nplt.imshow(matrix, cmap = \"gray\")", "We built a random matrix matrix, and as you can see it looks exactly like that: in fact, a lot like TV static (coincidence?...). 
The cmap = \"gray\" optional argument specifies the \"colormap\", of which matplotlib has quite a few, but this explicitly enforces the \"gray\" colormap, otherwise matplotlib will attempt to predict a color scheme.\nPart 3: Customizing the look and feel\nYou may be thinking at this point: this is all cool, but my inner graphic designer cringed at how a few of these plots looked. Is there any way to make them look, well, \"nicer\"?\nThere are, in fact, a couple things we can do to spiff things up a little, starting with how we can annotate the plots in various ways.\nAxis labels and plot titles\nYou can add text along the axes and the top of the plot to give a little extra information about what, exactly, your plot is visualizing. For this you use the plt.xlabel(), plt.ylabel(), and plt.title() functions.", "x = np.linspace(0, 10, 50) # 50 evenly-spaced numbers from 0 to 10\ny = np.sin(x) # Compute the sine of each of these numbers.\nplt.plot(x, y)\nplt.xlabel(\"x\") # This goes on the x-axis.\nplt.ylabel(\"sin(x)\") # This goes on the y-axis.\nplt.title(\"Plot of sin(x)\") # This goes at the top, as the plot title.", "Legends\nGoing back to the idea of plotting multiple datasets on a single figure, it'd be nice to label them in addition to using colors to distinguish them. Luckily, we have legends we can use, but it takes a coordinated effort to use them effectively. Pay close attention:", "x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10\ny1 = np.sin(x) # Compute the sine of each of these numbers.\ny2 = np.cos(x) # Compute the cosine of each number.\n\nplt.plot(x, y1, label = \"sin(x)\")\nplt.plot(x, y2, label = \"cos(x)\")\nplt.legend(loc = 0)", "First, you'll notice that the plt.plot() call changed a little with the inclusion of an optional argument: label. This string is the label that will show up in the legend.\nSecond, you'll also see a call to plt.legend(). This instructs matplotlib to show the legend on the plot. The loc argument specifies the location; \"0\" tells matplotlib to \"put the legend in the best possible spot, respecting where the graphics tend to be.\" This is usually the best option, but if you want to override this behavior and specify a particular location, the numbers 1-9 refer to different specific areas of the plot.\nAxis limits\nThis will really come in handy when you need to make multiple plots that span different datasets, but which you want to compare directly. We've seen how matplotlib scales the axes so the data you're plotting are visible, but if you're plotting the data in entirely separate figures, matplotlib may scale the figures differently. If you need set explicit axis limits:", "x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10\ny = np.sin(x) # Compute the sine of each of these numbers.\n\nplt.plot(x, y)\nplt.xlim([-1, 11]) # Range from -1 to 11 on the x-axis.\nplt.ylim([-3, 3]) # Range from -3 to 3 on the y-axis.", "This can potentially help center your visualizations, too.\nColors, markers, and colorbars\nMatplotlib has a default progression of colors it uses in plots--you may have noticed the first data you plot is always blue, followed by green. 
You're welcome to stick with this, or you can manually override the colors scheme in any plot using the optional argument c (for color).", "x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10\ny = np.sin(x) # Compute the sine of each of these numbers.\n\nplt.plot(x, y, c = \"cyan\")", "If you're making scatter plots, it can be especially useful to specify the type of marker in addition to the color you want to use. This can really help differentiate multiple scatter plots that are combined on one figure.", "X1 = np.random.normal(loc = [-1, -1], size = (10, 2))\nX2 = np.random.normal(loc = [1, 1], size = (10, 2))\nplt.scatter(X1[:, 0], X1[:, 1], c = \"black\", marker = \"v\")\nplt.scatter(X2[:, 0], X2[:, 1], c = \"yellow\", marker = \"o\")", "Finally, when you're rendering images, and especially matrices, it can help to have a colorbarthat shows the scale of colors you have in your image plot.", "matrix = np.random.normal(size = (100, 100))\nplt.imshow(matrix, cmap = \"gray\")\nplt.colorbar()", "The matrix is clearly still random, but the colorbar tells us the values in the picture range from around -3.5 or so to +4, giving us an idea of what's in our data.\nseaborn\nThe truth is, there is endless freedom in matplotlib to customize the look and feel; you could spend a career digging through the documentation to master the ability to change edge colors, line thickness, and marker transparencies. At least in my opinion, there's a better way.", "import seaborn as sns # THIS IS THE KEY TO EVERYTHING\n\nx = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10\ny = np.sin(x) # Compute the sine of each of these numbers.\n\nplt.plot(x, y)", "The seaborn package is a plotting library in its own right, but first and foremost it effectively serves as a \"light reskin\" of matplotlib, changing the defaults (sometimes drastically) to be much more aesthetically and practically agreeable.\nThere will certainly be cases where seaborn doesn't solve your plotting issue, but for the most part, I've found import seaborn to assuage a lot of my complaints.\nMiscellany\nMatplotlib has a ton of other functionality we've not touched on, but in case you wanted to look into:\n\n\nAnimations: you can create animated plots in the form of gifs or movie files to show dynamic behavior.\n\n\n3D plotting: Matplotlib has an Axes3D object you can use to create 3D lines, scatter plots, and surfaces.\n\n\nBox plots, violin plots, heatmaps, polar plots, and countless others. Matplotlib has a gallery set up with examples of how to do all of these (see Additional Resources)\n\n\nThe one thing it doesn't quite do is allow for interactive plots--as in, figures you can embed within HTML that use JavaScript in order to give you the ability to do things like zoom in at certain places, or click on specific points. This can be used with other plotting packages like bokeh (pronounced \"bouquet\") or mpld3.\nCourse Administrivia\n\n\nGuest lecturer on Thursday!\n\n\nFinal project proposal feedback will start coming this week.\n\n\nAssignment 4?\n\n\nAdditional Resources\n\nMatplotlib gallery https://matplotlib.org/gallery.html \nMatplotlib 3D plotting tutorials https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html\nmpld3 examples https://mpld3.github.io/examples/index.html\nbokeh gallery http://bokeh.pydata.org/en/latest/docs/gallery.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.18/_downloads/11f39f61bd7f4cfd5791b0d10da462f2/plot_eeg_erp.ipynb
bsd-3-clause
[ "%matplotlib inline", "EEG processing and Event Related Potentials (ERPs)\nFor a generic introduction to the computation of ERP and ERF\nsee tut_epoching_and_averaging.\n :depth: 1", "import mne\nfrom mne.datasets import sample", "Setup for reading the raw data", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n# these data already have an EEG average reference\nraw = mne.io.read_raw_fif(raw_fname, preload=True)", "Let's restrict the data to the EEG channels", "raw.pick_types(meg=False, eeg=True, eog=True)", "By looking at the measurement info you will see that we have now\n59 EEG channels and 1 EOG channel", "print(raw.info)", "In practice it's quite common to have some EEG channels that are actually\nEOG channels. To change a channel type you can use the\n:func:mne.io.Raw.set_channel_types method. For example\nto treat an EOG channel as EEG you can change its type using", "raw.set_channel_types(mapping={'EOG 061': 'eeg'})\nprint(raw.info)", "And to change the nameo of the EOG channel", "raw.rename_channels(mapping={'EOG 061': 'EOG'})", "Let's reset the EOG channel back to EOG type.", "raw.set_channel_types(mapping={'EOG': 'eog'})", "The EEG channels in the sample dataset already have locations.\nThese locations are available in the 'loc' of each channel description.\nFor the first channel we get", "print(raw.info['chs'][0]['loc'])", "And it's actually possible to plot the channel locations using\n:func:mne.io.Raw.plot_sensors.", "raw.plot_sensors()\nraw.plot_sensors('3d') # in 3D", "Setting EEG Montage (using standard montages)\nIn the case where your data don't have locations you can set them\nusing a :class:mne.channels.Montage. MNE comes with a set of default\nmontages. To read one of them do:", "montage = mne.channels.read_montage('standard_1020')\nprint(montage)", "To apply a montage on your data use the set_montage method.\nfunction. 
Here don't actually call this function as our demo dataset\nalready contains good EEG channel locations.\nNext we'll explore the definition of the reference.\nSetting EEG reference\nLet's first remove the reference from our Raw object.\nThis explicitly prevents MNE from adding a default EEG average reference\nrequired for source localization.", "raw_no_ref, _ = mne.set_eeg_reference(raw, [])", "We next define Epochs and compute an ERP for the left auditory condition.", "reject = dict(eeg=180e-6, eog=150e-6)\nevent_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5\nevents = mne.read_events(event_fname)\nepochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,\n reject=reject)\n\nevoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average()\ndel raw_no_ref # save memory\n\ntitle = 'EEG Original reference'\nevoked_no_ref.plot(titles=dict(eeg=title), time_unit='s')\nevoked_no_ref.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')", "Average reference: This is normally added by default, but can also\nbe added explicitly.", "raw.del_proj()\nraw_car, _ = mne.set_eeg_reference(raw, 'average', projection=True)\nevoked_car = mne.Epochs(raw_car, **epochs_params).average()\ndel raw_car # save memory\n\ntitle = 'EEG Average reference'\nevoked_car.plot(titles=dict(eeg=title), time_unit='s')\nevoked_car.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')", "Custom reference: Use the mean of channels EEG 001 and EEG 002 as\na reference", "raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002'])\nevoked_custom = mne.Epochs(raw_custom, **epochs_params).average()\ndel raw_custom # save memory\n\ntitle = 'EEG Custom reference'\nevoked_custom.plot(titles=dict(eeg=title), time_unit='s')\nevoked_custom.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')", "Evoked arithmetic (e.g. differences)\nTrial subsets from Epochs can be selected using 'tags' separated by '/'.\nEvoked objects support basic arithmetic.\nFirst, we create an Epochs object containing 4 conditions.", "event_id = {'left/auditory': 1, 'right/auditory': 2,\n 'left/visual': 3, 'right/visual': 4}\nepochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,\n reject=reject)\nepochs = mne.Epochs(raw, **epochs_params)\n\nprint(epochs)", "Next, we create averages of stimulation-left vs stimulation-right trials.\nWe can use basic arithmetic to, for example, construct and plot\ndifference ERPs.", "left, right = epochs[\"left\"].average(), epochs[\"right\"].average()\n\n# create and plot difference ERP\njoint_kwargs = dict(ts_args=dict(time_unit='s'),\n topomap_args=dict(time_unit='s'))\nmne.combine_evoked([left, -right], weights='equal').plot_joint(**joint_kwargs)", "This is an equal-weighting difference. If you have imbalanced trial numbers,\nyou could also consider either equalizing the number of events per\ncondition (using\n:meth:epochs.equalize_event_counts &lt;mne.Epochs.equalize_event_counts&gt;).\nAs an example, first, we create individual ERPs for each condition.", "aud_l = epochs[\"auditory\", \"left\"].average()\naud_r = epochs[\"auditory\", \"right\"].average()\nvis_l = epochs[\"visual\", \"left\"].average()\nvis_r = epochs[\"visual\", \"right\"].average()\n\nall_evokeds = [aud_l, aud_r, vis_l, vis_r]\nprint(all_evokeds)", "This can be simplified with a Python list comprehension:", "all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())]\nprint(all_evokeds)\n\n# Then, we construct and plot an unweighted average of left vs. 
right trials\n# this way, too:\nmne.combine_evoked(\n [aud_l, -aud_r, vis_l, -vis_r], weights='equal').plot_joint(**joint_kwargs)", "Often, it makes sense to store Evoked objects in a dictionary or a list -\neither different conditions, or different subjects.", "# If they are stored in a list, they can be easily averaged, for example,\n# for a grand average across subjects (or conditions).\ngrand_average = mne.grand_average(all_evokeds)\nmne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds)\n\n# If Evokeds objects are stored in a dictionary, they can be retrieved by name.\nall_evokeds = dict((cond, epochs[cond].average()) for cond in event_id)\nprint(all_evokeds['left/auditory'])\n\n# Besides for explicit access, this can be used for example to set titles.\nfor cond in all_evokeds:\n all_evokeds[cond].plot_joint(title=cond, **joint_kwargs)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hpparvi/PyTransit
notebooks/example_uniform_model.ipynb
gpl-2.0
[ "Uniform model\nThe uniform model, pytransit.UniformModel, implements a transit over a uniform stellar disk as described by Mandel & Agol (ApJ 580, 2002). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.UniformModelCL.", "%pylab inline \n\nsys.path.append('..')\n\nfrom pytransit import UniformModel\n\nseed(0)\n\ntimes_sc = linspace(0.85, 1.15, 1000) # Short cadence time stamps\ntimes_lc = linspace(0.85, 1.15, 100) # Long cadence time stamps\n\nk, t0, p, a, i, e, w = 0.1, 1., 2.1, 3.2, 0.5*pi, 0.3, 0.4*pi\npvp = tile([k, t0, p, a, i, e, w], (50,1))\npvp[1:,0] += normal(0.0, 0.005, size=pvp.shape[0]-1)\npvp[1:,1] += normal(0.0, 0.02, size=pvp.shape[0]-1)", "Model initialization\nThe uniform model doesn't take any special initialization arguments, so the initialization is straightforward.", "tm = UniformModel()", "Data setup\nHomogeneous time series\nThe model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.", "tm.set_data(times_sc)", "Model use\nEvaluation\nThe transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).\n\ntm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit. \n\ntm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as \n[[k1, t01, p1, a1, i1, e1, w1],\n [k2, t02, p2, a2, i2, e2, w2],\n ...\n [kn, t0n, pn, an, in, en, wn]]\n\n\nThe reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. 
This is especially the case for the OpenCL models.", "def plot_transits(tm, fmt='k'):\n fig, axs = subplots(1, 3, figsize = (13,3), constrained_layout=True, sharey=True)\n\n flux = tm.evaluate_ps(k, t0, p, a, i, e, w)\n axs[0].plot(tm.time, flux, fmt)\n axs[0].set_title('Individual parameters')\n\n flux = tm.evaluate_pv(pvp[0])\n axs[1].plot(tm.time, flux, fmt)\n axs[1].set_title('Parameter vector')\n\n flux = tm.evaluate_pv(pvp)\n axs[2].plot(tm.time, flux.T, 'k', alpha=0.2);\n axs[2].set_title('Parameter vector array')\n\n setp(axs[0], ylabel='Normalised flux')\n setp(axs, xlabel='Time [days]', xlim=tm.time[[0,-1]])\n\ntm.set_data(times_sc)\nplot_transits(tm)", "Supersampling\nThe transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.", "tm.set_data(times_lc, nsamples=10, exptimes=0.01)\nplot_transits(tm)", "Heterogeneous time series\nPyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.\nIf a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve. \nFor example, a set of three light curves, two observed in one passband and the third in another passband\ntimes_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]\ntimes_2 (lc = 1, pb = 0, lc) = [3, 4]\ntimes_3 (lc = 2, pb = 1, sc) = [1, 5, 6]\n\nWould be set up as\ntm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6], \n lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2], \n pbids = [0, 0, 1],\n nsamples = [ 1, 10, 1],\n exptimes = [0.1, 1.0, 0.1])\n\nExample: two light curves with different cadences", "times_1 = linspace(0.85, 1.0, 500)\ntimes_2 = linspace(1.0, 1.15, 10)\ntimes = concatenate([times_1, times_2])\nlcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])\nnsamples = [1, 10]\nexptimes = [0, 0.0167]\n\ntm.set_data(times, lcids, nsamples=nsamples, exptimes=exptimes)\nplot_transits(tm, 'k.-')", "OpenCL\nUsage\nThe OpenCL version of the uniform model, pytransit.UniformModelCL works identically to the Python version, except that the OpenCL context and queue can be given as arguments in the initialiser, and the model evaluation method can be told to not to copy the model from the GPU memory. If the context and queue are not given, the model creates a default context using cl.create_some_context().", "import pyopencl as cl\nfrom pytransit import UniformModelCL\n\ndevices = cl.get_platforms()[0].get_devices()[2:]\nctx = cl.Context(devices)\nqueue = cl.CommandQueue(ctx)\n\ntm_cl = UniformModelCL(cl_ctx=ctx, cl_queue=queue)\n\ntm_cl.set_data(times_sc)\nplot_transits(tm_cl)", "GPU vs. CPU Performance\nThe performance difference between the OpenCL and Python versions depends on the CPU, GPU, number of simultaneously evaluated models, amount of supersampling, and whether the model data is copied from the GPU memory. The performance difference grows in the favour of OpenCL model with the number of simultaneous models and amount of supersampling, but copying the data slows the OpenCL implementation down. 
For best performance, also the log likelihood computations should be done in the GPU.", "times_sc2 = tile(times_sc, 20) # 20000 short cadence datapoints\ntimes_lc2 = tile(times_lc, 50) # 5000 long cadence datapoints\n\ntm_py = UniformModel()\ntm_cl = UniformModelCL(cl_ctx=ctx, cl_queue=queue)\n\ntm_py.set_data(times_sc2)\ntm_cl.set_data(times_sc2)\n\n%%timeit\ntm_py.evaluate_pv(pvp)\n\n%%timeit\ntm_cl.evaluate_pv(pvp, copy=True)\n\ntm_py.set_data(times_lc2, nsamples=10, exptimes=0.01)\ntm_cl.set_data(times_lc2, nsamples=10, exptimes=0.01)\n\n%%timeit\ntm_py.evaluate_pv(pvp)\n\n%%timeit\ntm_cl.evaluate_pv(pvp, copy=True)", "<center>&copy; Hannu Parviainen 2010-2020</center>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dryadb11781/machine-learning-python
Feature_Selection/ipython_notebook/ex2_Recursive_feature_elimination.ipynb
bsd-3-clause
[ "特徵選擇/範例二: Recursive feature elimination\nhttp://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_digits.html\n本範例主要目的是減少特徵數量來提升機器學習之預測準確度。\n主要方法是去不斷去剔除與資料分類關係轉少之特徵,來篩選特徵數目至指定數目。\n\n以load_digits取得內建的數字辨識資料\n以RFE疊代方式刪去相對不具有目標影響力的特徵\n\n(一)產生內建的數字辨識資料", "# Load the digits dataset\ndigits = load_digits()\nX = digits.images.reshape((len(digits.images), -1))\ny = digits.target", "數位數字資料是解析度為8*8的手寫數字影像,總共有1797筆資料。預設為0~9十種數字類型,亦可由n_class來設定要取得多少種數字類型。\n輸出的資料包含\n1. ‘data’, 特徵資料(179764)\n2. ‘images’, 影像資料(1797*88)\n3. ‘target’, 資料標籤(1797)\n4. ‘target_names’, 選取出的標籤列表(與n_class給定的長度一樣)\n5. ‘DESCR’, 此資料庫的描述\n可以參考Classification的Ex1\n(二)以疊代方式計算模型\nRFE以排除最不具目標影響力的特徵,做特徵的影響力排序。並且將訓練用的特徵挑選至n_features_to_select所給定的特徵數。因為要看每一個特徵的影響力排序,所以我們將n_features_to_select設定為1,一般會根據你所知道的具有影響力特徵數目來設定該參數。而step代表每次刪除較不具影響力的特徵數目,因為本範例要觀察每個特徵的影響力排序,所以也是設定為1。若在實際應用時,特徵的數目較大,可以考慮將step的參數設高一點。", "# Create the RFE object and rank each pixel\nsvc = SVC(kernel=\"linear\", C=1)\nrfe = RFE(estimator=svc, n_features_to_select=1, step=1)\nrfe.fit(X, y)\nranking = rfe.ranking_.reshape(digits.images[0].shape)", "可以用方法ranking_來看輸入的特徵權重關係。而方法estimator_可以取得訓練好的分類機狀態。比較特別的是當我們核函數是以線性來做分類時,estimator_下的方法coef_即為特徵的分類權重矩陣。權重矩陣的大小會因為n_features_to_select與資料的分類類別而改變,譬如本範例是十個數字的分類,並選擇以一個特徵來做分類訓練,就會得到45*1的係數矩陣,其中45是從分類類別所需要的判斷式而來,與巴斯卡三角形的第三層數正比。\n(三)畫出每個像素所對應的權重順序\n取得每個像素位置對於判斷數字的權重順序後,我們把權重順序依照顏色畫在對應的位置,數值愈大代表該像素是較不重要之特徵。由結果來看,不重要之特徵多半位於影像之外圍部份。而所有的訓練影像中,外圍像素多半為空白,因此較不重要。", "# Plot pixel ranking\nplt.matshow(ranking, cmap=plt.cm.Blues)\nplt.colorbar()\nplt.title(\"Ranking of pixels with RFE\")\nplt.show()", "(四)原始碼\nPython source code: plot_rfe_digits.py", "print(__doc__)\n\nfrom sklearn.svm import SVC\nfrom sklearn.datasets import load_digits\nfrom sklearn.feature_selection import RFE\nimport matplotlib.pyplot as plt\n\n# Load the digits dataset\ndigits = load_digits()\nX = digits.images.reshape((len(digits.images), -1))\ny = digits.target\n\n# Create the RFE object and rank each pixel\nsvc = SVC(kernel=\"linear\", C=1)\nrfe = RFE(estimator=svc, n_features_to_select=1, step=1)\nrfe.fit(X, y)\nranking = rfe.ranking_.reshape(digits.images[0].shape)\n\n# Plot pixel ranking\nplt.matshow(ranking, cmap=plt.cm.Blues)\nplt.colorbar()\nplt.title(\"Ranking of pixels with RFE\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csdms/pymt
notebooks/hydrotrend.ipynb
mit
[ "HydroTrend\n\nLink to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/hydrotrend.ipynb\nPackage installation command: $ conda install notebook pymt_hydrotrend\nCommand to download a local copy:\n\n$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/hydrotrend.ipynb\nHydroTrend is a 2D hydrological water balance and transport model that simulates water discharge and sediment load at a river outlet. You can read more about the model, find references or download the C source code at: https://csdms.colorado.edu/wiki/Model:HydroTrend.\nThis notebook has been created by Irina Overeem, September 18, 2019.\nRiver Sediment Supply Modeling\nThis notebook is meant to give you a better understanding of what HydroTrend is capable of. In this example we are using a theoretical river basin of ~1990 km<sup>2</sup>, with 1200m of relief and a river length of\n~100 km. All parameters that are shown by default once the HydroTrend Model is loaded are based\non a present-day, temperate climate. Whereas these runs are not meant to be specific, we are\nusing parameters that are realistic for the Waiapaoa River in New Zealand. The Waiapaoa River\nis located on North Island and receives high rain and has erodible soils, so the river sediment\nloads are exceptionally high. It has been called the \"dirtiest small river in the world\".\nA more detailed description of applying HydroTrend to the Waipaoa basin, New Zealand has been published in WRR: hydrotrend_waipaoa_paper. \nRun HydroTrend Simulations with pymt\nNow we will be using the capability of the Python Modeling Tool, pymt. Pymt is a Python toolkit for running and coupling Earth surface models. \nhttps://csdms.colorado.edu/wiki/PyMT", "# To start, import numpy and matplotlib.\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Then we import the package \nimport pymt.models\n\nhydrotrend = pymt.models.Hydrotrend()\n\nimport pymt\npymt.__version__", "Learn about the Model Input\n<br>\nHydroTrend will now be activated in PyMT. You can find information on the model, the developer, the papers that describe the moel in more detail etc. \nImportantly you can scroll down a bit to the Parameters list, it shows what parameters the model uses to control the simulations. The list is alphabetical and uses precisely specified 'Standard Names'.\nNote that every parameter has a 'default' value, so that when you do not list it in the configure command, you will run with these values.", "# Get basic information about the HydroTrend model \nhelp(hydrotrend)", "Exercise 1: Explore the Hydrotrend base-case river simulation\nFor this case study, first we will create a subdirectory in which the basecase (BC) simulation will be implemented. \nThen we specify for how long we will run a simulation: for 100 years at daily time-step.\nThis means you run Hydrotrend for 36,500 days total. \nThis is also the line of code where you would add other input parameters with their values.", "# Set up Hydrotrend model by indicating the number of years to run\nconfig_file, config_folder = hydrotrend.setup(\"_hydrotrendBC\", run_duration=100)", "With the cat command you can print character by character one of the two input files that HydroTrend uses.\nHYDRO0.HYPS: This first file specifies the River Basin Hysometry - the surface area per elevation zone. The hypsometry captures the geometric characteristics of the river basin, how high is the relief, how much uplands are there versus lowlands, where would the snow fall elevation line be etcetera. 
<br>\nHYDRO.IN: This other file specifies the basin and climate input data.", "cat _hydrotrendBC/HYDRO0.HYPS\n\ncat _hydrotrendBC/HYDRO.IN\n\n#In pymt one can always find out what output a model generates by using the .output_var_names method. \nhydrotrend.output_var_names\n\n# Now we initialize the model with the configure file and in the configure folder\nhydrotrend.initialize(config_file, config_folder)\n\n# this line of code lists time parameters, when, how long and at what timestep will the model simulation work?\nhydrotrend.start_time, hydrotrend.time, hydrotrend.end_time, hydrotrend.time_step, hydrotrend.time_units\n\n# this code declares numpy arrays for several important parameters we want to save.\nn_days = int(hydrotrend.end_time)\nq = np.empty(n_days) #river discharge at the outlet\nqs = np.empty(n_days)# sediment load at the outlet\ncs = np.empty(n_days) # suspended sediment concentration for different grainsize classes at the outlet\nqb = np.empty(n_days) # bedload at the outlet\n\n# here we have coded up the time loop using i as the index\n# we update the model with one timestep at the time, untill we reach the end time \n# for each time step we also get the values for the output parameters we wish to \nfor i in range(n_days):\n hydrotrend.update()\n q[i] = hydrotrend.get_value(\"channel_exit_water__volume_flow_rate\")\n qs[i] = hydrotrend.get_value(\"channel_exit_water_sediment~suspended__mass_flow_rate\")\n cs[i] = hydrotrend.get_value(\"channel_exit_water_sediment~suspended__mass_concentration\")\n qb[i] = hydrotrend.get_value(\"channel_exit_water_sediment~bedload__mass_flow_rate\")\n\n# We can plot the simulated output timeseries of Hydrotrend, for example the river discharge\n\nplt.plot(q)\nplt.title('HydroTrend simulation of 100 year river discharge, Waiapaoa River')\nplt.ylabel('river discharge in m3/sec')\nplt.show\n\n# Or you can plot a subset of the simulated daily timeseries using the index\n\n#for example the first year\nplt.plot(q[0:365], 'black')\n# compare with the last year\nplt.plot(q[-366:-1],'grey')\nplt.title('HydroTrend simulation of first and last year discharge, Waiapaoa River')\nplt.show()\n\n# Of course, it is important to calculate statistical properties of the simulated parameters\n\nprint(q.mean())\nhydrotrend.get_var_units(\"channel_exit_water__volume_flow_rate\")", "## <font color = green> Assignment 1 </font> \nCalculate mean water discharge Q, mean suspended load Qs, mean sediment concentration Cs, and mean bedload Qb for this 100 year simulation of the river dynamics of the Waiapaoa River.\nNote all values are reported as daily averages. What are the units?", "# your code goes here", "<font color = green> Assignment 2 </font>\nIdentify the highest flood event for this simulation. Is this the 100-year flood? Please list a definition of a 100 year flood, and discuss whether the modeled extreme event fits this definition. 
\nPlot the year of Q-data which includes the flood.", "# here you can calculate the maximum river discharge.\n\n# your code to determine which day and which year encompass the maximum discharge go here\n# Hint: you will want to determine the ndex of htis day first, look into the numpy.argmax and numpy.argmin \n\n# as a sanity check you can see whether the plot y-axis seems to go up to the maximum you had calculated in the previous step\n# as a sanity check you can look in the plot of all the years to see whether the timing your code predicts is correct\n\n# type your explanation about the 100 year flood here.", "<font color = green> Assignment 3 </font>\nCalculate the mean annual sediment load for this river system.\nThen compare the annual load of the Waiapaoha river to the Mississippi River. <br>\nTo compare the mean annual load to other river systems you will need to calculate its sediment yield. \nSediment Yield is defined as sediment load normalized for the river drainage area; \nso it can be reported in T/km2/yr.", "# your code goes here\n# you will have to sum all days of the individual years, to get the annual loads, then calculate the mean over the 100 years.\n# one possible trick is to use the .reshape() method\n# plot a graph of the 100 years timeseries of the total annual loads \n\n# take the mean over the 100 years\n\n#your evaluation of the sediment load of the Waiapaoha River and its comparison to the Mississippi River goes here.\n#Hint: use the following paper to read about the Mississippi sediment load (Blum, M, Roberts, H., 2009. Drowning of the Mississippi Delta due to insufficient sediment supply and global sea-level rise, Nature Geoscience).", "HydroTrend Exercise 2: How does a river system respond to climate change; two simple scenarios for the coming century.\nNow we will look at changing climatic conditions in a small river basin. We'll change temperature and precipitation regimes and compare discharge and sediment load characteristics to the original basecase. And we will look at the are potential implications of changes in the peak events.\nModify the mean annual temperature T, the mean annual precipitation P. You can specify trends over time, by modifying the parameter ‘change in mean annual temperature’ or ‘change in mean annual precipitation’. HydroTrend runs at daily timestep, and thus can deal with seasonal variations in temperature and precipitation for a basin. The model ingests monthly mean input values for these two climate parameters and their monthly standard deviations, ideally the values would be derived from analysis of a longterm record of daily climate data. You can adapt seasonal trends by using the monthly values.\n<font color = green> Assignment 4 </font>\nWhat happens to river discharge, suspended load and bedload if the mean annual temperature in this specific river basin increases by 4 °C over the next 50 years? 
In this assignment we set up a new simulation for a warming climate.", "# Set up a new run of the Hydrotrend model \n# Create a new config file a different folder for input and output files, indicating the number of years to run, and specify the change in mean annual temparture parameter\nhydrotrendHT = pymt.models.Hydrotrend()\nconfig_file, config_folder = hydrotrendHT.setup(\"_hydrotrendhighT\", run_duration=50, change_in_mean_annual_temperature=0.08)\n\n# intialize the new simulation\nhydrotrendHT.initialize(config_file, config_folder)\n\n# the code for the timeloop goes here\n# I use the abbrevation HT for 'High Temperature' scenario\nn_days = int(hydrotrendHT.end_time)\nq_HT = np.empty(n_days) #river discharge at the outlet\nqs_HT = np.empty(n_days)# sediment load at the outlet\ncs_HT = np.empty(n_days) # suspended sediment concentration for different grainsize classes at the outlet\nqb_HT = np.empty(n_days) # bedload at the outlet\nfor i in range(n_days):\n hydrotrendHT.update()\n q_HT[i] = hydrotrendHT.get_value(\"channel_exit_water__volume_flow_rate\")\n qs_HT[i] = hydrotrendHT.get_value(\"channel_exit_water_sediment~suspended__mass_flow_rate\")\n cs_HT[i] = hydrotrendHT.get_value(\"channel_exit_water_sediment~suspended__mass_concentration\")\n qb_HT[i] = hydrotrendHT.get_value(\"channel_exit_water_sediment~bedload__mass_flow_rate\")\n\n# your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here\n\n\n# print out these same parameters for the basecase for comparison", "<font color = green> Assignment 5 </font>\nSo what is the effect of a warming basin temperature? \nHow much increase or decrease of river discharge do you see after 50 years? <br>\nHow is the mean suspended load affected? <br>\nHow does the mean bedload change? <br>\nWhat happens to the peak event; look at the maximum sediment load event of the last 5 years of the simulation?", "# type your answers here", "<font color = green> Assignment 6 </font>\nWhat happens to river discharge, suspended load and bedload if the mean annual precipitation would increase by 50% in this specific river basin over the next 50 years? Create a new simulation folder, High Precipitation, HP, and set up a run with a trend in future precipitation.", "# Set up a new run of the Hydrotrend model \n# Create a new config file indicating the number of years to run, and specify the change in mean annual precipitation parameter\n\n# initialize the new simulation\n\n# your code for the timeloop goes here\n\n## your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here", "<font color = green> Assignment 7 </font>\nIn addition, climate model predictions indicate that perhaps precipitation intensity and variability could increase. How would you possibly model this? Discuss how you would modify your input settings for precipitation.", "#type your answer here", "Exercise 3: How do humans affect river sediment loads?\nHere we will look at the effect of human in a river basin. Humans can accelerate erosion\nprocesses, or reduce the sediment loads traveling through a river system. Both concepts can\nbe simulated, first run 3 simulations systematically increasing the anthropogenic factor (0.5-8.0 is the range).\n<font color = green> Assignment 8 </font>\nDescribe in your own words the meaning of the human-induced erosion factor, (Eh). This factor is parametrized as the “Antropogenic” factor in HydroTrend. 
Read more about this in: Syvitski & Milliman, 2007, Geology, Geography, and Humans Battle for Dominance over the Delivery of Fluvial Sediment to the Coastal Ocean. 2007, 115, p. 1–19.", "# your explanation goes here, can you list two reasons why this factor would be unsuitable or it would fall short?", "<font color = green> Bonus Assignment 9 </font>\nModel a scenario of a drinking water supply reservoir to be planned in the coastal area of the basin. The reservoir would have 800 km 2of contributing drainage area and be 3 km long, 200m wide and 100m deep. Set up a simulation with these parameters.", "# Set up a new 50 year of the Hydrotrend model \n# Create a new directory, and a config file indicating the number of years to run, and specify different reservoir parameters\n\n# initialize the new simulation\n\n# your code for the timeloop and update loop goes here\n\n# plot a bar graph comparing Q mean, Qs mean, Qmax, Qs Max, Qb mean and Qbmax for the basecase run and the reservoir run\n\n# Describe how such a reservoir affects the water and sediment load at the coast (i.e. downstream of the reservoir)?", "<font color = green> Bonus Assignment 10 </font>\nSet up a simulation for a different river basin. \nThis means you would need to change the HYDRO0.HYPS file and change some climatic parameters. \nThere are several hypsometric files packaged with HydroTrend, you can use one of those, but are welcome to do something different!", "# write a short motivation and description of your scenario\n\n# make a 2 panel plot using the subplot functionality of matplotlib \n# One panel would show the hypsometry of the Waiapohoa and the other panel the hypsometry of your selected river basin \n\n# Set up a new 50 year of the Hydrotrend model \n# Create a new directory for this different basin\n\n# initialize the new simulation\n\n# your code for the timeloop and update loop goes here\n\n# plot a line graph comparing Q mean, Qs mean, for the basecase run and the new river basin run", "<font color = green> ALL DONE! </font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ledeprogram/algorithms
class6/donow/Lee_Dongjin_6_Donow.ipynb
gpl-3.0
[ "1. Import the necessary packages to read in the data, plot, and create a linear regression model", "import pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)\nimport statsmodels.formula.api as smf # package we'll be using for linear regression\nimport numpy as np\nimport scipy as sp", "2. Read in the hanford.csv file", "df = pd.read_csv(\"data/hanford.csv\")\ndf", "3. Calculate the basic descriptive statistics on the data", "df.describe()", "4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?", "df.plot(kind='scatter', x='Exposure', y='Mortality')\n\nr = df.corr()['Exposure']['Mortality']\nr", "Yes, there seems to be a correlation wothy of investigation.\n5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure", "lm = smf.ols(formula=\"Mortality~Exposure\",data=df).fit()\nintercept, slope = lm.params\n\n\nlm.params", "6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)", "# Method 01 (What we've learned from the class)\ndf.plot(kind='scatter', x='Exposure', y='Mortality')\nplt.plot(df[\"Exposure\"],slope*df[\"Exposure\"]+intercept,\"-\",color=\"red\")\n\n# Method 02 (Another version) _ so much harder ...than what we have learned\n\ndef plot_correlation( ds, x, y, ylim=(100,240) ):\n plt.xlim(0,14)\n plt.ylim(ylim[0],ylim[1])\n plt.scatter(ds[x], ds[y], alpha=0.6, s=50) \n for abc, row in ds.iterrows():\n plt.text(row[x], row[y],abc )\n plt.xlabel(x)\n plt.ylabel(y)\n \n # Correlation \n trend_variable = np.poly1d(np.polyfit(ds[x], ds[y], 1))\n trendx = np.linspace(0, 14, 4)\n plt.plot(trendx, trend_variable(trendx), color='r') \n r = sp.stats.pearsonr(ds[x],ds[y])\n plt.text(trendx[3], trend_variable(trendx[3]),'r={:.3f}'.format(r[0]), color = 'r' )\n plt.tight_layout()\n\nplot_correlation(df,'Exposure','Mortality')\n\nr_squared = r **2\nr_squared", "7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10", "def predicting_mortality_rate(exposure):\n return intercept + float(exposure) * slope\n\npredicting_mortality_rate(10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]